Sample records for calculation methodology input

  1. MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method

    USGS Publications Warehouse

    Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.

    2003-01-01

    A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
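    A minimal sketch of the FOSM propagation step described above, assuming a head-sensitivity (Jacobian) matrix and an input covariance matrix are already in hand (in the paper these come from MODFLOW 2000 sensitivities and a conditional probability calculation); the matrices below are toy values, not model output.

    ```python
    # First-order second moment (FOSM) propagation: C_head = J * C_in * J^T
    import numpy as np

    J = np.array([[0.8, 0.1],        # d(head_i)/d(parameter_j): 3 head locations x 2 inputs
                  [0.5, 0.3],
                  [0.2, 0.6]])
    C_in = np.array([[0.04, 0.01],   # covariance of the uncertain inputs (e.g., transmissivity, recharge)
                     [0.01, 0.09]])

    C_head = J @ C_in @ J.T              # variance-covariance matrix of computed heads
    head_std = np.sqrt(np.diag(C_head))  # standard deviation of head at each location
    print(head_std)
    ```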

  2. 75 FR 39093 - Proposed Confidentiality Determinations for Data Required Under the Mandatory Greenhouse Gas...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-07

    ... information that is sensitive or proprietary, such as detailed process designs or site plans. Because the... Inputs to Emission Equations X Calculation Methodology and Methodological Tier X Data Elements Reported...

  3. Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces

    NASA Astrophysics Data System (ADS)

    Rinker, Jennifer M.

    2016-09-01

    This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of error between the model and the training data and in terms of the convergence. The Sobol SIs are calculated using the calibrated response surface, and the convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variance contributions from the Kaimal length scale and the nonstationarity parameter are negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity.
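    The pick-freeze estimator below is a minimal sketch of how first-order Sobol indices can be computed once a cheap surrogate is available; the quadratic "response surface" and uniform input ranges are hypothetical stand-ins, not the WindPACT load model or the paper's calibration.

    ```python
    # First-order Sobol indices from a surrogate model (Saltelli pick-freeze estimator).
    import numpy as np

    rng = np.random.default_rng(0)
    d, N = 4, 20000

    def surrogate(x):
        # stand-in polynomial response surface: load ~ f(U_ref, TI_ref, L_kaimal, nonstationarity)
        return 3.0*x[:, 0] + 2.0*x[:, 1] + 0.5*x[:, 0]*x[:, 1] + 0.05*x[:, 2] + 0.01*x[:, 3]**2

    A = rng.uniform(0.0, 1.0, size=(N, d))   # two independent sample matrices
    B = rng.uniform(0.0, 1.0, size=(N, d))
    fA, fB = surrogate(A), surrogate(B)
    var = np.var(np.concatenate([fA, fB]))

    S1 = []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # "freeze" all inputs except column i
        S1.append(np.mean(fB * (surrogate(ABi) - fA)) / var)

    print(dict(zip(["U_ref", "TI_ref", "L_kaimal", "nonstationarity"], np.round(S1, 3))))
    ```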

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tom Elicson; Bentley Harwood; Jim Bouchard

    Over a 12-month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant-specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: • Development of time-dependent fire heat release rate profiles (required as input to CFAST), • Calculation of fire severity factors based on CFAST detailed fire modeling, and • Calculation of fire non-suppression probabilities.

  5. Robust decentralized controller for minimizing coupling effect in single inductor multiple output DC-DC converter operating in continuous conduction mode.

    PubMed

    Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das

    2018-02-01

    This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple input multiple output (MIMO) plant model and performance requirements, a pairing input-output analysis is performed to select the suitable input to control each output aiming to attenuate the loop coupling. Thus, the plant uncertainty limits are selected and expressed in interval form with parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board is developed for experimental tests. The experimental results show that the proposed methodology can maintain a desirable performance even in the presence of parametric uncertainties. Furthermore, the performance indexes calculated from experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
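    As an illustration of the pairing step mentioned above, the sketch below computes a relative gain array (RGA), a common tool for choosing input/output pairings that minimize loop coupling; the 2x2 DC gain matrix is hypothetical, and the paper's exact pairing analysis may differ.

    ```python
    # Input/output pairing via the relative gain array: RGA = G .* inv(G)^T
    import numpy as np

    G = np.array([[2.0, 0.4],        # hypothetical steady-state gains d(output_i)/d(duty_j)
                  [0.3, 1.5]])

    RGA = G * np.linalg.inv(G).T     # element-wise product with the transposed inverse
    print(RGA)                       # prefer pairings whose RGA elements are closest to 1
    ```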

  6. Methodology update for estimating volume to service flow ratio.

    DOT National Transportation Integrated Search

    2015-12-01

    Volume/service flow ratio (VSF) is calculated by the Highway Performance Monitoring System (HPMS) software as an indicator of peak-hour congestion. It is an essential input to the Kentucky Transportation Cabinet's (KYTC) key planning applications, ...

  7. USGS Methodology for Assessing Continuous Petroleum Resources

    USGS Publications Warehouse

    Charpentier, Ronald R.; Cook, Troy A.

    2011-01-01

    The U.S. Geological Survey (USGS) has developed a new quantitative methodology for assessing resources in continuous (unconventional) petroleum deposits. Continuous petroleum resources include shale gas, coalbed gas, and other oil and gas deposits in low-permeability ("tight") reservoirs. The methodology is based on an approach combining geologic understanding with well productivities. The methodology is probabilistic, with both input and output variables as probability distributions, and uses Monte Carlo simulation to calculate the estimates. The new methodology is an improvement of previous USGS methodologies in that it better accommodates the uncertainties in undrilled or minimally drilled deposits that must be assessed using analogs. The publication is a collection of PowerPoint slides with accompanying comments.
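    A toy Monte Carlo sketch of the probabilistic aggregation idea: uncertain inputs are drawn from assumed distributions and combined into a distribution of total resource, reported as percentiles. The distributions, numbers, and the simple area-density-recovery structure are illustrative assumptions, not the USGS assessment inputs.

    ```python
    # Monte Carlo aggregation of a continuous-resource estimate (toy example).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    area_km2  = rng.triangular(500.0, 2000.0, 5000.0, n)              # untested productive area
    wells_km2 = rng.triangular(0.5, 1.0, 2.0, n)                      # well density
    eur_bcf   = rng.lognormal(mean=np.log(1.0), sigma=0.6, size=n)    # per-well recovery (BCF)

    total_bcf = area_km2 * wells_km2 * eur_bcf
    print({p: round(float(np.percentile(total_bcf, p))) for p in (5, 50, 95)})
    ```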

  8. Developments in Sensitivity Methodologies and the Validation of Reactor Physics Calculations

    DOE PAGES

    Palmiotti, Giuseppe; Salvatores, Massimo

    2012-01-01

    Sensitivity methodologies have been remarkably successful when adopted in the reactor physics field. Sensitivity coefficients can be used for different objectives, such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluation of the representativity of an experiment with respect to a reference design configuration. A review of the methods used is provided, and several examples illustrate the success of the methodology in reactor physics. A new application, the improvement of basic nuclear parameters using integral experiments, is also described.

  9. Calculation of Dynamic Loads Due to Random Vibration Environments in Rocket Engine Systems

    NASA Technical Reports Server (NTRS)

    Christensen, Eric R.; Brown, Andrew M.; Frady, Greg P.

    2007-01-01

    An important part of rocket engine design is the calculation of random dynamic loads resulting from internal engine "self-induced" sources. These loads are random in nature and can greatly influence the weight of many engine components. Several methodologies for calculating random loads are discussed and then compared to test results using a dynamic testbed consisting of a 60K thrust engine. The engine was tested in a free-free condition with known random force inputs from shakers attached to three locations near the main noise sources on the engine. Accelerations and strains were measured at several critical locations on the engines and then compared to the analytical results using two different random response methodologies.

  10. Input-output model for MACCS nuclear accident impacts estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
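    The sketch below shows the basic Input-Output (Leontief) mechanics that underlie this kind of regional loss estimate: reduced final demand in disrupted sectors is propagated through inter-industry linkages. The technical coefficient matrix, sectors, and demand reductions are hypothetical; REAcct's actual data, sector detail, and GDP conversion are not reproduced here.

    ```python
    # Leontief input-output model: x = (I - A)^-1 * f, with demand lost during an outage.
    import numpy as np

    A = np.array([[0.20, 0.10, 0.00],        # technical coefficients (inter-industry purchases)
                  [0.10, 0.30, 0.20],
                  [0.00, 0.10, 0.25]])
    f_base  = np.array([100.0, 80.0, 60.0])          # baseline final demand ($M per year)
    f_event = f_base * np.array([0.6, 0.9, 1.0])     # demand remaining in disrupted sectors

    L = np.linalg.inv(np.eye(3) - A)                 # Leontief inverse
    output_loss = L @ (f_base - f_event)             # lost gross output by sector
    print(output_loss, round(float(output_loss.sum()), 2))
    ```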

  11. Transmutation Fuel Performance Code Thermal Model Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.

  12. Infiltration modeling guidelines for commercial building energy analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gowri, Krishnan; Winiarski, David W.; Jarnagin, Ronald E.

    This report presents a methodology for modeling air infiltration in EnergyPlus to account for envelope air barrier characteristics. Based on a review of the various infiltration modeling options available in EnergyPlus and a sensitivity analysis, the linear wind velocity coefficient based on the DOE-2 infiltration model is recommended. The methodology described in this report can be used to calculate the EnergyPlus infiltration input for any given building-level infiltration rate specified at a known pressure difference. The sensitivity analysis shows that EnergyPlus calculates the wind speed based on zone altitude, and that the linear wind velocity coefficient represents the variation in infiltration heat loss consistent with building location and weather data.
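    A back-of-envelope sketch of the EnergyPlus zone infiltration design-flow-rate model with a linear wind velocity term, as a hedged illustration of the kind of input the methodology produces. The target infiltration rate, reference wind speed, and the C = 0.224 coefficient are assumptions for illustration, not the report's recommended values.

    ```python
    # EnergyPlus-style zone infiltration:
    #   Infiltration = I_design * F_sched * (A + B*|dT| + C*WindSpeed + D*WindSpeed**2)
    # Here only the linear wind term is active (A = B = D = 0), DOE-2 style.
    def infiltration_m3s(i_design, wind_speed, A=0.0, B=0.0, C=0.224, D=0.0, dT=0.0, f_sched=1.0):
        return i_design * f_sched * (A + B*abs(dT) + C*wind_speed + D*wind_speed**2)

    # choose I_design so the model reproduces an assumed whole-building rate at a reference wind speed
    target_m3s, ref_wind = 0.45, 3.35                  # hypothetical: 0.45 m3/s at 3.35 m/s
    i_design = target_m3s / (0.224 * ref_wind)
    print(round(i_design, 3), round(infiltration_m3s(i_design, wind_speed=5.0), 3))
    ```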

  13. Development of a Pattern Recognition Methodology for Determining Operationally Optimal Heat Balance Instrumentation Calibration Schedules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt Beran; John Christenson; Dragos Nica

    2002-12-15

    The goal of the project is to enable plant operators to detect with high sensitivity and reliability the onset of decalibration drifts in all of the instrumentation used as input to the reactor heat balance calculations. To achieve this objective, the collaborators developed and implemented at DBNPS an extension of the Multivariate State Estimation Technique (MSET) pattern recognition methodology pioneered by ANL. The extension was implemented during the second phase of the project and fully achieved the project goal.

  14. IPAC-Inlet Performance Analysis Code

    NASA Technical Reports Server (NTRS)

    Barnhart, Paul J.

    1997-01-01

    A series of analyses have been developed which permit the calculation of the performance of common inlet designs. The methods presented are useful for determining the inlet weight flows, total pressure recovery, and aerodynamic drag coefficients for given inlet geometric designs. Limited geometric input data is required to use this inlet performance prediction methodology. The analyses presented here may also be used to perform inlet preliminary design studies. The calculated inlet performance parameters may be used in subsequent engine cycle analyses or installed engine performance calculations for existing uninstalled engine data.

  15. Standardized input for Hanford environmental impact statements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napier, B.A.

    1981-05-01

    Models and computer programs for simulating the environmental behavior of radionuclides and the resulting radiation dose to humans have been developed over the years by the Environmental Analysis Section staff, Ecological Sciences Department at the Pacific Northwest Laboratory (PNL). Methodologies have evolved for calculating radiation doses from many exposure pathways for any type of release mechanism. Depending on the situation or process being simulated, different sets of computer programs, assumptions, and modeling techniques must be used. This report is a compilation of recommended computer programs and necessary input information for use in calculating doses to members of the general public for environmental impact statements prepared for DOE activities to be conducted on or near the Hanford Reservation.

  16. Cost-benefit analysis in occupational health: a comparison of intervention scenarios for occupational asthma and rhinitis among bakery workers.

    PubMed

    Meijster, Tim; van Duuren-Stuurman, Birgit; Heederik, Dick; Houba, Remko; Koningsveld, Ernst; Warren, Nicholas; Tielemans, Erik

    2011-10-01

    Use of cost-benefit analysis in occupational health increases insight into the intervention strategy that maximises the cost-benefit ratio. This study presents a methodological framework identifying the most important elements of a cost-benefit analysis for occupational health settings. One of the main aims of the methodology is to evaluate cost-benefit ratios for different stakeholders (employers, employees and society). The developed methodology was applied to two intervention strategies focused on reducing respiratory diseases. A cost-benefit framework was developed and used to set up a calculation spreadsheet containing the inputs and algorithms required to calculate the costs and benefits for all cost elements. Inputs from a large variety of sources were used to calculate total costs, total benefits, net costs and the benefit-to-costs ratio for both intervention scenarios. Implementation of a covenant intervention program resulted in a net benefit of €16 848 546 over 20 years for a population of 10 000 workers. Implementation was cost-effective for all stakeholders. For a health surveillance scenario, total benefits resulting from a decreased disease burden were estimated to be €44 659 352. The costs of the interventions could not be calculated. This study provides important insights for developing effective intervention strategies in the field of occupational medicine. Use of a model based approach enables investigation of those parameters most likely to impact on the effectiveness and costs of interventions for work related diseases. Our case study highlights the importance of considering different perspectives (of employers, society and employees) in assessing and sharing the costs and benefits of interventions.

  17. Eigenvalue Contributon Estimator for Sensitivity Calculations with TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Williams, Mark L

    2007-01-01

    Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.

  18. Reevaluation of tephra volumes for the 1982 eruption of El Chichón volcano, Mexico

    NASA Astrophysics Data System (ADS)

    Nathenson, M.; Fierstein, J.

    2012-12-01

    Reevaluation of tephra volumes for the 1982 eruption of El Chichón volcano, Mexico Manuel Nathenson and Judy Fierstein U.S. Geological Survey, 345 Middlefield Road MS-910, Menlo Park, CA 94025 In a recent numerical simulation of tephra transport and deposition for the 1982 eruption, Bonasia et al. (2012) used masses for the tephra layers (A-1, B, and C) based on the volume data of Carey and Sigurdsson (1986) calculated by the methodology of Rose et al. (1973). For reasons not clear, using the same methodology we obtained volumes for layers A-1 and B much less than those previously reported. For example, for layer A-1, Carey and Sigurdsson (1986) reported a volume of 0.60 km3, whereas we obtain a volume of 0.23 km3. Moreover, applying the more recent methodology of tephra-volume calculation (Pyle, 1989; Fierstein and Nathenson, 1992) and using the isopach maps in Carey and Sigurdsson (1986), we calculate a total tephra volume of 0.52 km3 (A-1, 0.135; B, 0.125; and C, 0.26 km3). In contrast, Carey and Sigurdsson (1986) report a much larger total volume of 2.19 km3. Such disagreement not only reflects the differing methodologies, but we propose that the volumes calculated with the methodology of Pyle and of Fierstein and Nathenson—involving the use of straight lines on a log thickness versus square root of area plot—better represent the actual fall deposits. After measuring the areas for the isomass contours for the HAZMAP and FALL3D simulations in Bonasia et al. (2012), we applied the Pyle-Fierstein and Nathenson methodology to calculate the tephra masses deposited on the ground. These masses from five of the simulations range from 70% to 110% of those reported by Carey and Sigurdsson (1986), whereas that for layer B in the HAZMAP calculation is 160%. In the Bonasia et al. (2012) study, the mass erupted by the volcano is a critical input used in the simulation to produce an ash cloud that deposits tephra on the ground. Masses on the ground (as calculated by us) for five of the simulations range from 20% to 46% of the masses used as simulation inputs, whereas that for layer B in the HAZMAP calculation is 74%. It is not clear why the percentages are so variable, nor why the output volumes are such small percentages of the input erupted mass. From our volume calculations, the masses on the ground from the simulations are factors of 2.3 to 10 times what was actually deposited. Given this finding from our reevaluation of volumes, the simulations appear to overestimate the hazards from eruptions of sizes that occurred at El Chichón. Bonasia, R., A. Costa, A. Folch, G. Macedonio, and L. Capra, (2012), Numerical simulation of tephra transport and deposition of the 1982 El Chichón eruption and implications for hazard assessment, J. Volc. Geotherm. Res., 231-232, 39-49. Carey, S. and H. Sigurdsson, (1986), The 1982 eruptions of El Chichon volcano, Mexico: Observations and numerical modelling of tephra-fall distribution, Bull. Volcanol., 48, 127-141. Fierstein, J., and M. Nathenson, (1992), Another look at the calculation of fallout tephra volumes, Bull. Volcanol., 54, 156-167. Pyle, D.M., (1989), The thickness, volume and grainsize of tephra fall deposits, Bull. Volcanol., 51, 1-15. Rose, W.I., Jr., S. Bonis, R.E. Stoiber, M. Keller, and T. Bickford, (1973), Studies of volcanic ash from two recent Central American eruptions, Bull. Volcanol., 37, 338-364.
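    For reference, the exponential-thinning volume calculation in the Pyle (1989) and Fierstein and Nathenson (1992) approach amounts to fitting a straight line on a plot of log thickness against the square root of isopach area and integrating, giving V = 2*T0/k^2 for a single segment. The sketch below illustrates that arithmetic with made-up isopach data, not the El Chichón measurements.

    ```python
    # Single-segment exponential thinning: T(A) = T0 * exp(-k * sqrt(A)),  V = 2*T0/k**2
    import numpy as np

    thickness_m = np.array([0.65, 0.42, 0.22, 0.09, 0.02])       # isopach thicknesses (made up)
    area_km2    = np.array([30.0, 120.0, 350.0, 900.0, 2500.0])  # areas enclosed by each isopach

    x = np.sqrt(area_km2)                                # km
    slope, intercept = np.polyfit(x, np.log(thickness_m), 1)
    k, T0 = -slope, np.exp(intercept)                    # thinning rate (1/km), extrapolated maximum thickness (m)

    volume_km3 = 2.0 * (T0 / 1000.0) / k**2              # convert T0 from m to km
    print(round(float(volume_km3), 3))
    ```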

  19. A Physics-Based Engineering Methodology for Calculating Soft Error Rates of Bulk CMOS and SiGe Heterojunction Bipolar Transistor Integrated Circuits

    NASA Astrophysics Data System (ADS)

    Fulkerson, David E.

    2010-02-01

    This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new and simple methodology handles the problem with ease by simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross section vs. frequency behavior and other subtle effects are also accurately predicted.

  20. [CALCULATION OF THE PROBABILITY OF METALS INPUT INTO AN ORGANISM WITH DRINKING POTABLE WATERS].

    PubMed

    Tunakova, Yu A; Fayzullin, R I; Valiev, V S

    2015-01-01

    The work was performed within the framework of the State program for improving the competitiveness of Kazan (Volga) Federal University among the world's leading research and education centers, with subsidies allocated to Kazan Federal University for public tasks in the field of scientific research. The current methodological recommendations, "Guide for assessing the risk to public health under the influence of chemicals that pollute the environment" (P 2.1.10.1920-04), regulate the determination of quantitative and/or qualitative characteristics of the harmful effects on human health from exposure to environmental factors. We propose to complement the methodological approaches presented in P 2.1.10.1920-04 with an estimate of the probability of pollutant intake into the body with drinking water, a probability that increases with the degree to which the actual concentrations of the substances exceed background concentrations. The paper proposes a method for calculating the probability that concentrations of metal cations in samples of drinking water consumed by the population exceed background levels; the samples were collected at the end points of consumption, in houses and apartments, to account for secondary pollution from water pipelines and distribution paths. The research was performed for Kazan, divided into zones, as an example. The probabilities were calculated using Bayes' theorem.
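    A toy illustration of the Bayes' theorem step described above, using hypothetical per-zone sample counts: the probability of exceeding background in a given zone follows from the overall exceedance rate and the zone likelihoods. The zone names and counts are invented for illustration, not the Kazan data.

    ```python
    # Bayes: P(exceedance | zone) = P(zone | exceedance) * P(exceedance) / P(zone)
    zones = {
        "zone_A": {"n_samples": 120, "n_exceed": 30},
        "zone_B": {"n_samples": 200, "n_exceed": 20},
        "zone_C": {"n_samples": 80,  "n_exceed": 40},
    }

    total = sum(z["n_samples"] for z in zones.values())
    total_exceed = sum(z["n_exceed"] for z in zones.values())
    p_exceed = total_exceed / total                           # prior probability of exceedance

    for name, z in zones.items():
        p_zone = z["n_samples"] / total                       # P(zone)
        p_zone_given_exceed = z["n_exceed"] / total_exceed    # P(zone | exceedance)
        p_exceed_given_zone = p_zone_given_exceed * p_exceed / p_zone
        print(name, round(p_exceed_given_zone, 3))
    ```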

  1. Assessment of effectiveness of geologic isolation systems. CIRMIS data system. Volume 3. Generator routines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedrichs, D.R.; Argo, R.S.

    The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. The various input parameters required in the analysis are compiled in data systems. The data are organized and prepared by various input subroutines for utilization by the hydraulic and transport codes. The hydrologic models simulate the groundwater flow systems and provide water flow directions, rates, and velocities as inputs to the transport models. Outputs from the transport models are basically graphs of radionuclide concentration in the groundwater plotted against time. After dilution in the receiving surface-water body (e.g., lake, river, bay), these data are the input source terms for the dose models, if dose assessments are required. The dose models calculate radiation dose to individuals and populations. CIRMIS (Comprehensive Information Retrieval and Model Input Sequence) Data System, a storage and retrieval system for model input and output data, including graphical interpretation and display, is described. This is the third of four volumes of the description of the CIRMIS Data System.

  2. Assessment of effectiveness of geologic isolation systems. CIRMIS data system. Volume 1. Initialization, operation, and documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedrichs, D.R.

    1980-01-01

    The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. The various input parameters required in the analysis are compiled in data systems. The data are organized and prepared by various input subroutines for use by the hydrologic and transport codes. The hydrologic models simulate the groundwater flow systems and provide water flow directions, rates, and velocities as inputs to the transport models. Outputs from the transport models are basically graphs of radionuclide concentration in the groundwater plotted against time. After dilution in the receiving surface-water body (e.g., lake, river, bay), these data are the input source terms for the dose models, if dose assessments are required. The dose models calculate radiation dose to individuals and populations. CIRMIS (Comprehensive Information Retrieval and Model Input Sequence) Data System, a storage and retrieval system for model input and output data, including graphical interpretation and display, is described. This is the first of four volumes of the description of the CIRMIS Data System.

  3. Electronic structure, dielectric response, and surface charge distribution of RGD (1FUV) peptide.

    PubMed

    Adhikari, Puja; Wen, Amy M; French, Roger H; Parsegian, V Adrian; Steinmetz, Nicole F; Podgornik, Rudolf; Ching, Wai-Yim

    2014-07-08

    Long and short range molecular interactions govern molecular recognition and self-assembly of biological macromolecules. Microscopic parameters in the theories of these molecular interactions are either phenomenological or need to be calculated within a microscopic theory. We report a unified methodology for the ab initio quantum mechanical (QM) calculation that yields all the microscopic parameters, namely the partial charges as well as the frequency-dependent dielectric response function, that can then be taken as input for macroscopic theories of electrostatic, polar, and van der Waals-London dispersion intermolecular forces. We apply this methodology to obtain the electronic structure of the cyclic tripeptide RGD-4C (1FUV). This ab initio unified methodology yields the relevant parameters entering the long range interactions of biological macromolecules, providing accurate data for the partial charge distribution and the frequency-dependent dielectric response function of this peptide. These microscopic parameters determine the range and strength of the intricate intermolecular interactions between potential docking sites of the RGD-4C ligand and its integrin receptor.

  4. Determining the ventilation and aerosol deposition rates from routine indoor-air measurements.

    PubMed

    Halios, Christos H; Helmis, Costas G; Deligianni, Katerina; Vratolis, Sterios; Eleftheriadis, Konstantinos

    2014-01-01

    Measurement of air exchange rate provides critical information in energy and indoor-air quality studies. Continuous measurement of ventilation rates is a rather costly exercise and requires specific instrumentation. In this work, an alternative methodology is proposed and tested, where the air exchange rate is calculated by utilizing routine indoor and outdoor measurements of a common pollutant such as SO2, and the uncertainties induced in the calculations are analytically determined. The application of this methodology is demonstrated for three residential microenvironments in Athens, Greece, and the results are also compared against ventilation rates calculated from differential pressure measurements. The calculated time-resolved ventilation rates were applied to the mass balance equation to estimate the particle loss rate, which was found to agree with literature values at an average of 0.50 h(-1). The proposed method was further evaluated by applying a mass balance numerical model for the calculation of the indoor aerosol number concentrations, using the previously calculated ventilation rate, the outdoor measured number concentrations and the particle loss rates as input values. The model results for the indoor concentrations were found to compare well with the experimentally measured values.
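    A minimal sketch of the single-zone mass balance behind the method: if indoor sources and losses of the tracer gas are neglected (a simplifying assumption made here for brevity; the paper treats the uncertainties explicitly), the air exchange rate can be recovered from routine indoor and outdoor concentration time series. The data below are synthetic.

    ```python
    # Single-zone tracer mass balance: dC_in/dt = lam * (C_out - C_in); estimate lam by least squares.
    import numpy as np

    dt = 600.0                                    # s between routine measurements
    t = np.arange(0.0, 6*3600.0, dt)
    true_lam = 0.5 / 3600.0                       # 0.5 air changes per hour, in 1/s
    C_out = 20.0 + 5.0*np.sin(t / 5000.0)         # synthetic outdoor SO2 (ug/m3)

    C_in = np.zeros_like(t); C_in[0] = 5.0        # generate synthetic indoor "measurements"
    for i in range(1, len(t)):
        C_in[i] = C_in[i-1] + dt*true_lam*(C_out[i-1] - C_in[i-1])

    dC = np.diff(C_in) / dt                       # observed rate of change
    drive = (C_out - C_in)[:-1]                   # concentration difference driving the exchange
    lam_est = np.sum(drive * dC) / np.sum(drive**2)
    print(round(lam_est * 3600.0, 3), "air changes per hour")
    ```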

  5. Methodology for modeling the devolatilization of refuse-derived fuel from thermogravimetric analysis of municipal solid waste components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsky, K.J.; Miller, D.L.; Cernansky, N.P.

    1994-09-01

    A methodology was introduced for modeling the devolatilization characteristics of refuse-derived fuel (RDF) in terms of temperature-dependent weight loss. The basic premise of the methodology is that RDF is modeled as a combination of select municipal solid waste (MSW) components. Kinetic parameters are derived for each component from thermogravimetric analyzer (TGA) data measured at a specific set of conditions. These experimentally derived parameters, along with user-derived parameters, are input to the model equations to calculate thermograms for the components. The component thermograms are summed to create a composite thermogram that is an estimate of the devolatilization for the as-modeled RDF. The methodology has several attractive features as a thermal analysis tool for waste fuels. 7 refs., 10 figs., 3 tabs.
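    The sketch below illustrates the component-summation premise with a simple first-order Arrhenius devolatilization model integrated over a constant heating rate; the kinetic parameters, mass fractions, and component names are hypothetical placeholders, not the TGA-derived values from the paper.

    ```python
    # Component thermograms from d(alpha)/dT = (A/beta) * exp(-E/(R*T)) * (1 - alpha),
    # summed with mass fractions to form a composite RDF thermogram.
    import numpy as np

    R, beta = 8.314, 10.0/60.0                    # gas constant (J/mol K), heating rate (K/s)
    T = np.arange(400.0, 900.0, 1.0)              # temperature grid (K)

    components = {                                # name: (A [1/s], E [J/mol], mass fraction)
        "paper":    (1.0e10, 1.4e5, 0.5),
        "plastics": (1.0e12, 1.8e5, 0.3),
        "yard":     (1.0e9,  1.3e5, 0.2),
    }

    composite_mass = np.zeros_like(T)
    for name, (A, E, frac) in components.items():
        alpha = np.zeros_like(T)
        for i in range(1, len(T)):                # simple Euler integration over temperature
            dT = T[i] - T[i-1]
            alpha[i] = min(1.0, alpha[i-1] + dT*(A/beta)*np.exp(-E/(R*T[i-1]))*(1.0 - alpha[i-1]))
        composite_mass += frac * (1.0 - alpha)    # remaining mass fraction of this component

    print(np.round(composite_mass[::100], 3))     # composite thermogram samples
    ```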

  6. Revalidation studies of Mark 16 experiments: J70

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S.Y.

    1993-10-25

    The MGBS-TGAL combination of the J70 criticality modules was validated for Mark 16 lattices by H. K. Clark as reported in DPST-83-1025. Unfortunately, the records of the calculations reported cannot be retrieved and the descriptions of the modeling used are not fully provided in DPST-83-1025. The report does not describe in detail how to model the experiments and how to set up the input. The computer output for the cases reported in the memorandum cannot be located in files. The MGBS-TGAL calculations reported in DPST-83-1025 have been independently reperformed to provide retrievable record copies of the calculations, to provide a detailed description and discussion of the methodology used, and to serve as a training exercise for a novice criticality safety engineer. The current results reproduce Clark's reported results to within about 0.01% or better. A procedure to perform these and similar calculations is given in this report, with explanation of the methodology choices provided. Copies of the computer output have been made via microfiche and will be maintained in APG files.

  7. Pressure Ratio to Thermal Environments

    NASA Technical Reports Server (NTRS)

    Lopez, Pedro; Wang, Winston

    2012-01-01

    The Pressure Ratio to Thermal Environments (PRatTlE.pl) program is a Perl-language code that estimates heating at requested body point locations by scaling the heating at a reference location by a pressure ratio factor. The pressure ratio factor is the ratio of the local pressure at the reference point and the requested point from CFD (computational fluid dynamics) solutions. This innovation provides pressure ratio-based thermal environments in an automated and traceable method. Previously, the pressure ratio methodology was implemented via a Microsoft Excel spreadsheet and macro scripts. PRatTlE is able to calculate heating environments for 150 body points in less than two minutes. PRatTlE is coded in the Perl programming language, is command-line-driven, and has been successfully executed on both the HP and Linux platforms. It supports multiple concurrent runs. PRatTlE contains error trapping and input file format verification, which allows clear visibility into the input data structure and intermediate calculations.

  8. Estimation of the lower and upper bounds on the probability of failure using subset simulation and random set theory

    NASA Astrophysics Data System (ADS)

    Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.

    2018-02-01

    Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.

  9. Probability calculations for three-part mineral resource assessments

    USGS Publications Warehouse

    Ellefsen, Karl J.

    2017-06-27

    Three-part mineral resource assessment is a methodology for predicting, in a specified geographic region, both the number of undiscovered mineral deposits and the amount of mineral resources in those deposits. These predictions are based on probability calculations that are performed with computer software that is newly implemented. Compared to the previous implementation, the new implementation includes new features for the probability calculations themselves and for checks of those calculations. The development of the new implementation led to a new understanding of the probability calculations, namely the assumptions inherent in the probability calculations. Several assumptions strongly affect the mineral resource predictions, so it is crucial that they are checked during an assessment. The evaluation of the new implementation leads to new findings about the probability calculations, namely findings regarding the precision of the computations, the computation time, and the sensitivity of the calculation results to the input.

  10. Analysis of the power flow in nonlinear oscillators driven by random excitation using the first Wiener kernel

    NASA Astrophysics Data System (ADS)

    Hawes, D. H.; Langley, R. S.

    2018-01-01

    Random excitation of mechanical systems occurs in a wide variety of structures and, in some applications, calculation of the power dissipated by such a system will be of interest. In this paper, using the Wiener series, a general methodology is developed for calculating the power dissipated by a general nonlinear multi-degree-of-freedom oscillatory system excited by random Gaussian base motion of any spectrum. The Wiener series method is most commonly applied to systems with white noise inputs, but can be extended to encompass a general non-white input. From the extended series a simple expression for the power dissipated can be derived in terms of the first term, or kernel, of the series and the spectrum of the input. Calculation of the first kernel can be performed either via numerical simulations or from experimental data and a useful property of the kernel, namely that the integral over its frequency domain representation is proportional to the oscillating mass, is derived. The resulting equations offer a simple conceptual analysis of the power flow in nonlinear randomly excited systems and hence assist the design of any system where power dissipation is a consideration. The results are validated both numerically and experimentally using a base-excited cantilever beam with a nonlinear restoring force produced by magnets.

  11. Global Impact Estimation of ISO 50001 Energy Management System for Industrial and Service Sectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghajanzadeh, Arian; Therkelsen, Peter L.; Rao, Prakash

    A methodology has been developed to determine the impacts of ISO 50001 Energy Management System (EnMS) at a region or country level. The impacts of ISO 50001 EnMS include energy, CO2 emissions, and cost savings. This internationally recognized and transparent methodology has been embodied in a user friendly Microsoft Excel® based tool called ISO 50001 Impact Estimator Tool (IET 50001). However, the tool inputs are critical in order to get accurate and defensible results. This report is intended to document the data sources used and assumptions made to calculate the global impact of ISO 50001 EnMS.

  12. Electronic Structure, Dielectric Response, and Surface Charge Distribution of RGD (1FUV) Peptide

    PubMed Central

    Adhikari, Puja; Wen, Amy M.; French, Roger H.; Parsegian, V. Adrian; Steinmetz, Nicole F.; Podgornik, Rudolf; Ching, Wai-Yim

    2014-01-01

    Long and short range molecular interactions govern molecular recognition and self-assembly of biological macromolecules. Microscopic parameters in the theories of these molecular interactions are either phenomenological or need to be calculated within a microscopic theory. We report a unified methodology for the ab initio quantum mechanical (QM) calculation that yields all the microscopic parameters, namely the partial charges as well as the frequency-dependent dielectric response function, that can then be taken as input for macroscopic theories of electrostatic, polar, and van der Waals-London dispersion intermolecular forces. We apply this methodology to obtain the electronic structure of the cyclic tripeptide RGD-4C (1FUV). This ab initio unified methodology yields the relevant parameters entering the long range interactions of biological macromolecules, providing accurate data for the partial charge distribution and the frequency-dependent dielectric response function of this peptide. These microscopic parameters determine the range and strength of the intricate intermolecular interactions between potential docking sites of the RGD-4C ligand and its integrin receptor. PMID:25001596

  13. Thermal and orbital analysis of Earth monitoring Sun-synchronous space experiments

    NASA Technical Reports Server (NTRS)

    Killough, Brian D.

    1990-01-01

    The fundamentals of an Earth monitoring Sun-synchronous orbit are presented. A Sun-synchronous Orbit Analysis Program (SOAP) was developed to calculate orbital parameters for an entire year. The output from this program provides the required input data for the TRASYS thermal radiation computer code, which in turn computes the infrared, solar and Earth albedo heat fluxes incident on a space experiment. Direct incident heat fluxes can be used as input to a generalized thermal analyzer program to size radiators and predict instrument operating temperatures. The SOAP computer code and its application to the thermal analysis methodology presented should prove useful to the thermal engineer during the design phases of Earth monitoring Sun-synchronous space experiments.
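    As a brief illustration of the orbital relationship a tool such as SOAP exploits, the sketch below computes the inclination required for Sun-synchronous nodal precession at a given altitude from the standard J2 regression formula; the constants are standard values and the code is not the SOAP program itself.

    ```python
    # Sun-synchronous condition: J2 nodal regression matches the Sun's apparent mean motion.
    import math

    MU = 398600.4418          # km^3/s^2
    RE = 6378.137             # km
    J2 = 1.08263e-3
    SUN_RATE = 2.0*math.pi / (365.2422*86400.0)    # required nodal precession, rad/s

    def sun_synchronous_inclination(altitude_km, ecc=0.0):
        a = RE + altitude_km
        n = math.sqrt(MU / a**3)                   # mean motion, rad/s
        p = a * (1.0 - ecc**2)                     # semi-latus rectum, km
        # dOmega/dt = -1.5 * n * J2 * (RE/p)^2 * cos(i)  ->  solve for i
        cos_i = -SUN_RATE * (2.0/3.0) * (p/RE)**2 / (n * J2)
        return math.degrees(math.acos(cos_i))

    print(round(sun_synchronous_inclination(705.0), 2))   # about 98.2 deg for a 705 km orbit
    ```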

  14. The work environment disability-adjusted life year for use with life cycle assessment: a methodological approach.

    PubMed

    Scanlon, Kelly A; Gray, George M; Francis, Royce A; Lloyd, Shannon M; LaPuma, Peter

    2013-03-06

    Life cycle assessment (LCA) is a systems-based method used to determine potential impacts to the environment associated with a product throughout its life cycle. Conclusions from LCA studies can be applied to support decisions regarding product design or public policy, therefore, all relevant inputs (e.g., raw materials, energy) and outputs (e.g., emissions, waste) to the product system should be evaluated to estimate impacts. Currently, work-related impacts are not routinely considered in LCA. The objectives of this paper are: 1) introduce the work environment disability-adjusted life year (WE-DALY), one portion of a characterization factor used to express the magnitude of impacts to human health attributable to work-related exposures to workplace hazards; 2) outline the methods for calculating the WE-DALY; 3) demonstrate the calculation; and 4) highlight strengths and weaknesses of the methodological approach. The concept of the WE-DALY and the methodological approach to its calculation is grounded in the World Health Organization's disability-adjusted life year (DALY). Like the DALY, the WE-DALY equation considers the years of life lost due to premature mortality and the years of life lived with disability outcomes to estimate the total number of years of healthy life lost in a population. The equation requires input in the form of the number of fatal and nonfatal injuries and illnesses that occur in the industries relevant to the product system evaluated in the LCA study, the age of the worker at the time of the fatal or nonfatal injury or illness, the severity of the injury or illness, and the duration of time lived with the outcomes of the injury or illness. The methodological approach for the WE-DALY requires data from various sources, multi-step instructions to determine each variable used in the WE-DALY equation, and assumptions based on professional opinion. Results support the use of the WE-DALY in a characterization factor in LCA. Integrating occupational health into LCA studies will provide opportunities to prevent shifting of impacts between the work environment and the environment external to the workplace and co-optimize human health, to include worker health, and environmental health.
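    A minimal sketch of the DALY-style arithmetic the WE-DALY builds on (years of life lost plus years lived with disability); all counts, remaining life expectancies, disability weights, and durations below are hypothetical, not values from the paper or WHO tables.

    ```python
    # WE-DALY-style total = YLL + YLD for the industries in a product system's life cycle.
    def yll(fatalities, life_expectancy_remaining):
        return fatalities * life_expectancy_remaining          # years of life lost

    def yld(cases, disability_weight, duration_years):
        return cases * disability_weight * duration_years      # years lived with disability

    we_daly = (yll(fatalities=2, life_expectancy_remaining=35.0)
               + yld(cases=150, disability_weight=0.10, duration_years=0.5)
               + yld(cases=12,  disability_weight=0.35, duration_years=20.0))
    print(we_daly)   # healthy life years lost attributable to the work environment
    ```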

  15. Probability-based methodology for buckling investigation of sandwich composite shells with and without cut-outs

    NASA Astrophysics Data System (ADS)

    Alfano, M.; Bisagni, C.

    2017-01-01

    The objective of the running EU project DESICOS (New Robust DESign Guideline for Imperfection Sensitive COmposite Launcher Structures) is to formulate an improved shell design methodology in order to meet the demand of the aerospace industry for lighter structures. Within the project, this article discusses a probability-based methodology developed at Politecnico di Milano. It is based on the combination of the Stress-Strength Interference Method and the Latin Hypercube Method with the aim of predicting the buckling response of three sandwich composite cylindrical shells, assuming a loading condition of pure compression. The three shells are made of the same material, but have different stacking sequences and geometric dimensions. One of them presents three circular cut-outs. Different types of input imperfections, treated as random variables, are taken into account independently and in combination: variability in longitudinal Young's modulus, ply misalignment, geometric imperfections, and boundary imperfections. The methodology enables a first assessment of the structural reliability of the shells through the calculation of a probabilistic buckling factor for a specified level of probability. The factor depends highly on the reliability level, on the number of adopted samples, and on the assumptions made in modeling the input imperfections. The main advantage of the developed procedure is its versatility, as it can be applied to the buckling analysis of laminated composite shells and sandwich composite shells including different types of imperfections.

  16. National Hospital Input Price Index

    PubMed Central

    Freeland, Mark S.; Anderson, Gerard; Schendler, Carol Ellen

    1979-01-01

    The national community hospital input price index presented here isolates the effects of prices of goods and services required to produce hospital care and measures the average percent change in prices for a fixed market basket of hospital inputs. Using the methodology described in this article, weights for various expenditure categories were estimated and proxy price variables associated with each were selected. The index is calculated for the historical period 1970 through 1978 and forecast for 1979 through 1981. During the historical period, the input price index increased an average of 8.0 percent a year, compared with an average rate of increase of 6.6 percent for overall consumer prices. For the period 1979 through 1981, the average annual increase is forecast at between 8.5 and 9.0 percent. Using the index to deflate growth in expenses, the level of real growth in expenditures per inpatient day (net service intensity growth) averaged 4.5 percent per year with considerable annual variation related to government and hospital industry policies. PMID:10309052
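    The fixed-market-basket arithmetic described above amounts to a Laspeyres-type index: base-period expenditure weights applied to proxy price relatives for each input category. The weights and price relatives below are invented for illustration, not the published hospital input price index series.

    ```python
    # Fixed-weight input price index: I = 100 * sum_i( w_i * p_i,t / p_i,0 )
    weights = {"wages": 0.55, "benefits": 0.08, "energy": 0.05, "supplies": 0.22, "capital": 0.10}
    price_relative = {"wages": 1.082, "benefits": 1.095, "energy": 1.210, "supplies": 1.064, "capital": 1.071}

    index = 100.0 * sum(w * price_relative[cat] for cat, w in weights.items())
    print(round(index, 1), round(index - 100.0, 1))   # index level and percent change from the base year
    ```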

  17. National hospital input price index.

    PubMed

    Freeland, M S; Anderson, G; Schendler, C E

    1979-01-01

    The national community hospital input price index presented here isolates the effects of prices of goods and services required to produce hospital care and measures the average percent change in prices for a fixed market basket of hospital inputs. Using the methodology described in this article, weights for various expenditure categories were estimated and proxy price variables associated with each were selected. The index is calculated for the historical period 1970 through 1978 and forecast for 1979 through 1981. During the historical period, the input price index increased an average of 8.0 percent a year, compared with an average rate of increase of 6.6 percent for overall consumer prices. For the period 1979 through 1981, the average annual increase is forecast at between 8.5 and 9.0 per cent. Using the index to deflate growth in expenses, the level of real growth in expenditures per inpatient day (net service intensity growth) averaged 4.5 percent per year with considerable annual variation related to government and hospital industry policies.

  18. CIRMIS Data system. Volume 2. Program listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedrichs, D.R.

    1980-01-01

    The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. The various input parameters required in the analysis are compiled in data systems. The data are organized and prepared by various input subroutines for utilization by the hydraulic and transport codes. The hydrologic models simulate the groundwater flow systems and provide water flow directions, rates, and velocities as inputs to the transport models. Outputs from the transport models are basically graphs of radionuclide concentration in the groundwater plotted against time. After dilution in the receiving surface-water body (e.g., lake, river, bay), these data are the input source terms for the dose models, if dose assessments are required. The dose models calculate radiation dose to individuals and populations. CIRMIS (Comprehensive Information Retrieval and Model Input Sequence) Data System is a storage and retrieval system for model input and output data, including graphical interpretation and display. This is the second of four volumes of the description of the CIRMIS Data System.

  19. Assessment of effectiveness of geologic isolation systems. CIRMIS data system. Volume 4. Driller's logs, stratigraphic cross section and utility routines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedrichs, D.R.

    1980-01-01

    The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. The various input parameters required in the analysis are compiled in data systems. The data are organized and prepared by various input subroutines for use by the hydrologic and transport codes. The hydrologic models simulate the groundwater flow systems and provide water flow directions, rates, and velocities as inputs to the transport models. Outputs from the transport models are basically graphs of radionuclide concentration in the groundwater plotted against time. After dilution in the receiving surface-water body (e.g., lake, river, bay), these data are the input source terms for the dose models, if dose assessments are required. The dose models calculate radiation dose to individuals and populations. CIRMIS (Comprehensive Information Retrieval and Model Input Sequence) Data System is a storage and retrieval system for model input and output data, including graphical interpretation and display. This is the fourth of four volumes of the description of the CIRMIS Data System.

  20. Modelling of Rail Vehicles and Track for Calculation of Ground-Vibration Transmission Into Buildings

    NASA Astrophysics Data System (ADS)

    Hunt, H. E. M.

    1996-05-01

    A methodology for the calculation of vibration transmission from railways into buildings is presented. The method permits existing models of railway vehicles and track to be incorporated and it has application to any model of vibration transmission through the ground. Special attention is paid to the relative phasing between adjacent axle-force inputs to the rail, so that vibration transmission may be calculated as a random process. The vehicle-track model is used in conjunction with a building model of infinite length. The track and building are infinite and parallel to each other and the forces applied are statistically stationary in space, so that vibration levels at any two points along the building are the same. The methodology is two-dimensional for the purpose of application of random process theory, but fully three-dimensional for calculation of vibration transmission from the track and through the ground into the foundations of the building. The computational efficiency of the method will interest engineers faced with the task of reducing vibration levels in buildings. It is possible to assess the relative merits of using rail pads, under-sleeper pads, ballast mats, floating-slab track or base isolation for particular applications.

  1. The cost of energy from utility-owned solar electric systems. A required revenue methodology for ERDA/EPRI evaluations

    NASA Technical Reports Server (NTRS)

    1976-01-01

    This methodology calculates the electric energy busbar cost from a utility-owned solar electric system. This approach is applicable to both publicly- and privately-owned utilities. Busbar cost represents the minimum price per unit of energy consistent with producing system-resultant revenues equal to the sum of system-resultant costs. This equality is expressed in present value terms, where the discount rate used reflects the rate of return required on invested capital. Major input variables describe the output capabilities and capital cost of the energy system, the cash flows required for system operation and maintenance, and the financial structure and tax environment of the utility.
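    A minimal required-revenue sketch: the busbar cost is the energy price at which the present value of revenues equals the present value of costs over the system life. The capital cost, O&M, annual output, discount rate, and lifetime below are placeholder numbers, and the single-discount-rate levelization is a simplification of the full ERDA/EPRI required-revenue treatment.

    ```python
    # Levelized busbar energy cost: PV(costs) / PV(energy delivered).
    capital = 50e6                      # $ overnight capital cost (hypothetical)
    om_per_year = 1.5e6                 # $ annual operation and maintenance
    energy_per_year = 120e6             # kWh delivered per year
    r, years = 0.08, 30                 # required rate of return, system life

    def pv(annual, rate, n):
        return sum(annual / (1.0 + rate)**t for t in range(1, n + 1))

    busbar_cost = (capital + pv(om_per_year, r, years)) / pv(energy_per_year, r, years)
    print(round(busbar_cost, 4), "$/kWh")
    ```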

  2. Using State Estimation Residuals to Detect Abnormal SCADA Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jian; Chen, Yousu; Huang, Zhenyu

    2010-06-14

    Detection of manipulated supervisory control and data acquisition (SCADA) data is critically important for the safe and secure operation of modern power systems. In this paper, a methodology of detecting manipulated SCADA data based on state estimation residuals is presented. A framework of the proposed methodology is described. Instead of using original SCADA measurements as the bad data sources, the residuals calculated based on the results of the state estimator are used as the input for the outlier detection process. The BACON algorithm is applied to detect outliers in the state estimation residuals. The IEEE 118-bus system is used as a test case to evaluate the effectiveness of the proposed methodology. The accuracy of the BACON method is compared with that of the 3-σ method for the simulated SCADA measurements and residuals.
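    For context, the sketch below applies the simple 3-σ screen, which the paper uses as a comparison baseline, to synthetic state estimation residuals; the BACON algorithm itself is an iterative, more robust outlier nominator and is not reproduced here.

    ```python
    # 3-sigma screening of state estimation residuals r = z - h(x_hat) for suspected bad data.
    import numpy as np

    rng = np.random.default_rng(7)
    residuals = rng.normal(0.0, 0.01, size=500)    # well-behaved residuals (synthetic)
    residuals[[25, 310]] = [0.12, -0.09]           # injected "manipulated" measurements

    mu, sigma = residuals.mean(), residuals.std()
    flagged = np.where(np.abs(residuals - mu) > 3.0 * sigma)[0]
    print(flagged)                                  # indices of suspected bad measurements
    ```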

  3. Assessment of Methodological Quality of Economic Evaluations in Belgian Drug Reimbursement Applications

    PubMed Central

    Simoens, Steven

    2013-01-01

    Objectives This paper aims to assess the methodological quality of economic evaluations included in Belgian reimbursement applications for Class 1 drugs. Materials and Methods For 19 reimbursement applications submitted during 2011 and Spring 2012, a descriptive analysis assessed the methodological quality of the economic evaluation, evaluated the assessment of that economic evaluation by the Drug Reimbursement Committee and the response to that assessment by the company. Compliance with methodological guidelines issued by the Belgian Healthcare Knowledge Centre was assessed using a detailed checklist of 23 methodological items. The rate of compliance was calculated based on the number of economic evaluations for which the item was applicable. Results Economic evaluations tended to comply with guidelines regarding perspective, target population, subgroup analyses, comparator, use of comparative clinical data and final outcome measures, calculation of costs, incremental analysis, discounting and time horizon. However, more attention needs to be paid to the description of limitations of indirect comparisons, the choice of an appropriate analytic technique, the expression of unit costs in values for the current year, the estimation and valuation of outcomes, the presentation of results of sensitivity analyses, and testing the face validity of model inputs and outputs. Also, a large variation was observed in the scope and depth of the quality assessment by the Drug Reimbursement Committee. Conclusions Although general guidelines exist, pharmaceutical companies and the Drug Reimbursement Committee would benefit from the existence of a more detailed checklist of methodological items that need to be reported in an economic evaluation. PMID:24386474

  4. Assessment of methodological quality of economic evaluations in belgian drug reimbursement applications.

    PubMed

    Simoens, Steven

    2013-01-01

    This paper aims to assess the methodological quality of economic evaluations included in Belgian reimbursement applications for Class 1 drugs. For 19 reimbursement applications submitted during 2011 and Spring 2012, a descriptive analysis assessed the methodological quality of the economic evaluation, evaluated the assessment of that economic evaluation by the Drug Reimbursement Committee and the response to that assessment by the company. Compliance with methodological guidelines issued by the Belgian Healthcare Knowledge Centre was assessed using a detailed checklist of 23 methodological items. The rate of compliance was calculated based on the number of economic evaluations for which the item was applicable. Economic evaluations tended to comply with guidelines regarding perspective, target population, subgroup analyses, comparator, use of comparative clinical data and final outcome measures, calculation of costs, incremental analysis, discounting and time horizon. However, more attention needs to be paid to the description of limitations of indirect comparisons, the choice of an appropriate analytic technique, the expression of unit costs in values for the current year, the estimation and valuation of outcomes, the presentation of results of sensitivity analyses, and testing the face validity of model inputs and outputs. Also, a large variation was observed in the scope and depth of the quality assessment by the Drug Reimbursement Committee. Although general guidelines exist, pharmaceutical companies and the Drug Reimbursement Committee would benefit from the existence of a more detailed checklist of methodological items that need to be reported in an economic evaluation.

  5. Filtering data from the collaborative initial glaucoma treatment study for improved identification of glaucoma progression.

    PubMed

    Schell, Greggory J; Lavieri, Mariel S; Stein, Joshua D; Musch, David C

    2013-12-21

    Open-angle glaucoma (OAG) is a prevalent, degenerative ocular disease which can lead to blindness without proper clinical management. The tests used to assess disease progression are susceptible to process and measurement noise. The aim of this study was to develop a methodology which accounts for the inherent noise in the data and improves the identification of significant disease progression. Longitudinal observations from the Collaborative Initial Glaucoma Treatment Study (CIGTS) were used to parameterize and validate a Kalman filter model and logistic regression function. The Kalman filter estimates the true value of biomarkers associated with OAG and forecasts future values of these variables. We develop two logistic regression models via generalized estimating equations (GEE) for calculating the probability of experiencing significant OAG progression: one model based on the raw measurements from CIGTS and another model based on the Kalman filter estimates of the CIGTS data. Receiver operating characteristic (ROC) curves and associated area under the ROC curve (AUC) estimates are calculated using cross-fold validation. The logistic regression model developed using Kalman filter estimates as data input achieves higher sensitivity and specificity than the model developed using raw measurements. The mean AUC for the Kalman filter-based model is 0.961 while the mean AUC for the raw measurements model is 0.889. Hence, using the probability function generated via Kalman filter estimates and GEE for logistic regression, we are able to more accurately classify patients and instances as experiencing significant OAG progression. A Kalman filter approach for estimating the true value of OAG biomarkers resulted in data input which improved the accuracy of a logistic regression classification model compared to a model using raw measurements as input. This methodology accounts for process and measurement noise to enable improved discrimination between progression and nonprogression in chronic diseases.
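    A toy sketch of the two-stage idea follows: a scalar Kalman filter smooths a noisy biomarker series, and the filtered values (rather than the raw measurements) feed a logistic-regression classifier. The data, noise levels, and progression labels are synthetic, and plain logistic regression stands in for the paper's GEE formulation.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def kalman_smooth(y, q=0.05, r=1.0):
          """Scalar random-walk Kalman filter returning filtered state estimates."""
          x, p, out = y[0], 1.0, []
          for z in y:
              p = p + q                      # predict
              k = p / (p + r)                # Kalman gain
              x = x + k * (z - x)            # update with measurement z
              p = (1.0 - k) * p
              out.append(x)
          return np.array(out)

      rng = np.random.default_rng(1)
      n = 200
      true_trend = rng.normal(0.0, 0.3, size=n)                   # per-patient signal
      raw = true_trend[:, None] + rng.normal(0.0, 1.0, (n, 10))   # noisy visits
      labels = (true_trend < -0.2).astype(int)                    # synthetic "progression"
      filtered = np.array([kalman_smooth(row) for row in raw])

      for name, X in [("raw", raw), ("kalman", filtered)]:
          clf = LogisticRegression().fit(X, labels)
          print(name, "training accuracy:", clf.score(X, labels))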

  6. Partially pre-calculated weights for the backpropagation learning regime and high accuracy function mapping using continuous input RAM-based sigma-pi nets.

    PubMed

    Neville, R S; Stonham, T J; Glover, R J

    2000-01-01

    In this article we present a methodology that partially pre-calculates the weight updates of the backpropagation learning regime and obtains high accuracy function mapping. The paper shows how to implement neural units in a digital formulation which enables the weights to be quantised to 8-bits and the activations to 9-bits. A novel methodology is introduced to enable the accuracy of sigma-pi units to be increased by expanding their internal state space. We also introduce a novel means of implementing bit-streams in ring memories instead of utilising shift registers. The investigation utilises digital "Higher Order" sigma-pi nodes and studies continuous input RAM-based sigma-pi units. The units are trained with the backpropagation learning regime to learn functions to a high accuracy. The neural model is the sigma-pi unit, which can be implemented in digital microelectronic technology. The ability to perform tasks that require the input of real-valued information is one of the central requirements of any cognitive system that utilises artificial neural network methodologies. In this article we present recent research which investigates a technique that can be used for mapping accurate real-valued functions to RAM-nets. One of our goals was to achieve accuracies of better than 1% for target output functions in the range Y ∈ [0,1]; this is equivalent to an average Mean Square Error (MSE) over all training vectors of 0.0001 or an error modulus of 0.01. We present a development of the sigma-pi node which enables the provision of high accuracy outputs. The sigma-pi neural model was initially developed by Gurney (Learning in nets of structured hypercubes. PhD Thesis, Department of Electrical Engineering, Brunel University, Middlesex, UK, 1989; available as Technical Memo CN/R/144). Gurney's neuron model, the Time Integration Node (TIN), utilises an activation that was derived from a bit-stream. In this article we present a new methodology for storing a sigma-pi node's activations as single values which are averages. In the course of the article we state what we define as a real number; how we represent real numbers and input of continuous values in our neural system. We show how to utilise the bounded quantised site-values (weights) of sigma-pi nodes to make training of these neurocomputing systems simple, using pre-calculated look-up tables to train the nets. In order to meet our accuracy goal, we introduce a means of increasing the bandwidth capability of sigma-pi units by expanding their internal state-space. In our implementation we utilise bit-streams when we calculate the real-valued outputs of the net. To simplify the hardware implementation of bit-streams we present a method of mapping them to RAM-based hardware using 'ring memories'. Finally, we study the sigma-pi units' ability to generalise once they are trained to map real-valued, high accuracy, continuous functions. We use sigma-pi units as they have been shown to have shorter training times than their analogue counterparts and can also overcome some of the drawbacks of semi-linear units (Gurney, 1992. Neural Networks, 5, 289-303).

  7. Application of hybrid life cycle approaches to emerging energy technologies--the case of wind power in the UK.

    PubMed

    Wiedmann, Thomas O; Suh, Sangwon; Feng, Kuishuang; Lenzen, Manfred; Acquaye, Adolf; Scott, Kate; Barrett, John R

    2011-07-01

    Future energy technologies will be key for a successful reduction of man-made greenhouse gas emissions. With demand for electricity projected to increase significantly in the future, climate policy goals of limiting the effects of global atmospheric warming can only be achieved if power generation processes are profoundly decarbonized. Energy models, however, have ignored the fact that upstream emissions are associated with any energy technology. In this work we explore methodological options for hybrid life cycle assessment (hybrid LCA) to account for the indirect greenhouse gas (GHG) emissions of energy technologies using wind power generation in the UK as a case study. We develop and compare two different approaches using a multiregion input-output modeling framework - Input-Output-based Hybrid LCA and Integrated Hybrid LCA. The latter utilizes the full-sized Ecoinvent process database. We discuss significance and reliability of the results and suggest ways to improve the accuracy of the calculations. The comparison of hybrid LCA methodologies provides valuable insight into the availability and robustness of approaches for informing energy and environmental policy.
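    To make the input-output side of a hybrid LCA concrete, the sketch below computes embodied (direct plus upstream) emissions from the Leontief inverse for a three-sector toy economy; the technology matrix, emission intensities, and final demand are invented for illustration and are unrelated to the databases named above.

      import numpy as np

      A = np.array([[0.10, 0.05, 0.02],      # inter-industry requirements (toy values)
                    [0.20, 0.10, 0.10],
                    [0.05, 0.15, 0.05]])
      f = np.array([0.8, 0.3, 1.2])          # direct kg CO2e per unit output, by sector
      y = np.array([0.0, 1.0, 0.0])          # final demand: one unit from sector 2

      L = np.linalg.inv(np.eye(3) - A)       # Leontief inverse (I - A)^-1
      total_output = L @ y                   # total output needed across all sectors
      embodied_emissions = f @ total_output  # direct + indirect (upstream) emissions
      print("embodied emissions:", embodied_emissions, "kg CO2e")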

  8. MCNP HPGe detector benchmark with previously validated Cyltran model.

    PubMed

    Hau, I D; Russ, W R; Bronson, F

    2009-05-01

    An exact copy of the detector model generated for Cyltran was reproduced as an MCNP input file, and the detection efficiency was calculated following the methodology used in previous experimental measurements and simulations of a 280 cm(3) HPGe detector. Below 1000 keV the MCNP data correlated to the Cyltran results within 0.5%, while above this energy the difference between MCNP and Cyltran increased to about 6% at 4800 keV, depending on the electron cut-off energy.

  9. A Cost Model for Testing Unmanned and Autonomous Systems of Systems

    DTIC Science & Technology

    2011-02-01

    those risks. In addition, the fundamental methods presented by Aranha and Borba to include the complexity and sizing of tests for UASoS can be expanded...used as an input for test execution effort estimation models (Aranha & Borba, 2007). Such methodology is very relevant to this work because as a UASoS...calculate the test effort based on the complexity of the SoS. However, Aranha and Borba define test size as the number of steps required to complete

  10. Using State Estimation Residuals to Detect Abnormal SCADA Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jian; Chen, Yousu; Huang, Zhenyu

    2010-04-30

    Detection of abnormal supervisory control and data acquisition (SCADA) data is critically important for safe and secure operation of modern power systems. In this paper, a methodology of abnormal SCADA data detection based on state estimation residuals is presented. Preceded with a brief overview of outlier detection methods and bad SCADA data detection for state estimation, the framework of the proposed methodology is described. Instead of using original SCADA measurements as the bad data sources, the residuals calculated based on the results of the state estimator are used as the input for the outlier detection algorithm. The BACON algorithm is applied to the outlier detection task. The IEEE 118-bus system is used as a test base to evaluate the effectiveness of the proposed methodology. The accuracy of the BACON method is compared with that of the 3-σ method for the simulated SCADA measurements and residuals.

  11. Simulating the x-ray image contrast to setup techniques with desired flaw detectability

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2015-04-01

    The paper provides simulation data of previous work by the author in developing a model for estimating detectability of crack-like flaws in radiography. The methodology is developed to help in implementation of NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing the detector resolution. Applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs in calculating x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source sizes, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. These simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.

  12. Finite element calculation of residual stress in dental restorative material

    NASA Astrophysics Data System (ADS)

    Grassia, Luigi; D'Amore, Alberto

    2012-07-01

    A finite element methodology for residual stress calculation in dental restorative materials is proposed. The material under concern is a multifunctional methacrylate-based composite for dental restorations, activated by visible light. Reaction kinetics, curing shrinkage, and viscoelastic relaxation functions were required as input data for a structural finite element solver. Post-cure effects were considered in order to quantify the residual stresses arising from natural contraction with respect to those attributable to chemical shrinkage. The analysis showed, for a given test case, that the residual stresses frozen in the dental restoration at a uniform temperature of 37°C are of the same order of magnitude as the strength of the dental composite material per se.

  13. Certification Testing Methodology for Composite Structure. Volume 2. Methodology Development

    DTIC Science & Technology

    1986-10-01

    parameter, sample size and fatigue test duration. The required inputs are: 1. Residual strength Weibull shape parameter (ALPR) 2. Fatigue life Weibull shape...INPUT STRENGTH ALPHA') READ(*,*) ALPR ALPRI = 1.0/ALPR WRITE(*,2) 2 FORMAT(2X,'PLEASE INPUT LIFE ALPHA') READ(*,*) ALPL ALPLI = 1.0/ALPL WRITE(*,3)...3 FORMAT(2X,'PLEASE INPUT SAMPLE SIZE') READ(*,*) N AN = N WRITE(*,4) 4 FORMAT(2X,'PLEASE INPUT TEST DURATION') READ(*,*) T RALP = ALPL/ALPR ARGR = 1

  14. Structural/aerodynamic Blade Analyzer (SAB) User's Guide, Version 1.0

    NASA Technical Reports Server (NTRS)

    Morel, M. R.

    1994-01-01

    The structural/aerodynamic blade (SAB) analyzer provides an automated tool for the static-deflection analysis of turbomachinery blades with aerodynamic and rotational loads. A structural code calculates a deflected blade shape using aerodynamic loads input. An aerodynamic solver computes aerodynamic loads using deflected blade shape input. The two programs are iterated automatically until deflections converge. Currently, SAB version 1.0 is interfaced with MSC/NASTRAN to perform the structural analysis and PROP3D to perform the aerodynamic analysis. This document serves as a guide for the operation of the SAB system with specific emphasis on its use at NASA Lewis Research Center (LeRC). This guide consists of six chapters: an introduction which gives a summary of SAB; SAB's methodology, component files, links, and interfaces; input/output file structure; setup and execution of the SAB files on the Cray computers; hints and tips to advise the user; and an example problem demonstrating the SAB process. In addition, four appendices are presented to define the different computer programs used within the SAB analyzer and describe the required input decks.
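    The iteration at the heart of SAB can be pictured with the short sketch below, in which a structural solve and an aerodynamic solve alternate until the deflections converge; both solver functions are hypothetical stand-ins for the MSC/NASTRAN and PROP3D steps that the real system drives.

      # Conceptual fixed-point loop, not the actual SAB implementation.
      def structural_solver(aero_loads):
          # placeholder: returns a deflected shape given aerodynamic loads
          return [0.9 * load for load in aero_loads]

      def aero_solver(deflected_shape):
          # placeholder: returns aerodynamic loads for a deflected blade shape
          return [1.0 + 0.1 * x for x in deflected_shape]

      loads = [1.0, 1.0, 1.0]                       # initial aerodynamic loads
      shape = [0.0, 0.0, 0.0]
      for iteration in range(50):
          new_shape = structural_solver(loads)
          loads = aero_solver(new_shape)
          change = max(abs(a - b) for a, b in zip(new_shape, shape))
          shape = new_shape
          if change < 1e-6:                         # deflections have converged
              print("converged after", iteration + 1, "iterations")
              break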

  15. Automated calculation of surface energy fluxes with high-frequency lake buoy data

    USGS Publications Warehouse

    Woolway, R. Iestyn; Jones, Ian D; Hamilton, David P.; Maberly, Stephen C; Muroaka, Kohji; Read, Jordan S.; Smyth, Robyn L; Winslow, Luke A.

    2015-01-01

    Lake Heat Flux Analyzer is a program used for calculating the surface energy fluxes in lakes according to established literature methodologies. The program was developed in MATLAB for the rapid analysis of high-frequency data from instrumented lake buoys in support of the emerging field of aquatic sensor network science. To calculate the surface energy fluxes, the program requires a number of input variables, such as air and water temperature, relative humidity, wind speed, and short-wave radiation. Available outputs for Lake Heat Flux Analyzer include the surface fluxes of momentum, sensible heat, and latent heat and their corresponding transfer coefficients, as well as incoming and outgoing long-wave radiation. Lake Heat Flux Analyzer is open source and can be used to process data from multiple lakes rapidly. It provides a means of calculating the surface fluxes using a consistent method, thereby facilitating global comparisons of high-frequency data from lake buoys.
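    The bulk-transfer form of the sensible and latent heat flux calculations can be sketched as follows; the transfer coefficients, constants, and meteorological inputs are illustrative and are not the values or algorithms used by Lake Heat Flux Analyzer.

      import math

      rho_air, cp, Lv = 1.2, 1005.0, 2.45e6     # kg m-3, J kg-1 K-1, J kg-1
      C_H = C_E = 1.3e-3                        # assumed bulk transfer coefficients

      def sat_vapour_pressure(t_c):
          """Saturation vapour pressure (Pa), Magnus-type approximation."""
          return 611.2 * math.exp(17.62 * t_c / (243.12 + t_c))

      def specific_humidity(e_pa, p_pa=101325.0):
          return 0.622 * e_pa / (p_pa - 0.378 * e_pa)

      t_water, t_air, rh, wind = 18.0, 15.0, 0.7, 4.0   # degC, degC, fraction, m/s
      q_s = specific_humidity(sat_vapour_pressure(t_water))     # saturated at surface
      q_a = rh * specific_humidity(sat_vapour_pressure(t_air))  # ambient air

      sensible = rho_air * cp * C_H * wind * (t_water - t_air)  # W m-2
      latent = rho_air * Lv * C_E * wind * (q_s - q_a)          # W m-2
      print(f"sensible = {sensible:.1f} W/m2, latent = {latent:.1f} W/m2")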

  16. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Janine; Freestate, David; Riley, Cameron

    2016-11-01

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.

  17. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Janine; Freestate, David; Hobbs, William

    2016-11-21

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.

  18. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, Janine; Freestate, David; Hobbs, William

    2016-06-05

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.

  19. Optimization Under Uncertainty for Electronics Cooling Design

    NASA Astrophysics Data System (ADS)

    Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.

    Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as mean and standard deviation in the output quantities, auxiliary data from an uncertainty based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
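    A minimal Monte Carlo sketch of this kind of uncertainty propagation is shown below: uncertain inputs are sampled, a placeholder thermal model is evaluated, and output statistics plus simple correlation-based sensitivities are reported. The model, distributions, and parameter names are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 5000
      fin_height = rng.normal(0.02, 0.002, n)     # m, uncertain geometry
      flow_rate = rng.normal(0.001, 0.0002, n)    # m3/s, uncertain operating point
      power = rng.normal(50.0, 5.0, n)            # W, uncertain heat load

      def junction_temperature(h, q, p):
          """Placeholder thermal model: ambient plus load times a crude resistance."""
          return 25.0 + p * 0.5 / (h * 1e2 * (q * 1e3) ** 0.8)

      T = junction_temperature(fin_height, flow_rate, power)
      print("mean T = %.1f C, std = %.1f C" % (T.mean(), T.std()))
      for name, x in [("fin_height", fin_height), ("flow_rate", flow_rate),
                      ("power", power)]:
          print(name, "correlation with T: %.2f" % np.corrcoef(x, T)[0, 1])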

  20. The work environment disability-adjusted life year for use with life cycle assessment: a methodological approach

    PubMed Central

    2013-01-01

    Background Life cycle assessment (LCA) is a systems-based method used to determine potential impacts to the environment associated with a product throughout its life cycle. Conclusions from LCA studies can be applied to support decisions regarding product design or public policy, therefore, all relevant inputs (e.g., raw materials, energy) and outputs (e.g., emissions, waste) to the product system should be evaluated to estimate impacts. Currently, work-related impacts are not routinely considered in LCA. The objectives of this paper are: 1) introduce the work environment disability-adjusted life year (WE-DALY), one portion of a characterization factor used to express the magnitude of impacts to human health attributable to work-related exposures to workplace hazards; 2) outline the methods for calculating the WE-DALY; 3) demonstrate the calculation; and 4) highlight strengths and weaknesses of the methodological approach. Methods The concept of the WE-DALY and the methodological approach to its calculation is grounded in the World Health Organization’s disability-adjusted life year (DALY). Like the DALY, the WE-DALY equation considers the years of life lost due to premature mortality and the years of life lived with disability outcomes to estimate the total number of years of healthy life lost in a population. The equation requires input in the form of the number of fatal and nonfatal injuries and illnesses that occur in the industries relevant to the product system evaluated in the LCA study, the age of the worker at the time of the fatal or nonfatal injury or illness, the severity of the injury or illness, and the duration of time lived with the outcomes of the injury or illness. Results The methodological approach for the WE-DALY requires data from various sources, multi-step instructions to determine each variable used in the WE-DALY equation, and assumptions based on professional opinion. Conclusions Results support the use of the WE-DALY in a characterization factor in LCA. Integrating occupational health into LCA studies will provide opportunities to prevent shifting of impacts between the work environment and the environment external to the workplace and co-optimize human health, to include worker health, and environmental health. PMID:23497039
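    A simplified numeric sketch of the DALY-style sum described above follows: years of life lost (YLL) from fatal injuries plus years lived with disability (YLD) from nonfatal injuries. The ages, life expectancy, disability weights, and durations are hypothetical, and the WE-DALY's additional adjustments are not reproduced.

      LIFE_EXPECTANCY = 80.0                             # assumed reference value

      fatal_injuries = [45, 52, 38]                      # ages at death (hypothetical)
      nonfatal_injuries = [                              # (disability weight, years lived with outcome)
          (0.30, 2.0),
          (0.10, 10.0),
          (0.05, 1.5),
      ]

      yll = sum(max(LIFE_EXPECTANCY - age, 0.0) for age in fatal_injuries)
      yld = sum(weight * duration for weight, duration in nonfatal_injuries)
      we_daly = yll + yld
      print(f"YLL = {yll:.1f}, YLD = {yld:.1f}, healthy life years lost = {we_daly:.1f}")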

  1. Discrete element weld model, phase 2

    NASA Technical Reports Server (NTRS)

    Prakash, C.; Samonds, M.; Singhal, A. K.

    1987-01-01

    A numerical method was developed for analyzing the tungsten inert gas (TIG) welding process. The phenomena being modeled include melting under the arc and the flow in the melt under the action of buoyancy, surface tension, and electromagnetic forces. The latter entails the calculation of the electric potential and the computation of electric current and magnetic field therefrom. Melting may occur at a single temperature or over a temperature range, and the electrical and thermal conductivities can be a function of temperature. Results of sample calculations are presented and discussed at length. A major research contribution has been the development of numerical methodology for the calculation of phase change problems in a fixed grid framework. The model has been implemented on CHAM's general purpose computer code PHOENICS. The inputs to the computer model include: geometric parameters, material properties, and weld process parameters.

  2. Methodology for calculation of carbon balances for biofuel crops production

    NASA Astrophysics Data System (ADS)

    Gerlfand, I.; Hamilton, S. K.; Snapp, S. S.; Robertson, G. P.

    2012-04-01

    Understanding the carbon balance implications for different biofuel crop production systems is important for the development of decision making tools and policies. We present here a detailed methodology for assessing carbon balances in agricultural and natural ecosystems. We use 20 years of data from Long-term Ecological Research (LTER) experiments at the Kellogg Biological Station (KBS), combined with models to produce farm level CO2 balances for different management practices. We compared four grain systems and one forage system in the U.S. Midwest: corn (Zea mays) - soybean (Glycine max) - wheat (Triticum aestivum) rotations managed with (1) conventional tillage, (2) no till, (3) low chemical input, and (4) biologically-based (organic) practices; and (5) continuous alfalfa (Medicago sativa). In addition we use an abandoned agricultural field (successional ecosystem) as a reference system. Measurements include fluxes of N2O and CH4, soil organic carbon change, agricultural yields, and agricultural inputs (e.g. fertilization and farm fuel use). In addition to measurements, we model carbon offsets associated with the use of bioenergy from agriculturally produced crops. Our analysis shows the importance of establishing appropriate system boundaries for carbon balance calculations. We explore how different assumptions regarding production methods and emission factors affect overall conclusions on carbon balances of different agricultural systems. Our results show which management practices have the most important effects on carbon balances. Overall, agricultural management with conventional tillage was found to be a net CO2 source to the atmosphere, while management under reduced tillage, low input, or organic practices sequestered carbon; the net balances were 93, -23, -51, and -14 g CO2e m-2 yr-1 for the conventionally tilled, no-till, low-input, and organically managed ecosystems, respectively. Perennial systems (alfalfa and the successional fields) showed net carbon sequestration of -44 and -382 g CO2e m-2 yr-1, respectively. When the studied systems were assumed to be used for bioenergy production, all systems exhibited carbon sequestration, ranging from -149 to -841 g CO2e m-2 yr-1 for the conventionally tilled and successional ecosystems, respectively.
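    The kind of farm-gate balance described here can be sketched as a simple sum of soil carbon change, non-CO2 fluxes converted with global warming potentials, input emissions, and an optional bioenergy offset. All flux values and the GWP factors in the sketch are illustrative placeholders, not the KBS LTER data.

      GWP_N2O, GWP_CH4 = 298.0, 25.0          # illustrative 100-yr GWP factors

      def co2e_balance(soil_c_change_gC, n2o_flux_gN2O, ch4_flux_gCH4,
                       input_emissions_gCO2e, bioenergy_offset_gCO2e=0.0):
          """Net balance in g CO2e m-2 yr-1 (negative = net sequestration)."""
          soil_term = -soil_c_change_gC * 44.0 / 12.0   # soil C gain removes CO2
          return (soil_term
                  + n2o_flux_gN2O * GWP_N2O
                  + ch4_flux_gCH4 * GWP_CH4
                  + input_emissions_gCO2e
                  - bioenergy_offset_gCO2e)

      # hypothetical no-till system: small soil C gain, modest N2O source, slight CH4 sink
      print(co2e_balance(soil_c_change_gC=30.0, n2o_flux_gN2O=0.15,
                         ch4_flux_gCH4=-0.05, input_emissions_gCO2e=60.0))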

  3. Operational Implementation of a Pc Uncertainty Construct for Conjunction Assessment Risk Analysis

    NASA Technical Reports Server (NTRS)

    Newman, Lauri K.; Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

    Earlier this year the NASA Conjunction Assessment and Risk Analysis (CARA) project presented the theoretical and algorithmic aspects of a method to include the uncertainties in the calculation inputs when computing the probability of collision (Pc) between two space objects, principally uncertainties in the covariances and the hard-body radius. Rather than a single Pc value, this calculation approach produces an entire probability density function representing the range of possible Pc values given the uncertainties in the inputs, bringing CA risk analysis methodologies more in line with modern risk management theory. The present study provides results from the exercise of this method against an extended dataset of satellite conjunctions in order to determine the effect of its use on the evaluation of conjunction assessment (CA) event risk posture. The effects are found to be considerable: a good number of events are downgraded from or upgraded to a serious risk designation on the basis of consideration of the Pc uncertainty. The findings counsel the integration of the developed methods into NASA CA operations.

  4. Dynamic calibration of a wheelchair dynamometer.

    PubMed

    DiGiovine, C P; Cooper, R A; Boninger, M L

    2001-01-01

    The inertia and resistance of a wheelchair dynamometer must be determined in order to compare the results of one study to another, independent of the type of device used. The purpose of this study was to describe and implement a dynamic calibration test for characterizing the electro-mechanical properties of a dynamometer. The inertia, the viscous friction, the kinetic friction, the motor back-electromotive force constant, and the motor constant were calculated using three different methods. The methodology based on a dynamic calibration test along with a nonlinear regression analysis produced the best results. The coefficient of determination comparing the dynamometer model output to the measured angular velocity and torque was 0.999 for a ramp input and 0.989 for a sinusoidal input. The inertia and resistance were determined for the rollers and the wheelchair wheels. The calculation of the electro-mechanical parameters allows for the complete description of the propulsive torque produced by an individual, given only the angular velocity and acceleration. The measurement of the electro-mechanical properties of the dynamometer as well as the wheelchair/human system provides the information necessary to simulate real-world conditions.
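    The parameter-identification step can be illustrated with the short sketch below: given angular velocity, acceleration, and torque from a calibration run, a simple roller model tau = I*alpha + b*omega + c*sign(omega) is fitted by least squares. The "measurements" are synthetic, and the paper's fuller model (including motor back-EMF terms) and its nonlinear regression are not reproduced.

      import numpy as np

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 10.0, 500)
      omega = 5.0 * np.sin(0.8 * t) + 6.0              # rad/s, synthetic roller speed
      alpha = np.gradient(omega, t)                    # rad/s^2
      I_true, b_true, c_true = 0.15, 0.02, 0.3
      tau = I_true * alpha + b_true * omega + c_true * np.sign(omega)
      tau += rng.normal(0.0, 0.02, t.size)             # measurement noise

      # Regress torque on [alpha, omega, sign(omega)] to recover the parameters.
      X = np.column_stack([alpha, omega, np.sign(omega)])
      params, *_ = np.linalg.lstsq(X, tau, rcond=None)
      print("estimated inertia, viscous friction, kinetic friction:", params)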

  5. a Metadata Based Approach for Analyzing Uav Datasets for Photogrammetric Applications

    NASA Astrophysics Data System (ADS)

    Dhanda, A.; Remondino, F.; Santana Quintero, M.

    2018-05-01

    This paper proposes a methodology for pre-processing and analysing Unmanned Aerial Vehicle (UAV) datasets before photogrammetric processing. In cases where images are gathered without a detailed flight plan and at regular acquisition intervals the datasets can be quite large and be time consuming to process. This paper proposes a method to calculate the image overlap and filter out images to reduce large block sizes and speed up photogrammetric processing. The python-based algorithm that implements this methodology leverages the metadata in each image to determine the end and side overlap of grid-based UAV flights. Utilizing user input, the algorithm filters out images that are unneeded for photogrammetric processing. The result is an algorithm that can speed up photogrammetric processing and provide valuable information to the user about the flight path.
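    A rough sketch of the overlap calculation is given below: the ground footprint follows from camera geometry and flying height, and the end overlap between consecutive images follows from the distance between their GPS positions. The camera parameters and coordinates are hypothetical, and real EXIF parsing is omitted.

      import math

      def ground_footprint(height_m, focal_mm, sensor_mm, image_px):
          """Along-track ground footprint (m) from simple pinhole geometry."""
          gsd = (sensor_mm / 1000.0) * height_m / ((focal_mm / 1000.0) * image_px)
          return gsd * image_px

      def haversine_m(lat1, lon1, lat2, lon2):
          """Great-circle distance (m) between two lat/lon points."""
          r = 6371000.0
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
          a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
          return 2 * r * math.asin(math.sqrt(a))

      footprint = ground_footprint(height_m=80.0, focal_mm=8.8,
                                   sensor_mm=13.2, image_px=5472)
      spacing = haversine_m(45.00000, -75.00000, 45.00020, -75.00000)
      end_overlap = 1.0 - spacing / footprint
      print(f"footprint = {footprint:.1f} m, spacing = {spacing:.1f} m, "
            f"end overlap = {end_overlap:.0%}")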

  6. Application of Adjoint Methodology in Various Aspects of Sonic Boom Design

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Sriram K.

    2014-01-01

    One of the advances in computational design has been the development of adjoint methods allowing efficient calculation of sensitivities in gradient-based shape optimization. This paper discusses two new applications of adjoint methodology that have been developed to aid in sonic boom mitigation exercises. In the first, equivalent area targets are generated using adjoint sensitivities of selected boom metrics. These targets may then be used to drive the vehicle shape during optimization. The second application is the computation of adjoint sensitivities of boom metrics on the ground with respect to parameters such as flight conditions, propagation sampling rate, and selected inputs to the propagation algorithms. These sensitivities enable the designer to make more informed selections of flight conditions at which the chosen cost functionals are less sensitive.

  7. User's manual for PRESTO: A computer code for the performance of regenerative steam turbine cycles

    NASA Technical Reports Server (NTRS)

    Fuller, L. C.; Stovall, T. K.

    1979-01-01

    Standard turbine cycles for baseload power plants and cycles with such additional features as process steam extraction and induction and feedwater heating by external heat sources may be modeled. Peaking and high back pressure cycles are also included. The code's methodology is to use the expansion line efficiencies, exhaust loss, leakages, mechanical losses, and generator losses to calculate the heat rate and generator output. A general description of the code is given as well as the instructions for input data preparation. Appended are two complete example cases.

  8. Application of Steinberg vibration fatigue model for structural verification of space instruments

    NASA Astrophysics Data System (ADS)

    García, Andrés; Sorribes-Palmer, Félix; Alonso, Gustavo

    2018-01-01

    Electronic components in spacecraft are subjected to vibration loads during the ascent phase of the launcher. It is important to verify by tests and analysis that all parts can survive the most severe load cases. The purpose of this paper is to present the methodology and results of the application of Steinberg's fatigue model to estimate the life of electronic components of the EPT-HET instrument for the Solar Orbiter space mission. A Nastran finite element model (FEM) of the EPT-HET instrument was created and used for the structural analysis. The methodology is based on the use of the FEM of the entire instrument to calculate the relative displacement RDSD and RMS values of the PCBs from random vibration analysis. These values are used to estimate the fatigue life of the most susceptible electronic components with Steinberg's fatigue damage equation and Miner's cumulative fatigue index. The estimations are calculated for two different configurations of the instrument and three different inputs in order to support the redesign process. Finally, these analytical results are contrasted with the inspections and the functional tests made after the vibration tests, concluding that this methodology can adequately predict the fatigue damage or survival of the electronic components.
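    The fatigue estimate itself can be sketched with the widely used three-band form of Steinberg's approach combined with Miner's rule, as below; the RMS displacement, allowable displacement, exposure time, fatigue exponent, and reference cycle count are hypothetical placeholders rather than EPT-HET values.

      # Steinberg three-band damage sketch (illustrative values only).
      natural_freq_hz = 220.0        # PCB natural frequency from the FEM
      exposure_s = 180.0             # duration of the random vibration input
      z_rms = 0.12                   # mm, RMS relative displacement from the FEM
      z_allow_3sigma = 0.50          # mm, assumed allowable 3-sigma displacement
      b = 6.4                        # fatigue exponent often quoted for electronics
      N_ref = 2.0e7                  # assumed cycles to failure at the allowable level

      # Fractions of time spent near the 1-, 2- and 3-sigma displacement levels.
      fractions = {1: 0.683, 2: 0.271, 3: 0.0433}
      damage = 0.0
      for k, frac in fractions.items():
          n_cycles = frac * natural_freq_hz * exposure_s          # applied cycles
          z_level = k * z_rms
          N_fail = N_ref * (z_allow_3sigma / z_level) ** b        # S-N style curve
          damage += n_cycles / N_fail                             # Miner's rule

      print(f"cumulative fatigue damage index = {damage:.3e}")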

  9. Do Methodological Choices in Environmental Modeling Bias Rebound Effects? A Case Study on Electric Cars.

    PubMed

    Font Vivanco, David; Tukker, Arnold; Kemp, René

    2016-10-18

    Improvements in resource efficiency often underperform because of rebound effects. Calculations of the size of rebound effects are subject to various types of bias, among which methodological choices have received particular attention. Modellers have primarily focused on choices related to changes in demand; however, choices related to modeling the environmental burdens from such changes have received less attention. In this study, we analyze choices in the environmental assessment methods (life cycle assessment (LCA) and hybrid LCA) and environmental input-output databases (E3IOT, Exiobase and WIOD) used as a source of bias. The analysis is done for a case study on battery electric and hydrogen cars in Europe. The results describe moderate rebound effects for both technologies in the short term. Additionally, long-run scenarios are calculated by simulating the total cost of ownership, which describe notable rebound effect sizes (from 26 to 59% and from 18 to 28%, respectively, depending on the methodological choices) with favorable economic conditions. Relevant sources of bias are found to be related to incomplete background systems, technology assumptions and sectorial aggregation. These findings highlight the importance of the method setup and of sensitivity analyses of choices related to environmental modeling in rebound effect assessments.

  10. A Summary of Revisions Applied to a Turbulence Response Analysis Method for Flexible Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Funk, Christie J.; Perry, Boyd, III; Silva, Walter A.; Newman, Brett

    2014-01-01

    A software program and associated methodology to study gust loading on aircraft exists for a classification of geometrically simplified flexible configurations. This program consists of a simple aircraft response model with two rigid and three flexible symmetric degrees-of-freedom and allows for the calculation of various airplane responses due to a discrete one-minus-cosine gust as well as continuous turbulence. Simplifications, assumptions, and opportunities for potential improvements pertaining to the existing software program are first identified; then a revised version of the original software tool is developed with improved methodology to include more complex geometries, additional excitation cases, and additional output data so as to provide a more useful and precise tool for gust load analysis. In order to improve the original software program to enhance usefulness, a wing control surface and horizontal tail control surface are added, an extended application of the discrete one-minus-cosine gust input is employed, a supplemental continuous turbulence spectrum is implemented, and a capability to animate the total vehicle deformation response to gust inputs is included. These revisions and enhancements are implemented and an analysis of the results is used to validate the modifications.
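    For reference, the discrete one-minus-cosine gust mentioned above has the simple profile sketched below; the amplitude, gradient distance, and exact normalisation convention are illustrative assumptions.

      import numpy as np

      def one_minus_cosine_gust(s, u_ds=10.0, H=110.0):
          """Gust velocity at penetration distance s (zero outside 0 <= s <= H)."""
          s = np.asarray(s, dtype=float)
          w = 0.5 * u_ds * (1.0 - np.cos(2.0 * np.pi * s / H))
          return np.where((s >= 0.0) & (s <= H), w, 0.0)

      s = np.linspace(-20.0, 150.0, 9)
      print(one_minus_cosine_gust(s))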

  11. Simulating the X-Ray Image Contrast to Set-Up Techniques with Desired Flaw Detectability

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2015-01-01

    The paper provides simulation data of previous work by the author in developing a model for estimating detectability of crack-like flaws in radiography. The methodology is being developed to help in implementation of NASA Special x-ray radiography qualification, but is generically applicable to radiography. The paper describes a method for characterizing X-ray detector resolution for crack detection. Applicability of ASTM E 2737 resolution requirements to the model is also discussed. The paper describes a model for simulating the detector resolution. A computer calculator application, discussed here, also performs predicted contrast and signal-to-noise ratio calculations. Results of various simulation runs in calculating x-ray flaw size parameter and image contrast for varying input parameters such as crack depth, crack width, part thickness, x-ray angle, part-to-detector distance, part-to-source distance, source sizes, and detector sensitivity and resolution are given as 3D surfaces. These results demonstrate the effect of the input parameters on the flaw size parameter and the simulated image contrast of the crack. These simulations demonstrate the utility of the flaw size parameter model in setting up x-ray techniques that provide desired flaw detectability in radiography. The method is applicable to film radiography, computed radiography, and digital radiography.

  12. Axisymmetric computational fluid dynamics analysis of Saturn V/S1-C/F1 nozzle and plume

    NASA Technical Reports Server (NTRS)

    Ruf, Joseph H.

    1993-01-01

    An axisymmetric single engine Computational Fluid Dynamics calculation of the Saturn V/S1-C vehicle base region and F1 engine plume is described. There were two objectives of this work. The first was to calculate an axisymmetric approximation of the nozzle, plume and base region flow fields of S1-C/F1, relate/scale this to flight data, and apply this scaling factor to NLS/STME axisymmetric calculations from a parallel effort. The second was to assess the differences in F1 and STME plume shear layer development and concentration of combustible gases. This second piece of information was to be input/supporting data for assumptions made in the NLS2 base temperature scaling methodology from which the vehicle base thermal environments were being generated. The F1 calculations started at the main combustion chamber faceplate and incorporated the turbine exhaust dump/nozzle film coolant. The plume and base region calculations were made for 10,000 ft and 57,000 ft altitude at vehicle flight velocity and in stagnant freestream. FDNS was implemented with a 14 species, 28 reaction finite rate chemistry model plus a soot burning model for the RP-1/LOX chemistry. Nozzle and plume flow fields are shown, and the plume shear layer constituents are compared to those of an STME plume. Conclusions are made about the validity and status of the analysis and the NLS2 vehicle base thermal environment definition methodology.

  13. Image-derived input function with factor analysis and a-priori information.

    PubMed

    Simončič, Urban; Zanotti-Fregonara, Paolo

    2015-02-01

    Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, whole-blood TAC was estimated from postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of IDIFs was assessed against full arterial sampling by comparing the area under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood area under the curves were accurately estimated (mean error 1.0±4.3%). The relative Logan-V(T) error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate V(T) results compared with Logan. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can be used for accurate quantification of Logan-V(T) values.

  14. Theoretical calculating the thermodynamic properties of solid sorbents for CO{sub 2} capture applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Yuhua

    2012-11-02

    Since current technologies for capturing CO{sub 2} to fight global climate change are still too energy intensive, there is a critical need for development of new materials that can capture CO{sub 2} reversibly with acceptable energy costs. Accordingly, solid sorbents have been proposed to be used for CO{sub 2} capture applications through a reversible chemical transformation. By combining thermodynamic database mining with first principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO{sub 2} sorbent candidates from the vast array of possible solid materials has been proposed and validated. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure changes were further used to evaluate the equilibrium properties for the CO{sub 2} adsorption/desorption cycles. According to the requirements imposed by the pre- and post-combustion technologies and based on our calculated thermodynamic properties for the CO{sub 2} capture reactions by the solids of interest, we were able to screen only those solid materials for which lower capture energy costs are expected at the desired pressure and temperature conditions. Only those selected CO{sub 2} sorbent candidates were further considered for experimental validations. The ab initio thermodynamic technique has the advantage of identifying thermodynamic properties of CO{sub 2} capture reactions without any experimental input beyond crystallographic structural information of the solid phases involved. Such methodology not only can be used to search for good candidates from existing databases of solid materials, but also can provide some guidelines for synthesizing new materials. In this presentation, we first introduce our screening methodology and the results on a testing set of solids with known thermodynamic properties to validate our methodology. Then, by applying our computational method to several different kinds of solid systems, we demonstrate that our methodology can predict useful information to help develop CO{sub 2} capture technologies.
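    The screening idea can be reduced to a toy sketch: for a candidate reaction MO + CO2 -> MCO3, approximate deltaG(T) = deltaH - T*deltaS and locate the temperature window separating favourable capture from favourable regeneration. The enthalpy and entropy below are invented placeholders, not values from the DFT/phonon calculations described above.

      import numpy as np

      dH = -100.0e3   # J/mol CO2 (assumed exothermic capture)
      dS = -160.0     # J/mol/K (assumed; gas consumed, entropy decreases)

      T = np.linspace(300.0, 1200.0, 901)          # K
      dG = dH - T * dS                             # J/mol CO2 at an assumed 1 bar CO2

      turnover_T = T[np.argmin(np.abs(dG))]        # where deltaG changes sign
      capture_window = T[dG < 0.0]
      print(f"turnover temperature ~ {turnover_T:.0f} K")
      print(f"capture favourable below ~ {capture_window.max():.0f} K, "
            "regeneration favourable above")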

  15. Methodology for processing pressure traces used as inputs for combustion analyses in diesel engines

    NASA Astrophysics Data System (ADS)

    Rašić, Davor; Vihar, Rok; Žvar Baškovič, Urban; Katrašnik, Tomaž

    2017-05-01

    This study proposes a novel methodology for designing an optimum equiripple finite impulse response (FIR) filter for processing in-cylinder pressure traces of a diesel internal combustion engine, which serve as inputs for high-precision combustion analyses. The proposed automated workflow is based on an innovative approach of determining the transition band frequencies and optimum filter order. The methodology is based on discrete Fourier transform analysis, which is the first step to estimate the location of the pass-band and stop-band frequencies. The second step uses short-time Fourier transform analysis to refine the estimated aforementioned frequencies. These pass-band and stop-band frequencies are further used to determine the most appropriate FIR filter order. The most widely used existing methods for estimating the FIR filter order are not effective in suppressing the oscillations in the rate-of-heat-release (ROHR) trace, thus hindering the accuracy of combustion analyses. To address this problem, an innovative method for determining the order of an FIR filter is proposed in this study. This method is based on the minimization of the integral of normalized signal-to-noise differences between the stop-band frequency and the Nyquist frequency. Developed filters were validated using spectral analysis and calculation of the ROHR. The validation results showed that the filters designed using the proposed innovative method were superior compared with those using the existing methods for all analyzed cases.
    Highlights:
    • Pressure traces of a diesel engine were processed by finite impulse response (FIR) filters with different orders
    • Transition band frequencies were determined with an innovative method based on discrete Fourier transform and short-time Fourier transform
    • Spectral analyses showed deficiencies of existing methods in determining the FIR filter order
    • A new method of determining the FIR filter order for processing pressure traces was proposed
    • The efficiency of the new method was demonstrated by spectral analyses and calculations of rate-of-heat-release traces
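    As an illustration of the filtering step only, the sketch below designs an equiripple FIR low-pass filter with the Parks-McClellan (remez) algorithm and applies it to a synthetic pressure trace. The sampling rate, band edges, and filter order are fixed by hand here, whereas choosing them automatically from the DFT/STFT analysis and the error-minimisation criterion is precisely the paper's contribution.

      import numpy as np
      from scipy.signal import remez, filtfilt

      fs = 90000.0                       # Hz, assumed sampling rate of the pressure signal
      t = np.arange(0, 0.05, 1.0 / fs)
      pressure = 40e5 * np.exp(-((t - 0.025) / 0.006) ** 2)     # synthetic trace (Pa)
      pressure += 0.2e5 * np.sin(2 * np.pi * 9000.0 * t)        # synthetic oscillation

      numtaps = 301                      # filter order chosen by hand for this sketch
      pass_hz, stop_hz = 3000.0, 6000.0  # hand-picked transition band
      taps = remez(numtaps, [0, pass_hz, stop_hz, fs / 2], [1, 0], fs=fs)
      smoothed = filtfilt(taps, [1.0], pressure)   # zero-phase filtering

      print("raw peak: %.2f bar, filtered peak: %.2f bar"
            % (pressure.max() / 1e5, smoothed.max() / 1e5))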

  16. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
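    The moment-based construction at the core of aPC/SAMBA can be illustrated with the sketch below: build the Hankel matrix of raw moments, extract recurrence coefficients from its Cholesky factor, and obtain Gaussian quadrature nodes and weights from the resulting Jacobi matrix (Golub-Welsch). The moments of a uniform density on [-1, 1] are used purely as a check, since the nodes should then reproduce Gauss-Legendre points; this is a simplified reading of the approach, not the SAMBA code.

      import numpy as np

      def gauss_from_moments(moments, n_nodes):
          """Gaussian quadrature nodes/weights from raw moments m_0..m_{2n}."""
          H = np.array([[moments[i + j] for j in range(n_nodes + 1)]
                        for i in range(n_nodes + 1)])        # Hankel moment matrix
          R = np.linalg.cholesky(H).T                        # upper-triangular factor
          alpha = np.zeros(n_nodes)
          beta = np.zeros(n_nodes - 1)
          for k in range(n_nodes):
              alpha[k] = R[k, k + 1] / R[k, k]
              if k > 0:
                  alpha[k] -= R[k - 1, k] / R[k - 1, k - 1]
              if k < n_nodes - 1:
                  beta[k] = R[k + 1, k + 1] / R[k, k]
          J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)   # Jacobi matrix
          nodes, vecs = np.linalg.eigh(J)
          weights = moments[0] * vecs[0, :] ** 2
          return nodes, weights

      # Raw moments of the uniform density on [-1, 1]: 1/(k+1) for even k, else 0.
      moments = [1.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(2 * 4 + 1)]
      nodes, weights = gauss_from_moments(moments, n_nodes=4)
      print(nodes)    # should match the 4-point Gauss-Legendre abscissae
      print(weights)  # should sum to m_0 = 1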

  17. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    NASA Astrophysics Data System (ADS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, that was previously only introduced as tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.

  18. VERA and VERA-EDU 3.5 Release Notes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sieger, Matt; Salko, Robert K.; Kochunas, Brendan M.

    The Virtual Environment for Reactor Applications components included in this distribution include selected computational tools and supporting infrastructure that solve neutronics, thermal-hydraulics, fuel performance, and coupled neutronics-thermal hydraulics problems. The infrastructure components provide a simplified common user input capability and provide for the physics integration with data transfer and coupled-physics iterative solution algorithms. Neutronics analysis can be performed for 2D lattices, 2D core and 3D core problems for pressurized water reactor geometries that can be used to calculate criticality and fission rate distributions by pin for input fuel compositions. MPACT uses the Method of Characteristics transport approach for 2D problems. For 3D problems, MPACT uses the 2D/1D method which uses 2D MOC in a radial plane and diffusion or SPn in the axial direction. MPACT includes integrated cross section capabilities that provide problem-specific cross sections generated using the subgroup methodology. The code can be executed for both 2D and 3D problems in parallel to reduce overall run time. A thermal-hydraulics capability is provided with CTF (an updated version of COBRA-TF) that allows thermal-hydraulics analyses for single and multiple assemblies using the simplified VERA common input. This distribution also includes coupled neutronics/thermal-hydraulics capabilities to allow calculations using MPACT coupled with CTF. The VERA fuel rod performance component BISON calculates, on a 2D or 3D basis, fuel rod temperature, fuel rod internal pressure, free gas volume, clad integrity and fuel rod waterside diameter. These capabilities allow simulation of power cycling, fuel conditioning and deconditioning, high burnup performance, power uprate scoping studies, and accident performance. Input/Output capabilities include the VERA Common Input (VERAIn) script which converts the ASCII common input file to the intermediate XML used to drive all of the physics codes in the VERA Core Simulator (VERA-CS). VERA component codes either input the VERA XML format directly, or provide a preprocessor which can convert the XML into native input. VERAView is an interactive graphical interface for the visualization and engineering analyses of output data from VERA. The python-based software is easy to install and intuitive to use, and provides instantaneous 2D and 3D images, 1D plots, and alpha-numeric data from VERA multi-physics simulations. Testing within CASL has focused primarily on Westinghouse four-loop reactor geometries and conditions with example problems included in the distribution.

  19. Users manual for the NASA Lewis three-dimensional ice accretion code (LEWICE 3D)

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.; Potapczuk, Mark G.

    1993-01-01

    A description of the methodology, the algorithms, and the input and output data along with an example case for the NASA Lewis 3D ice accretion code (LEWICE3D) has been produced. The manual has been designed to help the user understand the capabilities, the methodologies, and the use of the code. The LEWICE3D code is a conglomeration of several codes for the purpose of calculating ice shapes on three-dimensional external surfaces. A three-dimensional external flow panel code is incorporated which has the capability of calculating flow about arbitrary 3D lifting and nonlifting bodies with external flow. A fourth order Runge-Kutta integration scheme is used to calculate arbitrary streamlines. An Adams type predictor-corrector trajectory integration scheme has been included to calculate arbitrary trajectories. Schemes for calculating tangent trajectories, collection efficiencies, and concentration factors for arbitrary regions of interest for single droplets or droplet distributions have been incorporated. A LEWICE 2D based heat transfer algorithm can be used to calculate ice accretions along surface streamlines. A geometry modification scheme is incorporated which calculates the new geometry based on the ice accretions generated at each section of interest. The three-dimensional ice accretion calculation is based on the LEWICE 2D calculation. Both codes calculate the flow, pressure distribution, and collection efficiency distribution along surface streamlines. For both codes the heat transfer calculation is divided into two regions, one above the stagnation point and one below the stagnation point, and solved for each region assuming a flat plate with pressure distribution. Water is assumed to follow the surface streamlines, hence starting at the stagnation zone any water that is not frozen out at a control volume is assumed to run back into the next control volume. After the amount of frozen water at each control volume has been calculated the geometry is modified by adding the ice at each control volume in the surface normal direction.
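    As a small illustration of the kind of trajectory integration mentioned above, the sketch below applies a fourth-order Runge-Kutta step to a droplet whose only force is a drag relaxation toward the local air velocity. The flow field, drag parameter, and droplet state are illustrative; the real code couples the integration to a 3D panel-method flow solution and also uses an Adams predictor-corrector.

      import numpy as np

      def air_velocity(pos):
          # placeholder flow field: uniform freestream with a mild updraft near x = 0
          return np.array([60.0, 2.0 * np.exp(-pos[0] ** 2 / 25.0), 0.0])

      def deriv(state, k_drag=8.0):
          pos, vel = state[:3], state[3:]
          accel = k_drag * (air_velocity(pos) - vel)     # Stokes-like drag response
          return np.concatenate([vel, accel])

      def rk4_step(state, dt):
          k1 = deriv(state)
          k2 = deriv(state + 0.5 * dt * k1)
          k3 = deriv(state + 0.5 * dt * k2)
          k4 = deriv(state + dt * k3)
          return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      state = np.array([-10.0, 0.0, 0.0, 60.0, 0.0, 0.0])   # position + velocity
      for _ in range(200):
          state = rk4_step(state, dt=0.002)
      print("final droplet position:", state[:3])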

  20. Efficient Geometry and Data Handling for Large-Scale Monte Carlo - Thermal-Hydraulics Coupling

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard

    2014-06-01

    Detailed coupling of thermal-hydraulics calculations to Monte Carlo reactor criticality calculations requires each axial layer of each fuel pin to be defined separately in the input to the Monte Carlo code in order to assign to each volume the temperature according to the result of the TH calculation, and if the volume contains coolant, also the density of the coolant. This leads to huge input files for even small systems. In this paper a methodology for dynamic assignment of temperatures with respect to cross section data is demonstrated to overcome this problem. The method is implemented in MCNP5. The method is verified for an infinite lattice with 3x3 BWR-type fuel pins with fuel, cladding and moderator/coolant explicitly modeled. For each pin 60 axial zones are considered with different temperatures and coolant densities. The results of the axial power distribution per fuel pin are compared to a standard MCNP5 run in which all 9x60 cells for fuel, cladding and coolant are explicitly defined and their respective temperatures determined from the TH calculation. Full agreement is obtained. For large-scale application the method is demonstrated for an infinite lattice with 17x17 PWR-type fuel assemblies with 25 rods replaced by guide tubes. Again all geometrical detail is retained. The method was used in a procedure for coupled Monte Carlo and thermal-hydraulics iterations. Using an optimised iteration technique, convergence was obtained in 11 iteration steps.

  1. A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.

    2012-01-01

    The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including the use of multidimensional numerical models. Validation test hardware has also been used to provide a direct comparison of numerical results and validate the multi-dimensional numerical models used to predict convertor net heat input and efficiency. These validation tests were designed to simulate the temperature profile of an operating Stirling convertor and resulted in a measured net heat input of 244.4 W. The methodology was applied to the multi-dimensional numerical model which resulted in a net heat input of 240.3 W. The computational methodology resulted in a value of net heat input that was 1.7 percent less than that measured during laboratory testing. The resulting computational methodology and results are discussed.
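    For reference, the quoted 1.7 percent difference between the measured and computed net heat input follows directly from the reported values:

```latex
\frac{Q_{\mathrm{measured}} - Q_{\mathrm{model}}}{Q_{\mathrm{measured}}}
  = \frac{244.4\,\mathrm{W} - 240.3\,\mathrm{W}}{244.4\,\mathrm{W}}
  \approx 0.017 = 1.7\,\%.
```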

  2. Probing Reliability of Transport Phenomena Based Heat Transfer and Fluid Flow Analysis in Autogeneous Fusion Welding Process

    NASA Astrophysics Data System (ADS)

    Bag, S.; de, A.

    2010-09-01

    The transport phenomena based heat transfer and fluid flow calculations in a weld pool require a number of input parameters. Arc efficiency, effective thermal conductivity, and viscosity in the weld pool are some of these parameters, the values of which are rarely known and are difficult to assign a priori based on scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real number based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated with measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.

  3. A methodology for the assessment of inhalation exposure to aluminium from antiperspirant sprays.

    PubMed

    Schwarz, Katharina; Pappa, Gerlinde; Miertsch, Heike; Scheel, Julia; Koch, Wolfgang

    2018-04-01

    Inhalation exposure can occur accidentally when using cosmetic spray products. Usually, a tiered approach is applied for exposure assessment, starting with rather conservative, simplistic calculation models that may be improved with measured data and more refined modelling. Here we report on an advanced methodology to mimic in-use conditions for antiperspirant spray products to provide a more accurate estimate of the amount of aluminium possibly inhaled and taken up systemically, thus contributing to the overall body burden. Four typical products were sprayed onto a skin surrogate in defined rooms. For aluminium, size-related aerosol release fractions, i.e. inhalable, thoracic and respirable, were determined by a mass balance method taking droplet maturation into account. These data were incorporated into a simple two-box exposure model, allowing calculation of the inhaled aluminium dose over 12 min. Systemic exposure doses were calculated for exposure of the deep lung and the upper respiratory tract using the Multiple Path Particle Deposition (MPPD) model. The total systemically available dose of aluminium was in all cases found to be less than 0.5 µg per application. With this study it could be demonstrated that refinement of the input data of the two-box exposure model with measured data of released airborne aluminium is a valuable approach to analyse the contribution of antiperspirant spray inhalation to total aluminium exposure as part of the overall risk assessment. The suggested methodology can also be applied to other exposure modelling approaches for spray products and can further be adapted to other similar use scenarios.
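    A minimal sketch of a generic two-box (near-field/far-field) inhalation model of the kind referred to above; the box volumes, air-exchange rates, breathing rate, and released mass below are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def two_box_inhaled_dose(mass_released_ug, v_near=1.0, v_far=20.0,
                         q_exchange=5.0, q_vent=30.0,
                         breathing_rate=0.8, t_end_min=12.0, dt=0.01):
    """Integrate a well-mixed two-box model and return the inhaled mass (ug).

    mass_released_ug: airborne (inhalable) mass released into the near-field box.
    v_near, v_far:    box volumes in m^3.
    q_exchange:       near-field <-> far-field air exchange, m^3/min.
    q_vent:           far-field ventilation to the outside, m^3/min.
    breathing_rate:   m^3/h, converted to m^3/min below.
    """
    c_near = mass_released_ug / v_near   # instantaneous release into the near field
    c_far = 0.0
    br = breathing_rate / 60.0           # m^3/min
    inhaled = 0.0
    for _ in np.arange(0.0, t_end_min, dt):
        inhaled += br * c_near * dt
        dc_near = q_exchange * (c_far - c_near) / v_near
        dc_far = (q_exchange * (c_near - c_far) - q_vent * c_far) / v_far
        c_near += dc_near * dt
        c_far += dc_far * dt
    return inhaled

print(two_box_inhaled_dose(mass_released_ug=100.0))
```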

  4. 40 CFR 75.83 - Calculation of Hg mass emissions and heat input rate.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... heat input rate. 75.83 Section 75.83 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Calculation of Hg mass emissions and heat input rate. The owner or operator shall calculate Hg mass emissions and heat input rate in accordance with the procedures in sections 9.1 through 9.3 of appendix F to...

  5. Calculation of Hazard Category 2/3 Threshold Quantities Using Contemporary Dosimetric Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, William C.

    The purpose of this report is to describe the methodology and selection of input data utilized to calculate updated Hazard Category 2 and Hazard Category 3 Threshold Quantities (TQs) using contemporary dosimetric information. The calculation of the updated TQs will be considered for use in the revision to the Department of Energy (DOE) Technical Standard (STD-) 1027-92 Change Notice (CN)-1, “Hazard Categorization and Accident Analysis Techniques for Compliance with DOE Order 5480.23, Nuclear Safety Analysis Reports.” The updated TQs documented in this report complement an effort previously undertaken by the National Nuclear Security Administration (NNSA), which in 2014 issued revised Supplemental Guidance documenting the calculation of updated TQs for approximately 100 radionuclides listed in DOE-STD-1027-92, CN-1. The calculations documented in this report complement the NNSA effort by expanding the set of radionuclides to more than 1,250 radionuclides with a published TQ. The development of this report was sponsored by the Department of Energy’s Office of Nuclear Safety (AU-30) within the Associate Under Secretary for Environment, Health, Safety, and Security organization.

  6. An Overview of Modifications Applied to a Turbulence Response Analysis Method for Flexible Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Funk, Christie J.

    2013-01-01

    A software program and associated methodology to study gust loading on aircraft exists for a classification of geometrically simplified flexible configurations. This program consists of a simple aircraft response model with two rigid and three flexible symmetric degrees of freedom and allows for the calculation of various airplane responses due to a discrete one-minus-cosine gust as well as continuous turbulence. Simplifications, assumptions, and opportunities for potential improvements pertaining to the existing software program are first identified; then a revised version of the original software tool is developed with improved methodology to include more complex geometries, additional excitation cases, and output data so as to provide a more useful and accurate tool for gust load analysis. Revisions are made in the categories of aircraft geometry, computation of aerodynamic forces and moments, and implementation of horizontal tail mode shapes. In order to improve the original software program to enhance usefulness, a wing control surface and a horizontal tail control surface are added, an extended application of the discrete one-minus-cosine gust input is employed, a supplemental continuous turbulence spectrum is implemented, and a capability to animate the total vehicle deformation response to gust inputs is included. These revisions and enhancements are implemented and an analysis of the results is used to validate the modifications.
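    For context, the discrete one-minus-cosine gust used as an excitation case has the standard form U = (U_ds/2)(1 - cos(pi*s/H)) over a gradient distance H. The short sketch below evaluates such a profile with illustrative values; it is not code from the software program described above.

```python
import numpy as np

def one_minus_cosine_gust(t, u_ds, gust_length, airspeed):
    """Discrete '1-cos' gust velocity time history.

    u_ds:        design (peak) gust velocity.
    gust_length: gust gradient distance H.
    airspeed:    true airspeed used to convert time to penetration distance.
    """
    s = airspeed * np.asarray(t)                        # distance penetrated into the gust
    gust = 0.5 * u_ds * (1.0 - np.cos(np.pi * s / gust_length))
    return np.where(s <= 2.0 * gust_length, gust, 0.0)  # zero outside the gust

t = np.linspace(0.0, 1.0, 200)
w_g = one_minus_cosine_gust(t, u_ds=10.0, gust_length=100.0, airspeed=250.0)
print(w_g.max())  # peaks at u_ds when s = H
```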

  7. Intensive Input in Language Acquisition.

    ERIC Educational Resources Information Center

    Trimino, Andy; Ferguson, Nancy

    This paper discusses the role of input as one of the universals in second language acquisition theory. Considerations include how language instructors can best organize and present input and when certain kinds of input are more important. A self-administered program evaluation exercise using relevant theoretical and methodological contributions…

  8. Approaches to Children’s Exposure Assessment: Case Study with Diethylhexylphthalate (DEHP)

    PubMed Central

    Ginsberg, Gary; Ginsberg, Justine; Foos, Brenda

    2016-01-01

    Children’s exposure assessment is a key input into epidemiology studies, risk assessment and source apportionment. The goals of this article are to describe a methodology for children’s exposure assessment that can be used for these purposes and to apply the methodology to source apportionment for the case study chemical, diethylhexylphthalate (DEHP). A key feature is the comparison of total (aggregate) exposure calculated via a pathways approach to that derived from a biomonitoring approach. The 4-step methodology and its results for DEHP are: (1) Prioritization of life stages and exposure pathways, with pregnancy, breast-fed infants, and toddlers the focus of the case study and pathways selected that are relevant to these groups; (2) Estimation of pathway-specific exposures by life stage wherein diet was found to be the largest contributor for pregnant women, breast milk and mouthing behavior for the nursing infant and diet, house dust, and mouthing for toddlers; (3) Comparison of aggregate exposure by pathways vs biomonitoring-based approaches wherein good concordance was found for toddlers and pregnant women providing confidence in the exposure assessment; (4) Source apportionment in which DEHP presence in foods, children’s products, consumer products and the built environment are discussed with respect to early life mouthing, house dust and dietary exposure. A potential fifth step of the method involves the calculation of exposure doses for risk assessment which is described but outside the scope for the current case study. In summary, the methodology has been used to synthesize the available information to identify key sources of early life exposure to DEHP. PMID:27376320

  9. Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. A meaningful adaption will result in high-fidelity and robust adapted core simulator models. To perform adaption, we propose an inverse theory approach in which the multitudes of input data to core simulators, i.e., reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping core simulator models unadapted. At first glance, devising such adaption for typical core simulators with millions of input and observables data would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns however are mainly related to the reliability of the adjusted input data. The methodologies of adaptive simulation are well established in the literature of data adjustment. We adopt the same general framework for data adjustment; however, we refrain from solving the fundamental adjustment equations in a conventional manner. We demonstrate the use of our so-called Efficient Subspace Methods (ESMs) to overcome the computational and storage burdens associated with the core adaption problem. We illustrate the successful use of ESM-based adaptive techniques for a typical boiling water reactor core simulator adaption problem.

  10. Analyzing the sensitivity of a flood risk assessment model towards its input data

    NASA Astrophysics Data System (ADS)

    Glas, Hanne; Deruyter, Greet; De Maeyer, Philippe; Mandal, Arpita; James-Williamson, Sherene

    2016-11-01

    The Small Island Developing States are characterized by an unstable economy and low-lying, densely populated cities, resulting in a high vulnerability to natural hazards. Flooding affects more people than any other hazard. To limit the consequences of these hazards, adequate risk assessments are indispensable. Satisfactory input data for these assessments are hard to acquire, especially in developing countries. Therefore, in this study, a methodology was developed and evaluated to test the sensitivity of a flood model towards its input data in order to determine a minimum set of indispensable data. In a first step, a flood damage assessment model was created for the case study of Annotto Bay, Jamaica. This model generates a damage map for the region based on the flood extent map of the 2001 inundations caused by Tropical Storm Michelle. Three damages were taken into account: building, road and crop damage. Twelve scenarios were generated, each with a different combination of input data, testing one of the three damage calculations for its sensitivity. One main conclusion was that population density, in combination with an average number of people per household, is a good parameter in determining the building damage when exact building locations are unknown. Furthermore, the importance of roads for an accurate visual result was demonstrated.

  11. Textual Enhancement of Input: Issues and Possibilities

    ERIC Educational Resources Information Center

    Han, ZhaoHong; Park, Eun Sung; Combs, Charles

    2008-01-01

    The input enhancement hypothesis proposed by Sharwood Smith (1991, 1993) has stimulated considerable research over the last 15 years. This article reviews the research on textual enhancement of input (TE), an area where the majority of input enhancement studies have aggregated. Methodological idiosyncrasies are the norm of this body of research.…

  12. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 3; Aero-Acoustic Analyses and Experimental Validation

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Graham, Jason S.; McVay, Greg P.; Langford, Lester L.

    2008-01-01

    A unique assessment of acoustic similarity scaling laws and acoustic analogy methodologies in predicting the far-field acoustic signature from a sub-scale altitude rocket test facility at the NASA Stennis Space Center was performed. A directional, point-source similarity analysis was implemented for predicting the acoustic far-field. In this approach, experimental acoustic data obtained from "similar" rocket engine tests were appropriately scaled using key geometric and dynamic parameters. The accuracy of this engineering-level method is discussed by comparing the predictions with acoustic far-field measurements obtained. In addition, a CFD solver was coupled with a Lilley's acoustic analogy formulation to determine the improvement of using a physics-based methodology over an experimental correlation approach. In the current work, steady-state Reynolds-averaged Navier-Stokes calculations were used to model the internal flow of the rocket engine and altitude diffuser. These internal flow simulations provided the necessary realistic input conditions for external plume simulations. The CFD plume simulations were then used to provide the spatial turbulent noise source distributions in the acoustic analogy calculations. Preliminary findings of these studies will be discussed.

  13. Standardised survey method for identifying catchment risks to water quality.

    PubMed

    Baker, D L; Ferguson, C M; Chier, P; Warnecke, M; Watkinson, A

    2016-06-01

    This paper describes the development and application of a systematic methodology to identify and quantify risks in drinking water and recreational catchments. The methodology assesses microbial and chemical contaminants from both diffuse and point sources within a catchment using Escherichia coli, protozoan pathogens and chemicals (including fuel and pesticides) as index contaminants. Hazard source information is gathered by a defined sanitary survey process involving use of a software tool which groups hazards into six types: sewage infrastructure, on-site sewage systems, industrial, stormwater, agriculture and recreational sites. The survey estimates the likelihood of the site affecting catchment water quality, and the potential consequences, enabling the calculation of risk for individual sites. These risks are integrated to calculate a cumulative risk for each sub-catchment and the whole catchment. The cumulative risks process accounts for the proportion of potential input sources surveyed and for transfer of contaminants from upstream to downstream sub-catchments. The output risk matrices show the relative risk sources for each of the index contaminants, highlighting those with the greatest impact on water quality at a sub-catchment and catchment level. Verification of the sanitary survey assessments and prioritisation is achieved by comparison with water quality data and microbial source tracking.
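    A minimal sketch of the likelihood-times-consequence site scoring and of scaling cumulative sub-catchment risk by the proportion of potential sources actually surveyed, as described above; the scoring scales and the correction formula are assumptions for illustration, not the published algorithm.

```python
def site_risk(likelihood, consequence):
    """Risk score for a single hazard site (e.g. 1-5 likelihood x 1-5 consequence)."""
    return likelihood * consequence

def subcatchment_risk(site_scores, fraction_surveyed):
    """Cumulative risk for a sub-catchment.

    Scales the summed site risk by the inverse of the fraction of potential
    sources surveyed, so incompletely surveyed areas are not under-represented
    (an assumed correction, not necessarily the published formula).
    """
    if not site_scores or fraction_surveyed <= 0.0:
        return 0.0
    return sum(site_scores) / fraction_surveyed

# Two hypothetical sewage-infrastructure sites in one sub-catchment.
sewage_sites = [site_risk(4, 5), site_risk(2, 3)]
print(subcatchment_risk(sewage_sites, fraction_surveyed=0.8))
```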

  14. The use of the SRIM code for calculation of radiation damage induced by neutrons

    NASA Astrophysics Data System (ADS)

    Mohammadi, A.; Hamidi, S.; Asadabad, Mohsen Asadi

    2017-12-01

    Materials subjected to neutron irradiation undergo structural changes driven by the displacement cascades initiated by nuclear reactions. This study discusses a methodology to compute the primary knock-on atom (PKA) information that leads to radiation damage. A program, AMTRACK, has been developed for assessing the PKA information. This software determines the specifications of recoil atoms (using the PTRAC card of the MCNPX code) and also the kinematics of the interactions. The deterministic method was used for verification of the (MCNPX+AMTRACK) results. The SRIM (formerly TRIM) code is capable of computing neutron radiation damage. The PKA information extracted by the AMTRACK program can be used as input to the SRIM code for systematic analysis of primary radiation damage. The radiation damage to the reactor pressure vessel of the Bushehr Nuclear Power Plant (BNPP) is then calculated.

  15. Automated placement of interfaces in conformational kinetics calculations using machine learning

    NASA Astrophysics Data System (ADS)

    Grazioli, Gianmarc; Butts, Carter T.; Andricioaei, Ioan

    2017-10-01

    Several recent implementations of algorithms for sampling reaction pathways employ a strategy for placing interfaces or milestones across the reaction coordinate manifold. Interfaces can be introduced such that the full feature space describing the dynamics of a macromolecule is divided into Voronoi (or other) cells, and the global kinetics of the molecular motions can be calculated from the set of fluxes through the interfaces between the cells. Although some methods of this type are exact for an arbitrary set of cells, in practice, the calculations will converge fastest when the interfaces are placed in regions where they can best capture transitions between configurations corresponding to local minima. The aim of this paper is to introduce a fully automated machine-learning algorithm for defining a set of cells for use in kinetic sampling methodologies based on subdividing the dynamical feature space; the algorithm requires no intuition about the system or input from the user and scales to high-dimensional systems.

  16. Automated placement of interfaces in conformational kinetics calculations using machine learning.

    PubMed

    Grazioli, Gianmarc; Butts, Carter T; Andricioaei, Ioan

    2017-10-21

    Several recent implementations of algorithms for sampling reaction pathways employ a strategy for placing interfaces or milestones across the reaction coordinate manifold. Interfaces can be introduced such that the full feature space describing the dynamics of a macromolecule is divided into Voronoi (or other) cells, and the global kinetics of the molecular motions can be calculated from the set of fluxes through the interfaces between the cells. Although some methods of this type are exact for an arbitrary set of cells, in practice, the calculations will converge fastest when the interfaces are placed in regions where they can best capture transitions between configurations corresponding to local minima. The aim of this paper is to introduce a fully automated machine-learning algorithm for defining a set of cells for use in kinetic sampling methodologies based on subdividing the dynamical feature space; the algorithm requires no intuition about the system or input from the user and scales to high-dimensional systems.
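    As one illustrative way to automate cell definition in feature space (not necessarily the algorithm introduced in the paper), the sketch below clusters sampled configurations with k-means; the cluster centers then induce a Voronoi tessellation that can serve as the cell set for milestoning-type kinetics calculations. The synthetic two-basin data and the number of clusters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "trajectory" samples in a 2-D feature space with two metastable basins.
basin_a = rng.normal(loc=[-2.0, 0.0], scale=0.4, size=(500, 2))
basin_b = rng.normal(loc=[+2.0, 0.0], scale=0.4, size=(500, 2))
samples = np.vstack([basin_a, basin_b])

# Cluster the samples; the cluster centers define a Voronoi tessellation of
# the feature space, i.e. a candidate set of cells for kinetic sampling.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(samples)

def cell_index(x):
    """Voronoi cell (nearest cluster center) containing configuration x."""
    return int(np.argmin(np.linalg.norm(kmeans.cluster_centers_ - x, axis=1)))

print(cell_index(np.array([0.0, 0.0])))
```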

  17. GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise Paul

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: (1) the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; (2) the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and (3) the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects subsist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary. 09/2016: Tables 6 and 8 updated; AGR-2 input data added.

  18. Computational Modeling of Mixed Solids for CO2 Capture Sorbents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Yuhua

    2015-01-01

    Since current technologies for capturing CO2 to fight global climate change are still too energy intensive, there is a critical need for development of new materials that can capture CO2 reversibly with acceptable energy costs. Accordingly, solid sorbents have been proposed to be used for CO2 capture applications through a reversible chemical transformation. By combining thermodynamic database mining with first principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO2 sorbent candidates from the vast array of possible solid materials has been proposed and validated. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure changes were further used to evaluate the equilibrium properties for the CO2 adsorption/desorption cycles. According to the requirements imposed by the pre- and post-combustion technologies and based on our calculated thermodynamic properties for the CO2 capture reactions by the solids of interest, we were able to screen only those solid materials for which lower capture energy costs are expected at the desired pressure and temperature conditions. Only those selected CO2 sorbent candidates were further considered for experimental validations. The ab initio thermodynamic technique has the advantage of identifying thermodynamic properties of CO2 capture reactions without any experimental input beyond crystallographic structural information of the solid phases involved. Such a methodology can not only be used to search for good candidates from existing databases of solid materials, but can also provide guidelines for synthesizing new materials. In this presentation, we apply our screening methodology to mixed solid systems to adjust the turnover temperature, helping to develop CO2 capture technologies.
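    A minimal sketch of the kind of thermodynamic screening criterion described above, assuming a capture reaction solid + CO2 -> product with temperature-independent reaction enthalpy and entropy; the numerical values are merely of the order of CaO carbonation and are not results from the screening study.

```python
import numpy as np

R = 8.314  # J/(mol K)

def turnover_temperature(dH, dS, p_co2=1.0, p_ref=1.0):
    """Temperature at which Delta G = 0 for a capture reaction solid + CO2 -> product.

    dH, dS : standard reaction enthalpy (J/mol) and entropy (J/(mol K)),
             assumed temperature-independent over the range of interest.
    p_co2  : CO2 partial pressure in the same units as p_ref.

    Delta G(T, p) = dH - T*dS - R*T*ln(p_co2/p_ref); setting it to zero gives:
    """
    return dH / (dS + R * np.log(p_co2 / p_ref))

# Illustrative numbers (roughly CaO + CO2 -> CaCO3), not fitted data.
print(turnover_temperature(dH=-178e3, dS=-160.0, p_co2=1.0))   # ~1100 K at 1 bar
print(turnover_temperature(dH=-178e3, dS=-160.0, p_co2=0.1))   # lower T at lower pCO2
```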

  19. Computer code for single-point thermodynamic analysis of hydrogen/oxygen expander-cycle rocket engines

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.; Jones, Scott M.

    1991-01-01

    This analysis and this computer code apply to full, split, and dual expander cycles. Heat regeneration from the turbine exhaust to the pump exhaust is allowed. The combustion process is modeled as one of chemical equilibrium in an infinite-area or a finite-area combustor. Gas composition in the nozzle may be either equilibrium or frozen during expansion. This report, which serves as a users guide for the computer code, describes the system, the analysis methodology, and the program input and output. Sample calculations are included to show effects of key variables such as nozzle area ratio and oxidizer-to-fuel mass ratio.

  20. Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter

    2015-04-01

    Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimation of potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative based global sensitivity measures (Sobol' & Kucherenko '09) can be practically used to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and usually increases the computational complexity linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative based sensitivities, as is already done in various other domains such as meteorology or aerodynamics, without significant increase in the computational complexity required for the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
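    To make the idea of algorithmic differentiation concrete, the sketch below implements a minimal forward-mode AD type using dual numbers and applies it to a toy ground-motion-style relation with hypothetical coefficients; it is not the AD tool or the models used in the study.

```python
import math

class Dual:
    """Minimal forward-mode AD value: f carries the value, d the derivative."""
    def __init__(self, f, d=0.0):
        self.f, self.d = f, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.f + o.f, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.f * o.f, self.f * o.d + self.d * o.f)
    __rmul__ = __mul__

def log(x):
    # Chain rule for the natural logarithm: d(ln x) = dx / x.
    return Dual(math.log(x.f), x.d / x.f) if isinstance(x, Dual) else math.log(x)

# Toy ground-motion-style model: ln(Y) = c0 + c1*M + c2*ln(R) (hypothetical coefficients).
def ln_y(m, r, c=(-1.0, 0.9, -1.3)):
    return c[0] + c[1] * m + c[2] * log(r)

# Sensitivity of ln(Y) to magnitude M at (M=6, R=20 km): seed dM/dM = 1.
out = ln_y(Dual(6.0, 1.0), Dual(20.0, 0.0))
print(out.f, out.d)   # value and d ln(Y)/dM (here simply c1 = 0.9)
```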

  1. 12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ....3.2, Mortgage Amortization Schedule Inputs 3-32, Loan Group Inputs for Mortgage Amortization... Prepayment Explanatory Variables F 3.6.3.5.2, Multifamily Default and Prepayment Inputs 3-38, Loan Group... Group inputs for Gross Loss Severity F 3.3.4, Interest Rates Outputs3.6.3.3.4, Mortgage Amortization...

  2. 12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....3.2, Mortgage Amortization Schedule Inputs 3-32, Loan Group Inputs for Mortgage Amortization... Prepayment Explanatory Variables F 3.6.3.5.2, Multifamily Default and Prepayment Inputs 3-38, Loan Group... Group inputs for Gross Loss Severity F 3.3.4, Interest Rates Outputs3.6.3.3.4, Mortgage Amortization...

  3. 12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ....3.2, Mortgage Amortization Schedule Inputs 3-32, Loan Group Inputs for Mortgage Amortization... Prepayment Explanatory Variables F 3.6.3.5.2, Multifamily Default and Prepayment Inputs 3-38, Loan Group... Group inputs for Gross Loss Severity F 3.3.4, Interest Rates Outputs3.6.3.3.4, Mortgage Amortization...

  4. 12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....3.2, Mortgage Amortization Schedule Inputs 3-32, Loan Group Inputs for Mortgage Amortization... Prepayment Explanatory Variables F 3.6.3.5.2, Multifamily Default and Prepayment Inputs 3-38, Loan Group... Group inputs for Gross Loss Severity F 3.3.4, Interest Rates Outputs3.6.3.3.4, Mortgage Amortization...

  5. Transported Geothermal Energy Technoeconomic Screening Tool - Calculation Engine

    DOE Data Explorer

    Liu, Xiaobing

    2016-09-21

    This calculation engine estimates the technoeconomic feasibility of transported geothermal energy projects. The TGE screening tool (geotool.exe) takes input from an input file (input.txt) and writes results to an output file (output.txt). Both the input and output files reside in the same folder as geotool.exe. To use the tool, the input file containing adequate information for the case should be prepared in the format explained below and placed in the same folder as geotool.exe. Then geotool.exe can be executed, which will generate an output.txt file in the same folder containing all key calculation results. The format and content of the output file are explained below as well.

  6. A data driven approach using Takagi-Sugeno models for computationally efficient lumped floodplain modelling

    NASA Astrophysics Data System (ADS)

    Wolfs, Vincent; Willems, Patrick

    2013-10-01

    Many applications in support of water management decisions require hydrodynamic models with limited calculation time, including real time control of river flooding, uncertainty and sensitivity analyses by Monte-Carlo simulations, and long term simulations in support of the statistical analysis of the model simulation results (e.g. flood frequency analysis). Several computationally efficient hydrodynamic models exist, but little attention is given to the modelling of floodplains. This paper presents a methodology that can emulate output from a full hydrodynamic model by predicting one or several levels in a floodplain, together with the flow rate between river and floodplain. The overtopping of the embankment is modelled as an overflow at a weir. Adaptive neuro fuzzy inference systems (ANFIS) are exploited to cope with the varying factors affecting the flow. Different input sets and identification methods are considered in model construction. Because of the dual use of simplified physically based equations and data-driven techniques, the ANFIS consist of very few rules with a low number of input variables. A second calculation scheme can be followed for exceptionally large floods. The obtained nominal emulation model was tested for four floodplains along the river Dender in Belgium. Results show that the obtained models are accurate with low computational cost.
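    The embankment overtopping is modelled as overflow at a weir; a minimal sketch of a standard rectangular-weir discharge relation is shown below, with an illustrative discharge coefficient and geometry (the actual coefficients and geometry of the Dender floodplain models are not reproduced here).

```python
import math

def weir_overflow(h_river, crest_level, width, cd=0.6, g=9.81):
    """Discharge over an embankment treated as a rectangular weir (m^3/s).

    h_river:     water level in the river (m above datum).
    crest_level: embankment crest level (m above datum).
    width:       effective overflow width (m).
    cd:          discharge coefficient (illustrative value).
    """
    head = h_river - crest_level
    if head <= 0.0:
        return 0.0                       # no overtopping
    return cd * (2.0 / 3.0) * math.sqrt(2.0 * g) * width * head ** 1.5

print(weir_overflow(h_river=12.4, crest_level=12.0, width=50.0))
```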

  7. RDS - A systematic approach towards system thermal hydraulics input code development for a comprehensive deterministic safety analysis

    NASA Astrophysics Data System (ADS)

    Salim, Mohd Faiz; Roslan, Ridha; Ibrahim, Mohd Rizal Mamat @

    2014-02-01

    Deterministic Safety Analysis (DSA) is one of the mandatory requirements conducted for the Nuclear Power Plant licensing process, with the aim of ensuring safety compliance with relevant regulatory acceptance criteria. DSA is a technique whereby a set of conservative deterministic rules and requirements are applied for the design and operation of facilities or activities. Computer codes are normally used to assist in performing all required analysis under DSA. To ensure a comprehensive analysis, the conduct of DSA should follow a systematic approach. One of the methodologies proposed is the Standardized and Consolidated Reference Experimental (and Calculated) Database (SCRED) developed by the University of Pisa. Based on this methodology, the use of a Reference Data Set (RDS) as a pre-requisite reference document for developing input nodalization was proposed. This paper describes the application of the RDS with the purpose of assessing its effectiveness. Two RDS documents were developed for the Integral Test Facility LOBI-MOD2 and the associated Test A1-83. Data and information from various reports and drawings were referred to in preparing the RDS. The results showed that by developing the RDS it became possible to consolidate all relevant information in a single document. This is beneficial as it enables preservation of information, promotes quality assurance, allows traceability, facilitates continuous improvement, promotes the resolution of contradictions, and finally assists in developing the thermal-hydraulic input regardless of which code is selected. However, some disadvantages were also recognized, such as the need for experience in making engineering judgments, the language barrier in accessing foreign information, and limitation of resources. Some possible improvements are suggested to overcome these challenges.

  8. Accurate description of charge transport in organic field effect transistors using an experimentally extracted density of states

    NASA Astrophysics Data System (ADS)

    Roelofs, W. S. C.; Mathijssen, S. G. J.; Janssen, R. A. J.; de Leeuw, D. M.; Kemerink, M.

    2012-02-01

    The width and shape of the density of states (DOS) are key parameters to describe the charge transport of organic semiconductors. Here we extract the DOS using scanning Kelvin probe microscopy on a self-assembled monolayer field effect transistor (SAMFET). The semiconductor is only a single monolayer which has allowed extraction of the DOS over a wide energy range, pushing the methodology to its fundamental limit. The measured DOS consists of an exponential distribution of deep states with additional localized states on top. The charge transport has been calculated in a generic variable range-hopping model that allows any DOS as input. We show that with the experimentally extracted DOS an excellent agreement between measured and calculated transfer curves is obtained. This shows that detailed knowledge of the density of states is a prerequisite to consistently describe the transfer characteristics of organic field effect transistors.

  9. Seismic behavior of a low-rise horizontal cylindrical tank

    NASA Astrophysics Data System (ADS)

    Fiore, Alessandra; Rago, Carlo; Vanzi, Ivo; Greco, Rita; Briseghella, Bruno

    2018-05-01

    Cylindrical storage tanks are widely used for various types of liquids, including hazardous contents, thus requiring suitable and careful design for seismic actions. The study herein presented deals with the dynamic analysis of a ground-based horizontal cylindrical tank containing butane and with its safety verification. The analyses are based on a detailed finite element (FE) model; a simplified one-degree-of-freedom idealization is also set up and used for verification of the FE results. Particular attention is paid to sloshing and asynchronous seismic input effects. Sloshing effects are investigated according to the current literature state of the art. An efficient methodology based on an "impulsive-convective" decomposition of the container-fluid motion is adopted for the calculation of the seismic force. The effects of asynchronous ground motion are studied by suitable pseudo-static analyses. Comparison between seismic action effects, obtained with and without consideration of sloshing and asynchronous seismic input, shows a rather important influence of these conditions on the final results.

  10. Data and results of a laboratory investigation of microprocessor upset caused by simulated lightning-induced analog transients

    NASA Technical Reports Server (NTRS)

    Belcastro, C. M.

    1984-01-01

    A methodology was developed to assess the upset susceptibility/reliability of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general purpose microprocessor were studied. The upset tests involved the random input of analog transients, which model lightning-induced signals, onto interface lines of an 8080-based microcomputer, from which upset error data were recorded. The program code running on the microprocessor during the tests is designed to exercise all of the machine cycles and memory addressing techniques implemented in the 8080 central processing unit. A statistical analysis is presented in which possible correlations are established between the probability of upset occurrence and transient signal inputs during specific processing states and operations. A stochastic upset susceptibility model for the 8080 microprocessor is presented. The susceptibility of this microprocessor to upset, once analog transients have entered the system, is determined analytically by calculating the state probabilities of the stochastic model.

  11. Surface models for coupled modelling of runoff and sewer flow in urban areas.

    PubMed

    Ettrich, N; Steiner, K; Thomas, M; Rothe, R

    2005-01-01

    Traditional methods fail to simulate the complete flow process in urban areas resulting from heavy rainfall, as required by the European Standard EN-752, since the bi-directional coupling between sewer and surface is not properly handled. The new methodology, developed in the EUREKA-project RisUrSim, solves this problem by carrying out the runoff on the basis of shallow water equations solved on high-resolution surface grids. Exchange nodes between the sewer and the surface, like inlets and manholes, are located in the computational grid, and water leaving the sewer in case of surcharge is further distributed on the surface. Dense topographical information is needed to build a model suitable for hydrodynamic runoff calculations; in urban areas, in addition, many line-shaped elements like houses, curbs, etc. guide the runoff of water and require polygonal input. Airborne data collection methods offer a great chance to economically gather densely sampled input data.

  12. Net thrust calculation sensitivity of an afterburning turbofan engine to variations in input parameters

    NASA Technical Reports Server (NTRS)

    Hughes, D. L.; Ray, R. J.; Walton, J. T.

    1985-01-01

    The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
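    A minimal sketch of how such influence coefficients can be formed by perturbing one input at a time by 1 percent; the toy thrust model and parameter names are hypothetical stand-ins for the F404 net thrust calculation methods.

```python
def influence_coefficient(thrust_model, baseline_inputs, name, delta=0.01):
    """Percent change in computed net thrust per percent change in one input.

    thrust_model:    callable mapping a dict of inputs to net thrust.
    baseline_inputs: dict of nominal measured parameters.
    name:            key of the parameter being perturbed.
    delta:           fractional perturbation (0.01 = 1 percent).
    """
    fn0 = thrust_model(baseline_inputs)
    perturbed = dict(baseline_inputs)
    perturbed[name] *= (1.0 + delta)
    fn1 = thrust_model(perturbed)
    return (fn1 - fn0) / fn0 / delta

# Toy momentum-balance thrust model (hypothetical): FN = mdot * (v_exit - v_flight).
toy = lambda p: p["mdot"] * (p["v_exit"] - p["v_flight"])
base = {"mdot": 70.0, "v_exit": 600.0, "v_flight": 250.0}
for key in base:
    print(key, round(influence_coefficient(toy, base, key), 3))
```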

  13. Evaluating Multi-Input/Multi-Output Digital Control Systems

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Wieseman, Carol D.; Hoadley, Sherwood T.; Mukhopadhyay, Vivek

    1994-01-01

    A controller-performance-evaluation (CPE) methodology for multi-input/multi-output (MIMO) digital control systems was developed. The procedures identify potentially destabilizing controllers and confirm satisfactory performance of stabilizing ones. The methodology is generic and can be used in many types of multi-loop digital-controller applications, including digital flight-control systems, digitally controlled spacecraft structures, and actively controlled wind-tunnel models. It is also applicable to other complex, highly dynamic digital controllers, such as those in high-performance robot systems.

  14. 40 CFR 75.71 - Specific provisions for monitoring NOX and heat input for the purpose of calculating NOX mass...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... and heat input for the purpose of calculating NOX mass emissions. 75.71 Section 75.71 Protection of... MONITORING NOX Mass Emissions Provisions § 75.71 Specific provisions for monitoring NOX and heat input for the purpose of calculating NOX mass emissions. (a) Coal-fired units. The owner or operator of a coal...

  15. Methodological accuracy of image-based electron density assessment using dual-energy computed tomography.

    PubMed

    Möhler, Christian; Wohlfahrt, Patrick; Richter, Christian; Greilich, Steffen

    2017-06-01

    Electron density is the most important tissue property influencing photon and ion dose distributions in radiotherapy patients. Dual-energy computed tomography (DECT) enables the determination of electron density by combining the information on photon attenuation obtained at two different effective x-ray energy spectra. Most algorithms suggested so far use the CT numbers provided after image reconstruction as input parameters, i.e., are image-based. To explore the accuracy that can be achieved with these approaches, we quantify the intrinsic methodological and calibration uncertainty of the seemingly simplest approach. In the studied approach, electron density is calculated with a one-parametric linear superposition ('alpha blending') of the two DECT images, which is shown to be equivalent to an affine relation between the photon attenuation cross sections of the two x-ray energy spectra. We propose to use the latter relation for empirical calibration of the spectrum-dependent blending parameter. For a conclusive assessment of the electron density uncertainty, we chose to isolate the purely methodological uncertainty component from CT-related effects such as noise and beam hardening. Analyzing calculated spectrally weighted attenuation coefficients, we find universal applicability of the investigated approach to arbitrary mixtures of human tissue with an upper limit of the methodological uncertainty component of 0.2%, excluding high-Z elements such as iodine. The proposed calibration procedure is bias-free and straightforward to perform using standard equipment. Testing the calibration on five published data sets, we obtain very small differences in the calibration result in spite of different experimental setups and CT protocols used. Employing a general calibration per scanner type and voltage combination is thus conceivable. Given the high suitability for clinical application of the alpha-blending approach in combination with a very small methodological uncertainty, we conclude that further refinement of image-based DECT-algorithms for electron density assessment is not advisable. © 2017 American Association of Physicists in Medicine.
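    A minimal sketch of the alpha-blending idea, assuming one common affine parameterization of the blend and hypothetical calibration constants; the paper's exact parameterization and calibrated values may differ.

```python
def relative_electron_density(hu_low, hu_high, alpha, a=1.0, b=1.0):
    """Electron density relative to water from an alpha-blend of two CT images.

    hu_low, hu_high: CT numbers (HU) of the same voxel at the low and high
                     x-ray spectra.
    alpha:           spectrum-dependent blending parameter (to be calibrated).
    a, b:            affine calibration constants (illustrative defaults).

    One common way to write the blend (an assumption, not necessarily the
    paper's exact form):
        dHU       = (1 + alpha) * hu_high - alpha * hu_low
        rho_e,rel ~= a * dHU / 1000 + b
    """
    d_hu = (1.0 + alpha) * hu_high - alpha * hu_low
    return a * d_hu / 1000.0 + b

# Water (0 HU in both images) maps to 1.0 by construction.
print(relative_electron_density(0.0, 0.0, alpha=0.5))
```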

  16. Prediction of XV-15 tilt rotor discrete frequency aeroacoustic noise with WOPWOP

    NASA Technical Reports Server (NTRS)

    Coffen, Charles D.; George, Albert R.

    1990-01-01

    The results, methodology, and conclusions of noise prediction calculations carried out to study several possible discrete frequency harmonic noise mechanisms of the XV-15 Tilt Rotor Aircraft in hover and helicopter mode forward flight are presented. The mechanisms studied were thickness and loading noise. In particular, the loading noise caused by flow separation and the fountain/ground plane effect were predicted with calculations made using WOPWOP, a noise prediction program developed by NASA Langley. The methodology was to model the geometry and aerodynamics of the XV-15 rotor blades in hover and steady level flight and then create corresponding FORTRAN subroutines which were used as input for WOPWOP. The models are described and the simplifying assumptions made in creating them are evaluated, and the results of the computations are presented. The computations lead to the following conclusions: The fountain/ground plane effect is an important source of aerodynamic noise for the XV-15 in hover. Unsteady flow separation from the airfoil passing through the fountain at high angles of attack significantly affects the predicted sound spectra and may be an important noise mechanism for the XV-15 in hover mode. The various models developed did not predict the sound spectra in helicopter forward flight. The experimental spectra indicate the presence of blade vortex interactions, which were not modeled in these calculations. A need is indicated for further study and development of more accurate aerodynamic models, including unsteady stall in hover and blade vortex interactions in forward flight.

  17. Error of the modelled peak flow of the hydraulically reconstructed 1907 flood of the Ebro River in Xerta (NE Iberian Peninsula)

    NASA Astrophysics Data System (ADS)

    Lluís Ruiz-Bellet, Josep; Castelltort, Xavier; Carles Balasch, J.; Tuset, Jordi

    2016-04-01

    The estimation of the uncertainty of the results of the hydraulic modelling has been deeply analysed, but no clear methodological procedures as to its determination have been formulated when applied to historical hydrology. The main objective of this study was to calculate the uncertainty of the resulting peak flow of a typical historical flood reconstruction. The secondary objective was to identify the input variables that influenced the result the most and their contribution to peak flow total error. The uncertainty of 21-23 October 1907 flood of the Ebro River (NE Iberian Peninsula) in the town of Xerta (83,000 km2) was calculated with a series of local sensitivity analyses of the main variables affecting the resulting peak flow. Besides, in order to see to what degree the result depended on the chosen model, the HEC-RAS resulting peak flow was compared to the ones obtained with the 2D model Iber and with Manning's equation. The peak flow of 1907 flood in the Ebro River in Xerta, reconstructed with HEC-RAS, was 11500 m3·s-1 and its total error was ±31%. The most influential input variable over HEC-RAS peak flow results was water height; however, the one that contributed the most to peak flow error was Manning's n, because its uncertainty was far greater than water height's. The main conclusion is that, to ensure the lowest peak flow error, the reliability and precision of the flood mark should be thoroughly assessed. The peak flow was 12000 m3·s-1 when calculated with the 2D model Iber and 11500 m3·s-1 when calculated with the Manning equation.

  18. Designing the Alluvial Riverbeds in Curved Paths

    NASA Astrophysics Data System (ADS)

    Macura, Viliam; Škrinár, Andrej; Štefunková, Zuzana; Muchová, Zlatica; Majorošová, Martina

    2017-10-01

    The paper presents the method of determining the shape of the riverbed in curves of the watercourse, which is based on the method of Ikeda (1975) developed for a slightly curved path in sandy riverbed. Regulated rivers have essentially slightly and smoothly curved paths; therefore, this methodology provides the appropriate basis for river restoration. Based on the research in the experimental reach of the Holeška Brook and several alluvial mountain streams the methodology was adjusted. The method also takes into account other important characteristics of bottom material - the shape and orientation of the particles, settling velocity and drag coefficients. Thus, the method is mainly meant for the natural sand-gravel material, which is heterogeneous and the particle shape of the bottom material is very different from spherical. The calculation of the river channel in the curved path provides the basis for the design of optimal habitat, but also for the design of foundations of armouring of the bankside of the channel. The input data is adapted to the conditions of design practice.

  19. Development of a semi-automated model identification and calibration tool for conceptual modelling of sewer systems.

    PubMed

    Wolfs, Vincent; Villazon, Mauricio Florencio; Willems, Patrick

    2013-01-01

    Applications such as real-time control, uncertainty analysis and optimization require an extensive number of model iterations. Full hydrodynamic sewer models are not sufficient for these applications due to the excessive computation time. Simplifications are consequently required. A lumped conceptual modelling approach results in a much faster calculation. The process of identifying and calibrating the conceptual model structure could, however, be time-consuming. Moreover, many conceptual models lack accuracy, or do not account for backwater effects. To overcome these problems, a modelling methodology was developed which is suited for semi-automatic calibration. The methodology is tested for the sewer system of the city of Geel in the Grote Nete river basin in Belgium, using both synthetic design storm events and long time series of rainfall input. A MATLAB/Simulink(®) tool was developed to guide the modeller through the step-wise model construction, reducing significantly the time required for the conceptual modelling process.

  20. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  1. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
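    A minimal sketch of the underlying idea, automated adjustment of uncertain model parameters to minimize the model-test discrepancy, using a toy surrogate in place of the JWST thermal model; the parameter names, sensor values, and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical measured steady-state temperatures at a few sensors (K).
t_measured = np.array([120.0, 95.0, 60.0])

def thermal_model(params):
    """Toy surrogate for a thermal model: predicted sensor temperatures (K)
    as a function of two uncertain parameters (e.g. a contact conductance
    and an effective emissivity). Stands in for the full model evaluation."""
    g, eps = params
    return np.array([100.0 + 30.0 * g,
                     80.0 + 20.0 * g - 10.0 * eps,
                     50.0 + 15.0 * eps])

def discrepancy(params):
    """Sum of squared model-test residuals, the quantity being minimized."""
    return float(np.sum((thermal_model(params) - t_measured) ** 2))

result = minimize(discrepancy, x0=[0.5, 0.5], bounds=[(0.0, 2.0), (0.0, 1.0)])
print(result.x, result.fun)
```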

  2. Probabilistic analysis of the torsional effects on the tall building resistance due to earthquake event

    NASA Astrophysics Data System (ADS)

    Králik, Juraj; Králik, Juraj

    2017-07-01

    The paper presents the results from the deterministic and probabilistic analysis of the accidental torsional effect of reinforced concrete tall buildings due to an earthquake event. The core-column structural system was considered with various configurations in plane. The methodology of the seismic analysis of the building structures in Eurocode 8 and JCSS 2000 is discussed. The possibilities of utilizing the LHS method to analyze extensive and robust tasks in FEM are presented. The influence of the various input parameters (material, geometry, soil, masses and others) is considered. The deterministic and probabilistic analyses of the seismic resistance of the structure were carried out in the ANSYS program.
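    A minimal sketch of Latin Hypercube sampling of uncertain structural inputs of the kind listed above, using SciPy's qmc module and a toy response function in place of the ANSYS model; the parameter names and ranges are illustrative.

```python
import numpy as np
from scipy.stats import qmc

# Latin Hypercube sample of uncertain inputs (illustrative names and ranges):
# concrete modulus E (GPa), soil stiffness k (MN/m), floor mass m (t).
sampler = qmc.LatinHypercube(d=3, seed=42)
unit_sample = sampler.random(n=200)
lower, upper = [28.0, 50.0, 300.0], [38.0, 150.0, 400.0]
inputs = qmc.scale(unit_sample, lower, upper)

def torsional_response(x):
    """Placeholder for one FEM seismic analysis returning a response measure."""
    e_mod, k_soil, mass = x
    return mass / (e_mod * np.sqrt(k_soil))   # toy response, not the ANSYS model

responses = np.apply_along_axis(torsional_response, 1, inputs)
print(responses.mean(), np.percentile(responses, 95))
```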

  3. Uncovering productive morphosyntax in French-learning toddlers: a multidimensional methodology perspective.

    PubMed

    Barrière, Isabelle; Goyet, Louise; Kresh, Sarah; Legendre, Géraldine; Nazzi, Thierry

    2016-09-01

    The present study applies a multidimensional methodological approach to the study of the acquisition of morphosyntax. It focuses on evaluating the degree of productivity of an infrequent subject-verb agreement pattern in the early acquisition of French and considers the explanatory role played by factors such as input frequency, semantic transparency of the agreement markers, and perceptual factors in accounting for comprehension of agreement in number (singular vs. plural) in an experimental setting. Results on a pointing task involving pseudo-verbs demonstrate significant comprehension of both singular and plural agreement in children aged 2;6. The experimental results are shown not to reflect input frequency, input marker reliability on its own, or lexically driven knowledge. We conclude that toddlers have knowledge of subject-verb agreement at age 2;6 which is abstract and productive despite its paucity in the input.

  4. Composition Optimization of Lithium-Based Ternary Alloy Blankets for Fusion Reactors

    NASA Astrophysics Data System (ADS)

    Jolodosky, Alejandra

    The goal of this dissertation is to examine the neutronic properties of a novel type of fusion reactor blanket material in the form of lithium-based ternary alloys. Pure liquid lithium, first proposed as a blanket for fusion reactors, is utilized as both a tritium breeder and a coolant. It has many attractive features such as high heat transfer and low corrosion properties, but most importantly, it has a very high tritium solubility and results in very low levels of tritium permeation throughout the facility infrastructure. However, lithium metal vigorously reacts with air and water and presents plant safety concerns including degradation of the concrete containment structure. The work of this thesis began as a collaboration with Lawrence Livermore National Laboratory in an effort to develop a lithium-based ternary alloy that can maintain the beneficial properties of lithium while reducing the reactivity concerns. The first studies down-selected alloys based on the analysis and performance of both neutronic and activation characteristics. First, 3-D Monte Carlo calculations were performed to evaluate two main neutronics performance parameters for the blanket: tritium breeding ratio (TBR) and energy multiplication factor (EMF). Alloys with adequate results based on TBR and EMF calculations were considered for activation analysis. Activation simulations were executed with 50 years of irradiation and 300 years of cooling. It was discovered that bismuth is a poor choice because it yields the highest decay heat, contact dose rates, and accident doses. In addition, it does not meet the waste disposal ratings (WDR). The straightforward approach to obtain Monte Carlo TBR and EMF results required 231 simulations per alloy and became computationally expensive, time consuming, and inefficient. Consequently, alternate methods were pursued. A collision history-based methodology recently developed for the Monte Carlo code Serpent calculates perturbation effects on practically any quantity of interest. This allows multiple responses to be calculated by perturbing the input parameter without having to directly perform separate calculations. The approach was originally created strictly for critical systems, but was utilized as the basis of a new methodology implemented for fixed source problems, known as Exact Perturbation Theory (EPT). EPT can calculate the tritium breeding ratio response caused by a perturbation in the composition of the ternary alloy. The drawback of the EPT methodology is that it cannot account for the collision history at large perturbations and thus produces results with high uncertainties. Preliminary analysis for EPT with Serpent for a LiPbBa alloy demonstrated that 25 simulations per ternary must be completed so that most uncertainties calculated at large perturbations do not exceed 0.05. To reduce the uncertainties of the results, the generalized least squares (GLS) method was implemented to replace imprecise TBR results with more accurate ones. It was demonstrated that a combination of EPT Serpent calculations with the application of GLS for results with high uncertainties is the most effective and produces values with the highest fidelity. The scheme finds an alloy composition that has a TBR within a range of interest, while imposing a constraint on the EMF and a requirement to minimize lithium concentration. It involved a three-level iteration process, with each level zooming in closer on the area of interest to fine-tune the composition. Both alloys studied, LiPbBa and LiSnZn, had optimized compositions close to the leftmost edge of the ternary, increasing the complexity of optimization due to the highly uncertain results found in these regions. Additional GPT methodologies were considered for optimization studies, specifically with the use of deterministic codes. Currently, an optimization deterministic code, SMORES, is available in the SCALE code package, but only for critical systems. Subsequently, it was desired to modify this code to solve problems for fusion reactors, similar to what was done in SWAN. So far, the fixed and adjoint source declaration and definition were added to the input file. As a result, alterations were made to the source code so that it can read in and utilize the new input information. Due to time constraints, only a detailed outline has been created that includes the steps one has to take to make the transition of SMORES from critical systems to fixed source problems. Additional time constraints limited the goal to perform chemical reactivity experiments on candidate alloys. Nevertheless, a review of past experiments was done and it was determined that large-scale experiments seem more appropriate for the purpose of this work, as they would better depict how the alloys would behave in the actual reactor environment. Both air and water reactions should be considered when examining the potential chemical reactions of the lithium alloy.
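    For reference, the generalized least-squares estimator that underlies this kind of adjustment can be written in its generic textbook form (the dissertation's specific implementation may differ):

    $$ \hat{\beta} = \left(X^{\mathsf T}\Omega^{-1}X\right)^{-1} X^{\mathsf T}\Omega^{-1}\,y, $$

    where $y$ collects the noisy Monte Carlo TBR estimates, $X$ relates them to the underlying response being fitted, and $\Omega$ is their covariance matrix, so that estimates with large statistical uncertainty are down-weighted.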

  5. Pragmatic geometric model evaluation

    NASA Astrophysics Data System (ADS)

    Pamer, Robert

    2015-04-01

    Quantification of subsurface model reliability is mathematically and technically demanding, as there are many different sources of uncertainty and some of the factors can be assessed only in a subjective way. For many practical applications in industry or risk assessment (e.g. geothermal drilling), a quantitative estimation of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data like geological maps, borehole data and conceptually driven construction of subsurface elements (e.g. fault network). Within the context of the trans-European project "GeoMol", uncertainty analysis has to be very pragmatic, also because of differing data rights, data policies and modelling software among the project partners. In a case study, a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step, several models of the same volume of interest have been calculated by omitting successively more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space, and as a consequence areas of geometric complexity are identified. These areas are usually very data-sensitive; hence geometric variability between individual data points in these areas is higher than in areas of low structural complexity. Instead of calculating a multitude of different models by varying some input data or parameters, as is done in Monte Carlo simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to calculate essentially two model variations that can be seen as geometric extremes of all available input data. This does not lead to a probability distribution for the spatial position of geometric elements, but it defines zones of major (or minor, respectively) geometric variations due to data uncertainty. Both model evaluations are then analyzed together to give ranges of possible model outcomes in metric units.

  6. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  7. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  8. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  9. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  10. 43 CFR 11.83 - Damage determination phase-use value methodologies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... subject to standards governing its application? (vi) Are methodological inputs and assumptions supported... used for unique or difficult design and estimating conditions. This methodology requires the construction of a simple design for which an estimate can be found and applied to the unique or difficult...

  11. 40 CFR 75.71 - Specific provisions for monitoring NOX and heat input for the purpose of calculating NOX mass...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... and heat input for the purpose of calculating NOX mass emissions. 75.71 Section 75.71 Protection of... MONITORING NOX Mass Emissions Provisions § 75.71 Specific provisions for monitoring NOX and heat input for... and for a flow monitoring system and an O2 or CO2 diluent gas monitoring system to measure heat input...

  12. Alternative Fuels Data Center: Vehicle Cost Calculator Assumptions and Methodology

    Science.gov Websites


  13. Decision Aids for Multiple-Decision Disease Management as Affected by Weather Input Errors

    USDA-ARS?s Scientific Manuscript database

    Many disease management decision support systems (DSS) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation or estimation from off-site sources, may affect model calculations and manage...

  14. Measuring geographical accessibility to rural and remote health care services: Challenges and considerations.

    PubMed

    Shah, Tayyab Ikram; Milosavljevic, Stephan; Bath, Brenna

    2017-06-01

    This research is focused on methodological challenges and considerations associated with the estimation of the geographical aspects of access to healthcare, with a focus on rural and remote areas. It is assumed that GIS-based accessibility measures for rural healthcare services will vary across geographic units of analysis and estimation techniques, which could influence the interpretation of spatial access to rural healthcare services. Estimations of geographical accessibility depend on variations in the following three parameters: 1) quality of input data; 2) accessibility method; and 3) geographical area. This research investigated the spatial distributions of physiotherapists (PTs) in comparison to family physicians (FPs) across Saskatchewan, Canada. The three-step floating catchment area (3SFCA) method was applied to calculate the accessibility scores for both PT and FP services at two different geographical units. Accessibility scores were also compared with simple healthcare provider-to-population ratios. The results vary considerably depending on the accessibility method used and the choice of geographical area unit for measuring geographical accessibility for both FP and PT services. These findings raise intriguing questions regarding the nature and extent of technical issues and methodological considerations that can affect GIS-based measures in health services research and planning. This study demonstrates how the selection of geographical areal units and different methods for measuring geographical accessibility could affect the distribution of healthcare resources in rural areas. These methodological issues have implications for determining where there is reduced access, which will ultimately impact health human resource priorities and policies. Copyright © 2017 Elsevier Ltd. All rights reserved.
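    The 3SFCA method builds on the basic floating catchment area idea: compute a supply-to-demand ratio within each provider's catchment, then sum the ratios reachable from each population location. The sketch below shows only that simpler two-step (2SFCA) core, with made-up populations, provider counts and travel times; the 3SFCA variant additionally weights by distance and adjusts for competition among catchments.

    ```python
    # Simplified two-step floating catchment area (2SFCA) calculation; all numbers illustrative.
    import numpy as np

    pop = np.array([1000, 2500, 400])          # population at demand locations i
    supply = np.array([2.0, 1.0])              # provider FTEs at supply locations j
    dist = np.array([[5.0, 30.0],              # travel time (minutes) from i to j
                     [12.0, 8.0],
                     [45.0, 20.0]])
    d0 = 25.0                                  # catchment threshold (minutes)

    within = dist <= d0
    # Step 1: provider-to-population ratio within each provider's catchment
    pop_in_catchment = (pop[:, None] * within).sum(axis=0)
    ratio = supply / np.where(pop_in_catchment > 0, pop_in_catchment, np.inf)
    # Step 2: accessibility score = sum of reachable ratios for each population location
    access = (ratio * within).sum(axis=1)
    print(access)
    ```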

  15. Alternative Fuels Data Center: Vehicle Cost Calculator Widget Assumptions and Methodology

    Science.gov Websites


  16. Statistical uncertainty analysis applied to the DRAGONv4 code lattice calculations and based on JENDL-4 covariance data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.

    2012-07-01

    In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code in order to perform uncertainty analysis on k-infinity and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest amount of isotopic covariance matrices among the different major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonant self-shielding calculations such as DRAGONv4. (authors)
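    A minimal sketch of the statistical scheme described above, with a cheap stand-in for the lattice code: Latin hypercube samples of the uncertain inputs are pushed through repeated runs, and a non-parametric (Wilks-type) tolerance bound is read off the outputs. The input dimension, perturbation size and surrogate function are illustrative only.

    ```python
    # LHS propagation of input uncertainty with a non-parametric tolerance bound.
    import numpy as np
    from scipy.stats import qmc, norm

    n_samples, n_inputs = 500, 10
    u = qmc.LatinHypercube(d=n_inputs, seed=42).random(n_samples)   # stratified uniforms
    xs_factors = 1.0 + 0.02 * norm.ppf(u)       # normal cross-section perturbation factors (2% std. dev.)

    def lattice_code(perturbations):
        """Cheap stand-in for a DRAGON-like k-infinity calculation."""
        return 1.30 * (1.0 + 0.5 * (perturbations - 1.0).mean())

    kinf = np.array([lattice_code(row) for row in xs_factors])
    # Using the sample maximum as a conservative one-sided 95/95 bound; 59 runs already
    # suffice for a first-order Wilks statement, so 500 runs are more than enough.
    print("95/95 upper tolerance bound on k-infinity:", kinf.max())
    ```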

  17. Latin hypercube approach to estimate uncertainty in ground water vulnerability

    USGS Publications Warehouse

    Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.

    2007-01-01

    A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.

  18. Factorizing the motion sensitivity function into equivalent input noise and calculation efficiency.

    PubMed

    Allard, Rémy; Arleo, Angelo

    2017-01-01

    The photopic motion sensitivity function of the energy-based motion system is band-pass, peaking around 8 Hz. Using an external noise paradigm to factorize the sensitivity into equivalent input noise and calculation efficiency, the present study investigated whether the variation in photopic motion sensitivity as a function of temporal frequency is due to a variation of equivalent input noise (e.g., early temporal filtering) or calculation efficiency (ability to select and integrate motion). For various temporal frequencies, contrast thresholds for a direction discrimination task were measured in the presence and absence of noise. Up to 15 Hz, the sensitivity variation was mainly due to a variation of equivalent input noise and little variation in calculation efficiency was observed. The sensitivity fall-off at very high temporal frequencies (from 15 to 30 Hz) was due to a combination of a drop of calculation efficiency and a rise of equivalent input noise. A control experiment in which an artificial temporal integration was applied to the stimulus showed that an early temporal filter (generally assumed to affect equivalent input noise, not calculation efficiency) could impair both the calculation efficiency and equivalent input noise at very high temporal frequencies. We conclude that at the photopic luminance intensity tested, the variation of motion sensitivity as a function of temporal frequency was mainly due to early temporal filtering, not to the ability to select and integrate motion. More specifically, we conclude that photopic motion sensitivity at high temporal frequencies is limited by internal noise occurring after the transduction process (i.e., neural noise), not by quantal noise resulting from the probabilistic absorption of photons by the photoreceptors as previously suggested.
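    The factorization referred to above is usually obtained by fitting threshold-versus-noise data with a linear-amplifier-type model; one common form (notation varies between papers) is

    $$ E_t(N_{ext}) = \frac{d'^2}{\eta}\,\left(N_{eq} + N_{ext}\right), $$

    where $E_t$ is the threshold contrast energy, $N_{ext}$ the external noise spectral density, $N_{eq}$ the equivalent input noise, $\eta$ the calculation efficiency and $d'$ the criterion performance level. Thresholds measured with and without external noise then separate $N_{eq}$ (the horizontal position of the threshold curve) from $\eta$ (its vertical position).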

  19. Coupled Ablation, Heat Conduction, Pyrolysis, Shape Change and Spallation of the Galileo Probe

    NASA Technical Reports Server (NTRS)

    Milos, Frank S.; Chen, Y.-K.; Rasky, Daniel J. (Technical Monitor)

    1995-01-01

    The Galileo probe enters the atmosphere of Jupiter in December 1995. This paper presents numerical methodology and detailed results of our final pre-impact calculations for the heat shield response. The calculations are performed using a highly modified version of a viscous shock layer code with massive radiation coupled with a surface thermochemical ablation and spallation model and with the transient in-depth thermal response of the charring and ablating heat shield. The flowfield is quasi-steady along the trajectory, but the heat shield thermal response is dynamic. Each surface node of the VSL grid is coupled with a one-dimensional thermal response calculation. The thermal solver includes heat conduction, pyrolysis, and grid movement owing to surface recession. Initial conditions for the heat shield temperature and density were obtained from the high altitude rarefied-flow calculations of Haas and Milos. Galileo probe surface temperature, shape, mass flux, and element flux are all determined as functions of time along the trajectory with spallation varied parametrically. The calculations also estimate the in-depth density and temperature profiles for the heat shield. All this information is required to determine the time-dependent vehicle mass and drag coefficient which are necessary inputs for the atmospheric reconstruction experiment on board the probe.

  20. Characteristic features of determining the labor input and estimated cost of the development and manufacture of equipment

    NASA Technical Reports Server (NTRS)

    Kurmanaliyev, T. I.; Breslavets, A. V.

    1974-01-01

    The difficulties in obtaining exact calculation data for the labor input and estimated cost are noted. A method of calculating the labor cost of design work using provisional normative indexes for individual types of operations is proposed. Values of certain coefficients recommended for use in practical calculations of the labor input for the development of new scientific equipment for space research are presented.

  1. Ring rolling process simulation for geometry optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Ring rolling is a complex hot forming process where different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during the deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular speed of the main roll) on geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS ISight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters at the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on genetic algorithms has been applied. At the end, the error between each obtained dimension and its nominal value has been minimized. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.
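    The loop described above (fit a response surface to the FEM control points, then search it with an evolutionary optimizer) can be sketched as follows; the data are synthetic, the parameter names and ranges are illustrative, and differential evolution stands in for the genetic algorithm used in the paper.

    ```python
    # Response-surface fit on synthetic "FEM" samples, followed by evolutionary optimization.
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(1)
    X = rng.uniform([0.5, 10.0], [2.0, 40.0], size=(30, 2))   # mandrel feed rate, main-roll speed
    y = 0.8*(X[:, 0] - 1.2)**2 + 0.02*(X[:, 1] - 25.0)**2 + rng.normal(0, 0.05, 30)  # dimension error (%)

    surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

    def predicted_error(p):
        return float(surface.predict(np.asarray(p).reshape(1, -1))[0])

    best = differential_evolution(predicted_error, bounds=[(0.5, 2.0), (10.0, 40.0)], seed=1)
    print("process parameters minimizing predicted dimensional error:", best.x)
    ```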

  2. Feedback Loop of Data Infilling Using Model Result of Actual Evapotranspiration from Satellites and Hydrological Model

    NASA Astrophysics Data System (ADS)

    Murdi Hartanto, Isnaeni; Alexandridis, Thomas K.; van Andel, Schalk Jan; Solomatine, Dimitri

    2014-05-01

    Satellite data have long been used in hydrological modelling as a source of low-cost, regular data. The methods range from using satellite products as direct input to model validation and data assimilation. However, satellite data frequently face the missing-value problem, whether due to cloud cover or limited temporal coverage. The problem can seriously affect their usefulness in a hydrological model, especially if the model uses them as direct input, so data infilling becomes one of the important parts of the whole modelling exercise. In this research, the actual evapotranspiration product from satellite data is used directly as input into a spatially distributed hydrological model and validated by comparing the catchment's end discharge with measured data. The instantaneous actual evapotranspiration is estimated from MODIS satellite images using a variation of the energy balance model for land (SEBAL). The eight-day cumulative actual evapotranspiration is then obtained by a temporal integration that uses the reference evapotranspiration calculated from meteorological data [1]. However, the above method cannot fill in a cell if the cell constantly has no-data values during the eight-day period. The hydrological model requires a full set of data without no-data cells; hence, the no-data cells in the satellite's evapotranspiration map need to be filled in. In order to fill the no-data cells, an output of the hydrological model is used. The hydrological model is first run with reference evapotranspiration as input to calculate discharge and actual evapotranspiration. The no-data cells in the eight-day cumulative map from the satellite are then filled in with the output of this first run of the hydrological model. The filled dataset is then used as input in the hydrological model to calculate discharge, thus creating a loop. The method is applied in the case study of Rijnland, the Netherlands, where in winter cloud cover is persistent and leads to many no-data cells in the satellite products. The Rijnland area is a low-lying area with tight water system control. The satellite data are used as input in a SIMGRO model, a spatially distributed hydrological model that is able to handle the controlled water system and that is suitable for the low-lying areas in the Netherlands. The application in the Rijnland area gives overall good results for total discharge. By using the method, the hydrological model is improved in terms of the spatial hydrological state, whereas the original model is only calibrated to discharge at one location. [1] Alexandridis, T.K., Cherif, I., Chemin, Y., Silleos, G.N., Stavrinos, E. & Zalidis, G.C. (2009). Integrated Methodology for Estimating Water Use in Mediterranean Agricultural Areas. Remote Sensing. 1
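    The infilling step itself reduces to replacing the no-data cells of the eight-day satellite evapotranspiration map with the model's own evapotranspiration output before the map is fed back into the model. A minimal sketch with tiny synthetic arrays:

    ```python
    # Fill no-data (NaN) cells of the satellite ET map with first-run model ET.
    import numpy as np

    satellite_et = np.array([[2.1, np.nan, 1.8],
                             [np.nan, 2.4, np.nan],
                             [1.9, 2.0, 2.2]])       # mm per 8 days; NaN = cloud / no data
    model_et = np.full_like(satellite_et, 2.0)        # ET from the reference-ET-forced model run

    filled_et = np.where(np.isnan(satellite_et), model_et, satellite_et)
    # `filled_et` would then drive the second model run (e.g., SIMGRO), closing the loop.
    print(filled_et)
    ```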

  3. 10 CFR 766.102 - Calculation methodology.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...

  4. 10 CFR 766.102 - Calculation methodology.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...

  5. 10 CFR 766.102 - Calculation methodology.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...

  6. 10 CFR 766.102 - Calculation methodology.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...

  7. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.
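    The cross-correlation route mentioned above can be sketched for a first-order kernel: with a Poisson spike-train input, the kernel estimate is essentially the input-output cross-covariance scaled by the mean rate. The system below is a synthetic exponential filter and all parameters are illustrative; the Laguerre-expansion estimator discussed in the paper is not shown.

    ```python
    # First-order kernel estimate by cross-correlation with a Poisson point-process input.
    import numpy as np

    dt, T = 0.001, 200.0                        # 1 ms bins, 200 s record
    n = int(T / dt)
    rate = 20.0                                 # mean input rate (events/s)
    rng = np.random.default_rng(3)
    x = (rng.random(n) < rate * dt).astype(float)            # Poisson spike train

    lags = np.arange(200)                       # kernel support: 0-199 ms
    true_k = np.exp(-lags * dt / 0.030)         # "unknown" exponential kernel
    y = np.convolve(x, true_k)[:n] + rng.normal(0, 0.2, n)   # output = filtered spikes + noise

    xc, yc = x - x.mean(), y - y.mean()
    k_hat = np.array([np.mean(yc[m:] * xc[:n - m]) for m in lags]) / (rate * dt)
    print(true_k[:3], k_hat[:3])                # estimate should track the true kernel
    ```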

  8. Full uncertainty quantification of N2O and NO emissions using the biogeochemical model LandscapeDNDC on site and regional scale

    NASA Astrophysics Data System (ADS)

    Haas, Edwin; Santabarbara, Ignacio; Kiese, Ralf; Butterbach-Bahl, Klaus

    2017-04-01

    Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional / national scale and are outlined as the most advanced methodology (Tier 3) in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems and are thus thought to be widely applicable under various conditions and spatial scales. Process-based modelling requires high-spatial-resolution input data on soil properties, climate drivers and management information. The acceptance of model-based inventory calculations depends on the assessment of the inventory's uncertainty (model-, input data- and parameter-induced uncertainties). In this study we fully quantify the uncertainty in modelling soil N2O and NO emissions from arable, grassland and forest soils using the biogeochemical model LandscapeDNDC. We address model-induced uncertainty (MU) by contrasting two different soil biogeochemistry modules within LandscapeDNDC. The parameter-induced uncertainty (PU) was assessed by using joint parameter distributions for key parameters describing microbial C and N turnover processes, as obtained by different Bayesian calibration studies for each model configuration. Input data-induced uncertainty (DU) was addressed by Bayesian calibration of soil properties, climate drivers and agricultural management practice data. For the MU, DU and PU we performed several hundred simulations each to contribute to the individual uncertainty assessment. For the overall uncertainty quantification we assessed the model prediction probability across the sampled sets of input datasets and parameter distributions. Statistical analysis of the simulation results has been used to quantify the overall full uncertainty of the modelling approach. With this study we can contrast the variation in model results with the different sources of uncertainty for each ecosystem. Furthermore, we have been able to perform a full uncertainty analysis for modelling N2O and NO emissions from arable, grassland and forest soils, which is necessary for the comprehensibility of modelling results. We have applied the methodology to a regional inventory to assess the overall modelling uncertainty of a regional N2O and NO emission inventory for the state of Saxony, Germany.

  9. Development and testing of controller performance evaluation methodology for multi-input/multi-output digital control systems

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony; Wieseman, Carol; Hoadley, Sherwood Tiffany; Mukhopadhyay, Vivek

    1991-01-01

    Described here are the development and implementation of an on-line, near-real-time controller performance evaluation (CPE) capability. Briefly discussed are the structure of data flow, the signal processing methods used to process the data, and the software developed to generate the transfer functions. This methodology is generic in nature and can be used in any type of multi-input/multi-output (MIMO) digital controller application, including digital flight control systems, digitally controlled spacecraft structures, and actively controlled wind tunnel models. Results of applying the CPE methodology to evaluate (in near real time) MIMO digital flutter suppression systems being tested on the Rockwell Active Flexible Wing (AFW) wind tunnel model are presented to demonstrate the CPE capability.
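    The heart of such a CPE step is estimating frequency-response functions from excitation and response records. Below is a minimal single-channel sketch using Welch cross- and auto-spectra (the MIMO case repeats this per input-output pair); the plant, sample rate and signals are synthetic.

    ```python
    # H1 frequency-response estimate from excitation/response records via Welch spectra.
    import numpy as np
    from scipy import signal

    fs = 200.0                                    # sample rate (Hz), illustrative
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(7)
    u = rng.normal(size=t.size)                   # broadband excitation
    b, a = signal.iirpeak(w0=8.0, Q=10.0, fs=fs)  # synthetic "plant": resonance near 8 Hz
    y = signal.lfilter(b, a, u) + 0.05 * rng.normal(size=t.size)

    f, Puy = signal.csd(u, y, fs=fs, nperseg=1024)   # cross-spectral density
    _, Puu = signal.welch(u, fs=fs, nperseg=1024)    # input auto-spectral density
    H = Puy / Puu                                     # H1 transfer-function estimate
    print("peak response near (Hz):", f[np.argmax(np.abs(H))])
    ```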

  10. Ultrasound-assisted extraction of hemicellulose and phenolic compounds from bamboo bast fiber powder

    PubMed Central

    Su, Jing; Vielnascher, Robert; Silva, Carla; Cavaco-Paulo, Artur; Guebitz, Georg M.

    2018-01-01

    Ultrasound-assisted extraction of hemicellulose and phenolic compounds from bamboo bast fibre powder was investigated. The effect of ultrasonic probe depth and power input parameters on the type and amount of products extracted was assessed. The results for input energy and radical formation correlated with the calculated values for the anti-nodal point (λ/4; 16.85 mm, maximum amplitude) of the ultrasonic wave in aqueous medium. Ultrasonic treatment at the optimum probe depth of 15 mm improved the extraction efficiencies of hemicellulose and phenolic lignin compounds from bamboo bast fibre powder 2.6-fold. LC-MS-TOF (liquid chromatography-mass spectrometry-time of flight) analysis indicated that ultrasound led to the extraction of coniferyl alcohol, sinapyl alcohol, vanillic acid and cellobiose, in contrast to boiling-water extraction only. Under optimized conditions, ultrasound caused the formation of radicals, confirmed by the presence of (+)-pinoresinol, which results from the radical coupling of coniferyl alcohol. Ultrasound proved to be an efficient methodology for the extraction of hemicellulosic and phenolic compounds from woody bamboo without the addition of harmful solvents. PMID:29856764
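    The quoted quarter-wavelength value is consistent with a simple standing-wave estimate. Assuming a sound speed in water of roughly $c \approx 1482\ \mathrm{m\,s^{-1}}$ and a probe frequency of about $f \approx 22\ \mathrm{kHz}$ (the frequency is not stated here, so this is only a plausibility check):

    $$ \frac{\lambda}{4} = \frac{c}{4f} \approx \frac{1482\ \mathrm{m\,s^{-1}}}{4 \times 22\,000\ \mathrm{Hz}} \approx 16.8\ \mathrm{mm}. $$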

  11. Micrometeoroid and Orbital Debris Risk Assessment With Bumper 3

    NASA Technical Reports Server (NTRS)

    Hyde, J.; Bjorkman, M.; Christiansen, E.; Lear, D.

    2017-01-01

    The Bumper 3 computer code is the primary tool used by NASA for micrometeoroid and orbital debris (MMOD) risk analysis. Bumper 3 (and its predecessors) have been used to analyze a variety of manned and unmanned spacecraft. The code uses NASA's latest micrometeoroid (MEM-R2) and orbital debris (ORDEM 3.0) environment definition models and is updated frequently with ballistic limit equations that describe the hypervelocity impact performance of spacecraft materials. The Bumper 3 program uses these inputs along with a finite element representation of spacecraft geometry to provide a deterministic calculation of the expected number of failures. The Bumper 3 software is configuration controlled by the NASA/JSC Hypervelocity Impact Technology (HVIT) Group. This paper will demonstrate MMOD risk assessment techniques with Bumper 3 used by NASA's HVIT Group. The Permanent Multipurpose Module (PMM) was added to the International Space Station in 2011. A Bumper 3 MMOD risk assessment of this module will show techniques used to create the input model and assign the property IDs. The methodology used to optimize the MMOD shielding for minimum mass while still meeting structural penetration requirements will also be demonstrated.

  12. Comparison of two optimized readout chains for low light CIS

    NASA Astrophysics Data System (ADS)

    Boukhayma, A.; Peizerat, A.; Dupret, A.; Enz, C.

    2014-03-01

    We compare the noise performance of two optimized readout chains that are based on 4T pixels and feature the same bandwidth of 265 kHz (enough to read 1 megapixel at 50 frames/s). Both chains contain a 4T pixel, a column amplifier and a single-slope analog-to-digital converter performing CDS. In one case the pixel operates in source-follower configuration, and in the other in common-source configuration. Based on analytical noise calculations for both readout chains, an optimization methodology is presented. Analytical results are confirmed by transient simulations using a 130 nm process. A total input-referred noise below 0.4 electrons RMS is reached for a simulated conversion gain of 160 μV/e-. Both optimized readout chains show the same input-referred 1/f noise. The common-source-based readout chain shows better performance for thermal noise and requires smaller silicon area. We discuss the possible drawbacks of the common-source configuration and provide the reader with a comparative table between the two readout chains. The table contains several variants (column amplifier gain, in-pixel transistor sizes and type).

  13. High resolution production water footprints of the United States

    NASA Astrophysics Data System (ADS)

    Marston, L.; Yufei, A.; Konar, M.; Mekonnen, M.; Hoekstra, A. Y.

    2017-12-01

    The United States is the largest producer and consumer of goods and services in the world. Rainfall, surface water supplies, and groundwater aquifers represent a fundamental input to this economic production. Despite the importance of water resources to economic activity, we do not have consistent information on water use for specific locations and economic sectors. A national, high-resolution database of water use by sector would provide insight into US utilization and dependence on water resources for economic production. To this end, we calculate the water footprint of over 500 food, energy, mining, services, and manufacturing industries and goods produced in the US. To do this, we employ a data intensive approach that integrates water footprint and input-output techniques into a novel methodological framework. This approach enables us to present the most detailed and comprehensive water footprint analysis of any country to date. This study broadly contributes to our understanding of water in the US economy, enables supply chain managers to assess direct and indirect water dependencies, and provides opportunities to reduce water use through benchmarking.
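    The input-output side of such a framework typically rests on the standard environmentally extended Leontief calculation (stated here generically, not as this study's exact formulation):

    $$ \mathbf{w} = \hat{\mathbf{d}}\,(\mathbf{I} - \mathbf{A})^{-1}\,\mathbf{y}, $$

    where $\mathbf{A}$ is the matrix of inter-industry technical coefficients, $\mathbf{y}$ the final-demand vector, $\hat{\mathbf{d}}$ a diagonal matrix of direct water use per unit of sectoral output, and $\mathbf{w}$ the resulting direct-plus-indirect (supply-chain) water use attributed to final demand.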

  14. Comparison of Building Energy Modeling Programs: Building Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Dandan; Hong, Tianzhen; Yan, Da

    This technical report presented the methodologies, processes, and results of comparing three Building Energy Modeling Programs (BEMPs) for load calculations: EnergyPlus, DeST and DOE-2.1E. This joint effort, between Lawrence Berkeley National Laboratory, USA and Tsinghua University, China, was part of research projects under the US-China Clean Energy Research Center on Building Energy Efficiency (CERC-BEE). Energy Foundation, an industrial partner of CERC-BEE, was the co-sponsor of this study work. It is widely known that large discrepancies in simulation results can exist between different BEMPs. The result is a lack of confidence in building simulation amongst many users and stakeholders. In the fields of building energy code development and energy labeling programs where building simulation plays a key role, there are also confusing and misleading claims that some BEMPs are better than others. In order to address these problems, it is essential to identify and understand differences between widely-used BEMPs, and the impact of these differences on load simulation results, by detailed comparisons of these BEMPs from source code to results. The primary goal of this work was to research methods and processes that would allow a thorough scientific comparison of the BEMPs. The secondary goal was to provide a list of strengths and weaknesses for each BEMP, based on in-depth understandings of their modeling capabilities, mathematical algorithms, advantages and limitations. This is to guide the use of BEMPs in the design and retrofit of buildings, especially to support China’s building energy standard development and energy labeling program. The research findings could also serve as a good reference to improve the modeling capabilities and applications of the three BEMPs. The methodologies, processes, and analyses employed in the comparison work could also be used to compare other programs. The load calculation method of each program was analyzed and compared to identify the differences in solution algorithms, modeling assumptions and simplifications. Identifying inputs of each program and their default values or algorithms for load simulation was a critical step. These tend to be overlooked by users, but can lead to large discrepancies in simulation results. As weather data was an important input, weather file formats and weather variables used by each program were summarized. Some common mistakes in the weather data conversion process were discussed. ASHRAE Standard 140-2007 tests were carried out to test the fundamental modeling capabilities of the load calculations of the three BEMPs, where inputs for each test case were strictly defined and specified. The tests indicated that the cooling and heating load results of the three BEMPs fell mostly within the range of spread of results from other programs. Based on ASHRAE 140-2007 test results, the finer differences between DeST and EnergyPlus were further analyzed by designing and conducting additional tests. Potential key influencing factors (such as internal gains, air infiltration, convection coefficients of windows and opaque surfaces) were added one at a time to a simple base case with an analytical solution, to compare their relative impacts on load calculation results. Finally, special tests were designed and conducted aiming to ascertain the potential limitations of each program to perform accurate load calculations. The heat balance module was tested for both single and double zone cases. Furthermore, cooling and heating load calculations were compared between the three programs by varying the heat transfer between adjacent zones, the occupancy of the building, and the air-conditioning schedule.

  15. Numerical Investigation of the Influence of the Input Air Irregularity on the Performance of Turbofan Jet Engine

    NASA Astrophysics Data System (ADS)

    Novikova, Y.; Zubanov, V.

    2018-01-01

    The article describes a numerical investigation of the influence of input air irregularity on the characteristics of a turbofan engine. The investigated fan has wide blades, an inlet diameter of about 2 meters, a pressure ratio of about 1.6 and a bypass ratio of about 4.8. The flow irregularity was simulated by a flap placed in the fan inlet channel. The flap was inserted by an amount of 10 to 22.5% of the inlet channel diameter in increments of 2.5%. A nonlinear harmonic analysis (NLH analysis) in the NUMECA Fine/Turbo software was used to study the flow irregularity. The behavior of the calculated LPC characteristics follows the experimental behavior, but there is a quantitative difference: the calculated efficiency and pressure ratio of the booster are consistent with the experimental data within 3% and 2% respectively, and the calculated efficiency and pressure ratio of the fan duct within 4% and 2.5% respectively. Increasing the level of air irregularity at the fan inlet reduces the calculated mass flow, maximum pressure ratio and efficiency. With a flap insertion of 12.5%, the maximum air flow is reduced by 1.44%, the maximum pressure ratio by 2.6%, and the efficiency by 3.1%.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riensche, Roderick M.; Paulson, Patrick R.; Danielson, Gary R.

    We describe a methodology and architecture to support the development of games in a predictive analytics context. These games serve as part of an overall family of systems designed to gather input knowledge, calculate results of complex predictive technical and social models, and explore those results in an engaging fashion. The games provide an environment shaped and driven in part by the outputs of the models, allowing users to exert influence over a limited set of parameters, and displaying the results when those actions cause changes in the underlying model. We have crafted a prototype system in which we are implementing test versions of games driven by models in such a fashion, using a flexible architecture to allow for future continuation and expansion of this work.

  17. Foundational Performance Analyses of Pressure Gain Combustion Thermodynamic Benefits for Gas Turbines

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Kaemming, Thomas A.

    2012-01-01

    A methodology is described whereby the work extracted by a turbine exposed to the fundamentally nonuniform flowfield from a representative pressure gain combustor (PGC) may be assessed. The method uses an idealized constant volume cycle, often referred to as an Atkinson or Humphrey cycle, to model the PGC. Output from this model is used as input to a scalable turbine efficiency function (i.e., a map), which in turn allows for the calculation of useful work throughout the cycle. Integration over the entire cycle yields mass-averaged work extraction. The unsteady turbine work extraction is compared to steady work extraction calculations based on various averaging techniques for characterizing the combustor exit pressure and temperature. It is found that averages associated with momentum flux (as opposed to entropy or kinetic energy) provide the best match. This result suggests that momentum-based averaging is the most appropriate figure-of-merit to use as a PGC performance metric. Using the mass-averaged work extraction methodology, it is also found that the design turbine pressure ratio for maximum work extraction is significantly higher than that for a turbine fed by a constant pressure combustor with similar inlet conditions and equivalence ratio. Limited results are presented whereby the constant volume cycle is replaced by output from a detonation-based PGC simulation. The results in terms of averaging techniques and design pressure ratio are similar.
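    A momentum-flux (stream-thrust) average of a nonuniform combustor-exit profile is commonly defined by requiring the averaged uniform state to reproduce the integrated fluxes of the actual profile (written generically here; the paper's exact definitions may differ):

    $$ \dot m = \int_A \rho u\,dA, \qquad F = \int_A \left(p + \rho u^2\right) dA, $$

    with the averaged pressure, velocity and temperature chosen so that the uniform state carries the same $\dot m$, the same stream thrust $F$, and the same integrated energy flux through the area $A$.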

  18. A geometry package for generation of input data for a three-dimensional potential-flow program

    NASA Technical Reports Server (NTRS)

    Halsey, N. D.; Hess, J. L.

    1978-01-01

    The preparation of geometric data for input to three-dimensional potential flow programs was automated and simplified by a geometry package incorporated into the NASA Langley version of the 3-D lifting potential flow program. Input to the computer program for the geometry package consists of a very sparse set of coordinate data, often with an order of magnitude of fewer points than required for the actual potential flow calculations. Isolated components, such as wings, fuselages, etc. are paneled automatically, using one of several possible element distribution algorithms. Curves of intersection between components are calculated, using a hybrid curve-fit/surface-fit approach. Intersecting components are repaneled so that adjacent elements on either side of the intersection curves line up in a satisfactory manner for the potential-flow calculations. Many cases may be run completely (from input, through the geometry package, and through the flow calculations) without interruption. Use of the package significantly reduces the time and expense involved in making three-dimensional potential flow calculations.

  19. 76 FR 34270 - Federal-State Extended Benefits Program-Methodology for Calculating “on” or “off” Total...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-13

    ...--Methodology for Calculating ``on'' or ``off'' Total Unemployment Rate Indicators for Purposes of Determining...'' or ``off'' total unemployment rate (TUR) indicators to determine when extended benefit (EB) periods...-State Extended Benefits Program--Methodology for Calculating ``on'' or ``off'' Total Unemployment Rate...

  20. Advection and dispersion heat transport mechanisms in the quantification of shallow geothermal resources and associated environmental impacts.

    PubMed

    Alcaraz, Mar; García-Gil, Alejandro; Vázquez-Suñé, Enric; Velasco, Violeta

    2016-02-01

    Borehole Heat Exchangers (BHEs) are increasingly being used to exploit shallow geothermal energy. This paper presents a new methodology to respond to the need for a regional quantification of the geothermal potential that can be extracted by BHEs and of the associated environmental impacts. A set of analytical solutions facilitates accurate calculation of the heat exchange of BHEs with the ground and its environmental impacts. For the first time, advection and dispersion heat transport mechanisms and the temporal evolution from the start of operation of the BHE are taken into account in the regional estimation of shallow geothermal resources. This methodology is integrated in a GIS environment, which facilitates the management of input and output data at a regional scale. An example of the methodology's application is presented for Barcelona, Spain. As a result of the application, it is possible to show the strengths and improvements of this methodology in the development of potential maps of low-temperature geothermal energy as well as maps of environmental impacts. The minimum and maximum energy potential values for the study site are 50 and 1800 W/m² for a drilled depth of 100 m, in proportion to the Darcy velocity. Regarding thermal impacts, the higher the groundwater velocity and the energy potential, the larger the thermal plume after 6 months of exploitation, whose length ranges from 10 to 27 m. A sensitivity analysis was carried out on the calculation of the heat exchange rate and its impacts for different scenarios and for a wide range of Darcy velocities. The results of this analysis lead to the conclusion that accounting for dispersion effects and the temporal evolution of the exploitation prevents significant differences of up to a factor of 2.5 in the heat exchange rate and of up to several orders of magnitude in the impacts generated. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Risk analysis of technological hazards: Simulation of scenarios and application of a local vulnerability index.

    PubMed

    Sanchez, E Y; Represa, S; Mellado, D; Balbi, K B; Acquesta, A D; Colman Lerner, J E; Porta, A A

    2018-06-15

    The potential impact of a technological accident can be assessed by risk estimation. Taking this into account, the latent or potential hazard can be anticipated and mitigated. In this work we propose a methodology to estimate the risk of technological hazards, focused on two components. The first is the processing of meteorological databases to define the most probable and conservative scenario of study, and the second is the application of a local social vulnerability index to classify the population. In this case study, the risk was estimated for a hypothetical release of liquefied ammonia in a meat-packing industry in the city of La Plata, Argentina. The method consists of integrating the toxic threat zone simulated with the ALOHA software with the layer of sociodemographic classification of the affected population. The results show the areas associated with higher risks of exposure to ammonia, which are worth addressing for the prevention of disasters in the region. Advantageously, this systemic approach is methodologically flexible as it provides the possibility of being applied in various scenarios based on the available information on both the exposed population and its meteorology. Furthermore, this methodology optimizes the processing of the input data and its calculation. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. How Methodologic Differences Affect Results of Economic Analyses: A Systematic Review of Interferon Gamma Release Assays for the Diagnosis of LTBI

    PubMed Central

    Oxlade, Olivia; Pinto, Marcia; Trajman, Anete; Menzies, Dick

    2013-01-01

    Introduction Cost-effectiveness analyses (CEA) can provide useful information on how to invest limited funds; however, they are less useful if different analyses of the same intervention provide unclear or contradictory results. The objective of our study was to conduct a systematic review of methodologic aspects of CEA that evaluate Interferon Gamma Release Assays (IGRA) for the detection of Latent Tuberculosis Infection (LTBI), in order to understand how differences affect study results. Methods A systematic review of studies was conducted with particular focus on study quality and the variability of inputs used in the models employed to assess cost-effectiveness. A common decision analysis model of the IGRA versus Tuberculin Skin Test (TST) screening strategy was developed and used to quantify the impact on predicted results of observed differences in model inputs taken from the studies identified. Results Thirteen studies were ultimately included in the review. Several specific methodologic issues were identified across studies, including how study inputs were selected, inconsistencies in the costing approach, the utility of the QALY (Quality Adjusted Life Year) as the effectiveness outcome, and how authors chose to present and interpret study results. When the IGRA and TST strategies were compared using our common decision analysis model, the predicted effectiveness largely overlapped. Implications Many methodologic issues that contribute to inconsistent results and reduced study quality were identified in studies that assessed the cost-effectiveness of the IGRA test. More specific and relevant guidelines are needed in order to help authors standardize modelling approaches, inputs, assumptions and how results are presented and interpreted. PMID:23505412

  3. Performance evaluation of contrast-detail in full field digital mammography systems using ideal (Hotelling) observer vs. conventional automated analysis of CDMAM images for quality control of contrast-detail characteristics.

    PubMed

    Delakis, Ioannis; Wise, Robert; Morris, Lauren; Kulama, Eugenia

    2015-11-01

    The purpose of this work was to evaluate the contrast-detail performance of full-field digital mammography (FFDM) systems using ideal (Hotelling) observer signal-to-noise ratio (SNR) methodology and to ascertain whether it can be considered an alternative to the conventional, automated analysis of CDMAM phantom images. Five FFDM units currently used in the national breast screening programme were evaluated, which differed with respect to age, detector, Automatic Exposure Control (AEC) and target/filter combination. Contrast-detail performance was analysed using CDMAM and ideal observer SNR methodology. The ideal observer SNR was calculated for input signal originating from gold discs of varying thicknesses and diameters, and then used to estimate the threshold gold thickness for each diameter as per CDMAM analysis. The variability of both methods and the dependence of CDMAM analysis on phantom manufacturing discrepancies were also investigated. Results from both CDMAM and ideal observer methodologies were informative differentiators of FFDM systems' contrast-detail performance, displaying comparable patterns with respect to the FFDM systems' type and age. CDMAM results suggested higher threshold gold thickness values compared with the ideal observer methodology, especially for small-diameter details, which can be attributed to the behaviour of the CDMAM phantom used in this study. In addition, ideal observer methodology results showed lower variability than CDMAM results. The ideal observer SNR methodology can provide a useful metric of FFDM systems' contrast-detail characteristics and could be considered a surrogate for conventional, automated analysis of CDMAM images. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
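    For context, the ideal linear (Hotelling) observer SNR used in such analyses is conventionally written as

    $$ \mathrm{SNR}^2_{\mathrm{Hot}} = \Delta\bar{\mathbf{g}}^{\mathsf T}\,\mathbf{K}^{-1}\,\Delta\bar{\mathbf{g}}, $$

    where $\Delta\bar{\mathbf{g}}$ is the difference between the mean image data with and without the gold-disc signal and $\mathbf{K}$ is the image covariance matrix; the threshold gold thickness for a given disc diameter is then the thickness at which this SNR reaches a chosen detectability criterion.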

  4. Proposed Risk-Informed Seismic Hazard Periodic Reevaluation Methodology for Complying with DOE Order 420.1C

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kammerer, Annie

    Department of Energy (DOE) nuclear facilities must comply with DOE Order 420.1C Facility Safety, which requires that all such facilities review their natural phenomena hazards (NPH) assessments no less frequently than every ten years. The Order points the reader to Standard DOE-STD-1020-2012. In addition to providing a discussion of the applicable evaluation criteria, the Standard references other documents, including ANSI/ANS-2.29-2008 and NUREG-2117. These documents provide supporting criteria and approaches for evaluating the need to update an existing probabilistic seismic hazard analysis (PSHA). All of the documents are consistent at a high level regarding the general conceptual criteria that should be considered. However, none of the documents provides step-by-step detailed guidance on the required or recommended approach for evaluating the significance of new information and determining whether or not an existing PSHA should be updated. Further, all of the conceptual approaches and criteria given in these documents deal with changes that may have occurred in the knowledge base that might impact the inputs to the PSHA, the calculated hazard itself, or the technical basis for the hazard inputs. Given that the DOE Order is aimed at achieving and assuring the safety of nuclear facilities—which is a function not only of the level of the seismic hazard but also the capacity of the facility to withstand vibratory ground motions—the inclusion of risk information in the evaluation process would appear to be both prudent and in line with the objectives of the Order. The purpose of this white paper is to describe a risk-informed methodology for evaluating the need for an update of an existing PSHA consistent with the DOE Order. While the development of the proposed methodology was undertaken as a result of assessments for specific SDC-3 facilities at Idaho National Laboratory (INL), and it is expected that the application at INL will provide a demonstration of the methodology, there is potential for general applicability to other facilities across the DOE complex. As such, both a general methodology and a specific approach intended for INL are described in this document. The general methodology proposed in this white paper is referred to as the “seismic hazard periodic review methodology,” or SHPRM. It presents a graded approach for SDC-3, SDC-4 and SDC-5 facilities that can be applied in any risk-informed regulatory environment once risk objectives appropriate for the framework are developed. While the methodology was developed for seismic hazard considerations, it can also be directly applied to other types of natural hazards.

  6. A Practical Risk Assessment Methodology for Safety-Critical Train Control Systems

    DOT National Transportation Integrated Search

    2009-07-01

    This project proposes a Practical Risk Assessment Methodology (PRAM) for analyzing railroad accident data and assessing the risk and benefit of safety-critical train control systems. This report documents in simple steps the algorithms and data input...

  7. Methodological Framework for Analysis of Buildings-Related Programs with BEAMS, 2008

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, Douglas B.; Dirks, James A.; Hostick, Donna J.

    The U.S. Department of Energy’s (DOE’s) Office of Energy Efficiency and Renewable Energy (EERE) develops official “benefits estimates” for each of its major programs using its Planning, Analysis, and Evaluation (PAE) Team. PAE conducts an annual integrated modeling and analysis effort to produce estimates of the energy, environmental, and financial benefits expected from EERE’s budget request. These estimates are part of EERE’s budget request and are also used in the formulation of EERE’s performance measures. Two of EERE’s major programs are the Building Technologies Program (BT) and the Weatherization and Intergovernmental Program (WIP). Pacific Northwest National Laboratory (PNNL) supports PAE by developing the program characterizations and other market information necessary to provide input to the EERE integrated modeling analysis as part of PAE’s Portfolio Decision Support (PDS) effort. PNNL also supports BT by providing line-item estimates for the Program’s internal use. PNNL uses three modeling approaches to perform these analyses. This report documents the approach and methodology used to estimate future energy, environmental, and financial benefits using one of those methods: the Building Energy Analysis and Modeling System (BEAMS). BEAMS is a PC-based accounting model that was built in Visual Basic by PNNL specifically for estimating the benefits of buildings-related projects. It allows various types of projects to be characterized including whole-building, envelope, lighting, and equipment projects. This document contains an overview section that describes the estimation process and the models used to estimate energy savings. The body of the document describes the algorithms used within the BEAMS software. This document serves both as stand-alone documentation for BEAMS, and also as a supplemental update of a previous document, Methodological Framework for Analysis of Buildings-Related Programs: The GPRA Metrics Effort, (Elliott et al. 2004b). The areas most changed since the publication of that previous document are those discussing the calculation of lighting and HVAC interactive effects (for both lighting and envelope/whole-building projects). This report does not attempt to convey inputs to BEAMS or the methodology of their derivation.

  8. Assessing the recent estimates of the global burden of disease for ambient air pollution: Methodological changes and implications for low- and middle-income countries.

    PubMed

    Ostro, Bart; Spadaro, Joseph V; Gumy, Sophie; Mudu, Pierpaolo; Awe, Yewande; Forastiere, Francesco; Peters, Annette

    2018-06-04

    The Global Burden of Disease (GBD) is a comparative assessment of the health impact of the major and well-established risk factors, including ambient air pollution (AAP) assessed by concentrations of PM2.5 (particles less than 2.5 µm) and ozone. Over the last two decades, major improvements have emerged for two important inputs in the methodology for estimating the impacts of PM2.5: the assessment of global exposure to PM2.5 and the development of integrated exposure risk models (IERs) that relate the entire range of global exposures of PM2.5 to cause-specific mortality. As a result, the estimated annual mortality attributed to AAP increased from less than 1 million in 2000 to roughly 3 million for GBD in years 2010 and 2013, to 4.2 million for GBD 2015. However, the magnitude of the recent change and uncertainty regarding its rationale have resulted, in some cases, in skepticism and reduced confidence in the overall estimates. To understand the underlying reasons for the change in mortality, we examined the estimates for the years 2013 and 2015 to determine the quantitative implications of alternative model input assumptions. We calculated that the year 2013 estimates increased by 8% after applying the updated exposure data used in GBD 2015, and increased by 23% with the application of the updated IERs from GBD 2015. The application of both upgraded methodologies together increased the GBD 2013 estimates by 35%, or about one million deaths. We also quantified the impact of the changes in demographics and the assumed threshold level. Since the global estimates of air pollution-related deaths will continue to change over time, a clear documentation of the modifications in the methodology and their impacts is necessary. In addition, there is a need for additional monitoring and epidemiological studies to reduce uncertainties in the estimates for low- and middle-income countries, which contribute to about one-half of the mortality. Copyright © 2018. Published by Elsevier Inc.
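
    The arithmetic at the core of such attributable-burden estimates can be illustrated with a short sketch: an exposure-response relative risk (RR) is converted to a population attributable fraction, PAF = (RR − 1)/RR, and multiplied by baseline cause-specific deaths. The RR values and death counts below are placeholders, not GBD or IER inputs; the sketch only shows how a change in the exposure-response input propagates to the mortality estimate.

    ```python
    # Minimal sketch of a GBD-style attributable-mortality calculation.
    # RR values and baseline deaths are hypothetical placeholders.

    def attributable_deaths(rr, baseline_deaths):
        paf = (rr - 1.0) / rr                 # population attributable fraction
        return paf * baseline_deaths

    # Hypothetical country: one cause of death and an exposure-response RR
    # evaluated at the population-weighted PM2.5 level, before/after a model update.
    rr_old, rr_new = 1.25, 1.40
    baseline = 120_000

    old = attributable_deaths(rr_old, baseline)
    new = attributable_deaths(rr_new, baseline)
    print(f"Attributable deaths: {old:.0f} -> {new:.0f} ({100 * (new - old) / old:+.1f}%)")
    ```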

  9. Modeling brine-rock interactions in an enhanced geothermal system deep fractured reservoir at Soultz-Sous-Forets (France): a joint approach using two geochemical codes: FRACHEM and TOUGHREACT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andre, Laurent; Spycher, Nicolas; Xu, Tianfu

    The modeling of coupled thermal, hydrological, and chemical (THC) processes in geothermal systems is complicated by reservoir conditions such as high temperatures, elevated pressures and sometimes the high salinity of the formation fluid. Coupled THC models have been developed and applied to the study of enhanced geothermal systems (EGS) to forecast the long-term evolution of reservoir properties and to determine how fluid circulation within a fractured reservoir can modify its rock properties. In this study, two simulators, FRACHEM and TOUGHREACT, specifically developed to investigate EGS, were applied to model the same geothermal reservoir and to forecast reservoir evolution using their respective thermodynamic and kinetic input data. First, we report the specifics of each of these two codes regarding the calculation of activity coefficients, equilibrium constants and mineral reaction rates. Comparisons of simulation results are then made for a Soultz-type geothermal fluid (ionic strength ≈1.8 molal), with a recent (unreleased) version of TOUGHREACT using either an extended Debye-Hueckel or Pitzer model for calculating activity coefficients, and FRACHEM using the Pitzer model as well. Despite somewhat different calculation approaches and methodologies, we observe a reasonably good agreement for most of the investigated factors. Differences in the calculation schemes typically produce less difference in model outputs than differences in input thermodynamic and kinetic data, with model results being particularly sensitive to differences in ion-interaction parameters for activity coefficient models. Differences in input thermodynamic equilibrium constants, activity coefficients, and kinetics data yield differences in calculated pH and in predicted mineral precipitation behavior and reservoir-porosity evolution. When numerically cooling a Soultz-type geothermal fluid from 200 °C (initially equilibrated with calcite at pH 4.9) to 20 °C and suppressing mineral precipitation, pH values calculated with FRACHEM and TOUGHREACT/Debye-Hueckel decrease by up to half a pH unit, whereas pH values calculated with TOUGHREACT/Pitzer increase by a similar amount. As a result of these differences, calcite solubilities computed using the Pitzer formalism (the more accurate approach) are up to about 1.5 orders of magnitude lower. Because of differences in Pitzer ion-interaction parameters, the calcite solubility computed with TOUGHREACT/Pitzer is also typically about 0.5 orders of magnitude lower than that computed with FRACHEM, with the latter expected to be most accurate. In a second part of this investigation, both models were applied to model the evolution of a Soultz-type geothermal reservoir under high pressure and temperature conditions. By specifying initial conditions reflecting a reservoir fluid saturated with respect to calcite (a reasonable assumption based on field data), we found that THC reservoir simulations with the three models yield similar results, including similar trends and amounts of reservoir porosity decrease over time, thus pointing to the importance of model conceptualization. This study also highlights the critical effect of input thermodynamic data on the results of reactive transport simulations, most particularly for systems involving brines.
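
    As a small illustration of one of the activity-coefficient models compared above, the sketch below evaluates an extended Debye-Hueckel (B-dot style) expression, log10 γ = −A z² √I / (1 + B å √I) + ḃ I, for a simple NaCl-dominated brine. The constants A and B are standard 25 °C values, and the ion-size and ḃ parameters are representative textbook numbers, not entries from the FRACHEM or TOUGHREACT databases; at an ionic strength near 1.8 molal such a model is, as the abstract notes, at the edge of its validity.

    ```python
    import math

    # Minimal sketch of an extended Debye-Hueckel (B-dot) activity coefficient at 25 C.
    # Ion-size (Angstrom) and b-dot parameters are representative, not database values.
    A, B = 0.5092, 0.3283          # 25 C constants, I in mol/kg

    def ionic_strength(molalities, charges):
        return 0.5 * sum(m * z * z for m, z in zip(molalities, charges))

    def log10_gamma(z, a_ion, bdot, I):
        sqrtI = math.sqrt(I)
        return -A * z * z * sqrtI / (1.0 + B * a_ion * sqrtI) + bdot * I

    # Hypothetical NaCl-dominated brine of roughly Soultz-like ionic strength.
    I = ionic_strength([1.8, 1.8], [1, -1])
    for name, z, a_ion, bdot in [("Na+", 1, 4.0, 0.06), ("Cl-", -1, 3.5, 0.02)]:
        gamma = 10 ** log10_gamma(z, a_ion, bdot, I)
        print(f"{name}: gamma = {gamma:.3f} at I = {I:.2f} mol/kg")
    ```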

  10. Starting-Up the Irbene 16-m Fully Steerable Parabolic Antenna for Radioastronomic Observations

    NASA Astrophysics Data System (ADS)

    Bezrukov, V.; Berzinsh, A.; Gaigals, G.; Lesinsh, A.; Trokshs, J.

    2011-01-01

    The methodology proposed in the paper is based on the concept of Energy Efficiency Uninterrupted Development Cycle (EEUDC). The goal of the authors was to clarify how the district heating system (DHS) development is affected by the heat consumption. The primary emphasis was given to the hot water consumption, with its noticeable daily fluctuations as well as changes caused by those in the inhabitants' way of life. The methodology, which is in good agreement with the ideology of advanced management of DHS development, employs the ISO 14000 series of standards (widely applied in the sphere of environment management). In the work, experimental results are presented that have been obtained through monitoring the hot water consumption. The results evidence that this consumption and its usage indices correspond to the level achieved by Western (in particular, North-European) countries. This circumstance changes considerably the input data for calculation of DHS elements, making it possible to work out appropriate measures in order to improve the DHS efficiency through step-by-step replacement of the elements with high energy loss.

  11. A Methodology for Modeling Nuclear Power Plant Passive Component Aging in Probabilistic Risk Assessment under the Impact of Operating Conditions, Surveillance and Maintenance Activities

    NASA Astrophysics Data System (ADS)

    Guler Yigitoglu, Askin

    In the context of long operation of nuclear power plants (NPPs) (i.e., 60-80 years, and beyond), investigation of the aging of passive systems, structures and components (SSCs) is important to assess safety margins and to decide on reactor life extension as indicated within the U.S. Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. In the traditional probabilistic risk assessment (PRA) methodology, evaluating the potential significance of aging of passive SSCs on plant risk is challenging. Although passive SSC failure rates can be added as initiating event frequencies or basic event failure rates in the traditional event-tree/fault-tree methodology, these failure rates are generally based on generic plant failure data which means that the true state of a specific plant is not reflected in a realistic manner on aging effects. Dynamic PRA methodologies have gained attention recently due to their capability to account for the plant state and thus address the difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models (and also in the modeling of digital instrumentation and control systems). Physics-based models can capture the impact of complex aging processes (e.g., fatigue, stress corrosion cracking, flow-accelerated corrosion, etc.) on SSCs and can be utilized to estimate passive SSC failure rates using realistic NPP data from reactor simulation, as well as considering effects of surveillance and maintenance activities. The objectives of this dissertation are twofold: The development of a methodology for the incorporation of aging modeling of passive SSC into a reactor simulation environment to provide a framework for evaluation of their risk contribution in both the dynamic and traditional PRA; and the demonstration of the methodology through its application to pressurizer surge line pipe weld and steam generator tubes in commercial nuclear power plants. In the proposed methodology, a multi-state physics based model is selected to represent the aging process. The model is modified via sojourn time approach to reflect the operational and maintenance history dependence of the transition rates. Thermal-hydraulic parameters of the model are calculated via the reactor simulation environment and uncertainties associated with both parameters and the models are assessed via a two-loop Monte Carlo approach (Latin hypercube sampling) to propagate input probability distributions through the physical model. The effort documented in this thesis towards this overall objective consists of: i) defining a process for selecting critical passive components and related aging mechanisms, ii) aging model selection, iii) calculating the probability that aging would cause the component to fail, iv) uncertainty/sensitivity analyses, v) procedure development for modifying an existing PRA to accommodate consideration of passive component failures, and, vi) including the calculated failure probability in the modified PRA. The proposed methodology is applied to pressurizer surge line pipe weld aging and steam generator tube degradation in pressurized water reactors.
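
    The two-loop (nested) Monte Carlo idea with Latin hypercube sampling can be sketched generically as below: an outer loop samples epistemic parameter uncertainty and an inner loop samples aleatory variability through a simple degradation model. The crack-growth relation, distributions, and numbers are illustrative placeholders, not the dissertation's surge-line or steam-generator models or any plant data.

    ```python
    import numpy as np

    # Minimal sketch of two-loop uncertainty propagation with Latin hypercube sampling.
    # The degradation model and all distributions are illustrative placeholders.
    rng = np.random.default_rng(1)

    def lhs_uniform(n_samples, n_dims, rng):
        """Latin hypercube sample on [0, 1)^n_dims."""
        u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
        for j in range(n_dims):
            u[:, j] = rng.permutation(u[:, j])
        return u

    def failure_prob(growth_rate_mean, years=60, threshold=5.0, n_inner=200):
        """Inner loop: aleatory variability in initial flaw size and growth rate."""
        u = lhs_uniform(n_inner, 2, rng)
        a0 = 0.5 + 1.0 * u[:, 0]                       # initial flaw depth (mm)
        rate = growth_rate_mean * (0.5 + u[:, 1])      # growth rate (mm/yr)
        return float(np.mean(a0 + rate * years > threshold))

    # Outer loop: epistemic uncertainty in the mean growth rate.
    outer = lhs_uniform(50, 1, rng)
    p_fail = [failure_prob(growth_rate_mean=0.02 + 0.08 * q[0]) for q in outer]
    print(f"Failure probability: median {np.median(p_fail):.3f}, "
          f"95th percentile {np.percentile(p_fail, 95):.3f}")
    ```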

  12. Indirect Lightning Safety Assessment Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ong, M M; Perkins, M P; Brown, C G

    2009-04-24

    Lightning is a safety hazard for high-explosives (HE) and their detonators. In the last couple of decades, DOE facilities where HE is manufactured, assembled, stored or disassembled have been turned into Faraday-cage structures to protect against lightning. The most sensitive component is typically a detonator, and the safety concern with lightning is initiation of the HE. If HE is adequately separated from the walls of the facility that is struck by lightning, electrons discharged from the clouds should not reach the HE components. However, the current flowing from the strike point through the rebar of the building to the earth will create electromagnetic (EM) fields in the facility. Like an antenna in a radio receiver, the metal cable of a detonator can extract energy from the EM fields. This coupling of radio frequency (RF) energy to explosive components is an indirect effect of lightning currents [1]. The methodology for estimating the risk from indirect lightning effects will be presented. It has two parts: a method to determine the likelihood of a detonation given a lightning strike, and an approach for estimating the likelihood of a strike. The results of these two parts produce an overall probability of a detonation. The probability calculations are complex for five reasons: (1) lightning strikes are stochastic and relatively rare, (2) the quality of the Faraday cage varies from one facility to the next, (3) RF coupling is inherently a complex subject, (4) performance data for abnormally stressed detonators is scarce, and (5) the arc plasma physics is not well understood. Therefore, a rigorous mathematical analysis would be too complex. Instead, our methodology takes a more practical approach combining rigorous mathematical calculations where possible with empirical data when necessary. Where there is uncertainty, we compensate with conservative approximations. The goal is to determine a conservative estimate of the odds of a detonation. In Section 2, the methodology will be explained. This report will discuss topics at a high level. The reasons for selecting an approach will be justified. For those interested in technical details, references will be provided. In Section 3, a simple hypothetical example will be given to reinforce the concepts. While the methodology will touch on all the items shown in Figure 1, the focus of this report is the indirect effect, i.e., determining the odds of a detonation from given EM fields. Professor Martin Uman from the University of Florida has been characterizing and defining extreme lightning strikes. Using Professor Uman's research, Dr. Kimball Merewether at Sandia National Laboratory in Albuquerque calculated the EM fields inside a Faraday-cage type facility, when the facility is struck by lightning. In the following examples we will use Dr. Merewether's calculations from a poor quality Faraday cage as the input for the RF coupling analysis.
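
    The way the two parts of the assessment combine into an overall probability can be shown with a minimal sketch: a Poisson model for the likelihood of a strike over an evaluation period, multiplied by a conditional probability of detonation given a strike. Both numbers below are hypothetical placeholders, not facility values or results from the report.

    ```python
    import math

    # Minimal sketch of combining strike likelihood with a conditional detonation
    # probability. All numbers are hypothetical placeholders.
    strike_rate_per_year = 2.0e-3     # expected direct strikes to the facility per year
    p_det_given_strike = 1.0e-4       # conservative bound from an RF-coupling analysis
    years = 30                        # evaluation period

    # P(at least one strike in the period), then combine with the conditional term.
    p_strike = 1.0 - math.exp(-strike_rate_per_year * years)
    p_detonation = p_strike * p_det_given_strike
    print(f"P(strike in {years} yr) = {p_strike:.3e}, P(detonation) = {p_detonation:.3e}")
    ```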

  13. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    NASA Astrophysics Data System (ADS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low-dimensional input stochastic models to represent thermal diffusivity in two-phase microstructures. This model is used in analyzing the effect of topological variations of two-phase microstructures on the evolution of temperature in heat conduction processes.
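
    The graph-based isometric embedding step (mapping the sample set M to a low-dimensional region A) is in the same spirit as Isomap. The sketch below, which assumes scikit-learn is available, embeds synthetic two-phase "microstructures" generated from two hidden parameters into a 2-D space; it illustrates the kind of non-linear dimension reduction involved, not the paper's specific construction or convergence analysis.

    ```python
    import numpy as np
    from sklearn.manifold import Isomap

    # Minimal sketch of non-linear dimension reduction of two-phase microstructure
    # samples; the data are synthetic surrogates driven by two hidden parameters.
    rng = np.random.default_rng(0)
    n_samples, size = 300, 32

    t = rng.uniform(0, 2 * np.pi, size=(n_samples, 2))      # hidden parameters
    x = np.linspace(0, 2 * np.pi, size)
    fields = (np.sin(x[None, :, None] + t[:, 0, None, None])
              + np.cos(x[None, None, :] + t[:, 1, None, None]))
    samples = (fields > 0).astype(float).reshape(n_samples, size * size)

    # Graph-based isometric embedding into a low-dimensional region A.
    embedding = Isomap(n_neighbors=10, n_components=2)
    A = embedding.fit_transform(samples)
    print("Embedded coordinates:", A.shape)                  # (300, 2)
    ```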

  14. Guiding principles of USGS methodology for assessment of undiscovered conventional oil and gas resources

    USGS Publications Warehouse

    Charpentier, R.R.; Klett, T.R.

    2005-01-01

    During the last 30 years, the methodology for assessment of undiscovered conventional oil and gas resources used by the Geological Survey has undergone considerable change. This evolution has been based on five major principles. First, the U.S. Geological Survey has responsibility for a wide range of U.S. and world assessments and requires a robust methodology suitable for immaturely explored as well as maturely explored areas. Second, the assessments should be based on as comprehensive a set of geological and exploration history data as possible. Third, the perils of methods that solely use statistical methods without geological analysis are recognized. Fourth, the methodology and course of the assessment should be documented as transparently as possible, within the limits imposed by the inevitable use of subjective judgement. Fifth, the multiple uses of the assessments require a continuing effort to provide the documentation in such ways as to increase utility to the many types of users. Undiscovered conventional oil and gas resources are those recoverable volumes in undiscovered, discrete, conventional structural or stratigraphic traps. The USGS 2000 methodology for these resources is based on a framework of assessing numbers and sizes of undiscovered oil and gas accumulations and the associated risks. The input is standardized on a form termed the Seventh Approximation Data Form for Conventional Assessment Units. Volumes of resource are then calculated using a Monte Carlo program named Emc2, but an alternative analytic (non-Monte Carlo) program named ASSESS also can be used. The resource assessment methodology continues to change. Accumulation-size distributions are being examined to determine how sensitive the results are to size-distribution assumptions. The resource assessment output is changing to provide better applicability for economic analysis. The separate methodology for assessing continuous (unconventional) resources also has been evolving. Further studies of the relationship between geologic models of conventional and continuous resources will likely impact the respective resource assessment methodologies. © 2005 International Association for Mathematical Geology.
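
    The Monte Carlo aggregation step in this kind of framework, sampling a number of undiscovered accumulations and a size for each and summing, can be sketched as below. The distribution choices and parameters are illustrative only and are not Seventh Approximation inputs or Emc2/ASSESS algorithms.

    ```python
    import numpy as np

    # Minimal sketch of Monte Carlo aggregation of undiscovered accumulations.
    # Distributions and parameters are illustrative placeholders.
    rng = np.random.default_rng(42)
    n_trials = 20_000

    # Number of undiscovered accumulations per trial (here simply uniform 1..20).
    counts = rng.integers(low=1, high=21, size=n_trials)

    # Accumulation sizes: lognormal, in million barrels of oil equivalent (MMBOE).
    totals = np.array([
        rng.lognormal(mean=2.0, sigma=1.0, size=c).sum() for c in counts
    ])

    # Fractiles in resource-assessment convention: F95 = 95% chance of at least this much.
    f95, f50, f5 = np.percentile(totals, [5, 50, 95])
    print(f"Undiscovered resource (MMBOE): F95={f95:.0f}, F50={f50:.0f}, F5={f5:.0f}")
    ```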

  15. Comparison of quantitatively analyzed dynamic area-detector CT using various mathematic methods with FDG PET/CT in management of solitary pulmonary nodules.

    PubMed

    Ohno, Yoshiharu; Nishio, Mizuho; Koyama, Hisanobu; Fujisawa, Yasuko; Yoshikawa, Takeshi; Matsumoto, Sumiaki; Sugimura, Kazuro

    2013-06-01

    The objective of our study was to prospectively compare the capability of dynamic area-detector CT analyzed with different mathematic methods and PET/CT in the management of pulmonary nodules. Fifty-two consecutive patients with 96 pulmonary nodules underwent dynamic area-detector CT, PET/CT, and microbiologic or pathologic examinations. All nodules were classified into the following groups: malignant nodules (n = 57), benign nodules with low biologic activity (n = 15), and benign nodules with high biologic activity (n = 24). On dynamic area-detector CT, the total, pulmonary arterial, and systemic arterial perfusions were calculated using the dual-input maximum slope method; perfusion was calculated using the single-input maximum slope method; and extraction fraction and blood volume (BV) were calculated using the Patlak plot method. All indexes were statistically compared among the three nodule groups. Then, receiver operating characteristic analyses were used to compare the diagnostic capabilities of the maximum standardized uptake value (SUVmax) and each perfusion parameter having a significant difference between malignant and benign nodules. Finally, the diagnostic performances of the indexes were compared by means of the McNemar test. No adverse effects were observed in this study. All indexes except extraction fraction and BV, both of which were calculated using the Patlak plot method, showed significant differences among the three groups (p < 0.05). Areas under the curve of total perfusion calculated using the dual-input method, pulmonary arterial perfusion calculated using the dual-input method, and perfusion calculated using the single-input method were significantly larger than that of SUVmax (p < 0.05). The accuracy of total perfusion (83.3%) was significantly greater than the accuracy of the other indexes: pulmonary arterial perfusion (72.9%, p < 0.05), systemic arterial perfusion calculated using the dual-input method (69.8%, p < 0.05), perfusion (66.7%, p < 0.05), and SUVmax (60.4%, p < 0.05). Dynamic area-detector CT analyzed using the dual-input maximum slope method has better potential for the diagnosis of pulmonary nodules than dynamic area-detector CT analyzed using other methods and than PET/CT.
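
    The single-input maximum slope idea referenced above, perfusion estimated as the peak rate of tissue enhancement divided by the peak arterial enhancement, can be sketched with synthetic time-attenuation curves as below; the curves and numbers are illustrative, not CT data or the dual-input or Patlak formulations used in the study.

    ```python
    import numpy as np

    # Minimal sketch of the single-input maximum slope perfusion estimate.
    # The arterial and tissue time-attenuation curves are synthetic.
    t = np.arange(0, 40, 2.0)                                   # time (s)
    aif = 300 * np.exp(-0.5 * ((t - 14) / 4.0) ** 2)            # arterial curve (HU)
    tissue = 25 * (1 - np.exp(-np.clip(t - 8, 0, None) / 10))   # nodule curve (HU)

    max_slope = np.max(np.gradient(tissue, t))                  # peak enhancement rate (HU/s)
    perfusion = max_slope / np.max(aif)                         # fraction per second
    print(f"Perfusion (single-input maximum slope): {60 * 100 * perfusion:.1f} mL/min/100 mL")
    ```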

  16. Recording polarization gratings with a standing spiral wave

    NASA Astrophysics Data System (ADS)

    Vernon, Jonathan P.; Serak, Svetlana V.; Hakobyan, Rafik S.; Aleksanyan, Artur K.; Tondiglia, Vincent P.; White, Timothy J.; Bunning, Timothy J.; Tabiryan, Nelson V.

    2013-11-01

    A scalable and robust methodology for writing cycloidal modulation patterns of optical axis orientation in photosensitive surface alignment layers is demonstrated. Counterpropagating circularly polarized beams, generated by reflection of the input beam from a cholesteric liquid crystal, direct local surface orientation in a photosensitive surface. Purposely introducing a slight angle between the input beam and the photosensitive surface normal introduces a grating period/orientation that is readily controlled and templated. The resulting cycloidal diffractive waveplates offer utility in technologies requiring diffraction over a broad range of angles/wavelengths. This simple methodology of forming polarization gratings offers advantages over conventional fabrication techniques.

  17. Estimating floodwater depths from flood inundation maps and topography

    USGS Publications Warehouse

    Cohen, Sagy; Brakenridge, G. Robert; Kettner, Albert; Bates, Bradford; Nelson, Jonathan M.; McDonald, Richard R.; Huang, Yu-Fen; Munasinghe, Dinuke; Zhang, Jiaqi

    2018-01-01

    Information on flood inundation extent is important for understanding societal exposure, water storage volumes, flood wave attenuation, future flood hazard, and other variables. A number of organizations now provide flood inundation maps based on satellite remote sensing. These data products can efficiently and accurately provide the areal extent of a flood event, but do not provide floodwater depth, an important attribute for first responders and damage assessment. Here we present a new methodology and a GIS-based tool, the Floodwater Depth Estimation Tool (FwDET), for estimating floodwater depth based solely on an inundation map and a digital elevation model (DEM). We compare the FwDET results against water depth maps derived from hydraulic simulation of two flood events, a large-scale event for which we use a medium-resolution input layer (10 m) and a small-scale event for which we use a high-resolution (LiDAR; 1 m) input. Further testing is performed for two inundation maps with a number of challenging features that include a narrow valley, a large reservoir, and an urban setting. The results show FwDET can accurately calculate floodwater depth for diverse flooding scenarios but also leads to considerable bias in locations where the inundation extent does not align well with the DEM. In these locations, manual adjustment or higher spatial resolution input is required.
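
    The basic idea behind this kind of depth estimation can be sketched as follows: take the DEM elevation along the inundation boundary as the local water-surface elevation, spread it to the nearest interior flooded cell, and subtract the DEM. The sketch below uses a synthetic DEM and flood mask and assumes NumPy/SciPy; it is a simplified illustration, not the published FwDET tool.

    ```python
    import numpy as np
    from scipy import ndimage

    # Minimal sketch of boundary-elevation-based floodwater depth estimation.
    # DEM and inundation mask are synthetic placeholders.
    rng = np.random.default_rng(0)
    dem = np.add.outer(np.linspace(6.0, 5.0, 100), np.linspace(5.5, 5.0, 100))  # sloping valley (m)
    dem += 0.05 * rng.standard_normal(dem.shape)
    flood = dem < 10.8                                    # synthetic inundation extent

    # Boundary cells = flooded cells adjacent to dry cells.
    boundary = flood & ~ndimage.binary_erosion(flood)

    # For every cell, find the nearest boundary cell and take its DEM elevation
    # as the local water-surface elevation.
    _, (iy, ix) = ndimage.distance_transform_edt(~boundary, return_indices=True)
    water_surface = dem[iy, ix]
    depth = np.where(flood, np.clip(water_surface - dem, 0, None), 0.0)
    print(f"Max depth {depth.max():.2f} m, mean depth over flooded area {depth[flood].mean():.2f} m")
    ```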

  18. Users manual for updated computer code for axial-flow compressor conceptual design

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1992-01-01

    An existing computer code that determines the flow path for an axial-flow compressor either for a given number of stages or for a given overall pressure ratio was modified for use in air-breathing engine conceptual design studies. This code uses a rapid approximate design methodology that is based on isentropic simple radial equilibrium. Calculations are performed at constant-span-fraction locations from tip to hub. Energy addition per stage is controlled by specifying the maximum allowable values for several aerodynamic design parameters. New modeling was introduced to the code to overcome perceived limitations. Specific changes included variable rather than constant tip radius, flow path inclination added to the continuity equation, input of mass flow rate directly rather than indirectly as inlet axial velocity, solution for the exact value of overall pressure ratio rather than for any value that met or exceeded it, and internal computation of efficiency rather than the use of input values. The modified code was shown to be capable of computing efficiencies that are compatible with those of five multistage compressors and one fan that were tested experimentally. This report serves as a users manual for the revised code, Compressor Spanline Analysis (CSPAN). The modeling modifications, including two internal loss correlations, are presented. Program input and output are described. A sample case for a multistage compressor is included.

  19. Continued Development of a Global Heat Transfer Measurement System at AEDC Hypervelocity Wind Tunnel 9

    NASA Technical Reports Server (NTRS)

    Kurits, Inna; Lewis, M. J.; Hamner, M. P.; Norris, Joseph D.

    2007-01-01

    Heat transfer rates are an extremely important consideration in the design of hypersonic vehicles such as atmospheric reentry vehicles. This paper describes the development of a data reduction methodology to evaluate global heat transfer rates using surface temperature-time histories measured with the temperature sensitive paint (TSP) system at AEDC Hypervelocity Wind Tunnel 9. As a part of this development effort, a scale model of the NASA Crew Exploration Vehicle (CEV) was painted with TSP and multiple sequences of high resolution images were acquired during a five run test program. Heat transfer calculation from TSP data in Tunnel 9 is challenging due to relatively long run times, high Reynolds number environment and the desire to utilize typical stainless steel wind tunnel models used for force and moment testing. An approach to reduce TSP data into convective heat flux was developed, taking into consideration the conditions listed above. Surface temperatures from high quality quantitative global temperature maps acquired with the TSP system were then used as an input into the algorithm. Preliminary comparison of the heat flux calculated using the TSP surface temperature data with the value calculated using the standard thermocouple data is reported.
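
    A classical example of the general class of data-reduction algorithm described above is the one-dimensional semi-infinite-solid (Cook-Felderman) relation, which converts a surface temperature-time history into convective heat flux. The sketch below uses that standard relation with placeholder material properties and a synthetic temperature trace; it is not the Tunnel 9 TSP reduction code, which must additionally handle the long run times and model-material effects noted above.

    ```python
    import numpy as np

    # Minimal sketch of the Cook-Felderman 1-D semi-infinite heat-flux reduction.
    # Material properties and the surface temperature trace are placeholders.
    rho, c, k = 7900.0, 500.0, 16.0          # stainless steel (kg/m3, J/kg/K, W/m/K)
    beta = np.sqrt(rho * c * k)

    t = np.linspace(0.0, 5.0, 501)           # time (s)
    T = 300.0 + 20.0 * np.sqrt(t)            # synthetic surface temperature history (K)

    def cook_felderman(t, T, beta):
        q = np.zeros_like(t)
        for n in range(1, len(t)):
            dT = np.diff(T[: n + 1])
            denom = np.sqrt(t[n] - t[1 : n + 1]) + np.sqrt(t[n] - t[:n])
            q[n] = 2.0 * beta / np.sqrt(np.pi) * np.sum(dT / denom)
        return q

    q = cook_felderman(t, T, beta)
    print(f"Heat flux at t = {t[-1]:.1f} s: {q[-1] / 1e4:.1f} W/cm^2")
    ```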

  20. Criticality Calculations with MCNP6 - Practical Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2016-11-29

    These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: Develop an input model for MCNP; Describe how cross section data impact Monte Carlo and deterministic codes; Describe the importance of validation of computer codes and how it is accomplished; Describe the methodology supporting Monte Carlo codes and deterministic codes; Describe pitfalls of Monte Carlo calculations; Discuss the strengths and weaknesses of Monte Carlo and Discrete Ordinates codes; The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present. In the context of these limitations, identify a fissile system for which a diffusion theory solution would be adequate.

  1. Quantification of groundwater recharge in urban environments.

    PubMed

    Tubau, Isabel; Vázquez-Suñé, Enric; Carrera, Jesús; Valhondo, Cristina; Criollo, Rotman

    2017-08-15

    Groundwater management in urban areas requires a detailed knowledge of the hydrogeological system as well as the adequate tools for predicting the amount of groundwater and water quality evolution. In that context, a key difference between urban and natural areas lies in recharge evaluation. A large number of studies have been published since the 1990s that evaluate recharge in urban areas, with no specific methodology. Most of these methods show that there are generally higher rates of recharge in urban settings than in natural settings. Methods such as mixing ratios or groundwater modeling can be used to better estimate the relative importance of different sources of recharge and may prove to be a good tool for total recharge evaluation. However, accurate evaluation of this input is difficult. The objective is to present a methodology to help overcome those difficulties, and which will allow us to quantify the variability in space and time of the recharge into aquifers in urban areas. Recharge calculations have been initially performed by defining and applying some analytical equations, and validation has been assessed based on groundwater flow and solute transport modeling. This methodology is applicable to complex systems by considering temporal variability of all water sources. This allows managers of urban groundwater to evaluate the relative contribution of different recharge sources at a city scale by considering quantity and quality factors. The methodology is applied to the assessment of recharge sources in the Barcelona city aquifers. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. The activity-based methodology to assess ship emissions - A review.

    PubMed

    Nunes, R A O; Alvim-Ferraz, M C M; Martins, F G; Sousa, S I V

    2017-12-01

    Several studies tried to estimate atmospheric emissions with origin in the maritime sector, concluding that it contributed to the global anthropogenic emissions through the emission of pollutants that have a strong impact on human health and also on climate change. Thus, this paper aimed to review published studies since 2010 that used activity-based methodology to estimate ship emissions, to provide a summary of the available input data. After exclusions, 26 articles were analysed and the main information was scanned and registered, namely technical information about ships, ships activity and movement information, engines, fuels, load and emission factors. The larger part of studies calculating in-port ship emissions concluded that the majority was emitted during hotelling, and most of the authors allocating emissions by ship type concluded that containerships were the main pollutant emitters. To obtain technical information about ships, the combined use of data from Lloyd's Register of Shipping database with other sources such as port authority's databases, engine manufacturers and ship-owners seemed the best approach. The use of AIS data has been growing in recent years and seems to be the best method to report activities and movements of ships. To predict ship powers, the Hollenbach (1998) method, which estimates propelling power as a function of instantaneous speed based on total resistance, and the use of load-balancing schemes for multi-engine installations seemed to be the best practices for more accurate ship emission estimations. For emission factors improvement, new on-board measurement campaigns or studies should be undertaken. Despite the effort made in recent years to obtain more accurate shipping emission inventories, more precise input data (technical information about ships, engines, load and emission factors) should be obtained to improve the methodology to develop global and universally accepted emission inventories for an effective environmental policy plan. Copyright © 2017 Elsevier Ltd. All rights reserved.
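
    The core arithmetic of the activity-based approach, emissions as installed power × load factor × operating hours × emission factor, summed over engines and operating modes, can be sketched for one hypothetical port call as below. The power ratings, load factors, hours, and emission factors are placeholders, not values from the reviewed inventories.

    ```python
    # Minimal sketch of an activity-based NOx estimate for one port call.
    # All numbers are hypothetical placeholders.
    EF_NOX = {"main": 13.2, "aux": 13.9}       # emission factors (g/kWh), illustrative

    modes = [
        # (mode, hours, main-engine load factor, auxiliary-engine load factor)
        ("cruising",     6.0, 0.80, 0.30),
        ("manoeuvring",  1.0, 0.20, 0.50),
        ("hotelling",   18.0, 0.00, 0.40),
    ]
    P_MAIN, P_AUX = 15_000.0, 2_000.0          # installed power (kW)

    total_g = 0.0
    for mode, hours, lf_main, lf_aux in modes:
        e = (P_MAIN * lf_main * EF_NOX["main"] + P_AUX * lf_aux * EF_NOX["aux"]) * hours
        total_g += e
        print(f"{mode:12s}: {e / 1e3:8.1f} kg NOx")
    print(f"{'total':12s}: {total_g / 1e3:8.1f} kg NOx")
    ```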

  3. The Energetic Cost of Walking: A Comparison of Predictive Methods

    PubMed Central

    Kramer, Patricia Ann; Sylvester, Adam D.

    2011-01-01

    Background The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is “best”, but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. Methodology/Principal Findings We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Conclusion Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended to other species. PMID:21731693

  4. The Euler’s Graphical User Interface Spreadsheet Calculator for Solving Ordinary Differential Equations by Visual Basic for Application Programming

    NASA Astrophysics Data System (ADS)

    Gaik Tay, Kim; Cheong, Tau Han; Foong Lee, Ming; Kek, Sie Long; Abdul-Kahar, Rosmila

    2017-08-01

    In the previous work on Euler’s spreadsheet calculator for solving an ordinary differential equation, Visual Basic for Application (VBA) programming was used; however, a graphical user interface was not developed to capture user input. This weakness may confuse users about the input and output, since both are displayed in the same worksheet. In addition, the existing Euler’s spreadsheet calculator is not interactive, as there is no prompt message when a parameter is entered incorrectly. On top of that, there are no user instructions to guide users in entering the derivative function. Hence, in this paper, we addressed these limitations by developing a user-friendly and interactive graphical user interface. This improvement aims to capture users’ input, supported by user instructions and interactive error prompts implemented with VBA programming. This Euler’s graphical user interface spreadsheet calculator does not act as a black box, since users can click on any cell in the worksheet to see the formula used to implement the numerical scheme. In this way, it could enhance self-learning and life-long learning in implementing the numerical scheme in a spreadsheet and later in any programming language.
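
    For reference, the explicit Euler scheme that such a calculator implements, y_{n+1} = y_n + h·f(x_n, y_n), is shown below in Python rather than VBA; the test problem y' = y, y(0) = 1 is illustrative and is not taken from the paper.

    ```python
    import math

    # Minimal sketch of the explicit Euler scheme for dy/dx = f(x, y) with step h.
    def euler(f, x0, y0, h, n_steps):
        xs, ys = [x0], [y0]
        for _ in range(n_steps):
            y0 = y0 + h * f(x0, y0)   # Euler update
            x0 = x0 + h
            xs.append(x0)
            ys.append(y0)
        return xs, ys

    # Illustrative test problem: y' = y, y(0) = 1, solved to x = 1 with h = 0.1.
    xs, ys = euler(lambda x, y: y, x0=0.0, y0=1.0, h=0.1, n_steps=10)
    print(f"Euler y(1) ~ {ys[-1]:.4f}, exact e = {math.e:.4f}")
    ```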

  5. Ring rolling process simulation for microstructure optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the ring formed. One of the principal microstructure properties that most strongly influences the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on microstructural and on geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR has been developed in SFTC DEFORM V11, taking into account also the microstructural development of the material used (the nickel superalloy Waspalloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed in order to find the combination of process parameters which allows to minimize the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, by using the exact values of output parameters in the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. Then, an optimization procedure based on Genetic Algorithms has been applied. At the end, the minimum value of average grain size with respect to the input parameters has been found.

  6. Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities

    NASA Astrophysics Data System (ADS)

    Baylin-Stern, Adam C.

    This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage, and move from higher to lower emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future, 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters for transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy-efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work in the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.

  7. Numerical Calculation and Experiment of Coupled Dynamics of the Differential Velocity Vane Pump Driven by the Hybrid Higher-order Fourier Non-circular Gears

    NASA Astrophysics Data System (ADS)

    Xu, Gaohuan; Chen, Jianneng; Zhao, Huacheng

    2018-06-01

    The transmission systems of differential velocity vane pumps (DVVP) exhibit periodic vibrations under load, and the cause is not easy to identify. In order to optimize the performance of the pump, the authors proposed a DVVP driven by hybrid higher-order Fourier non-circular gears and tested it. Similar periodic vibrations and noises appeared under load. Taking this phenomenon into account, the paper proposes a combined fluid mechanics and solid mechanics simulation methodology to analyze the coupled dynamics between the fluid and the transmission system and to reveal the cause. The results show that the pump exhibits a reverse-drive phenomenon, in which the blades drive the non-circular gears when suction and discharge alternate. The reverse drive causes the sign of the shaft torque to alternate between positive and negative, so the transmission system produces torsional vibrations. To confirm the simulation results, micro-strains of the input shaft of the pump impeller were measured using a Wheatstone bridge and wireless sensor technology. The relationship between strain and torque was obtained by experimental calibration, and the true torque of the input shaft was then calculated indirectly. The experimental results are consistent with the simulation results, confirming that the periodic vibrations are mainly caused by fluid-solid coupling, which leads to periodic torsional vibration of the transmission system.

  8. Metabolic liver function measured in vivo by dynamic (18)F-FDGal PET/CT without arterial blood sampling.

    PubMed

    Horsager, Jacob; Munk, Ole Lajord; Sørensen, Michael

    2015-01-01

    Metabolic liver function can be measured by dynamic PET/CT with the radio-labelled galactose-analogue 2-[(18)F]fluoro-2-deoxy-D-galactose ((18)F-FDGal) in terms of hepatic systemic clearance of (18)F-FDGal (K, ml blood/ml liver tissue/min). The method requires arterial blood sampling from a radial artery (arterial input function), and the aim of this study was to develop a method for extracting an image-derived, non-invasive input function from a volume of interest (VOI). Dynamic (18)F-FDGal PET/CT data from 16 subjects without liver disease (healthy subjects) and 16 patients with liver cirrhosis were included in the study. Five different input VOIs were tested: four in the abdominal aorta and one in the left ventricle of the heart. Arterial input function from manual blood sampling was available for all subjects. K*-values were calculated using time-activity curves (TACs) from each VOI as input and compared to the K-value calculated using arterial blood samples as input. Each input VOI was tested on PET data reconstructed with and without resolution modelling. All five image-derived input VOIs yielded K*-values that correlated significantly with K calculated using arterial blood samples. Furthermore, TACs from two different VOIs yielded K*-values that did not statistically deviate from K calculated using arterial blood samples. A semicircle drawn in the posterior part of the abdominal aorta was the only VOI that was successful for both healthy subjects and patients as well as for PET data reconstructed with and without resolution modelling. Metabolic liver function using (18)F-FDGal PET/CT can be measured without arterial blood samples by using input data from a semicircle VOI drawn in the posterior part of the abdominal aorta.

  9. Design of feedback control systems for stable plants with saturating actuators

    NASA Technical Reports Server (NTRS)

    Kapasouris, Petros; Athans, Michael; Stein, Gunter

    1988-01-01

    A systematic control design methodology is introduced for multi-input/multi-output stable open loop plants with multiple saturations. This new methodology is a substantial improvement over previous heuristic single-input/single-output approaches. The idea is to introduce a supervisor loop so that when the references and/or disturbances are sufficiently small, the control system operates linearly as designed. For signals large enough to cause saturations, the control law is modified in such a way as to ensure stability and to preserve, to the extent possible, the behavior of the linear control design. Key benefits of the methodology are: the modified compensator never produces saturating control signals, integrators and/or slow dynamics in the compensator never windup, the directional properties of the controls are maintained, and the closed loop system has certain guaranteed stability properties. The advantages of the new design methodology are illustrated in the simulation of an academic example and the simulation of the multivariable longitudinal control of a modified model of the F-8 aircraft.

  10. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach

    PubMed Central

    Zeng, Xiaozheng; McGough, Robert J.

    2009-01-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
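
    The propagation step at the heart of the angular spectrum approach, a 2-D FFT of the input-plane pressure, multiplication by the spectral propagator exp(i·k_z·Δz) with k_z = sqrt(k² − k_x² − k_y²), and an inverse FFT, can be sketched as below. The Gaussian-apodized source plane is a stand-in for a computed array pressure plane (for example, from a fast nearfield calculation), and the grid and frequency are placeholder values.

    ```python
    import numpy as np

    # Minimal sketch of angular spectrum propagation between parallel planes.
    # The source-plane pressure and grid parameters are illustrative placeholders.
    f, c = 1.0e6, 1500.0                      # 1 MHz in water
    k = 2 * np.pi * f / c
    n, dx = 256, 0.25e-3                      # grid points and spacing (m)
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)

    p0 = np.exp(-(X**2 + Y**2) / (5e-3) ** 2).astype(complex)   # source-plane pressure

    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))        # evanescent parts -> imaginary

    dz = 0.05                                                   # propagation distance (m)
    P0 = np.fft.fft2(p0)                                        # angular spectrum of the source
    p_z = np.fft.ifft2(P0 * np.exp(1j * kz * dz))               # field in the output plane
    print(f"Peak |p| at z = {dz * 100:.0f} cm: {np.abs(p_z).max():.3f} (source peak 1.000)")
    ```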

  11. The Logic of Evaluation.

    ERIC Educational Resources Information Center

    Welty, Gordon A.

    The logic of the evaluation of educational and other action programs is discussed from a methodological viewpoint. However, no attempt is made to develop methods of evaluating programs. In Part I, the structure of an educational program is viewed as a system with three components--inputs, transformation of inputs into outputs, and outputs. Part II…

  12. Inverse Thermal Analysis of Ti-6Al-4V Friction Stir Welds Using Numerical-Analytical Basis Functions with Pseudo-Advection

    NASA Astrophysics Data System (ADS)

    Lambrakos, S. G.

    2018-04-01

    Inverse thermal analysis of Ti-6Al-4V friction stir welds is presented that demonstrates application of a methodology using numerical-analytical basis functions and temperature-field constraint conditions. This analysis provides parametric representation of friction-stir-weld temperature histories that can be adopted as input data to computational procedures for prediction of solid-state phase transformations and mechanical response. These parameterized temperature histories can be used for inverse thermal analysis of friction stir welds having process conditions similar to those considered here. Case studies are presented for inverse thermal analysis of friction stir welds that use three-dimensional constraint conditions on calculated temperature fields, which are associated with experimentally measured transformation boundaries and weld-stir-zone cross sections.

  13. Monte Carlo proton dose calculations using a radiotherapy specific dual-energy CT scanner for tissue segmentation and range assessment

    NASA Astrophysics Data System (ADS)

    Almeida, Isabel P.; Schyns, Lotte E. J. R.; Vaniqui, Ana; van der Heyden, Brent; Dedes, George; Resch, Andreas F.; Kamp, Florian; Zindler, Jaap D.; Parodi, Katia; Landry, Guillaume; Verhaegen, Frank

    2018-06-01

    Proton beam ranges derived from dual-energy computed tomography (DECT) images from a dual-spiral radiotherapy (RT)-specific CT scanner were assessed using Monte Carlo (MC) dose calculations. Images from a dual-source and a twin-beam DECT scanner were also used to establish a comparison to the RT-specific scanner. Proton range calculations based on conventional single-energy CT (SECT) were additionally performed to benchmark against literature values. Using two phantoms, a DECT methodology was tested as input for GEANT4 MC proton dose calculations. Proton ranges were calculated for different mono-energetic proton beams irradiating both phantoms; the results were compared to the ground truth based on the phantom compositions. The same methodology was applied in a head-and-neck cancer patient using both SECT and dual-spiral DECT scans from the RT-specific scanner. A pencil-beam-scanning plan was designed, which was subsequently optimized by MC dose calculations, and differences in proton range for the different image-based simulations were assessed. For the phantoms, the DECT method yielded overall better material segmentation, with >86% of the voxels correctly assigned for the dual-spiral and dual-source scanners, but only 64% for the twin-beam scanner. For the calibration phantom, the dual-spiral scanner yielded range errors below 1.2 mm (0.6% of range), similar to the errors yielded by the dual-source scanner (<1.1 mm, <0.5%). With the validation phantom, the dual-spiral scanner yielded errors below 0.8 mm (0.9%), whereas SECT yielded errors up to 1.6 mm (2%). For the patient case, where the absolute truth was missing, proton range differences between DECT and SECT were on average -1.2 ± 1.2 mm (-0.5% ± 0.5%). MC dose calculations were successfully performed on DECT images, where the dual-spiral scanner resulted in media segmentation and range accuracy as good as the dual-source CT. In the patient, the various methods showed relevant range differences.

  14. The method of belief scales as a means for dealing with uncertainty in tough regulatory decisions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilch, Martin M.

    Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (including frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.

  15. Methodology for assessing quantities of water and proppant injection, and water production associated with development of continuous petroleum accumulations

    USGS Publications Warehouse

    Haines, Seth S.

    2015-07-13

    The quantities of water and hydraulic fracturing proppant required for producing petroleum (oil, gas, and natural gas liquids) from continuous accumulations, and the quantities of water extracted during petroleum production, can be quantitatively assessed using a probabilistic approach. The water and proppant assessment methodology builds on the U.S. Geological Survey methodology for quantitative assessment of undiscovered technically recoverable petroleum resources in continuous accumulations. The U.S. Geological Survey assessment methodology for continuous petroleum accumulations includes fundamental concepts such as geologically defined assessment units, and probabilistic input values including well-drainage area, sweet- and non-sweet-spot areas, and success ratio within the untested area of each assessment unit. In addition to petroleum-related information, required inputs for the water and proppant assessment methodology include probabilistic estimates of per-well water usage for drilling, cementing, and hydraulic-fracture stimulation; the ratio of proppant to water for hydraulic fracturing; the percentage of hydraulic fracturing water that returns to the surface as flowback; and the ratio of produced water to petroleum over the productive life of each well. Water and proppant assessments combine information from recent or current petroleum assessments with water- and proppant-related input values for the assessment unit being studied, using Monte Carlo simulation, to yield probabilistic estimates of the volume of water for drilling, cementing, and hydraulic fracture stimulation; the quantity of proppant for hydraulic fracture stimulation; and the volumes of water produced as flowback shortly after well completion, and produced over the life of the well.
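    The assessment logic lends itself to a compact Monte Carlo sketch: draw each probabilistic input, combine them per trial, and report percentiles of the totals. The triangular distributions and numerical values below are made-up placeholders, not actual USGS assessment-unit inputs or software.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Illustrative probabilistic inputs (triangular: min, mode, max); the values
# are placeholders, not actual USGS assessment-unit parameters.
wells          = rng.triangular(500, 1500, 4000, n_trials)         # successful wells
water_per_well = rng.triangular(10_000, 20_000, 40_000, n_trials)  # m3 for drilling + fracturing
proppant_ratio = rng.triangular(0.05, 0.10, 0.15, n_trials)        # t proppant per m3 frac water
flowback_frac  = rng.triangular(0.10, 0.25, 0.50, n_trials)        # fraction returned as flowback

total_water    = wells * water_per_well
total_proppant = total_water * proppant_ratio
flowback       = total_water * flowback_frac

for name, v in [("water [m3]", total_water),
                ("proppant [t]", total_proppant),
                ("flowback [m3]", flowback)]:
    p5, p50, p95 = np.percentile(v, [5, 50, 95])
    print(f"{name:14s} P5={p5:.3e}  P50={p50:.3e}  P95={p95:.3e}")
```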

  16. 40 CFR 600.113-12 - Fuel economy, CO2 emissions, and carbon-related exhaust emission calculations for FTP, HFET, US06...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... economy and carbon-related exhaust emission values require input of the weighted grams/mile values for... the calculations of the carbon-related exhaust emissions require the input of grams/mile values for... as follows: (1) Calculate the weighted grams/mile values for the FTP test for CO2, HC, and CO, and...

  17. 40 CFR 600.113-12 - Fuel economy, CO2 emissions, and carbon-related exhaust emission calculations for FTP, HFET, US06...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... economy and carbon-related exhaust emission values require input of the weighted grams/mile values for... the calculations of the carbon-related exhaust emissions require the input of grams/mile values for... as follows: (1) Calculate the weighted grams/mile values for the FTP test for CO2, HC, and CO, and...

  18. 40 CFR 600.113-12 - Fuel economy, CO2 emissions, and carbon-related exhaust emission calculations for FTP, HFET, US06...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... economy and carbon-related exhaust emission values require input of the weighted grams/mile values for... the calculations of the carbon-related exhaust emissions require the input of grams/mile values for... as follows: (1) Calculate the weighted grams/mile values for the FTP test for CO2, HC, and CO, and...

  19. Radioactive waste disposal fees-Methodology for calculation

    NASA Astrophysics Data System (ADS)

    Bemš, Július; Králík, Tomáš; Kubančák, Ján; Vašíček, Jiří; Starý, Oldřich

    2014-11-01

    This paper summarizes the methodological approach used for calculating the fee for low- and intermediate-level radioactive waste disposal and for spent fuel disposal. The methodology itself is based on simulation of the cash flows related to the operation of the waste disposal system. The paper includes a demonstration of the methodology applied to the conditions of the Czech Republic.
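    As a minimal illustration of the cash-flow idea, the sketch below computes a levelized disposal fee as the ratio of discounted costs to discounted waste volumes; the yearly figures, time horizon, and discount rate are hypothetical placeholders rather than values from the Czech system.

```python
# Minimal sketch: a levelized disposal fee computed from simulated yearly
# cash flows, in the spirit of the methodology described above.
# All cost figures, volumes, and the discount rate are hypothetical.

years     = range(2025, 2075)
costs     = {y: 30e6 if y < 2060 else 80e6 for y in years}   # CZK/yr (operation, then closure)
waste_vol = {y: 500.0 if y < 2060 else 0.0 for y in years}   # m3 of waste accepted per year
r         = 0.03                                             # real discount rate
base      = 2025

pv_costs = sum(c / (1 + r) ** (y - base) for y, c in costs.items())
pv_waste = sum(v / (1 + r) ** (y - base) for y, v in waste_vol.items())

# Fee per m3 that makes discounted fee income cover discounted costs
fee_per_m3 = pv_costs / pv_waste
print(f"levelized fee: {fee_per_m3:,.0f} CZK per m3")
```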

  20. 42 CFR 484.230 - Methodology used for the calculation of the low-utilization payment adjustment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Methodology used for the calculation of the low... Prospective Payment System for Home Health Agencies § 484.230 Methodology used for the calculation of the low... amount is determined by using cost data set forth in § 484.210(a) and adjusting by the appropriate wage...

  1. Uncertainty propagation by using spectral methods: A practical application to a two-dimensional turbulence fluid model

    NASA Astrophysics Data System (ADS)

    Riva, Fabio; Milanese, Lucio; Ricci, Paolo

    2017-10-01

    To reduce the computational cost of the uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple to apply methodology based on decomposing the solution to the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainty that characterizes the time-averaged density gradient lengths, time-averaged densities, and fluctuation density level are evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
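    The sketch below illustrates the central idea on a toy problem: the model output is represented as a 2-D Chebyshev expansion in time and in the uncertain input parameter, and the resulting semi-analytic surrogate is then sampled cheaply for uncertainty propagation. For brevity the coefficients are fitted by least squares to a known solution rather than obtained from the weighted-residual solve used in the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Toy model: u(t, a) = exp(-a t) on t in [0, 1], a in [0.5, 2.0].
# Build u(t, a) ~ sum_ij c_ij T_i(t~) T_j(a~) and fit the coefficients by
# least squares on a grid (a stand-in for the weighted-residual solve).

def to_cheb(x, lo, hi):                    # map [lo, hi] -> [-1, 1]
    return 2 * (x - lo) / (hi - lo) - 1

t = np.linspace(0.0, 1.0, 40)
a = np.linspace(0.5, 2.0, 40)
T, A = np.meshgrid(t, a, indexing="ij")
U = np.exp(-A * T)                         # "model solution" samples

V = C.chebvander2d(to_cheb(T, 0, 1).ravel(),
                   to_cheb(A, 0.5, 2).ravel(), [8, 8])
coef, *_ = np.linalg.lstsq(V, U.ravel(), rcond=None)
coef = coef.reshape(9, 9)

# Cheap uncertainty propagation: sample the uncertain input a and evaluate
# the semi-analytic surrogate instead of re-running the model.
rng = np.random.default_rng(0)
a_samples = rng.uniform(0.8, 1.6, 20_000)
u_end = C.chebval2d(to_cheb(np.ones_like(a_samples), 0, 1),
                    to_cheb(a_samples, 0.5, 2), coef)
print("surrogate mean, std of u(t=1):", u_end.mean(), u_end.std())
print("exact     mean, std of u(t=1):", np.exp(-a_samples).mean(), np.exp(-a_samples).std())
```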

  2. Peru Water Resources: Integrating NASA Earth Observations into Water Resource Planning and Management in Perus La Libertad Region

    NASA Technical Reports Server (NTRS)

    Padgett-Vasquez, Steve; Steentofte, Catherine; Holbrook, Abigail

    2014-01-01

    Developing countries often struggle with providing water security and sanitation services to their populations. An important aspect of improving security and sanitation is developing a comprehensive understanding of the country's water budget. Water For People, a non-profit organization dedicated to providing clean drinking water, is working with the Peruvian government to develop a water budget for the La Libertad region of Peru, which includes the creation of an extensive watershed management plan. Currently, the data archive of the variables necessary to create the water management plan is extremely limited. Implementing NASA Earth observations has bolstered the dataset being used by Water For People, and the METRIC (Mapping EvapoTranspiration at high Resolution with Internalized Calibration) model has allowed for the estimation of evapotranspiration values for the region. Landsat 8 imagery and the DEM (Digital Elevation Model) from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensor onboard Terra were used to derive the land cover information and were used in conjunction with local weather data from Cascas provided by Peru's National Meteorological and Hydrological Service (SENAMHI). Python was used to combine input variables and METRIC model calculations to approximate the evapotranspiration values for the Ochape sub-basin of the Chicama River watershed. Once calculated, the evapotranspiration values and methodology were shared with Water For People to help supplement their decision support tools in the La Libertad region of Peru and potentially apply the methodology in other areas of need.
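    As a rough illustration of the kind of raster arithmetic involved, the sketch below estimates instantaneous evapotranspiration as the residual of a surface energy balance. It is emphatically not the METRIC model itself; the arrays are synthetic placeholders standing in for quantities that would be derived from Landsat 8, the ASTER DEM, and SENAMHI weather records.

```python
import numpy as np

# Highly simplified surface-energy-balance sketch: latent heat flux is taken
# as the residual of net radiation, soil heat flux, and sensible heat flux.
# All arrays are synthetic placeholders; this is not the full METRIC model.

shape = (100, 100)
rng = np.random.default_rng(1)
Rn = rng.uniform(400, 650, shape)   # net radiation [W/m2]
G  = 0.1 * Rn                       # soil heat flux, crude fraction of Rn
H  = rng.uniform(50, 250, shape)    # sensible heat flux [W/m2]

LE = Rn - G - H                     # latent heat flux as the residual [W/m2]
lam = 2.45e6                        # latent heat of vaporization [J/kg]

# Instantaneous ET in mm/hour (1 kg/m2 of water = 1 mm depth)
ET_inst = np.clip(LE, 0, None) / lam * 3600.0
print("mean instantaneous ET [mm/h]:", ET_inst.mean().round(3))
```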

  3. Modeling the low-velocity impact characteristics of woven glass epoxy composite laminates using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Mathivanan, N. Rajesh; Mouli, Chandra

    2012-12-01

    In this work, a new methodology based on artificial neural networks (ANN) has been developed to study the low-velocity impact characteristics of woven glass epoxy laminates of EP3 grade. To train and test the networks, multiple impact cases have been generated using statistical analysis of variance (ANOVA). Experimental tests were performed using an instrumented falling-weight impact-testing machine. Different impact velocities and impact energies on different thicknesses of laminates were considered as the input parameters of the ANN model. This model is a feed-forward back-propagation neural network. Using the input/output data of the experiments, the model was trained and tested. Further, the effects of the low-velocity impact response of the laminates at different energy levels were investigated by studying the cause-effect relationship among the influential factors using response surface methodology. The most significant parameter is determined from the other input variables through ANOVA.
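    A minimal sketch of the modeling step is shown below: a feed-forward network maps impact velocity, impact energy, and laminate thickness to a response such as peak load. The training data are synthetic placeholders, and scikit-learn's MLPRegressor stands in for whatever ANN toolbox the authors used.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in data: columns are impact velocity [m/s], impact
# energy [J], laminate thickness [mm]; target is a response such as peak load.
rng = np.random.default_rng(7)
X = rng.uniform([1.0, 5.0, 2.0], [5.0, 60.0, 6.0], size=(200, 3))
y = 0.8 * X[:, 1] + 2.5 * X[:, 2] - 0.5 * X[:, 0] + rng.normal(0, 1.0, 200)

# Feed-forward network trained with back-propagation (here via scikit-learn)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("predicted response for v=3 m/s, E=30 J, t=4 mm:",
      model.predict([[3.0, 30.0, 4.0]])[0].round(2))
```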

  4. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
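    The sketch below illustrates the sampling idea with placeholder data and a placeholder output function g(): covariance realizations are drawn from an inverse-Wishart distribution and mean realizations from a multivariate t, then propagated to show how limited data inflates the variance of an output statistic. The conjugate-style parameterization is an assumption and may differ in detail from the paper's formulation.

```python
import numpy as np
from scipy import stats

# Placeholder input data: 15 observations of two correlated input variables.
rng = np.random.default_rng(3)
data = rng.multivariate_normal([10.0, 0.2], [[4.0, 0.1], [0.1, 0.01]], size=15)
n, d = data.shape
xbar, S = data.mean(axis=0), np.cov(data, rowvar=False)

def g(x):                       # placeholder output function (e.g., a life model)
    return x[..., 0] * np.exp(-5.0 * x[..., 1])

out_means = []
for _ in range(2000):
    # Realizations of the population covariance and mean given n samples
    cov_r  = stats.invwishart.rvs(df=n - 1, scale=(n - 1) * S, random_state=rng)
    mean_r = stats.multivariate_t.rvs(loc=xbar, shape=cov_r / n, df=n - d,
                                      random_state=rng)
    x = rng.multivariate_normal(mean_r, cov_r, size=500)   # propagate inputs
    out_means.append(g(x).mean())

print("std of the output mean due to limited data:", np.std(out_means).round(3))
```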

  5. Processing Device for High-Speed Execution of an Xrisc Computer Program

    NASA Technical Reports Server (NTRS)

    Ng, Tak-Kwong (Inventor); Mills, Carl S. (Inventor)

    2016-01-01

    A processing device for high-speed execution of a computer program is provided. A memory module may store one or more computer programs. A sequencer may select one of the computer programs and controls execution of the selected program. A register module may store intermediate values associated with a current calculation set, a set of output values associated with a previous calculation set, and a set of input values associated with a subsequent calculation set. An external interface may receive the set of input values from a computing device and provides the set of output values to the computing device. A computation interface may provide a set of operands for computation during processing of the current calculation set. The set of input values are loaded into the register and the set of output values are unloaded from the register in parallel with processing of the current calculation set.

  6. An Automated Program Testing Methodology and its Implementation.

    DTIC Science & Technology

    1980-01-01

    correctly on its input data for each software system; the number of assertions violated defines an "error function" over the input space of the program. This removes the need to examine a program's output directly and makes it possible to choose test cases which uncover errors early in the development cycle.

  7. 77 FR 46790 - Environmental Impact Statement: Dane County, WI

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-06

    ... input on the project's Coordination Plan (CP) and Impact Assessment Methodology (IAM) were afforded to.... Public input was obtained on the draft project CP and IAM plan at the October 2007 Public Information Meeting (PIM). The completed project CP and IAM plan was issued in October 2008. A follow-up Agency...

  8. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise P.

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary.

  9. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    NASA Astrophysics Data System (ADS)

    Hartini, Entin; Andiwijayakusuma, Dinan

    2014-09-01

    This research was carried out to develop a code for uncertainty analysis based on a statistical approach for assessing the uncertainty of input parameters. In the fuel burn-up calculation, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. This calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on probability density functions. The developed code is a Python script that couples with MCNPX for criticality and burn-up calculations. The simulation is done by modeling the geometry of the PWR core with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the ENDF/B-VI continuous-energy cross-section library. MCNPX requires nuclear data in ACE format, so interfaces were developed for obtaining nuclear data in ACE format from ENDF through NJOY processing for temperature changes over a certain range.

  10. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartini, Entin, E-mail: entin@batan.go.id; Andiwijayakusuma, Dinan, E-mail: entin@batan.go.id

    2014-09-30

    This research was carried out to develop a code for uncertainty analysis based on a statistical approach for assessing the uncertainty of input parameters. In the fuel burn-up calculation, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. This calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on probability density functions. The developed code is a Python script that couples with MCNPX for criticality and burn-up calculations. The simulation is done by modeling the geometry of the PWR core with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the ENDF/B-VI continuous-energy cross-section library. MCNPX requires nuclear data in ACE format, so interfaces were developed for obtaining nuclear data in ACE format from ENDF through NJOY processing for temperature changes over a certain range.
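    A schematic of the sampling side of such a coupling script is sketched below: the three uncertain inputs are drawn from assumed normal distributions and written into an input deck template. The template file name and its {fuel_rho}-style tags are hypothetical; running MCNPX on each deck and parsing k-eff or burn-up results would follow in the same loop.

```python
import numpy as np

# Schematic sampling loop for a coupled uncertainty study: draw the uncertain
# inputs (fuel density, coolant density, fuel temperature) from assumed
# distributions and write one input deck per sample.  The template file and
# its format tags are hypothetical placeholders.

rng = np.random.default_rng(0)
template = open("pwr_template.inp").read()       # hypothetical deck with {fuel_rho} etc.

samples = {
    "fuel_rho":    rng.normal(10.4, 0.1, 50),    # g/cm3
    "coolant_rho": rng.normal(0.71, 0.02, 50),   # g/cm3
    "fuel_temp":   rng.normal(900.0, 30.0, 50),  # K
}

for i in range(50):
    deck = template.format(fuel_rho=samples["fuel_rho"][i],
                           coolant_rho=samples["coolant_rho"][i],
                           fuel_temp=samples["fuel_temp"][i])
    with open(f"case_{i:03d}.inp", "w") as f:
        f.write(deck)
# Each deck would then be run with the transport code and the results
# collected to build the output statistics.
```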

  11. Development and application of a mechanistic model to estimate emission of nitrous oxide from UK agriculture

    NASA Astrophysics Data System (ADS)

    Brown, L.; Syed, B.; Jarvis, S. C.; Sneath, R. W.; Phillips, V. R.; Goulding, K. W. T.; Li, C.

    A mechanistic model of N2O emission from agricultural soil (DeNitrification-DeComposition—DNDC) was modified for application to the UK, and was used as the basis of an inventory of N2O emission from UK agriculture in 1990. UK-specific input data were added to DNDC's database and the ability to simulate daily C and N inputs from grazing animals and applied animal waste was added to the model. The UK version of the model, UK-DNDC, simulated emissions from 18 different crop types on the 3 areally dominant soils in each county. Validation of the model at the field scale showed that predictions matched observations well. Emission factors for the inventory were calculated from estimates of N2O emission from UK-DNDC, in order to maintain direct comparability with the IPCC approach. These, along with activity data, were included in a transparent spreadsheet format. Using UK-DNDC, the estimate of N2O-N emission from UK current agricultural practice in 1990 was 50.9 Gg. This total comprised 31.7 Gg from the soil sector, 5.9 Gg from animals and 13.2 Gg from the indirect sector. The range of this estimate (using the range of soil organic C for each soil used) was 30.5-62.5 Gg N. Estimates of emissions in each sector were compared to those calculated using the IPCC default methodology. Emissions from the soil and indirect sectors were smaller with the UK-DNDC approach than with the IPCC methodology, while emissions from the animal sector were larger. The model runs suggested a relatively large emission from agricultural land that was not attributable to current agricultural practices (33.8 Gg in total, 27.4 Gg from the soil sector). This 'background' component is partly the result of historical agricultural land use. It is not normally included in inventories of emission, but would increase the total emission of N2O-N from agricultural land in 1990 to 78.3 Gg.

  12. Design for performance enhancement in feedback control systems with multiple saturating nonlinearities. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kapasouris, Petros

    1988-01-01

    A systematic control design methodology is introduced for multi-input/multi-output systems with multiple saturations. The methodology can be applied to stable and unstable open loop plants with magnitude and/or rate control saturations and to systems in which state limitations are desired. This new methodology is a substantial improvement over previous heuristic single-input/single-output approaches. The idea is to introduce a supervisor loop so that when the references and/or disturbances are sufficiently small, the control system operates linearly as designed. For signals large enough to cause saturations, the control law is modified in such a way as to ensure stability and to preserve, to the extent possible, the behavior of the linear control design. Key benefits of this methodology are: the modified compensator never produces saturating control signals, integrators and/or slow dynamics in the compensator never wind up, the directional properties of the controls are maintained, and the closed loop system has certain guaranteed stability properties. The advantages of the new design methodology are illustrated by numerous simulations, including the multivariable longitudinal control of modified models of the F-8 (stable) and F-16 (unstable) aircraft.

  13. Methodology for Calculating Cost-per-Mile for Current and Future Vehicle Powertrain Technologies, with Projections to 2024

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timbario, Thomas A.; Timbario, Thomas J.; Laffen, Melissa J.

    2011-04-12

    Currently, several cost-per-mile calculators exist that can provide estimates of acquisition and operating costs for consumers and fleets. However, these calculators are limited in their ability to determine the difference in cost per mile for consumer versus fleet ownership, to calculate the costs beyond one ownership period, to show the sensitivity of the cost per mile to the annual vehicle miles traveled (VMT), and to estimate future increases in operating and ownership costs. Oftentimes, these tools apply a constant percentage increase over the time period of vehicle operation, or in some cases, no increase in direct costs at all over time. A more accurate cost-per-mile calculator has been developed that allows the user to analyze these costs for both consumers and fleets. Operating costs included in the calculation tool include fuel, maintenance, tires, and repairs; ownership costs include insurance, registration, taxes and fees, depreciation, financing, and tax credits. The calculator was developed to allow simultaneous comparisons of conventional light-duty internal combustion engine (ICE) vehicles, mild and full hybrid electric vehicles (HEVs), and fuel cell vehicles (FCVs). Additionally, multiple periods of operation, as well as three different annual VMT values for both the consumer case and fleets can be investigated to the year 2024. These capabilities were included since today's “cost to own” calculators typically include the ability to evaluate only one VMT value and are limited to current model year vehicles. The calculator allows the user to select between default values or user-defined values for certain inputs including fuel cost, vehicle fuel economy, manufacturer's suggested retail price (MSRP) or invoice price, depreciation and financing rates.

  14. Diagnostic methodology for incipient system disturbance based on a neural wavelet approach

    NASA Astrophysics Data System (ADS)

    Won, In-Ho

    Since incipient system disturbances are easily mixed up with other events or noise sources, the signal from the system disturbance can be neglected or identified as noise. Because the knowledge and information available from the measurements are incomplete or inexact, the use of artificial intelligence (AI) tools to overcome these uncertainties and limitations was explored. A methodology integrating the feature extraction efficiency of the wavelet transform with the classification capabilities of neural networks is developed for signal classification in the context of detecting incipient system disturbances. The synergistic effects of wavelets and neural networks present more strength and less weakness than either technique taken alone. A wavelet feature extractor is developed to form concise feature vectors for neural network inputs. The feature vectors are calculated from wavelet coefficients to reduce redundancy and computational expense. In this procedure, statistical features based on applying the fractal concept to the wavelet coefficients play a crucial role in the wavelet feature extractor. To verify the proposed methodology, two applications are investigated and successfully tested. The first involves pump cavitation detection using a dynamic pressure sensor. The second pertains to incipient pump cavitation detection using signals obtained from a current sensor. Through comparisons among the three proposed feature vectors and with statistical techniques, it is shown that the variance feature extractor provides a better approach in the performed applications.
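    The feature-extraction step can be sketched compactly: decompose the sensor signal with a discrete wavelet transform and keep one statistic of the coefficients in each sub-band (here the variance, which the study found most effective) as the neural-network input vector. The signals, wavelet family, and decomposition depth below are illustrative choices, not the study's.

```python
import numpy as np
import pywt

# Sketch of a wavelet feature extractor: the variance of the coefficients in
# each sub-band forms a concise feature vector for a neural-network classifier.
# The signals below are synthetic; wavelet family and level are illustrative.

def wavelet_variance_features(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.var(c) for c in coeffs])   # one feature per sub-band

fs = 5000.0
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(5)
healthy    = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
cavitating = healthy + 0.4 * rng.standard_normal(t.size) * (np.sin(2 * np.pi * 7 * t) > 0)

print("healthy   :", wavelet_variance_features(healthy).round(4))
print("cavitating:", wavelet_variance_features(cavitating).round(4))
# These vectors would be the inputs to the neural-network classifier.
```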

  15. Comparing Methodologies for Evaluating Emergency Medical Services Ground Transport Access to Time-critical Emergency Services: A Case Study Using Trauma Center Care.

    PubMed

    Doumouras, Aristithes G; Gomez, David; Haas, Barbara; Boyes, Donald M; Nathens, Avery B

    2012-09-01

    The regionalization of medical services has resulted in improved outcomes and greater compliance with existing guidelines. For certain "time-critical" conditions intimately associated with emergency medicine, early intervention has demonstrated mortality benefits. For these conditions, then, appropriate triage within a regionalized system at first diagnosis is paramount, ideally occurring in the field by emergency medical services (EMS) personnel. Therefore, EMS ground transport access is an important metric in the ongoing evaluation of a regionalized care system for time-critical emergency services. To our knowledge, no studies have demonstrated how methodologies for calculating EMS ground transport access differ in their estimates of access over the same study area for the same resource. This study uses two methodologies to calculate EMS ground transport access to trauma center care in a single study area to explore their manifestations and critically evaluate the differences between the methodologies. Two methodologies were compared in their estimations of EMS ground transport access to trauma center care: a routing methodology (RM) and an as-the-crow-flies methodology (ACFM). These methodologies were adaptations of the only two methodologies that had been previously used in the literature to calculate EMS ground transport access to time-critical emergency services across the United States. The RM and ACFM were applied to the nine Level I and Level II trauma centers within the province of Ontario by creating trauma center catchment areas at 30, 45, 60, and 120 minutes and calculating the population and area encompassed by the catchments. Because the methodologies were identical for measuring air access, this study looks specifically at EMS ground transport access. Catchments for the province were created for each methodology at each time interval, and their populations and areas were significantly different at all time periods. Specifically, the RM calculated significantly larger populations at every time interval while the ACFM calculated larger catchment area sizes. This trend is counterintuitive (i.e., larger catchment should mean higher populations), and it was found to be most disparate at the shortest time intervals (under 60 minutes). Through critical evaluation of the differences, the authors elucidated that the ACFM could calculate road access in areas with no roads and overestimate access in low-density areas compared to the RM, potentially affecting delivery of care decisions. Based on these results, the authors believe that future methodologies for calculating EMS ground transport access must incorporate a continuous and valid route through the road network as well as use travel speeds appropriate to the road segments traveled; alternatively, we feel that variation in methods for calculating road distances would have little effect on realized access. Overall, as more complex models for calculating EMS ground transport access come into use, there needs to be a standard methodology against which they can be improved and compared. Based on these findings, the authors believe that this should be the RM. © 2012 by the Society for Academic Emergency Medicine.
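    For concreteness, the sketch below shows the core of an as-the-crow-flies style calculation: population points within the straight-line radius implied by an assumed ground speed and time threshold. Coordinates, speed, and populations are invented placeholders; a routing methodology would instead accumulate the population reachable within the travel-time budget over the actual road network.

```python
import numpy as np

# As-the-crow-flies style catchment: population points within a straight-line
# radius implied by an assumed ground speed and time threshold.  All inputs
# below are illustrative placeholders.

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

trauma_center = (43.66, -79.39)            # illustrative latitude/longitude
rng = np.random.default_rng(11)
pts = np.column_stack([rng.uniform(42.5, 45.0, 5000),      # settlement latitudes
                       rng.uniform(-81.5, -77.5, 5000),    # settlement longitudes
                       rng.integers(50, 5000, 5000)])      # settlement populations

speed_kmh, minutes = 80.0, 60.0
radius_km = speed_kmh * minutes / 60.0
d = haversine_km(pts[:, 0], pts[:, 1], *trauma_center)
print("population within 60-minute crow-flies catchment:",
      int(pts[d <= radius_km, 2].sum()))
```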

  16. 76 FR 59896 - Wage Methodology for the Temporary Non-Agricultural Employment H-2B Program; Postponement of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-28

    ... Wage Rule revised the methodology by which we calculate the prevailing wages to be paid to H-2B workers... methodology by which we calculate the prevailing wages to be paid to H-2B workers and United States (U.S... concerning the calculation of the prevailing wage rate in the H-2B program. CATA v. Solis, Dkt. No. 103-1...

  17. New methodology for fast prediction of wheel wear evolution

    NASA Astrophysics Data System (ADS)

    Apezetxea, I. S.; Perez, X.; Casanueva, C.; Alonso, A.

    2017-07-01

    In railway applications, wear prediction in the wheel-rail interface is a fundamental matter for studying problems such as wheel lifespan and the evolution of vehicle dynamic characteristics with time. However, one of the principal drawbacks of the existing methodologies for calculating the wear evolution is the computational cost. This paper proposes a new wear prediction methodology with a reduced computational cost. This methodology is based on two main steps: the first is the substitution of the calculations over the whole network by the calculation of the contact conditions at certain characteristic points, from whose results the wheel wear evolution can be inferred. The second is the substitution of the dynamic calculation (time integration calculations) by a quasi-static calculation (the solution of the quasi-static situation of a vehicle at a certain point, which is the same as neglecting the acceleration terms in the dynamic equations). These simplifications allow a significant reduction of computational cost to be obtained while maintaining an acceptable level of accuracy (errors on the order of 5-10%). Several case studies are analysed in the paper with the objective of assessing the proposed methodology. The results obtained in the case studies allow concluding that the proposed methodology is valid for an arbitrary vehicle running through an arbitrary track layout.

  18. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
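    A toy sketch of the ANOVA idea, stripped of the collocation and reduced-basis machinery, is given below: a high-dimensional response is approximated by an anchor value plus one-dimensional corrections, so only low-dimensional component problems need to be evaluated. The test function is a stand-in for a PDE quantity of interest.

```python
import numpy as np

# First-order anchored-ANOVA approximation of a high-dimensional response:
# f(x) ~ f(anchor) + sum_i [ f(anchor with x_i replaced) - f(anchor) ].
# Higher-order terms and the spatial reduced-basis step are omitted.

d = 8
anchor = np.full(d, 0.5)

def f(x):                                   # placeholder quantity of interest
    return np.exp(-np.sum(0.3 * x**2)) + 0.1 * x[0] * x[1]

def anova1(f, x, anchor):
    val = f(anchor)
    for i in range(len(anchor)):
        xi = anchor.copy()
        xi[i] = x[i]
        val += f(xi) - f(anchor)            # 1-D correction along coordinate i
    return val

rng = np.random.default_rng(2)
xs = rng.uniform(0, 1, size=(1000, d))
err = [abs(f(x) - anova1(f, x, anchor)) for x in xs]
print("mean |error| of first-order anchored ANOVA:", np.mean(err).round(5))
```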

  19. Calculation of the radiative properties of photosynthetic microorganisms

    NASA Astrophysics Data System (ADS)

    Dauchet, Jérémi; Blanco, Stéphane; Cornet, Jean-François; Fournier, Richard

    2015-08-01

    A generic methodological chain for the predictive calculation of the light-scattering and absorption properties of photosynthetic microorganisms within the visible spectrum is presented here. This methodology has been developed in order to provide the radiative properties needed for the analysis of radiative transfer within photobioreactor processes, with a view to enabling their optimization for large-scale sustainable production of chemicals for energy and chemistry. It gathers an electromagnetic model of light-particle interaction along with detailed and validated protocols for the determination of input parameters: morphological and structural characteristics of the studied microorganisms as well as their photosynthetic-pigment content. The microorganisms are described as homogeneous equivalent-particles whose shape and size distribution is characterized by image analysis. The imaginary part of their refractive index is obtained thanks to a new and quite extended database of the in vivo absorption spectra of photosynthetic pigments (that is made available to the reader). The real part of the refractive index is then calculated by using the singly subtractive Kramers-Kronig approximation, for which the anchor point is determined with the Bruggeman mixing rule, based on the volume fraction of the microorganism internal-structures and their refractive indices (extracted from a database). Afterwards, the radiative properties are estimated using the Schiff approximation for spheroidal or cylindrical particles, as a first step toward the description of the complexity and diversity of the shapes encountered within the microbial world. Finally, these predictive results are compared with experimental normal-hemispherical transmittance spectra for validation. This entire procedure is implemented for Rhodospirillum rubrum, Arthrospira platensis and Chlamydomonas reinhardtii, each representative of the main three kinds of photosynthetic microorganisms, i.e. respectively photosynthetic bacteria, cyanobacteria and eukaryotic microalgae. The obtained results are in very good agreement with the experimental measurements when the shape of the microorganisms is well described (in comparison to the standard volume-equivalent sphere approximation). As a main perspective, the consideration of the helical shape of Arthrospira platensis appears to be a key to an accurate estimation of its radiative properties. On the whole, the presented methodological chain also appears of great interest for other scientific communities such as atmospheric science, oceanography, astrophysics and engineering.

  20. Environmental flow allocation and statistics calculator

    USGS Publications Warehouse

    Konrad, Christopher P.

    2011-01-01

    The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft Visual Basic for Applications and implemented as a macro in Microsoft Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
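    As an illustration (in Python rather than the Visual Basic used by EFASC) of the kind of statistics such a tool computes from a daily series, the sketch below derives the annual mean flow and the annual 7-day minimum flow from a synthetic record; it does not reproduce EFASC's allocation rules or its output conventions.

```python
import numpy as np
import pandas as pd

# Example hydrologic statistics from a daily streamflow series: annual mean
# flow and the annual 7-day minimum flow.  The synthetic series stands in for
# an input file of dates and daily discharge values.

rng = np.random.default_rng(8)
dates = pd.date_range("2000-01-01", "2009-12-31", freq="D")
doy = dates.dayofyear.to_numpy()
q = 20 + 15 * np.sin(2 * np.pi * (doy - 120) / 365.25) + rng.gamma(2.0, 2.0, dates.size)
flow = pd.Series(q, index=dates, name="discharge_cms")

annual_mean = flow.resample("YS").mean()
min_7day = flow.rolling(7).mean().resample("YS").min()   # annual 7-day low flow

summary = pd.DataFrame({"mean_flow": annual_mean, "min_7day_flow": min_7day})
print(summary.round(2))
```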

  1. Determination and Applications of Environmental Costs at Different Sized Airports: Aircraft Noise and Engine Emissions

    NASA Technical Reports Server (NTRS)

    Lu, Cherie; Lierens, Abigail

    2003-01-01

    With the increasing trend of charging for externalities and the aim of encouraging the sustainable development of the air transport industry, there is a need to evaluate the social costs of these undesirable side effects, mainly aircraft noise and engine emissions, for different airports. The aircraft noise and engine emissions social costs are calculated in monetary terms for five different airports, ranging from hub airports to small regional airports. The number of residences within different levels of airport noise contours and the aircraft noise classifications are the main determinants for assessing aircraft noise social costs. Meanwhile, based on the damage caused by different engine pollutants to human health, vegetation, materials, the aquatic ecosystem, and climate, the aircraft engine emissions social costs vary with engine type and aircraft category. The results indicate that the relationship between environmental costs and the traffic volume of an airport appears to be curvilinear. The results and methodology of the environmental cost calculation could provide input to the proposed European-wide harmonized noise charges as well as to the social cost benefit analysis of airports.

  2. On-field study of anaerobic digestion full-scale plants (part I): an on-field methodology to determine mass, carbon and nutrients balance.

    PubMed

    Schievano, Andrea; D'Imporzano, Giuliana; Salati, Silvia; Adani, Fabrizio

    2011-09-01

    The mass balance (input/output mass flows) of full-scale anaerobic digestion (AD) processes should be known for a series of purposes, e.g. to understand carbon and nutrient balances, to evaluate the contribution of AD processes to elemental cycles, especially when digestates are applied to agricultural land, and to measure the biodegradation yields and the process efficiency. In this paper, three alternative methods were studied to determine the mass balance in full-scale processes, discussing their reliability and applicability. Through a 1-year survey on three full-scale AD plants and through 38 laboratory-scale batch digesters, the congruency of the considered methods was demonstrated and a linear equation was provided that allows calculating the wet weight losses (WL) from the methane produced (MP) by the plant (WL = 41.949 × MP + 20.853, R² = 0.950, p < 0.01). Additionally, this new tool was used to calculate carbon, nitrogen, phosphorus and potassium balances of the three observed AD plants. Copyright © 2011 Elsevier Ltd. All rights reserved.
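    A worked example of the reported regression is shown below; the inputs and outputs carry the units defined in the paper, which the abstract does not restate.

```python
# Worked example of the regression reported above: estimate the wet-weight
# loss of a full-scale digester from its methane production.
# WL = 41.949 * MP + 20.853  (units as defined in the paper)

def weight_loss(methane_produced):
    return 41.949 * methane_produced + 20.853

for mp in (10.0, 50.0, 100.0):
    print(f"MP = {mp:6.1f}  ->  estimated wet weight loss = {weight_loss(mp):9.1f}")
```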

  3. Permeable Surface Corrections for Ffowcs Williams and Hawkings Integrals

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Casper, Jay H.

    2005-01-01

    The acoustic prediction methodology discussed herein applies an acoustic analogy to calculate the sound generated by sources in an aerodynamic simulation. Sound is propagated from the computed flow field by integrating the Ffowcs Williams and Hawkings equation on a suitable control surface. Previous research suggests that, for some applications, the integration surface must be placed away from the solid surface to incorporate source contributions from within the flow volume. As such, the fluid mechanisms in the input flow field that contribute to the far-field noise are accounted for by their mathematical projection as a distribution of source terms on a permeable surface. The passage of nonacoustic disturbances through such an integration surface can result in significant error in an acoustic calculation. A correction for the error is derived in the frequency domain using a frozen gust assumption. The correction is found to work reasonably well in several test cases where the error is a small fraction of the actual radiated noise. However, satisfactory agreement has not been obtained between noise predictions using the solution from a three-dimensional, detached-eddy simulation of flow over a cylinder.

  4. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A contained in R^d (d << n) is then constructed.
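    The dimension-reduction step can be sketched with an off-the-shelf isometric embedding: high-dimensional samples lying near a curved manifold are mapped to a low-dimensional set A. The synthetic vectors below merely stand in for microstructure realizations, and scikit-learn's Isomap is used as a generic example of a non-linear reduction method rather than the paper's specific construction.

```python
import numpy as np
from sklearn.manifold import Isomap

# Non-linear dimension reduction of "microstructure" samples: synthetic
# high-dimensional vectors lying near a curved 2-D manifold are embedded
# into a low-dimensional region A.

rng = np.random.default_rng(4)
n_samples, n_features = 400, 256          # n_features plays the role of R^n
u, v = rng.uniform(0, 1, n_samples), rng.uniform(0, 1, n_samples)
basis = rng.standard_normal((3, n_features))
M = (np.column_stack([np.sin(2 * np.pi * u), np.cos(2 * np.pi * u), v]) @ basis
     + 0.01 * rng.standard_normal((n_samples, n_features)))

A = Isomap(n_neighbors=10, n_components=2).fit_transform(M)   # low-dim region A
print("embedded coordinates shape:", A.shape)

# New realizations could then be generated by sampling points in A and mapping
# them back to microstructure space (the inverse map is the harder step).
```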

  5. A new methodology for estimating nuclear casualties as a function of time.

    PubMed

    Zirkle, Robert A; Walsh, Terri J; Disraelly, Deena S; Curling, Carl A

    2011-09-01

    The Human Response Injury Profile (HRIP) nuclear methodology provides an estimate of casualties occurring as a consequence of nuclear attacks against military targets for planning purposes. The approach develops user-defined, time-based casualty and fatality estimates based on progressions of underlying symptoms and their severity changes over time. This paper provides a description of the HRIP nuclear methodology and its development, including inputs, human response and the casualty estimation process.

  6. Development and testing of methodology for evaluating the performance of multi-input/multi-output digital control systems

    NASA Technical Reports Server (NTRS)

    Polotzky, Anthony S.; Wieseman, Carol; Hoadley, Sherwood Tiffany; Mukhopadhyay, Vivek

    1990-01-01

    The development of a controller performance evaluation (CPE) methodology for multi-input/multi-output (MIMO) digital control systems is described. The equations used to obtain the open-loop plant, controller transfer matrices, and return-difference matrices are given. Results of applying the CPE methodology to evaluate MIMO digital flutter suppression systems being tested on an active flexible wing wind-tunnel model are presented to demonstrate the CPE capability.

  7. Calculation of the final energy demand for the Federal Republic of Germany with the simulation model MEDEE-2

    NASA Astrophysics Data System (ADS)

    Loeffler, U.; Weible, H.

    1981-08-01

    The final energy demand for the Federal Republic of Germany was calculated. Given a distribution of production across individual industrial sectors, energy-specific values, and population development, the MEDEE-2 model describes the final energy consumption of the domestic, service, industry, and transportation sectors for a given region. The input data, consisting of constants and variables, and the procedure by which the projections for the input data of individual sectors are performed are discussed. The results of the calculations are presented and compared. The sensitivity of individual results to variations of the input values is analyzed.
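    The accounting structure behind such a simulation can be summarized in a few lines: final energy demand per sector is the product of an activity level and a specific energy value, summed over sectors. The sectors, activity measures, and intensities below are illustrative placeholders, not MEDEE-2 input data.

```python
# Sectoral final-energy accounting sketch: demand = activity x specific
# energy intensity, summed over sectors.  All numbers are illustrative.

activity = {            # activity level per sector (illustrative units)
    "households (dwellings, millions)": 27.0,
    "services (employees, millions)":   18.0,
    "industry (production index)":     100.0,
    "transport (passenger-km, 1e9)":   600.0,
}
intensity = {            # specific energy value per unit of activity [PJ/unit]
    "households (dwellings, millions)": 55.0,
    "services (employees, millions)":   30.0,
    "industry (production index)":      25.0,
    "transport (passenger-km, 1e9)":     1.5,
}

demand = {s: activity[s] * intensity[s] for s in activity}
for sector, pj in demand.items():
    print(f"{sector:36s} {pj:8.1f} PJ")
print(f"{'total final energy demand':36s} {sum(demand.values()):8.1f} PJ")
```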

  8. Online Airtightness Calculator for the US, Canada and China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    The contribution of air leakage to heating and cooling loads has been increasing as the thermal resistance of building envelopes continues to improve. Easy-to-access data are needed to convince building owners and contractors that enhancing the airtightness of buildings is the next logical step to achieve a high-performance building envelope. To this end, Oak Ridge National Laboratory, the National Institute of Standards and Technology, the Air Barrier Association of America, and the US-China Clean Energy Research Center for Building Energy Efficiency Consortium partnered to develop an online calculator that estimates the potential energy savings in major US, Canadian, and Chinese cities due to improvements in airtightness. This tool will have a user-friendly graphical interface that uses a database of pre-run CONTAM-EnergyPlus simulation results, and will be available to the public at no cost. Baseline leakage rates are either user-specified or selected by the user from the supplied typical leakage rates. Users will enter the expected airtightness after the proper installation of an air barrier system. Energy costs are estimated based on the building location and inputs from users. This paper provides an overview of the methodology followed in this calculator, as well as results from an example. The successful deployment of this calculator could influence construction practices so that greenhouse gas emissions from the US, Canada, and China are significantly curtailed.
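    The calculator's core lookup can be sketched as interpolation over pre-run simulation results: annual HVAC energy as a function of envelope leakage is interpolated to the baseline and improved airtightness levels, and the difference is priced. The leakage points, energy values, and tariff below are made-up placeholders, not entries from the CONTAM-EnergyPlus database.

```python
import numpy as np
from scipy.interpolate import interp1d

# Interpolate pre-run whole-building results (annual HVAC energy vs. envelope
# leakage rate) to the user's baseline and improved airtightness, then price
# the difference.  All values below are made-up placeholders.

leakage = np.array([0.5, 1.0, 2.0, 4.0, 6.0])                      # L/s.m2 at 75 Pa
annual_kwh = np.array([52_000, 58_000, 71_000, 98_000, 126_000])   # pre-run results

energy_at = interp1d(leakage, annual_kwh, kind="linear")

baseline, improved = 4.5, 1.5       # user-specified airtightness levels
tariff = 0.12                       # $/kWh, user input

savings_kwh = float(energy_at(baseline) - energy_at(improved))
print(f"estimated savings: {savings_kwh:,.0f} kWh/yr "
      f"(~${savings_kwh * tariff:,.0f}/yr)")
```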

  9. 'Trust but verify'--five approaches to ensure safe medical apps.

    PubMed

    Wicks, Paul; Chiauzzi, Emil

    2015-09-25

    Mobile health apps are health and wellness programs available on mobile devices such as smartphones or tablets. In three systematic assessments published in BMC Medicine, Huckvale and colleagues demonstrate that widely available health apps meant to help patients calculate their appropriate insulin dosage, educate themselves about asthma, or perform other important functions are methodologically weak. Insulin dose calculators lacked user input validation and made inappropriate dose recommendations, with a lack of documentation throughout. Since 2011, asthma apps have become more interactive, but have not improved in quality; peak flow calculators have the same issues as the insulin calculators. A review of the accredited National Health Service Health Apps Library found poor and inconsistent implementation of privacy and security, with 28% of apps lacking a privacy policy and one even transmitting personally identifying data the policy claimed would be anonymous. Ensuring patient safety might require a new approach, whether that be a consumer education program at one extreme or government regulation at the other. App store owners could ensure transparency of algorithms (whiteboxing), data sharing, and data quality. While a proper balance must be struck between innovation and caution, patient safety must be paramount.Please see related articles: http://dx.doi.org/10.1186/s12916-015-0444-y , http://www.biomedcentral.com/1741-7015/13/106 and http://www.biomedcentral.com/1741-7015/13/58.

  10. Extended H2 synthesis for multiple degree-of-freedom controllers

    NASA Technical Reports Server (NTRS)

    Hampton, R. David; Knospe, Carl R.

    1992-01-01

    H2 synthesis techniques are developed for a general multiple-input-multiple-output (MIMO) system subject to both stochastic and deterministic disturbances. The H2 synthesis is extended by incorporating anticipated disturbance power-spectral-density information into the controller-design process, as well as by frequency weightings of generalized coordinates and control inputs. The methodology is applied to a simple single-input-multiple-output (SIMO) problem, analogous to the type of vibration isolation problem anticipated in microgravity research experiments.

  11. In Vivo Patellofemoral Contact Mechanics During Active Extension Using a Novel Dynamic MRI-based Methodology

    PubMed Central

    Borotikar, Bhushan S.; Sheehan, Frances T.

    2017-01-01

    Objectives: To establish an in vivo, normative patellofemoral cartilage contact mechanics database acquired during voluntary muscle control using a novel dynamic magnetic resonance (MR) imaging-based computational methodology and validate the contact mechanics sensitivity to the known sub-millimeter methodological inaccuracies. Design: Dynamic cine phase-contrast and multi-plane cine images were acquired while female subjects (n=20, sample of convenience) performed an open kinetic chain (knee flexion-extension) exercise inside a 3-Tesla MR scanner. Static cartilage models were created from high resolution three-dimensional static MR data and accurately placed in their dynamic pose at each time frame based on the cine-PC data. Cartilage contact parameters were calculated based on the surface overlap. Statistical analysis was performed using paired t-test and a one-sample repeated measures ANOVA. The sensitivity of the contact parameters to the known errors in the patellofemoral kinematics was determined. Results: Peak mean patellofemoral contact area was 228.7 ± 173.6 mm² at 40° knee angle. During extension, contact centroid and peak strain locations tracked medially on the femoral and patellar cartilage and were not significantly different from each other. At 30°, 35°, and 40° of knee extension, contact area was significantly different. Contact area and centroid locations were insensitive to rotational and translational perturbations. Conclusion: This study is a first step towards unfolding the biomechanical pathways to anterior patellofemoral pain and OA using dynamic, in vivo, and accurate methodologies. The database provides crucial data for future studies and for validation of, or as an input to, computational models. PMID:24012620

  12. Uncovering Productive Morphosyntax in French-Learning Toddlers: A Multidimensional Methodology Perspective

    ERIC Educational Resources Information Center

    Barriére, Isabelle; Goyet, Louise; Kresh, Sarah; Legendre, Géraldine; Nazzi, Thierry

    2016-01-01

    The present study applies a multidimensional methodological approach to the study of the acquisition of morphosyntax. It focuses on evaluating the degree of productivity of an infrequent subject-verb agreement pattern in the early acquisition of French and considers the explanatory role played by factors such as input frequency, semantic…

  13. New dual in-growth core isotopic technique to assess the root litter carbon input to the soil

    USDA-ARS?s Scientific Manuscript database

    The root-derived carbon (C) input to the soil, whose quantification is often neglected because of methodological difficulties, is considered a crucial C flux for soil C dynamics and net ecosystem productivity (NEP) studies. In the present study, we compared two independent methods to quantify this C...

  14. Inferring heuristic classification hierarchies from natural language input

    NASA Technical Reports Server (NTRS)

    Hull, Richard; Gomez, Fernando

    1993-01-01

    A methodology for inferring hierarchies representing heuristic knowledge about the check out, control, and monitoring sub-system (CCMS) of the space shuttle launch processing system from natural language input is explained. Our method identifies failures explicitly and implicitly described in natural language by domain experts and uses those descriptions to recommend classifications for inclusion in the experts' heuristic hierarchies.

  15. Multiobjective Optimization of Atmospheric Plasma Spray Process Parameters to Deposit Yttria-Stabilized Zirconia Coatings Using Response Surface Methodology

    NASA Astrophysics Data System (ADS)

    Ramachandran, C. S.; Balasubramanian, V.; Ananthapadmanabhan, P. V.

    2011-03-01

    Atmospheric plasma spraying is used extensively to make thermal barrier coatings of 7-8% yttria-stabilized zirconia powders. The main problem faced in the manufacture of yttria-stabilized zirconia coatings by the atmospheric plasma spraying process is the selection of the optimum combination of input variables for achieving the required coating qualities. This problem can be solved by developing empirical relationships between the process parameters (input power, primary gas flow rate, stand-off distance, powder feed rate, and carrier gas flow rate) and the coating quality characteristics (deposition efficiency, tensile bond strength, lap shear bond strength, porosity, and hardness) through effective and strategic planning and execution of experiments by response surface methodology. This article highlights the use of response surface methodology by designing a five-factor, five-level central composite rotatable design matrix with full replication for planning, conducting, and executing the experiments and developing the empirical relationships. Further, response surface methodology was used for the selection of optimum process parameters to achieve the desired quality of yttria-stabilized zirconia coating deposits.
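    The response-surface step can be sketched as fitting a second-order polynomial in the coded factors and searching it for an optimum, as below. The design points and the single response shown (standing in for, say, deposition efficiency) are synthetic; the real study fits one such surface per quality characteristic from the central composite experiments.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from scipy.optimize import minimize

# Fit a second-order response surface in the coded process parameters and
# search it for a maximum (here by minimizing the negative prediction).
# The design matrix and response values are synthetic stand-ins.

rng = np.random.default_rng(9)
X = rng.uniform(-2, 2, size=(50, 5))            # coded levels of the 5 factors
y = (80 - 3 * X[:, 0] ** 2 - 2 * X[:, 2] ** 2   # synthetic "deposition efficiency"
     + 4 * X[:, 0] + 1.5 * X[:, 1] * X[:, 3] + rng.normal(0, 1, 50))

surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

res = minimize(lambda x: -surface.predict(x.reshape(1, -1))[0],
               x0=np.zeros(5), bounds=[(-2, 2)] * 5)
print("optimum coded parameters:", res.x.round(2))
print("predicted response at optimum:", -res.fun.round(1))
```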

  16. Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blankenship, Doug; Sonnenthal, Eric

    Archive contains thermal-mechanical simulation input/output files. Included are files which fall into the following categories: (1) Spreadsheets with various input parameter calculations; (2) Final Simulation Inputs; (3) Native-State Thermal-Hydrological Model Input File Folders; (4) Native-State Thermal-Hydrological-Mechanical Model Input Files; (5) THM Model Stimulation Cases. See 'File Descriptions.xlsx' resource below for additional information on individual files.
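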

  17. Fuel and Carbon Dioxide Emissions Savings Calculation Methodology for Combined Heat and Power Systems

    EPA Pesticide Factsheets

    This paper provides the EPA Combined Heat and Power Partnership's recommended methodology for calculating fuel and carbon dioxide emissions savings from CHP compared to SHP, which serves as the basis for the EPA's CHP emissions calculator.
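
    A minimal sketch of that comparison, assuming the usual structure of such calculations (separate heat and power, SHP, modeled as grid electricity plus an on-site boiler); the efficiency and emission-factor values below are illustrative placeholders, not EPA figures.

        def chp_savings(power_mwh, heat_mwh, chp_fuel_mwh,
                        grid_eff=0.33, boiler_eff=0.80, co2_t_per_fuel_mwh=0.2):
            """Fuel and CO2 saved by CHP versus producing the same power and heat separately."""
            shp_fuel = power_mwh / grid_eff + heat_mwh / boiler_eff   # fuel the SHP route would need
            fuel_saved = shp_fuel - chp_fuel_mwh
            return fuel_saved, fuel_saved * co2_t_per_fuel_mwh

        # Example: a CHP unit delivering 1000 MWh electric and 1200 MWh thermal from 2800 MWh of fuel.
        print(chp_savings(1000, 1200, 2800))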

  18. Optimization of a GO2/GH2 Swirl Coaxial Injector Element

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar

    1999-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) swirl coaxial injector element. The element is optimized in terms of design variables such as fuel pressure drop, DELTA P(sub f), oxidizer pressure drop, DELTA P(sub 0) combustor length, L(sub comb), and full cone swirl angle, theta, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w) injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 180 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Two examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio.
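
    The desirability-function step described above can be illustrated with a small sketch of the Derringer-Suich-style construction: each response is mapped to a 0-1 desirability and the composite is a (possibly weighted) geometric mean. The limits and weights below are hypothetical, not values from the study.

        import numpy as np

        def smaller_is_better(y, low, high):
            """Desirability 1 at or below 'low', 0 at or above 'high', linear in between."""
            return float(np.clip((high - y) / (high - low), 0.0, 1.0))

        def composite(d_values, weights=None):
            """Weighted geometric mean of individual desirabilities."""
            d = np.maximum(np.asarray(d_values, float), 1e-12)
            w = np.ones_like(d) if weights is None else np.asarray(weights, float)
            return float(np.exp(np.sum(w * np.log(d)) / np.sum(w)))

        d_heat_flux = smaller_is_better(12.0, low=8.0, high=20.0)    # e.g., injector heat flux
        d_cost = smaller_is_better(1.1, low=0.9, high=1.5)           # e.g., relative cost
        print(composite([d_heat_flux, d_cost], weights=[2.0, 1.0]))  # weight heat flux more heavily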

  19. TyPol - a new methodology for organic compounds clustering based on their molecular characteristics and environmental behavior.

    PubMed

    Servien, Rémi; Mamy, Laure; Li, Ziang; Rossard, Virginie; Latrille, Eric; Bessac, Fabienne; Patureau, Dominique; Benoit, Pierre

    2014-09-01

    Following legislation, the assessment of the environmental risks of 30000-100000 chemical substances is required for their registration dossiers. However, their behavior in the environment and their transfer to environmental components such as water or atmosphere are studied for only a very small proportion of the chemical in laboratory tests or monitoring studies because it is time-consuming and/or cost prohibitive. Therefore, the objective of this work was to develop a new methodology, TyPol, to classify organic compounds, and their degradation products, according to both their behavior in the environment and their molecular properties. The strategy relies on partial least squares analysis and hierarchical clustering. The calculation of molecular descriptors is based on an in silico approach, and the environmental endpoints (i.e. environmental parameters) are extracted from several available databases and literature. The classification of 215 organic compounds inputted in TyPol for this proof-of-concept study showed that the combination of some specific molecular descriptors could be related to a particular behavior in the environment. TyPol also provided an analysis of similarities (or dissimilarities) between organic compounds and their degradation products. Among the 24 degradation products that were inputted, 58% were found in the same cluster as their parents. The robustness of the method was tested and shown to be good. TyPol could help to predict the environmental behavior of a "new" compound (parent compound or degradation product) from its affiliation to one cluster, but also to select representative substances from a large data set in order to answer some specific questions regarding their behavior in the environment. Copyright © 2014 Elsevier Ltd. All rights reserved.
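
    As a rough sketch of the statistical core described above (partial least squares linking molecular descriptors to environmental endpoints, followed by hierarchical clustering), under the assumption of generic placeholder data rather than the TyPol data set:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(0)
        descriptors = rng.normal(size=(215, 40))   # molecular descriptors (placeholder)
        endpoints = rng.normal(size=(215, 6))      # environmental endpoints, e.g. Koc, DT50 (placeholder)

        # Latent components relating structure to environmental behavior.
        pls = PLSRegression(n_components=3).fit(descriptors, endpoints)
        scores = pls.transform(descriptors)

        # Ward hierarchical clustering of compounds in the latent space.
        tree = linkage(scores, method="ward")
        clusters = fcluster(tree, t=6, criterion="maxclust")
        print(np.bincount(clusters)[1:])           # compounds per cluster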

  20. The energetic cost of walking: a comparison of predictive methods.

    PubMed

    Kramer, Patricia Ann; Sylvester, Adam D

    2011-01-01

    The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended to other species.

  1. CROSSER - CUMULATIVE BINOMIAL PROGRAMS

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, CROSSER, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), can be used independently of one another. CROSSER can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CROSSER calculates the point at which the reliability of a k-out-of-n system equals the common reliability of the n components. It is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The CROSSER program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CROSSER was developed in 1988.
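
    A minimal Python sketch of the crossover calculation described above, assuming the standard k-out-of-n reliability formula: solve P(X >= k) = p for X ~ Binomial(n, p) by Newton's method (a numerical derivative is used here; the original C program does not necessarily do the same).

        from scipy.stats import binom

        def crosser(k, n, p0=0.5, tol=1e-10, max_iter=100):
            """Common component reliability p at which a k-out-of-n system reliability equals p."""
            f = lambda p: binom.sf(k - 1, n, p) - p   # P(at least k of n work) minus p
            p, h = p0, 1e-7
            for _ in range(max_iter):
                step = f(p) / ((f(p + h) - f(p - h)) / (2 * h))
                p -= step
                if abs(step) < tol:
                    break
            return p

        print(crosser(k=3, n=5))   # e.g., a 3-out-of-5 system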

  2. Development of Web tools to predict axillary lymph node metastasis and pathological response to neoadjuvant chemotherapy in breast cancer patients.

    PubMed

    Sugimoto, Masahiro; Takada, Masahiro; Toi, Masakazu

    2014-12-09

    Nomograms are a standard computational tool to predict the likelihood of an outcome using multiple available patient features. We have developed a more powerful data mining methodology, to predict axillary lymph node (AxLN) metastasis and response to neoadjuvant chemotherapy (NAC) in primary breast cancer patients. We developed websites to use these tools. The tools calculate the probability of AxLN metastasis (AxLN model) and pathological complete response to NAC (NAC model). As a calculation algorithm, we employed a decision tree-based prediction model known as the alternative decision tree (ADTree), which is an analog development of if-then type decision trees. An ensemble technique was used to combine multiple ADTree predictions, resulting in higher generalization abilities and robustness against missing values. The AxLN model was developed with training datasets (n=148) and test datasets (n=143), and validated using an independent cohort (n=174), yielding an area under the receiver operating characteristic curve (AUC) of 0.768. The NAC model was developed and validated with n=150 and n=173 datasets from a randomized controlled trial, yielding an AUC of 0.787. AxLN and NAC models require users to input up to 17 and 16 variables, respectively. These include pathological features, including human epidermal growth factor receptor 2 (HER2) status and imaging findings. Each input variable has an option of "unknown," to facilitate prediction for cases with missing values. The websites developed facilitate the use of these tools, and serve as a database for accumulating new datasets.

  3. A computational study of photo-induced electron transfer rate constants in subphthalocyanine/C60 organic photovoltaic materials via Fermi's golden rule

    NASA Astrophysics Data System (ADS)

    Lee, Myeong H.; Dunietz, Barry D.; Geva, Eitan

    2014-03-01

    We present a methodology to obtain the photo-induced electron transfer rate constant in organic photovoltaic (OPV) materials within the framework of Fermi's golden rule, using inputs obtained from first-principles electronic structure calculations. Within this approach, the nuclear vibrational modes are treated quantum-mechanically and a short-time approximation is avoided, in contrast to the classical Marcus theory where these modes are treated classically within the high-temperature and short-time limits. We demonstrate our methodology on a boron-subphthalocyanine-chloride/C60 OPV system to determine the rate constants of electron transfer and electron recombination processes upon photo-excitation. We consider two representative donor/acceptor interface configurations to investigate the effect of interface configuration on the charge transfer characteristics of OPV materials. In addition, we determine the time scale of excited-state population by employing a master equation after obtaining the rate constants for all accessible electronic transitions. This work is pursued as part of the Center for Solar and Thermal Energy Conversion, an Energy Frontier Research Center funded by the US Department of Energy Office of Science, Office of Basic Energy Sciences under Award No. DE-SC0000957.
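
    For orientation, the golden-rule rate constant referred to above has the generic donor-acceptor form below (electronic coupling times a Franck-Condon weighted density of states); this is the textbook expression, not necessarily the exact working equations of the cited study.

        k_{D \to A} = \frac{2\pi}{\hbar}\,\lvert H_{DA} \rvert^{2}
        \sum_{v,v'} P_{v}\,\bigl\lvert \langle \chi_{Dv} \mid \chi_{Av'} \rangle \bigr\rvert^{2}\,
        \delta\!\left(E_{Dv} - E_{Av'}\right)

    Here H_DA is the donor-acceptor electronic coupling, P_v the thermal population of donor vibronic states, and the delta function enforces energy conservation; treating the vibrational states quantum-mechanically in this sum is what distinguishes the approach from the classical, high-temperature Marcus limit mentioned in the abstract.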

  4. Use of Six Sigma Methodology to Reduce Appointment Lead-Time in Obstetrics Outpatient Department.

    PubMed

    Ortiz Barrios, Miguel A; Felizzola Jiménez, Heriberto

    2016-10-01

    This paper focuses on the issue of long appointment lead-times in the obstetrics outpatient department of a maternal-child hospital in Colombia. Because of extended appointment lead-times, women with high-risk pregnancies could develop severe complications in their health status and put their babies at risk. This problem was detected through a project selection process explained in this article, and Six Sigma methodology was used to solve it. First, the process was defined through a SIPOC diagram to identify its input and output variables. Second, Six Sigma performance indicators were calculated to establish the process baseline. Then, a fishbone diagram was used to determine the possible causes of the problem. These causes were validated with the aid of correlation analysis and other statistical tools. Later, improvement strategies were designed to reduce appointment lead-time in this department. Project results showed that the average appointment lead-time was reduced from 6.89 days to 4.08 days and the standard deviation dropped from 1.57 days to 1.24 days. In this way, the hospital will serve pregnant women faster, which represents a reduced risk of perinatal and maternal mortality.

  5. Adaptation to hydrological extremes through insurance: a financial fund simulation model under changing scenarios

    NASA Astrophysics Data System (ADS)

    Guzman, Diego; Mohor, Guilherme; Câmara, Clarissa; Mendiondo, Eduardo

    2017-04-01

    Research from around the world relates global environmental change to increased vulnerability to extreme events, such as excessive or scarce precipitation - floods and droughts. Hydrological disasters have caused increasing losses in recent years. Thus, risk transfer mechanisms, such as insurance, are being implemented to mitigate impacts, finance the recovery of the affected population, and promote the reduction of hydrological risks. However, the main problems in implementing these strategies are: first, only partial knowledge of natural and anthropogenic climate change in terms of intensity and frequency; second, efficient risk reduction policies require accurate risk assessment, with careful consideration of costs; and third, the uncertainty associated with the numerical models and input data used. The objective of this document is to introduce and discuss the feasibility of applying Hydrological Risk Transfer Models (HRTMs) as a strategy of adaptation to global climate change. The article describes the development of a methodology for collective, multi-sectoral management of long-term hydrological risk, implemented in an insurance fund simulator. The methodology estimates the optimized premium as a function of willingness to pay (WTP) and the potential direct loss derived from hydrological risk. The proposed methodology structures the watershed insurance scheme in three analysis modules. First, the hazard module characterizes the hydrologic threat from recorded input series or from series modelled under IPCC/RCM scenarios. Second, the vulnerability module calculates the potential economic loss for each sector, evaluated as a function of the return period "TR". Finally, the finance module determines the value of the optimal aggregate premium by evaluating equiprobable scenarios of water vulnerability, taking into account variables such as the maximum coverage limit, deductibles, reinsurance schemes, and incentives for risk reduction. The methodology, tested by members of the Integrated Nucleus of River Basins (NIBH) (University of Sao Paulo (USP) School of Engineering of São Carlos (EESC), Brazil), presents an alternative for the analysis and planning of insurance funds aiming to mitigate the impacts of hydrological droughts and stream flash floods. The presented procedure is especially valuable when the information needed to develop and implement insurance funds is difficult to access and complex to evaluate. A sequence of academic applications has been carried out in Brazil, in a South American context where hydrological insurance has low market penetration compared with developed economies and more established insurance markets such as the United States and Europe, producing relevant information and demonstrating the potential of the methodology under development.

  6. ASR4. Anelastic Strain Recovery Analysis Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warpinski, N.R.

    ASR4 is a nonlinear least-squares regression of Anelastic Strain Recovery (ASR) data for the purpose of determining in situ stress orientations and magnitudes. ASR4 fits the viscoelastic model of Warpinski and Teufel to measured ASR data, calculates the stress orientations directly, and calculates stress magnitudes if sufficient input data are available. The code also calculates the stress orientation using strain-rosette equations, and it calculates stress magnitudes using Blanton's approach, assuming sufficient input data are available.

  7. Anelastic Strain Recovery Analysis Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ASR4 is a nonlinear least-squares regression of Anelastic Strain Recovery (ASR) data for the purpose of determining in situ stress orientations and magnitudes. ASR4 fits the viscoelastic model of Warpinski and Teufel to measured ASR data, calculates the stress orientations directly, and calculates stress magnitudes if sufficient input data are available. The code also calculates the stress orientation using strain-rosette equations, and it calculates stress magnitudes using Blanton's approach, assuming sufficient input data are available.

  8. Methodology for calculating the volume of condensate droplets on topographically modified, microgrooved surfaces.

    PubMed

    Sommers, A D

    2011-05-03

    Liquid droplets on micropatterned surfaces consisting of parallel grooves tens of micrometers in width and depth are considered, and a method for calculating the droplet volume on these surfaces is presented. This model, which utilizes the elongated and parallel-sided nature of droplets condensed on these microgrooved surfaces, requires inputs from two droplet images at ϕ = 0° and ϕ = 90°--namely, the droplet major axis, minor axis, height, and two contact angles. In this method, a circular cross-sectional area is extruded the length of the droplet where the chord of the extruded circle is fixed by the width of the droplet. The maximum apparent contact angle is assumed to occur along the side of the droplet because of the surface energy barrier to wetting imposed by the grooves--a behavior that was observed experimentally. When applied to water droplets condensed onto a microgrooved aluminum surface, this method was shown to calculate the actual droplet volume to within 10% for 88% of the droplets analyzed. This method is useful for estimating the volume of retained droplets on topographically modified, anisotropic surfaces where both heat and mass transfer occur and the surface microchannels are aligned parallel to gravity to assist in condensate drainage.
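
    One consistent reading of the extrusion construction described above, assuming the cross-section is a circular segment whose chord equals the droplet width w (minor axis) and whose apparent contact angle is theta, extruded over the droplet length L:

        R = \frac{w}{2\sin\theta}, \qquad
        A = R^{2}\left(\theta - \sin\theta\,\cos\theta\right), \qquad
        V \approx A\,L

    For theta = 90 degrees this reduces to half of a circular cylinder, V = pi w^2 L / 8; the exact geometric treatment in the paper may differ in detail.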

  9. A computer program for calculating relative-transmissivity input arrays to aid model calibration

    USGS Publications Warehouse

    Weiss, Emanuel

    1982-01-01

    A program is documented that calculates a transmissivity distribution for input to a digital ground-water flow model. Factors that are taken into account in the calculation are: aquifer thickness, ground-water viscosity and its dependence on temperature and dissolved solids, and permeability and its dependence on overburden pressure. Other factors affecting ground-water flow are indicated. With small changes in the program code, leakance also could be calculated. The purpose of these calculations is to provide a physical basis for efficient calibration, and to extend rational transmissivity trends into areas where model calibration is insensitive to transmissivity values.
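
    The physical relationship behind such a calculation can be written, in generic form, as transmissivity obtained from intrinsic permeability, fluid properties, and thickness; this is the standard definition rather than the program's exact formulation:

        T = K\,b = \frac{k(\sigma_{e})\,\rho\,g}{\mu(T_{w},\ \mathrm{TDS})}\;b

    Here b is aquifer thickness, k the intrinsic permeability (written as a function of effective overburden stress sigma_e), and mu the water viscosity as a function of temperature T_w and dissolved solids (TDS), matching the dependencies listed in the abstract.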

  10. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    NASA Astrophysics Data System (ADS)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

    A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows to jointly estimate unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations using spatially-sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm to jointly estimate unknown FE model parameters and unknown input excitations.

  11. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) the computational capabilities of compactly supported Fup basis functions, able to resolve all spatial and temporal scales; 2) multi-resolution representation of heterogeneity as well as all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is separately analyzed, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy but also a description of subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we will show recent improvements within the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in lines where solution changes are intensive. The application of Fup basis functions enables a continuous time approximation, simple interpolation across different temporal lines and local time-stepping control. A critical aspect of time-integration accuracy is the construction of the spatial stencil for accurate calculation of spatial derivatives. Since the common approach applied for wavelets and splines uses a finite-difference operator, we developed here a collocation operator that includes solution values and the differential operator. In this way, the new improved algorithm is adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.

  12. Speaking Math--A Voice Input, Speech Output Calculator for Students with Visual Impairments

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Flanagan, Sara; Joshi, Gauri S.; Sheikh, Waseem; Schleppenbach, Dave

    2011-01-01

    This project explored a newly developed computer-based voice input, speech output (VISO) calculator. Three high school students with visual impairments educated at a state school for the blind and visually impaired participated in the study. The time they took to complete assessments and the average number of attempts per problem were recorded…

  13. A three-dimensional potential-flow program with a geometry package for input data generation

    NASA Technical Reports Server (NTRS)

    Halsey, N. D.

    1978-01-01

    Information needed to run a computer program for the calculation of the potential flow about arbitrary three dimensional lifting configurations is presented. The program contains a geometry package which greatly reduces the task of preparing the input data. Starting from a very sparse set of coordinate data, the program automatically augments and redistributes the coordinates, calculates curves of intersection between components, and redistributes coordinates in the regions adjacent to the intersection curves in a suitable manner for use in the potential flow calculations. A brief summary of the program capabilities and options is given, as well as detailed instructions for the data input, a suggested structure for the program overlay, and the output for two test cases.

  14. 42 CFR 413.337 - Methodology for calculating the prospective payment rates.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-STAGE RENAL DISEASE SERVICES; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES Prospective Payment for Skilled Nursing Facilities § 413.337 Methodology for calculating the...

  15. 42 CFR 413.337 - Methodology for calculating the prospective payment rates.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-STAGE RENAL DISEASE SERVICES; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES Prospective Payment for Skilled Nursing Facilities § 413.337 Methodology for calculating the...

  16. A DDDAS Framework for Volcanic Ash Propagation and Hazard Analysis

    DTIC Science & Technology

    2012-01-01

    The polynomial chaos basis is matched to the probability distribution of the input variables (for example, Hermite polynomials for normally distributed parameters, or Legendre polynomials for uniformly distributed parameters). Sampled source parameters and windfields will drive our simulations. We will use an uncertainty quantification methodology - polynomial chaos quadrature in combination with data integration - to complete the DDDAS loop.

  17. 76 FR 68385 - Approval and Promulgation of Implementation Plans; New Mexico; Albuquerque/Bernalillo County...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-04

    ... NMAC) addition, in subsections (A) and (B), of a methodology for fugitive dust control permits; revised fee calculation procedures and requirements for fugitive dust control permits (Section ..., 9/7/2004); a fee schedule based on acreage; and added and updated calculation methodology used to calculate non-programmatic dust...

  18. Method for controlling gas metal arc welding

    DOEpatents

    Smartt, Herschel B.; Einerson, Carolyn J.; Watkins, Arthur D.

    1989-01-01

    The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections.
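
    A schematic Python sketch of the feedback structure described in the abstract: a model supplies weld-speed, wire-feed-rate, and expected-current values, the measured current is compared against the expectation, and the set points are corrected. The model relation and gains below are illustrative placeholders, not the patented algorithm.

        def predicted_current(speed, feed_rate, k1=20.0, k2=5.0):
            """Placeholder algorithmic relation between set points and welding current."""
            return k1 * feed_rate - k2 * speed

        def control_step(speed, feed_rate, measured_current, gain=0.01):
            """Correct the set points in proportion to the current error (illustrative)."""
            error = measured_current - predicted_current(speed, feed_rate)
            return speed + gain * error, feed_rate - gain * error

        speed, feed = 5.0, 8.0                      # placeholder set points
        for measured in (125.0, 122.0, 121.0):      # simulated current readings, A
            speed, feed = control_step(speed, feed, measured)
        print(speed, feed)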

  19. Model documentation, Coal Market Module of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    This report documents the objectives and the conceptual and methodological approach used in the development of the National Energy Modeling System's (NEMS) Coal Market Module (CMM) used to develop the Annual Energy Outlook 1998 (AEO98). This report catalogues and describes the assumptions, methodology, estimation techniques, and source code of CMM's two submodules. These are the Coal Production Submodule (CPS) and the Coal Distribution Submodule (CDS). CMM provides annual forecasts of prices, production, and consumption of coal for NEMS. In general, the CDS integrates the supply inputs from the CPS to satisfy demands for coal from exogenous demand models. The international area of the CDS forecasts annual world coal trade flows from major supply to major demand regions and provides annual forecasts of US coal exports for input to NEMS. Specifically, the CDS receives minemouth prices produced by the CPS, demand and other exogenous inputs from other NEMS components, and provides delivered coal prices and quantities to the NEMS economic sectors and regions.

  20. How can activity-based costing methodology be performed as a powerful tool to calculate costs and secure appropriate patient care?

    PubMed

    Lin, Blossom Yen-Ju; Chao, Te-Hsin; Yao, Yuh; Tu, Shu-Min; Wu, Chun-Ching; Chern, Jin-Yuan; Chao, Shiu-Hsiung; Shaw, Keh-Yuong

    2007-04-01

    Previous studies have shown the advantages of using activity-based costing (ABC) methodology in the health care industry. The potential values of ABC methodology in health care are derived from the more accurate cost calculation compared to the traditional step-down costing, and the potentials to evaluate quality or effectiveness of health care based on health care activities. This project used ABC methodology to profile the cost structure of inpatients with surgical procedures at the Department of Colorectal Surgery in a public teaching hospital, and to identify the missing or inappropriate clinical procedures. We found that ABC methodology was able to accurately calculate costs and to identify several missing pre- and post-surgical nursing education activities in the course of treatment.

  1. A methodology for calculating transport emissions in cities with limited traffic data: Case study of diesel particulates and black carbon emissions in Murmansk.

    PubMed

    Kholod, N; Evans, M; Gusev, E; Yu, S; Malyshev, V; Tretyakova, S; Barinov, A

    2016-03-15

    This paper presents a methodology for calculating exhaust emissions from on-road transport in cities with low-quality traffic data and outdated vehicle registries. The methodology consists of data collection approaches and emission calculation methods. For data collection, the paper suggests using video survey and parking lot survey methods developed for the International Vehicular Emissions model. Additional sources of information include data from the largest transportation companies, vehicle inspection stations, and official vehicle registries. The paper suggests using the European Computer Programme to Calculate Emissions from Road Transport (COPERT) 4 model to calculate emissions, especially in countries that implemented European emissions standards. If available, the local emission factors should be used instead of the default COPERT emission factors. The paper also suggests additional steps in the methodology to calculate emissions only from diesel vehicles. We applied this methodology to calculate black carbon emissions from diesel on-road vehicles in Murmansk, Russia. The results from Murmansk show that diesel vehicles emitted 11.7 tons of black carbon in 2014. The main factors determining the level of emissions are the structure of the vehicle fleet and the level of vehicle emission controls. Vehicles without controls emit about 55% of black carbon emissions. Copyright © 2015 Elsevier B.V. All rights reserved.
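
    At its core, a COPERT-style inventory of this kind reduces to summing activity times emission factor over vehicle categories; the sketch below shows that bottom-up sum with purely illustrative categories and numbers (not the Murmansk data).

        fleet = [
            # (category,                 vehicles, km per year, g BC per km)
            ("diesel car, Euro 3",          12000,       15000,        0.030),
            ("diesel LCV, no control",       4000,       20000,        0.120),
            ("diesel bus, Euro 2",            600,       60000,        0.250),
        ]

        total_bc_tonnes = sum(n * km * ef for _, n, km, ef in fleet) / 1e6   # grams -> tonnes
        print(round(total_bc_tonnes, 2), "t BC per year")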

  2. Development of a weight/sizing design synthesis computer program. Volume 1: Program formulation

    NASA Technical Reports Server (NTRS)

    Garrison, J. M.

    1973-01-01

    The development of a weight/sizing design synthesis methodology for use in support of the main line space shuttle program is discussed. The methodology has a minimum number of data inputs and quick turn around capabilities. The methodology makes it possible to: (1) make weight comparisons between current shuttle configurations and proposed changes, (2) determine the effects of various subsystems trades on total systems weight, and (3) determine the effects of weight on performance and performance on weight.

  3. Toward a More Efficient Implementation of Antifibrillation Pacing

    PubMed Central

    Wilson, Dan; Moehlis, Jeff

    2016-01-01

    We devise a methodology to determine an optimal pattern of inputs to synchronize firing patterns of cardiac cells which only requires the ability to measure action potential durations in individual cells. In numerical bidomain simulations, the resulting synchronizing inputs are shown to terminate spiral waves with a higher probability than comparable inputs that do not synchronize the cells as strongly. These results suggest that designing stimuli which promote synchronization in cardiac tissue could improve the success rate of defibrillation, and point towards novel strategies for optimizing antifibrillation pacing. PMID:27391010

  4. Application of Queueing Theory to the Analysis of Changes in Outpatients' Waiting Times in Hospitals Introducing EMR

    PubMed Central

    Cho, Kyoung Won; Kim, Seong Min; Chae, Young Moon

    2017-01-01

    Objectives This research used queueing theory to analyze changes in outpatients' waiting times before and after the introduction of Electronic Medical Record (EMR) systems. Methods We focused on the exact drawing of two fundamental parameters for queueing analysis, arrival rate (λ) and service rate (µ), from digital data to apply queueing theory to the analysis of outpatients' waiting times. We used outpatients' reception times and consultation finish times to calculate the arrival and service rates, respectively. Results Using queueing theory, we could calculate waiting time excluding distorted values from the digital data and distortion factors, such as arrival before the hospital open time, which occurs frequently in the initial stage of a queueing system. We analyzed changes in outpatients' waiting times before and after the introduction of EMR using the methodology proposed in this paper, and found that the outpatients' waiting time decreases after the introduction of EMR. More specifically, the outpatients' waiting times in the target public hospitals have decreased by rates in the range between 44% and 78%. Conclusions It is possible to analyze waiting times while minimizing input errors and limitations influencing consultation procedures if we use digital data and apply the queueing theory. Our results verify that the introduction of EMR contributes to the improvement of patient services by decreasing outpatients' waiting time, or by increasing efficiency. It is also expected that our methodology or its expansion could contribute to the improvement of hospital service by assisting the identification and resolution of bottlenecks in the outpatient consultation process. PMID:28261529
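
    If the consultation queue is approximated by the simplest single-server Markovian model (M/M/1) - an assumption made here for illustration, not necessarily the model used in the study - the two measured parameters give the waiting times directly:

        \rho = \frac{\lambda}{\mu} < 1, \qquad
        W_{q} = \frac{\lambda}{\mu(\mu - \lambda)}, \qquad
        W = W_{q} + \frac{1}{\mu}

    Here W_q is the mean time spent waiting for consultation and W the mean total time in the system, so a higher service rate mu or a lower effective arrival rate lambda after EMR introduction translates directly into shorter waits.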

  5. Application of Queueing Theory to the Analysis of Changes in Outpatients' Waiting Times in Hospitals Introducing EMR.

    PubMed

    Cho, Kyoung Won; Kim, Seong Min; Chae, Young Moon; Song, Yong Uk

    2017-01-01

    This research used queueing theory to analyze changes in outpatients' waiting times before and after the introduction of Electronic Medical Record (EMR) systems. We focused on the exact drawing of two fundamental parameters for queueing analysis, arrival rate (λ) and service rate (µ), from digital data to apply queueing theory to the analysis of outpatients' waiting times. We used outpatients' reception times and consultation finish times to calculate the arrival and service rates, respectively. Using queueing theory, we could calculate waiting time excluding distorted values from the digital data and distortion factors, such as arrival before the hospital open time, which occurs frequently in the initial stage of a queueing system. We analyzed changes in outpatients' waiting times before and after the introduction of EMR using the methodology proposed in this paper, and found that the outpatients' waiting time decreases after the introduction of EMR. More specifically, the outpatients' waiting times in the target public hospitals have decreased by rates in the range between 44% and 78%. It is possible to analyze waiting times while minimizing input errors and limitations influencing consultation procedures if we use digital data and apply the queueing theory. Our results verify that the introduction of EMR contributes to the improvement of patient services by decreasing outpatients' waiting time, or by increasing efficiency. It is also expected that our methodology or its expansion could contribute to the improvement of hospital service by assisting the identification and resolution of bottlenecks in the outpatient consultation process.

  6. Flight dynamics analysis and simulation of heavy lift airships, volume 4. User's guide: Appendices

    NASA Technical Reports Server (NTRS)

    Emmen, R. D.; Tischler, M. B.

    1982-01-01

    This table contains all of the input variables to the three programs. The variables are arranged according to the name list groups in which they appear in the data files. The program name, subroutine name, definition and, where appropriate, a default input value and any restrictions are listed with each variable. The default input values are user supplied, not generated by the computer. These values remove a specific effect from the calculations, as explained in the table. The phrase "not used" indicates that a variable is not used in the calculations and is included for identification purposes only. The engineering symbol, where it exists, is listed to assist the user in correlating these inputs with the discussion in the Technical Manual.

  7. Inverse analysis of turbidites by machine learning

    NASA Astrophysics Data System (ADS)

    Naruse, H.; Nakao, K.

    2017-12-01

    This study aims to propose a method to estimate paleo-hydraulic conditions of turbidity currents from ancient turbidites by using a machine-learning technique. In this method, numerical simulation was repeated under various initial conditions, which produced a data set of characteristic features of turbidites. This data set was then used for supervised training of a deep-learning neural network (NN). Quantities characterizing the turbidites in the training data set are given to the input nodes of the NN, and the output nodes are expected to provide estimates of the initial conditions of the turbidity current. The weight coefficients of the NN are then optimized to reduce the root-mean-square difference between the true conditions and the NN outputs. The empirical relationship between the numerical results and the initial conditions is thus explored, and the discovered relationship is used for inversion of turbidity currents. This machine learning can potentially produce a NN that estimates paleo-hydraulic conditions from data on ancient turbidites. We produced a preliminary implementation of this methodology. A forward model based on 1D shallow-water equations with a correction for the density-stratification effect was employed. This model calculates the behavior of a surge-like turbidity current transporting mixed-size sediment and outputs the spatial distribution of volume per unit area of each grain-size class on a uniform slope. The grain-size distribution was discretized into 3 classes. Numerical simulation was repeated 1000 times, and thus 1000 turbidite beds were used as training data for a NN that has 21000 input nodes and 5 output nodes with two hidden layers. After the machine learning finished, independent simulations were conducted 200 times in order to evaluate the performance of the NN. As a result of this test, the initial conditions of the validation data were successfully reconstructed by the NN. The estimated values show very small deviations from the true parameters. Compared with previous inverse models of turbidity currents, our methodology is superior especially in computational efficiency. Our methodology also has advantages in extensibility and applicability to various sediment-transport processes such as pyroclastic flows or debris flows.
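
    A toy Python version of that supervised-inversion loop, with a stand-in forward model instead of the shallow-water turbidity-current simulation and placeholder dimensions (the actual study used 1000 simulated beds and a much larger input layer):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        mixing = rng.standard_normal((5, 300))      # fixed, arbitrary stand-in "physics"

        def forward_model(conditions):
            """Stand-in for the simulation: initial conditions -> deposit features."""
            return np.tanh(conditions @ mixing)

        train_conditions = rng.uniform(0, 1, size=(1000, 5))   # e.g. velocity, height, concentrations
        train_deposits = forward_model(train_conditions)

        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
        net.fit(train_deposits, train_conditions)              # learn the inverse mapping

        test_conditions = rng.uniform(0, 1, size=(200, 5))
        error = np.abs(net.predict(forward_model(test_conditions)) - test_conditions).mean()
        print(error)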

  8. Balancing the health workforce: breaking down overall technical change into factor technical change for labour-an empirical application to the Dutch hospital industry.

    PubMed

    Blank, Jos L T; van Hulst, Bart L

    2017-02-17

    Well-trained, well-distributed and productive health workers are crucial for access to high-quality, cost-effective healthcare. Because neither a shortage nor a surplus of health workers is wanted, policymakers use workforce planning models to get information on future labour markets and adjust policies accordingly. A neglected topic of workforce planning models is productivity growth, which has an effect on future demand for labour. However, calculating productivity growth for specific types of input is not as straightforward as it seems. This study shows how to calculate factor technical change (FTC) for specific types of input. The paper first theoretically derives FTCs from technical change in a consistent manner. FTC differs from a ratio of output and input, in that it deals with the multi-input, multi-output character of the production process in the health sector. Furthermore, it takes into account substitution effects between different inputs. An application of the calculation of FTCs is given for the Dutch hospital industry for the period 2003-2011. A translog cost function is estimated and used to calculate technical change and FTC for individual inputs, especially specific labour inputs. The results show that technical change increased by 2.8% per year in Dutch hospitals during 2003-2011. FTC differs amongst the various inputs. The FTC of nursing personnel increased by 3.2% per year, implying that fewer nurses were needed to let demand meet supply on the labour market. Sensitivity analyses show consistent results for the FTC of nurses. Productivity growth, especially of individual outputs, is a neglected topic in workforce planning models. FTC is a productivity measure that is consistent with technical change and accounts for substitution effects. An application to the Dutch hospital industry shows that the FTC of nursing personnel outpaced technical change during 2003-2011. The optimal input mix changed, resulting in fewer nurses being needed to let demand meet supply on the labour market. Policymakers should consider using more detailed and specific data on the nature of technical change when forecasting the future demand for health workers.
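
    For reference, decompositions of this kind typically start from a translog cost function with time interactions of the general form below; the exact specification and the definition of factor technical change used in the cited paper may differ.

        \ln C = \alpha_{0} + \sum_{i}\alpha_{i}\ln w_{i} + \sum_{m}\beta_{m}\ln y_{m}
              + \tfrac{1}{2}\sum_{i}\sum_{j}\gamma_{ij}\ln w_{i}\ln w_{j}
              + \tfrac{1}{2}\sum_{m}\sum_{n}\delta_{mn}\ln y_{m}\ln y_{n}
              + \sum_{i}\sum_{m}\rho_{im}\ln w_{i}\ln y_{m}
              + \phi_{t}\,t + \sum_{i}\phi_{it}\,t\ln w_{i}

    Here w_i are input prices, y_m outputs and t a time trend; overall technical change is then measured as the negative of the derivative of ln C with respect to t, and the interaction terms in t and ln w_i are what allow technical change to be attributed to individual inputs such as nursing labour.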

  9. Calculation and mitigation of isotopic interferences in liquid chromatography-mass spectrometry/mass spectrometry assays and its application in supporting microdose absolute bioavailability studies.

    PubMed

    Gu, Huidong; Wang, Jian; Aubry, Anne-Françoise; Jiang, Hao; Zeng, Jianing; Easter, John; Wang, Jun-sheng; Dockens, Randy; Bifano, Marc; Burrell, Richard; Arnold, Mark E

    2012-06-05

    A methodology for the accurate calculation and mitigation of isotopic interferences in liquid chromatography-mass spectrometry/mass spectrometry (LC-MS/MS) assays and its application in supporting microdose absolute bioavailability studies are reported for the first time. For simplicity, this calculation methodology and the strategy to minimize the isotopic interference are demonstrated using a simple molecule entity, then applied to actual development drugs. The exact isotopic interferences calculated with this methodology were often much less than the traditionally used, overestimated isotopic interferences simply based on the molecular isotope abundance. One application of the methodology is the selection of a stable isotopically labeled internal standard (SIL-IS) for an LC-MS/MS bioanalytical assay. The second application is the selection of an SIL analogue for use in intravenous (i.v.) microdosing for the determination of absolute bioavailability. In the case of microdosing, the traditional approach of calculating isotopic interferences can result in selecting a labeling scheme that overlabels the i.v.-dosed drug or leads to incorrect conclusions on the feasibility of using an SIL drug and analysis by LC-MS/MS. The methodology presented here can guide the synthesis by accurately calculating the isotopic interferences when labeling at different positions, using different selective reaction monitoring (SRM) transitions or adding more labeling positions. This methodology has been successfully applied to the selection of the labeled i.v.-dosed drugs for use in two microdose absolute bioavailability studies, before initiating the chemical synthesis. With this methodology, significant time and cost saving can be achieved in supporting microdose absolute bioavailability studies with stable labeled drugs.
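
    One ingredient of such an exact calculation is the probability that an unlabelled analyte carries a given number of naturally occurring heavy isotopes; the fragment below illustrates just the carbon-13 contribution via the binomial distribution, whereas the full method described above combines terms like this over all elements, labelling positions, and SRM transitions.

        from scipy.stats import binom

        P_13C = 0.0107   # approximate natural abundance of carbon-13

        def heavy_carbon_prob(n_carbons, k_heavy):
            """Probability that exactly k_heavy of n_carbons carbon atoms are 13C."""
            return binom.pmf(k_heavy, n_carbons, P_13C)

        # e.g., chance that a 30-carbon analyte shows up at M+3 from 13C alone -- the channel
        # that would interfere with a 13C3-labelled internal standard of the same compound.
        print(f"{heavy_carbon_prob(30, 3):.2e}")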

  10. Vocabulary Input from School Textbooks as a Potential Contributor to the Small Vocabulary Uptake Gained by English as a Foreign Language Learners in Saudi Arabia

    ERIC Educational Resources Information Center

    Alsaif, Abdullah; Milton, James

    2012-01-01

    Research repeatedly reports very little vocabulary uptake by English as a foreign language (EFL) learners in public schools in Saudi Arabia. Factors such as the teaching methodology employed and learner motivation have been suggested to explain this but a further explanation, the vocabulary input these learners receive, remains uninvestigated. An…

  11. A methodology for designing robust multivariable nonlinear control systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Grunberg, D. B.

    1986-01-01

    A new methodology is described for the design of nonlinear dynamic controllers for nonlinear multivariable systems providing guarantees of closed-loop stability, performance, and robustness. The methodology is an extension of the Linear-Quadratic-Gaussian with Loop-Transfer-Recovery (LQG/LTR) methodology for linear systems, thus hinging upon the idea of constructing an approximate inverse operator for the plant. A major feature of the methodology is a unification of both the state-space and input-output formulations. In addition, new results on stability theory, nonlinear state estimation, and optimal nonlinear regulator theory are presented, including the guaranteed global properties of the extended Kalman filter and optimal nonlinear regulators.

  12. Estimated anthropogenic nitrogen and phosphorus inputs to the land surface of the conterminous United States--1992, 1997, and 2002

    USGS Publications Warehouse

    Sprague, Lori A.; Gronberg, Jo Ann M.

    2013-01-01

    Anthropogenic inputs of nitrogen and phosphorus to each county in the conterminous United States and to the watersheds of 495 surface-water sites studied as part of the U.S. Geological Survey National Water-Quality Assessment Program were quantified for the years 1992, 1997, and 2002. Estimates of inputs of nitrogen and phosphorus from biological fixation by crops (for nitrogen only), human consumption, crop production for human consumption, animal production for human consumption, animal consumption, and crop production for animal consumption for each county are provided in a tabular dataset. These county-level estimates were allocated to the watersheds of the surface-water sites to estimate watershed-level inputs from the same sources; these estimates also are provided in a tabular dataset, together with calculated estimates of net import of food and net import of feed and previously published estimates of inputs from atmospheric deposition, fertilizer, and recoverable manure. The previously published inputs are provided for each watershed so that final estimates of total anthropogenic nutrient inputs could be calculated. Estimates of total anthropogenic inputs are presented together with previously published estimates of riverine loads of total nitrogen and total phosphorus for reference.

  13. Nuclear power plant life extension using subsize surveillance specimens. Performance report (4/15/92 - 4/14/98)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Arvind S.

    2001-03-05

    A new methodology to predict the Upper Shelf Energy (USE) of standard Charpy specimens (Full size) based on subsize specimens has been developed. The prediction methodology uses Finite Element Modeling (FEM) to model the fracture behavior. The inputs to FEM are the tensile properties of material and subsize Charpy specimen test data.

  14. Nonequilibrium air radiation (Nequair) program: User's manual

    NASA Technical Reports Server (NTRS)

    Park, C.

    1985-01-01

    A supplement to the data relating to the calculation of nonequilibrium radiation in flight regimes of aeroassisted orbital transfer vehicles contains the listings of the computer code NEQAIR (Nonequilibrium Air Radiation), its primary input data, and explanation of the user-supplied input variables. The user-supplied input variables are the thermodynamic variables of air at a given point, i.e., number densities of various chemical species, translational temperatures of heavy particles and electrons, and vibrational temperature. These thermodynamic variables do not necessarily have to be in thermodynamic equilibrium. The code calculates emission and absorption characteristics of air under these given conditions.

  15. National Assessment of Geologic Carbon Dioxide Storage Resources -- Trends and Interpretations

    NASA Astrophysics Data System (ADS)

    Buursink, M. L.; Blondes, M. S.; Brennan, S.; Drake, R., II; Merrill, M. D.; Roberts-Ashby, T. L.; Slucher, E. R.; Warwick, P.

    2013-12-01

    In 2012, the U.S. Geological Survey (USGS) completed an assessment of the technically accessible storage resource (TASR) for carbon dioxide (CO2) in geologic formations underlying the onshore and State waters area of the United States. The formations assessed are at least 3,000 feet (914 meters) below the ground surface. The TASR is an estimate of the CO2 storage resource that may be available for CO2 injection and storage that is based on present-day geologic and hydrologic knowledge of the subsurface and current engineering practices. Individual storage assessment units (SAUs) for 36 basins or study areas were defined on the basis of geologic and hydrologic characteristics outlined in the USGS assessment methodology. The mean national TASR is approximately 3,000 metric gigatons. To augment the release of the assessment, this study reviews input estimates and output results as a part of the resource calculation. Included in this study are a collection of both cross-plots and maps to demonstrate our trends and interpretations. Alongside the assessment, the input estimates were examined for consistency between SAUs and cross-plotted to verify expected trends, such as decreasing storage formation porosity with increasing SAU depth, for instance, and to show a positive correlation between storage formation porosity and permeability estimates. Following the assessment, the output results were examined for correlation with selected input estimates. For example, there exists a positive correlation between CO2 density and the TASR, and between storage formation porosity and the TASR, as expected. These correlations, in part, serve to verify our estimates for the geologic variables. The USGS assessment concluded that the Coastal Plains Region of the eastern and southeastern United States contains the largest storage resource. Within the Coastal Plains Region, the storage resources from the U.S. Gulf Coast study area represent 59 percent of the national CO2 storage capacity. As part of this follow up study, additional maps were generated to show the geographic distribution of the input estimates and the output results across the U.S. For example, the distribution of the SAUs with fresh, saline or mixed formation water quality is shown. Also mapped is the variation in CO2 density as related to basin location and to related properties such as subsurface temperature and pressure. Furthermore, variation in the estimated SAU depth and resulting TASR are shown across the assessment study areas, and these depend on the geologic basin size and filling history. Ultimately, multiple map displays are possible with the complete data set of input estimates and range of reported results. The findings from this study show the effectiveness of the USGS methodology and the robustness of the assessment.

  16. Master control data handling program uses automatic data input

    NASA Technical Reports Server (NTRS)

    Alliston, W.; Daniel, J.

    1967-01-01

    General purpose digital computer program is applicable for use with analysis programs that require basic data and calculated parameters as input. It is designed to automate input data preparation for flight control computer programs, but it is general enough to permit application in other areas.

  17. High-Resolution Water Footprints of Production of the United States

    NASA Astrophysics Data System (ADS)

    Marston, Landon; Ao, Yufei; Konar, Megan; Mekonnen, Mesfin M.; Hoekstra, Arjen Y.

    2018-03-01

    The United States is the largest producer of goods and services in the world. Rainfall, surface water supplies, and groundwater aquifers represent a fundamental input to economic production. Despite the importance of water resources to economic activity, we do not have consistent information on water use for specific locations and economic sectors. A national, spatially detailed database of water use by sector would provide insight into U.S. utilization and dependence on water resources for economic production. To this end, we calculate the water footprint of over 500 food, energy, mining, services, and manufacturing industries and goods produced in the United States. To do this, we employ a data intensive approach that integrates water footprint and input-output techniques into a novel methodological framework. This approach enables us to present the most detailed and comprehensive water footprint analysis of any country to date. This study broadly contributes to our understanding of water in the U.S. economy, enables supply chain managers to assess direct and indirect water dependencies, and provides opportunities to reduce water use through benchmarking. In fact, we find that 94% of U.S. industries could reduce their total water footprint more by sourcing from more water-efficient suppliers in their supply chain than they could by converting their own operations to be more water-efficient.
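
    The input-output half of such a framework is, at its simplest, a Leontief calculation: direct water-use coefficients are pushed through the total-requirements matrix to obtain direct-plus-supply-chain water intensities. The three-sector numbers below are placeholders, not values from the study.

        import numpy as np

        A = np.array([[0.10, 0.05, 0.02],          # technical coefficients: inputs per $ of output
                      [0.20, 0.15, 0.10],
                      [0.05, 0.10, 0.20]])
        direct_water = np.array([50.0, 5.0, 1.0])  # litres withdrawn per $ of each sector's output

        leontief_inverse = np.linalg.inv(np.eye(3) - A)
        total_water = direct_water @ leontief_inverse   # litres per $ of final demand, by sector

        print(total_water)   # direct + indirect (supply-chain) water footprint intensities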

  18. Differential dpa calculations with SPECTRA-PKA

    NASA Astrophysics Data System (ADS)

    Gilbert, M. R.; Sublet, J.-Ch.

    2018-06-01

    The processing code SPECTRA-PKA produces energy spectra of primary atomic recoil events (or primary knock-on atoms, PKAs) for any material composition exposed to an irradiation spectrum. Such evaluations are vital inputs for simulations aimed at understanding the evolution of damage in irradiated material, which is generated in cascade displacement events initiated by PKAs. These PKA spectra capture the full complexity of the nuclear data-library evaluations of recoil events that are input to SPECTRA-PKA. However, the displacements per atom (dpa) measure, an integral of the displacement damage dose over all possible recoil events, is still widely used and has many useful applications as both a comparative and correlative quantity. This paper describes the methodology that allows the SPECTRA-PKA code to evaluate dpa rates using the same energy-dependent recoil (PKA) cross-section data used for the PKA distributions. This avoids the need for integral displacement kerma cross sections and also provides new insight into the relative importance of different reaction channels (and the associated different daughter residuals and emitted particles) to the total integrated dpa damage dose. Results are presented for Fe, Ni, W, and SS316. Fusion dpa rates are compared to those in fission, highlighting the increased contribution to damage creation in the former from high-energy threshold reactions.
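
    As a point of reference for the integral dpa quantity discussed above, the sketch below folds a hypothetical discretised PKA spectrum with the standard NRT damage function, treating recoil energy as damage energy for simplicity; it is not the SPECTRA-PKA methodology, and the threshold energy, spectrum, and rates are assumptions.

      import numpy as np

      def nrt_displacements(t_dam_ev, e_d=40.0):
          """NRT number of Frenkel pairs for a damage energy in eV (e_d is the displacement threshold)."""
          t = np.asarray(t_dam_ev, dtype=float)
          n = 0.8 * t / (2.0 * e_d)
          n = np.where(t < 2.5 * e_d, 1.0, n)        # single-displacement plateau
          return np.where(t < e_d, 0.0, n)           # sub-threshold recoils displace nothing

      # Hypothetical PKA spectrum: recoil energies (eV) and production rates (per atom per second),
      # with recoil energy used directly as damage energy (no Lindhard partition applied).
      pka_energy = np.array([1.0e2, 1.0e3, 1.0e4, 1.0e5, 1.0e6])
      pka_rate = np.array([1.0e-9, 5.0e-10, 1.0e-10, 2.0e-11, 1.0e-12])
      dpa_rate = np.sum(pka_rate * nrt_displacements(pka_energy))
      print(f"dpa rate = {dpa_rate:.3e} dpa/s")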

  19. Taguchi Based Performance and Reliability Improvement of an Ion Chamber Amplifier for Enhanced Nuclear Reactor Safety

    NASA Astrophysics Data System (ADS)

    Kulkarni, R. D.; Agarwal, Vivek

    2008-08-01

    An ion chamber amplifier (ICA) is used as a safety device for neutronic power (flux) measurement in regulation and protection systems of nuclear reactors. Therefore, the performance reliability of an ICA is an important issue. Appropriate quality engineering is essential to achieve a robust design and performance of the ICA circuit. It is observed that the low input bias current operational amplifiers used in the input stage of the ICA circuit are the most critical devices for proper functioning of the ICA. They are very sensitive to the gamma radiation present in their close vicinity. Therefore, the response of the ICA deteriorates with exposure to gamma radiation, resulting in a decrease in the overall reliability unless the desired performance is ensured under all conditions. This paper presents a performance enhancement scheme for an ICA operated in the nuclear environment. The Taguchi method, which is a proven technique for reliability enhancement, has been used in this work. It is demonstrated that if a statistical, optimal design approach like the Taguchi method is used, the cost of high quality and reliability may be brought down drastically. The complete methodology and statistical calculations involved are presented, as are the experimental and simulation results used to arrive at a robust design of the ICA.

  20. Real time selective harmonic minimization for multilevel inverters using genetic algorithm and artificial neural network angle generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filho, Faete J; Tolbert, Leon M; Ozpineci, Burak

    2012-01-01

    The work developed here proposes a methodology for calculating switching angles for varying DC sources in a multilevel cascaded H-bridge converter. In this approach the required fundamental is achieved, the lower harmonics are minimized, and the system can be implemented in real time with low memory requirements. A genetic algorithm (GA) is used as the stochastic search method to find the solution of the set of equations in which the input voltages are the known variables and the switching angles are the unknown variables. With the dataset generated by the GA, an artificial neural network (ANN) is trained to store the solutions without excessive memory storage requirements. The trained ANN then senses the voltage of each cell and produces the switching angles that regulate the fundamental at 120 V and eliminate or minimize the low-order harmonics while operating in real time.
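
    As a rough illustration of the selective-harmonic-minimization step described above, the sketch below is a hedged stand-in, not the authors' implementation: the five cell voltages, the crude random search replacing the genetic algorithm, and the 120 V RMS target are assumptions. It evaluates the staircase-waveform Fourier coefficients of a five-cell cascaded H-bridge and searches for angles that hit the fundamental while suppressing the 5th, 7th, 11th, and 13th harmonics.

      import numpy as np

      def fundamental(theta, vdc):
          # Peak of the fundamental of a staircase waveform built from cascaded cells
          return (4.0 / np.pi) * np.sum(vdc * np.cos(theta))

      def harmonic(theta, vdc, h):
          # Peak of the h-th odd harmonic of the same staircase waveform
          return (4.0 / (h * np.pi)) * np.sum(vdc * np.cos(h * theta))

      def cost(theta, vdc, v_target=120.0 * np.sqrt(2.0), harmonics=(5, 7, 11, 13)):
          err = (fundamental(theta, vdc) - v_target) ** 2
          err += sum(harmonic(theta, vdc, h) ** 2 for h in harmonics)
          return err

      rng = np.random.default_rng(0)
      vdc = np.array([36.0, 38.0, 40.0, 42.0, 44.0])   # hypothetical per-cell DC voltages
      best_theta, best_cost = None, np.inf
      for _ in range(20000):                           # crude stochastic search (GA stand-in)
          theta = np.sort(rng.uniform(0.0, np.pi / 2.0, size=5))
          c = cost(theta, vdc)
          if c < best_cost:
              best_theta, best_cost = theta, c
      print(np.degrees(best_theta), best_cost)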

  1. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Crevillén-García, D.; Power, H.

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
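
    For readers unfamiliar with the setup, the following minimal sketch shows how a truncated Karhunen-Loève expansion generates log-Gaussian conductivity fields and how a travel-time statistic is estimated from repeated samples. It is a 1-D toy problem with an assumed exponential covariance, a Darcy-style velocity proxy, and plain Monte Carlo only; none of these choices are taken from the paper.

      import numpy as np

      n, length, corr_len, sigma2 = 200, 1.0, 0.2, 1.0
      x = np.linspace(0.0, length, n)
      C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # exponential covariance
      vals, vecs = np.linalg.eigh(C)
      order = np.argsort(vals)[::-1][:30]              # keep the 30 dominant KL modes
      lam, phi = vals[order], vecs[:, order]

      def sample_log_conductivity(rng):
          xi = rng.standard_normal(lam.size)
          return np.exp(phi @ (np.sqrt(lam) * xi))     # log-Gaussian field K(x)

      def travel_time(k, dx=length / (n - 1), gradient=1.0, porosity=0.3):
          velocity = k * gradient / porosity           # Darcy-style velocity proxy
          return np.sum(dx / velocity)                 # time to cross the 1-D domain

      rng = np.random.default_rng(1)
      samples = np.array([travel_time(sample_log_conductivity(rng)) for _ in range(2000)])
      print(samples.mean(), samples.std() / np.sqrt(samples.size))   # MC estimate and standard error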

  2. Neural network-based optimal adaptive output feedback control of a helicopter UAV.

    PubMed

    Nodland, David; Zargarzadeh, Hassan; Jagannathan, Sarangapani

    2013-07-01

    Helicopter unmanned aerial vehicles (UAVs) are widely used for both military and civilian operations. Because the helicopter UAVs are underactuated nonlinear mechanical systems, high-performance controller design for them presents a challenge. This paper introduces an optimal controller design via an output feedback for trajectory tracking of a helicopter UAV, using a neural network (NN). The output-feedback control system utilizes the backstepping methodology, employing kinematic and dynamic controllers and an NN observer. The online approximator-based dynamic controller learns the infinite-horizon Hamilton-Jacobi-Bellman equation in continuous time and calculates the corresponding optimal control input by minimizing a cost function, forward-in-time, without using the value and policy iterations. Optimal tracking is accomplished by using a single NN utilized for the cost function approximation. The overall closed-loop system stability is demonstrated using Lyapunov analysis. Finally, simulation results are provided to demonstrate the effectiveness of the proposed control design for trajectory tracking.

  3. Comparison of measured and calculated dynamic loads for the Mod-2 2.5 MW wind turbine system

    NASA Technical Reports Server (NTRS)

    Zimmerman, D. K.; Shipley, S. A.; Miller, R. D.

    1995-01-01

    The Boeing Company, under contract to the Electric Power Research Institute (EPRI), has completed a test program on the Mod-2 wind turbines at Goodnoe Hills, Washington. The objectives were to update fatigue load spectra, discern site and machine differences, measure vortex generator effects, and evaluate rotational sampling techniques. This paper describes the test setup and loads instrumentation, and presents loads data comparisons and test/analysis correlations. Test data are correlated with DYLOSAT predictions using both the NASA interim turbulence model and rotationally sampled winds as inputs. The latter is demonstrated to have the potential to improve the test/analysis correlations. The paper concludes with an assessment of the importance of vortex generators, site dependence, and machine differences on fatigue loads. The adequacy of the prediction techniques used is evaluated and recommendations are made for improvements to the methodology.

  4. Determination of matrix composition based on solute-solute nearest-neighbor distances in atom probe tomography.

    PubMed

    De Geuser, F; Lefebvre, W

    2011-03-01

    In this study, we propose a fast automatic method providing the matrix concentration in an atom probe tomography (APT) data set containing two phases or more. The principle of this method relies on the calculation of the relative amount of isolated solute atoms (i.e., not surrounded by a similar solute atom) as a function of a distance d in the APT reconstruction. Simulated data sets have been generated to test the robustness of this new tool and demonstrate that rapid and reproducible results can be obtained without the need of any user input parameter. The method has then been successfully applied to a ternary Al-Zn-Mg alloy containing a fine dispersion of hardening precipitates. The relevance of this method for direct estimation of matrix concentration is discussed and compared with the existing methodologies. Copyright © 2010 Wiley-Liss, Inc.
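
    The distance-based criterion described above can be illustrated with the hedged sketch below (synthetic atom positions and nanometre units are assumptions, not the authors' data or code): for each solute atom the nearest-neighbour solute distance is computed with a k-d tree, and the fraction of atoms whose nearest solute neighbour lies beyond d is tabulated as a function of d.

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(2)
      matrix_solutes = rng.uniform(0.0, 50.0, size=(20000, 3))     # dilute, random solute positions (nm)
      precipitate = rng.normal(25.0, 0.5, size=(2000, 3))          # one dense cluster of solute atoms
      solute = np.vstack([matrix_solutes, precipitate])

      tree = cKDTree(solute)
      nn_dist, _ = tree.query(solute, k=2)                         # column 1: nearest other solute atom
      for d in np.linspace(0.2, 3.0, 15):
          isolated = (nn_dist[:, 1] > d).mean()                    # fraction of "isolated" solute atoms
          print(f"d = {d:4.2f} nm   isolated fraction = {isolated:.3f}")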

  5. Hyper-X Mach 7 Scramjet Design, Ground Test and Flight Results

    NASA Technical Reports Server (NTRS)

    Ferlemann, Shelly M.; McClinton, Charles R.; Rock, Ken E.; Voland, Randy T.

    2005-01-01

    The successful Mach 7 flight test of the Hyper-X (X-43) research vehicle has provided the major, essential demonstration of the capability of the airframe integrated scramjet engine. This flight was a crucial first step toward realizing the potential for airbreathing hypersonic propulsion for application to space launch vehicles. However, it is not sufficient to have just achieved a successful flight. The more useful knowledge gained from the flight is how well the prediction methods matched the actual test results in order to have confidence that these methods can be applied to the design of other scramjet engines and powered vehicles. The propulsion predictions for the Mach 7 flight test were calculated using the computer code, SRGULL, with input from computational fluid dynamics (CFD) and wind tunnel tests. This paper will discuss the evolution of the Mach 7 Hyper-X engine, ground wind tunnel experiments, propulsion prediction methodology, flight results and validation of design methods.

  6. Dynamic analysis of process reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadle, L.J.; Lawson, L.O.; Noel, S.D.

    1995-06-01

    The approach and methodology of conducting a dynamic analysis is presented in this poster session in order to describe how this type of analysis can be used to evaluate the operation and control of process reactors. Dynamic analysis of the PyGas(TM) gasification process is used to illustrate the utility of this approach. PyGas(TM) is the gasifier being developed for the Gasification Product Improvement Facility (GPIF) by Jacobs-Siffine Engineering and Riley Stoker. In the first step of the analysis, process models are used to calculate the steady-state conditions and associated sensitivities for the process. For the PyGas(TM) gasifier, the process models are non-linear mechanistic models of the jetting fluidized-bed pyrolyzer and the fixed-bed gasifier. These process sensitivities are key input, in the form of gain parameters or transfer functions, to the dynamic engineering models.

  7. Modelling the spatial distribution of ammonia emissions in the UK.

    PubMed

    Hellsten, S; Dragosits, U; Place, C J; Vieno, M; Dore, A J; Misselbrook, T H; Tang, Y S; Sutton, M A

    2008-08-01

    Ammonia emissions (NH3) are characterised by a high spatial variability at a local scale. When modelling the spatial distribution of NH3 emissions, it is important to provide robust emission estimates, since the model output is used to assess potential environmental impacts, e.g. exceedance of critical loads. The aim of this study was to provide a new, updated spatial NH3 emission inventory for the UK for the year 2000, based on an improved modelling approach and the use of updated input datasets. The AENEID model distributes NH3 emissions from a range of agricultural activities, such as grazing and housing of livestock, storage and spreading of manures, and fertilizer application, at a 1-km grid resolution over the most suitable landcover types. The results of the emission calculation for the year 2000 are analysed and the methodology is compared with a previous spatial emission inventory for 1996.

  8. Energy Savings of Low-E Storm Windows and Panels across US Climate Zones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Culp, Thomas D.; Cort, Katherine A.

    This report builds off of previous modeling work related to low-e storm windows used to create a "Database of U.S. Climate-Based Analysis for Low-E Storm Windows." This work updates similar studies using new fuel costs and examining the separate contributions of reduced air leakage and reduced coefficients of overall heat transfer and solar heat gain. In this report we examine the energy savings and cost effectiveness of low-E storm windows in residential homes across a broad range of U.S. climates, excluding the impact from infiltration reductions, which tend to vary, using the National Energy Audit Tool (NEAT) and RESFEN model calculations. This report includes a summary of the results, NEAT and RESFEN background, methodology, and input assumptions, and an appendix with detailed results and assumptions by climate zone.

  9. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media.

    PubMed

    Crevillén-García, D; Power, H

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.

  10. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    PubMed Central

    Power, H.

    2017-01-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen–Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error. PMID:28878974

  11. SUDOQU, a new dose-assessment methodology for radiological surface contamination.

    PubMed

    van Dillen, Teun; van Dijk, Arjan

    2018-06-12

    A new methodology has been developed for the assessment of the annual effective dose resulting from removable and fixed radiological surface contamination. It is entitled SUDOQU (SUrface DOse QUantification) and it can for instance be used to derive criteria for surface contamination related to the import of non-food consumer goods, containers and conveyances, e.g., limiting values and operational screening levels. SUDOQU imposes mass (activity)-balance equations based on radioactive decay, removal and deposition processes in indoor and outdoor environments. This leads to time-dependent contamination levels that may be of particular importance in exposure scenarios dealing with one or a few contaminated items only (usually public exposure scenarios, therefore referred to as the 'consumer' model). Exposure scenarios with a continuous flow of freshly contaminated goods also fall within the scope of the methodology (typically occupational exposure scenarios, thus referred to as the 'worker model'). In this paper we describe SUDOQU, its applications, and its current limitations. First, we delineate the contamination issue, present the assumptions and explain the concepts. We describe the relevant removal, transfer, and deposition processes, and derive equations for the time evolution of the radiological surface-, air- and skin-contamination levels. These are then input for the subsequent evaluation of the annual effective dose with possible contributions from external gamma radiation, inhalation, secondary ingestion (indirect, from hand to mouth), skin contamination, direct ingestion and skin-contact exposure. The limiting effective surface dose is introduced for issues involving the conservatism of dose calculations. SUDOQU can be used by radiation-protection scientists/experts and policy makers in the field of e.g. emergency preparedness, trade and transport, exemption and clearance, waste management, and nuclear facilities. Several practical examples are worked out demonstrating the potential applications of the methodology. Creative Commons Attribution license.
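
    To make the mass (activity)-balance idea concrete, the sketch below uses a generic first-order surface balance with assumed decay, removal, and deposition rates; it only illustrates the kind of time-dependent contamination level SUDOQU computes and is not the published model or its parameter values.

      import numpy as np

      lam_decay, lam_removal = 2.3e-3, 1.0e-2      # per day (assumed rate constants)
      deposition = 5.0                             # Bq m^-2 per day (assumed deposition rate)
      lam = lam_decay + lam_removal

      def surface_activity(t_days, c0=100.0):
          """Solution of dC/dt = deposition - lam * C with C(0) = c0 (Bq m^-2)."""
          c_eq = deposition / lam
          return c_eq + (c0 - c_eq) * np.exp(-lam * t_days)

      print(surface_activity(np.array([0.0, 30.0, 180.0, 365.0])))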

  12. Dependence of calculated postshock thermodynamic variables on vibrational equilibrium and input uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Matthew Frederick; Owen, Kyle G.; Davidson, David F.

    The purpose of this article is to explore the dependence of calculated postshock thermodynamic properties in shock tube experiments upon the vibrational state of the test gas and upon the uncertainties inherent to calculation inputs. This paper first offers a comparison between state variables calculated according to a Rankine–Hugoniot–equation-based algorithm, known as FROSH, and those derived from shock tube experiments on vibrationally nonequilibrated gases. It is shown that incorrect vibrational relaxation assumptions could lead to errors in temperature as large as 8% for 25% oxygen/argon mixtures at 3500 K. Following this demonstration, this article employs the algorithm to show the importance of correct vibrational equilibration assumptions, noting, for instance, that errors in temperature of up to about 2% at 3500 K may be generated for 10% nitrogen/argon mixtures if vibrational relaxation is not treated properly. Lastly, this article presents an extensive uncertainty analysis, showing that postshock temperatures can be calculated with root-of-sum-of-square errors of better than ±1% given sufficiently accurate experimentally measured input parameters.
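
    For orientation, the sketch below evaluates the textbook frozen (vibrationally unexcited) ideal-gas normal-shock relations for the temperature and pressure behind an incident shock. It only illustrates the kind of postshock state calculation discussed here; it is not the FROSH algorithm, and the Mach number, mixture, and initial conditions are arbitrary assumptions.

      def postshock_state(mach1, t1, p1, gamma=5.0 / 3.0):
          """Ideal-gas temperature and pressure behind a normal shock of Mach number mach1."""
          p_ratio = (2.0 * gamma * mach1**2 - (gamma - 1.0)) / (gamma + 1.0)
          t_ratio = ((2.0 * gamma * mach1**2 - (gamma - 1.0))
                     * ((gamma - 1.0) * mach1**2 + 2.0)) / ((gamma + 1.0) ** 2 * mach1**2)
          return t1 * t_ratio, p1 * p_ratio

      # Example: a Mach 5 incident shock into a dilute argon-dominated mixture (assumed conditions).
      t2, p2 = postshock_state(mach1=5.0, t1=295.0, p1=0.05 * 101325.0)
      print(f"T2 = {t2:.0f} K, p2 = {p2 / 1000.0:.1f} kPa")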

  13. Dependence of calculated postshock thermodynamic variables on vibrational equilibrium and input uncertainty

    DOE PAGES

    Campbell, Matthew Frederick; Owen, Kyle G.; Davidson, David F.; ...

    2017-01-30

    The purpose of this article is to explore the dependence of calculated postshock thermodynamic properties in shock tube experiments upon the vibrational state of the test gas and upon the uncertainties inherent to calculation inputs. This paper first offers a comparison between state variables calculated according to a Rankine–Hugoniot–equation-based algorithm, known as FROSH, and those derived from shock tube experiments on vibrationally nonequilibrated gases. It is shown that incorrect vibrational relaxation assumptions could lead to errors in temperature as large as 8% for 25% oxygen/argon mixtures at 3500 K. Following this demonstration, this article employs the algorithm to show the importance of correct vibrational equilibration assumptions, noting, for instance, that errors in temperature of up to about 2% at 3500 K may be generated for 10% nitrogen/argon mixtures if vibrational relaxation is not treated properly. Lastly, this article presents an extensive uncertainty analysis, showing that postshock temperatures can be calculated with root-of-sum-of-square errors of better than ±1% given sufficiently accurate experimentally measured input parameters.

  14. A Short-Segment Fourier Transform Methodology

    DTIC Science & Technology

    2009-03-01

    defined sampling of the continuous-valued discrete-time Fourier transform, superresolution in the frequency domain and allowance of Dirac delta functions associated with pure sinusoidal input data components.

  15. A simplified approach to determine the carbon footprint of a region: Key learning points from a Galician study.

    PubMed

    Roibás, Laura; Loiseau, Eléonore; Hospido, Almudena

    2018-07-01

    In a previous study, the carbon footprint (CF) of all production and consumption activities of Galicia, an Autonomous Community located in the north-west of Spain, was determined and the results were used to devise strategies aimed at the reduction and mitigation of the greenhouse gas (GHG) emissions. The territorial LCA methodology was used there to perform the calculations. However, that methodology was initially designed to compute the emissions of all types of polluting substances to the environment (several thousands of substances considered in the life cycle inventories), aimed at performing complete LCA studies. This requirement implies the use of specific modelling approaches and databases that in turn raised some difficulties, i.e., the need for large amounts of data (which increased gathering times), low temporal, geographical and technological representativeness of the study, lack of data, and presence of double counting issues when trying to combine the sectorial CF results into those of the total economy. In view of these difficulties, and considering the need to focus only on GHG emissions, it seems important to improve the robustness of the CF computation while proposing a simplified methodology. This study is the result of those efforts to improve the aforementioned methodology. In addition to the territorial LCA approach, several Input-Output (IO) based alternatives have been used here to compute direct and indirect GHG emissions of all Galician production and consumption activities. The results of the different alternatives were compared and evaluated under a multi-criteria approach considering reliability, completeness, temporal and geographical correlation, applicability and consistency. Based on that, an improved and simplified methodology was proposed to determine the CF of the Galician consumption and production activities from a total responsibility perspective. This methodology adequately reflects the current characteristics of the Galician economy, thus increasing the representativeness of the results, and can be applied to any region in which IO tables and environmental vectors are available. This methodology could thus provide useful information in decision making processes to reduce and prevent GHG emissions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Method for controlling gas metal arc welding

    DOEpatents

    Smartt, H.B.; Einerson, C.J.; Watkins, A.D.

    1987-08-10

    The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections. 3 figs., 1 tab.
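
    The control loop described in this abstract can be sketched roughly as follows; the linear set-point models, calibration coefficients, and correction gain are all invented placeholders for illustration and are not the patented relationships.

      def setpoints(heat_input, mass_input, volts=24.0, eff=0.8,
                    k_deposit=0.05, amps_per_feed=2.0, i0=60.0):
          """Pick wire feed rate and weld speed for a target mass and heat input (assumed models)."""
          wire_feed = mass_input / k_deposit                   # assumed deposition model
          expected_i = i0 + amps_per_feed * wire_feed          # assumed current-vs-feed calibration
          weld_speed = eff * volts * expected_i / heat_input   # heat input ~ eff * V * I / speed
          return weld_speed, wire_feed, expected_i

      def correct(weld_speed, wire_feed, expected_i, measured_i, gain=0.5):
          """Trim the set points using the measured-vs-expected current mismatch."""
          rel_err = (measured_i - expected_i) / expected_i
          return weld_speed * (1.0 + gain * rel_err), wire_feed * (1.0 - gain * rel_err)

      speed, feed, i_exp = setpoints(heat_input=800.0, mass_input=1.5)
      speed, feed = correct(speed, feed, i_exp, measured_i=1.03 * i_exp)
      print(round(speed, 2), round(feed, 2))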

  17. Noise produced by turbulent flow into a rotor: Users manual for noise calculation

    NASA Technical Reports Server (NTRS)

    Amiet, R. K.; Egolf, C. G.; Simonich, J. C.

    1989-01-01

    A users manual for a computer program for the calculation of noise produced by turbulent flow into a helicopter rotor is presented. The inputs to the program are obtained from the atmospheric turbulence model and mean flow distortion calculation, described in another volume of this set of reports. Descriptions of the various program modules and subroutines, their function, programming structure, and the required input and output variables are included. This routine is incorporated as one module of NASA's ROTONET helicopter noise prediction program.

  18. Impact of regulation on English and Welsh water-only companies: an input-distance function approach.

    PubMed

    Molinos-Senante, María; Porcher, Simon; Maziotis, Alexandros

    2017-07-01

    The assessment of productivity change over time and its drivers is of great significance for water companies and regulators when setting urban water tariffs. This issue is even more relevant in privatized water industries, such as those in England and Wales, where the price-cap regulation is adopted. In this paper, an input-distance function is used to estimate productivity change and its determinants for the English and Welsh water-only companies (WoCs) over the period of 1993-2009. The impacts of several exogenous variables on companies' efficiencies are also explored. From a policy perspective, this study describes how regulators can use this type of modeling and results to calculate illustrative X factors for the WoCs. The results indicate that the 1994 and 1999 price reviews stimulated technical change, and there were small efficiency gains. However, the 2004 price review did not accelerate efficiency change or improve technical change. The results also indicated that during the whole period of study, the excessive scale of the WoCs contributed negatively to productivity growth. On average, WoCs reported relatively high efficiency levels, which suggests that they had already been investing in technologies that reduce long-term input requirements with respect to exogenous and service-quality variables. Finally, an average WoC needs to improve its productivity toward that of the best company by 1.58%. The methodology and results of this study are of great interest to both regulators and water-company managers for evaluating the effectiveness of regulation and making informed decisions.

  19. Quality by design for herbal drugs: a feedforward control strategy and an approach to define the acceptable ranges of critical quality attributes.

    PubMed

    Yan, Binjun; Li, Yao; Guo, Zhengtai; Qu, Haibin

    2014-01-01

    The concept of quality by design (QbD) has been widely accepted and applied in the pharmaceutical manufacturing industry. There are still two key issues to be addressed in the implementation of QbD for herbal drugs. The first issue is the quality variation of herbal raw materials and the second issue is the difficulty in defining the acceptable ranges of critical quality attributes (CQAs). This work proposes a feedforward control strategy and a method for defining the acceptable ranges of CQAs to address these two issues. In the case study of the ethanol precipitation process of Danshen (Radix Salvia miltiorrhiza) injection, regression models linking input material attributes and process parameters to CQAs were built first and an optimisation model for calculating the best process parameters according to the input materials was established. Then, the feasible material space was defined and the acceptable ranges of CQAs for the previous process were determined. In the case study, satisfactory regression models were built with cross-validated regression coefficients (Q2) all above 91%. The feedforward control strategy was applied successfully to compensate for the quality variation of the input materials, and was able to control the CQAs within the 90-110% ranges of the desired values. In addition, the feasible material space for the ethanol precipitation process was built successfully, which showed the acceptable ranges of the CQAs for the concentration process. The proposed methodology can help to promote the implementation of QbD for herbal drugs. Copyright © 2013 John Wiley & Sons, Ltd.
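
    A minimal numerical sketch of the feedforward idea follows (entirely synthetic data with two material attributes, a single process parameter, and a 100% CQA target; the published Danshen models are not reproduced here): a regression links material attributes and the process parameter to the CQA, and the parameter is then solved for each new batch of input material.

      import numpy as np

      rng = np.random.default_rng(3)
      material = rng.normal(1.0, 0.1, size=(50, 2))             # two input-material attributes
      param = rng.uniform(0.5, 1.5, size=(50, 1))               # one process parameter
      X = np.hstack([np.ones((50, 1)), material, param])
      true_beta = np.array([20.0, 30.0, -10.0, 40.0])
      cqa = X @ true_beta + rng.normal(0.0, 1.0, size=50)       # simulated CQA responses

      beta, *_ = np.linalg.lstsq(X, cqa, rcond=None)            # calibrated regression model

      def feedforward(material_new, target=100.0):
          """Solve beta0 + b1*m1 + b2*m2 + b3*p = target for the process parameter p."""
          fixed = beta[0] + beta[1:3] @ material_new
          return (target - fixed) / beta[3]

      print(feedforward(np.array([1.05, 0.95])))                # parameter for a new batch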

  20. Evaluation of Advanced Stirling Convertor Net Heat Input Correlation Methods Using a Thermal Standard

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell; Schifer, Nicholas

    2011-01-01

    Test hardware was used to validate net heat input prediction models. Net heat input cannot be measured directly during operation, yet it is a key parameter in predicting convertor efficiency: efficiency equals the measured electrical power output divided by the calculated net heat input. Efficiency is used to compare convertor designs and to trade technology advantages for mission planning.

  1. Momentum distributions for H 2 ( e , e ' p )

    DOE PAGES

    Ford, William P.; Jeschonnek, Sabine; Van Orden, J. W.

    2014-12-29

    [Background] A primary goal of deuteron electrodisintegration is the possibility of extracting the deuteron momentum distribution. This extraction is inherently fraught with difficulty, as the momentum distribution is not an observable and the extraction relies on theoretical models dependent on other models as input. [Purpose] We present a new method for extracting the momentum distribution which takes into account a wide variety of model inputs, thus providing a theoretical uncertainty due to the various model constituents. [Method] The calculations presented here use a Bethe-Salpeter-like formalism with a wide variety of bound state wave functions, form factors, and final-state interactions. We present a method to extract the momentum distributions from experimental cross sections, which takes into account the theoretical uncertainty from the various model constituents entering the calculation. [Results] To test the extraction, pseudo-data were generated, and the extracted "experimental" distribution, which has theoretical uncertainty from the various model inputs, was compared with the theoretical distribution used to generate the pseudo-data. [Conclusions] In the examples we compared, the original distribution was typically within the error band of the extracted distribution. The input wave functions do contain some outliers which are discussed in the text, but at least this process can provide an upper bound on the deuteron momentum distribution. Due to the reliance on the theoretical calculation to obtain this quantity, any extraction method should account for the theoretical error inherent in these calculations due to model inputs.

  2. 76 FR 82115 - Wage Methodology for the Temporary Non-Agricultural Employment H-2B Program; Delay of Effective Date

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-30

    ... year (FY) 2012. The Wage Rule revised the methodology by which we calculate the prevailing wages to be... 19, 2011, 76 FR 3452. The Wage Rule revised the methodology by which we calculate the prevailing... November 30, 2011. When the Wage Rule goes into effect, it will supersede and make null the prevailing wage...

  3. Effect of pore water velocities and solute input methods on chloride transport in the undisturbed soil columns of Loess Plateau

    NASA Astrophysics Data System (ADS)

    Zhou, BeiBei; Wang, QuanJiu

    2017-09-01

    Studies on solute transport under different pore water velocities and solute input methods in undisturbed soil could play an instructive role in crop production. Based on laboratory experiments, the effect of solute input method (small pulse versus large pulse input) and of four pore water velocities on chloride transport in undisturbed soil columns obtained from the Loess Plateau was studied under controlled conditions. Chloride breakthrough curves (BTCs) were generated using the miscible displacement method under water-saturated, steady flow conditions. Using a 0.15 mol L-1 CaCl2 solution as a tracer, a small pulse (0.1 pore volumes) was first induced, and then, after all the solution was washed off, a large pulse (0.5 pore volumes) was conducted. The convection-dispersion equation (CDE) and the two-region model (T-R) were used to describe the BTCs, and their prediction accuracies and fitted parameters were compared as well. The BTCs obtained for the different input methods and the four pore water velocities were all smooth. However, the shapes of the BTCs varied greatly; small pulse inputs resulted in more rapid attainment of peak values that appeared earlier with increases in pore water velocity, whereas large pulse inputs resulted in the opposite trend. Both models could fit the experimental data well, but the prediction accuracy of the T-R was better. The values of the dispersivity, λ, calculated from the dispersion coefficient obtained from the CDE were about one order of magnitude larger than those calculated from the dispersion coefficient given by the T-R, but the calculated Peclet number, Pe, was lower. The mobile-immobile partition coefficient, β, decreased, while the mass exchange coefficient increased with increases in pore water velocity.
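
    For reference, the sketch below evaluates a standard equilibrium-CDE breakthrough curve for a finite pulse (the Ogata-Banks step solution superposed), together with the dispersivity and Peclet-number definitions used when comparing fitted transport parameters; the velocity, dispersion coefficient, column length, and pulse duration are illustrative values, not the fitted results of this study.

      import numpy as np
      from scipy.special import erfc

      def cde_step(x, t, v, D, c0=1.0):
          """Ogata-Banks solution of the equilibrium CDE for a continuous step input at x = 0."""
          t = np.maximum(t, 1e-12)
          a = (x - v * t) / (2.0 * np.sqrt(D * t))
          b = (x + v * t) / (2.0 * np.sqrt(D * t))
          return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

      def cde_pulse(x, t, v, D, t_pulse, c0=1.0):
          """Finite-pulse breakthrough curve obtained by superposing two step responses."""
          late = np.where(t > t_pulse, cde_step(x, t - t_pulse, v, D, c0), 0.0)
          return cde_step(x, t, v, D, c0) - late

      v, D, L = 0.5, 0.5, 20.0                   # cm/h, cm^2/h, column length in cm (assumed)
      print("dispersivity  =", D / v, "cm")      # lambda = D / v
      print("Peclet number =", v * L / D)        # Pe = v L / D
      t = np.linspace(0.0, 120.0, 7)
      print(cde_pulse(L, t, v, D, t_pulse=8.0))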

  4. Isotopic tracing for calculating the surface density of arginine-glycine-aspartic acid-containing peptide on allogeneic bone.

    PubMed

    Hou, Xiao-bin; Hu, Yong-cheng; He, Jin-quan

    2013-02-01

    To investigate the feasibility of determining the surface density of arginine-glycine-aspartic acid (RGD) peptides grafted onto allogeneic bone by an isotopic tracing method involving labeling these peptides with 125I, evaluating the impact of the input concentration of RGD peptides on surface density and establishing the correlation between surface density and their input concentration. A synthetic RGD-containing polypeptide (EPRGDNYR) was labeled with 125I and its specific radioactivity calculated. Reactive solutions of RGD peptide with radioactive 125I-RGD as probe with input concentrations of 0.01 mg/mL, 0.10 mg/mL, 0.50 mg/mL, 1.00 mg/mL, 2.00 mg/mL and 4.00 mg/mL were prepared. Using 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide as a cross-linking agent, reactions were induced by placing allogeneic bone fragments into reactive solutions of RGD peptide of different input concentrations. On completion of the reactions, the surface densities of RGD peptides grafted onto the allogeneic bone fragments were calculated by evaluating the radioactivity and surface areas of the bone fragments. The impact of input concentration of RGD peptides on surface density was measured and a curve constructed. Measurements by a radiodensity γ-counter showed that the RGD peptides had been labeled successfully with 125I. The allogeneic bone fragments were radioactive after the reaction, demonstrating that the RGD peptides had been successfully grafted onto their surfaces. It was also found that with increasing input concentration, the surface density increased. It was concluded that the surface density of RGD peptides is quantitatively related to their input concentration. With increasing input concentration, the surface density gradually increases to saturation value. © 2013 Chinese Orthopaedic Association and Wiley Publishing Asia Pty Ltd.
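
    The bookkeeping behind the surface-density value can be illustrated with the toy numbers below (all figures are invented for illustration and are not measurements from this study): the measured count rate is converted to grafted peptide mass through the specific activity and then normalised by the fragment's surface area.

      specific_activity = 5.0e4   # cpm per microgram of 125I-labelled RGD peptide (assumed)
      fragment_counts = 2.4e5     # cpm measured on one reacted bone fragment (assumed)
      surface_area_cm2 = 1.8      # estimated surface area of the fragment, cm^2 (assumed)

      grafted_mass_ug = fragment_counts / specific_activity        # mass of grafted peptide
      surface_density = grafted_mass_ug / surface_area_cm2         # micrograms per cm^2
      print(f"surface density = {surface_density:.2f} ug/cm^2")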

  5. Development of weight and cost estimates for lifting surfaces with active controls

    NASA Technical Reports Server (NTRS)

    Anderson, R. D.; Flora, C. C.; Nelson, R. M.; Raymond, E. T.; Vincent, J. H.

    1976-01-01

    Equations and methodology were developed for estimating the weight and cost incrementals due to active controls added to the wing and horizontal tail of a subsonic transport airplane. The methods are sufficiently generalized to be suitable for preliminary design. Supporting methodology and input specifications for the weight and cost equations are provided. The weight and cost equations are structured to be flexible in terms of the active control technology (ACT) flight control system specification. In order to present a self-contained package, methodology is also presented for generating ACT flight control system characteristics for the weight and cost equations. Use of the methodology is illustrated.

  6. An implementation of an aeroacoustic prediction model for broadband noise from a vertical axis wind turbine using a CFD informed methodology

    NASA Astrophysics Data System (ADS)

    Botha, J. D. M.; Shahroki, A.; Rice, H.

    2017-12-01

    This paper presents an enhanced method for predicting aerodynamically generated broadband noise produced by a Vertical Axis Wind Turbine (VAWT). The method improves on existing work for VAWT noise prediction and incorporates recently developed airfoil noise prediction models. Inflow-turbulence and airfoil self-noise mechanisms are both considered. Airfoil noise predictions are dependent on aerodynamic input data and time dependent Computational Fluid Dynamics (CFD) calculations are carried out to solve for the aerodynamic solution. Analytical flow methods are also benchmarked against the CFD informed noise prediction results to quantify errors in the former approach. Comparisons to experimental noise measurements for an existing turbine are encouraging. A parameter study is performed and shows the sensitivity of overall noise levels to changes in inflow velocity and inflow turbulence. Noise sources are characterised and the location and mechanism of the primary sources is determined, inflow-turbulence noise is seen to be the dominant source. The use of CFD calculations is seen to improve the accuracy of noise predictions when compared to the analytic flow solution as well as showing that, for inflow-turbulence noise sources, blade generated turbulence dominates the atmospheric inflow turbulence.

  7. Wind Technology Modeling Within the System Advisor Model (SAM) (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blair, N.; Dobos, A.; Ferguson, T.

    This poster provides details of the implementation and the underlying methodology for modeling wind power generation performance in the National Renewable Energy Laboratory's (NREL's) System Advisor Model (SAM). SAM's wind power model allows users to assess projects involving one or more large or small wind turbines with any of the detailed options for residential, commercial, or utility financing. The model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs, and provides analysis to compare the absolute or relative impact of these inputs. SAM is a system performance and economic model designed to facilitate analysis and decision-making for project developers, financiers, policymakers, and energy researchers. The user pairs a generation technology with a financing option (residential, commercial, or utility) to calculate the cost of energy over the multi-year project period. Specifically, SAM calculates the value of projects which buy and sell power at retail rates for residential and commercial systems, and also for larger-scale projects which operate through a power purchase agreement (PPA) with a utility. The financial model captures complex financing and rate structures, taxes, and incentives.

  8. Energy Efficiency of Biogas Produced from Different Biomass Sources

    NASA Astrophysics Data System (ADS)

    Begum, Shahida; Nazri, A. H.

    2013-06-01

    Malaysia has several sources of biomass, such as palm oil waste, agricultural waste, cow dung, sewage waste and landfill sites, which can be used to produce biogas and thus serve as a source of energy. Depending on the type of biomass, the biogas produced can have different calorific values. At the same time, the energy used to produce biogas depends on the transportation distance, the means of transportation, the conversion techniques, and the handling of raw materials and digested residues. An energy systems analysis approach based on the literature is applied to calculate the energy efficiency of biogas produced from biomass. The methodology comprises collecting data, proposing locations, and estimating the energy input needed to produce biogas and the energy output obtained from the generated biogas. The study showed that palm oil and municipal solid waste are two potential sources of biomass. The energy efficiencies of biogas produced from palm oil residues and municipal solid wastes are 1.70 and 3.33, respectively. Municipal solid waste has the higher energy efficiency due to shorter transportation distances and lower electricity consumption. Despite the inherent uncertainties in the calculations, it can be concluded that the use of biomass for biogas production is a promising energy alternative.

  9. Improving Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2012-02-01

    A new test procedure evaluates the quality and accuracy of energy analysis tools for the residential building retrofit market. Reducing the energy use of existing homes in the United States offers significant energy-saving opportunities, which can be identified through building simulation software tools that calculate optimal packages of efficiency measures. To improve the accuracy of energy analysis for residential buildings, the National Renewable Energy Laboratory's (NREL) Buildings Research team developed the Building Energy Simulation Test for Existing Homes (BESTEST-EX), a method for diagnosing and correcting errors in building energy audit software and calibration procedures. BESTEST-EX consists of building physics and utility bill calibration test cases, which software developers can use to compare their tools' simulation findings to reference results generated with state-of-the-art simulation tools. Overall, the BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX is helping software developers identify and correct bugs in their software, as well as develop and test utility bill calibration procedures.

  10. Monte Carlo simulation of β γ coincidence system using plastic scintillators in 4π geometry

    NASA Astrophysics Data System (ADS)

    Dias, M. S.; Piuvezam-Filho, H.; Baccarelli, A. M.; Takeda, M. N.; Koskinas, M. F.

    2007-09-01

    A modified version of a Monte Carlo code called Esquema, developed at the Nuclear Metrology Laboratory in IPEN, São Paulo, Brazil, has been applied for simulating a 4πβ(PS)-γ coincidence system designed for primary radionuclide standardisation. This system consists of a plastic scintillator in 4π geometry, for alpha or electron detection, coupled to a NaI(Tl) counter for gamma-ray detection. The response curves for monoenergetic electrons and photons have been calculated previously by the Penelope code and applied as input data to code Esquema. The latter code simulates all the disintegration processes, from the precursor nucleus to the ground state of the daughter radionuclide. As a result, the curve of the observed disintegration rate as a function of the beta efficiency parameter can be simulated. A least-squares fit between the experimental activity values and the Monte Carlo calculation provided the actual radioactive source activity, without need of conventional extrapolation procedures. Application of this methodology to 60Co and 133Ba radioactive sources is presented and showed results in good agreement with a conventional proportional counter 4πβ(PC)-γ coincidence system.

  11. Methodological Foundations for Designing Intelligent Computer-Based Training

    DTIC Science & Technology

    1991-09-03

    student models, graphic forms, version control data structures, flowcharts , etc. Circuit simulations are an obvious case. A circuit, after all, can... flowcharts as a basic data structure, and we were able to generalize our tools to create a flowchart drawing tool for inputting both the appearance and...the meaning of flowcharts efficiently. For the Sherlock work, we built a tool that permitted inputting of information about front panels and

  12. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-06-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation are intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.

  13. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-01-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation are intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.

  14. Mixed H2/H∞ output-feedback control of second-order neutral systems with time-varying state and input delays.

    PubMed

    Karimi, Hamid Reza; Gao, Huijun

    2008-07-01

    A mixed H2/H∞ output-feedback control design methodology is presented in this paper for second-order neutral linear systems with time-varying state and input delays. Delay-dependent sufficient conditions for the design of a desired control are given in terms of linear matrix inequalities (LMIs). A controller, which guarantees asymptotic stability and a mixed H2/H∞ performance for the closed-loop system of the second-order neutral linear system, is then developed directly instead of coupling the model to a first-order neutral system. A Lyapunov-Krasovskii method underlies the LMI-based mixed H2/H∞ output-feedback control design using some free weighting matrices. The simulation results illustrate the effectiveness of the proposed methodology.

  15. Ensuring the validity of calculated subcritical limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, H.K.

    1977-01-01

    The care taken at the Savannah River Laboratory and Plant to ensure the validity of calculated subcritical limits is described. Close attention is given to ANSI N16.1-1975, "Validation of Calculational Methods for Nuclear Criticality Safety." The computer codes used for criticality safety computations, which are listed and are briefly described, have been placed in the SRL JOSHUA system to facilitate calculation and to reduce input errors. A driver module, KOKO, simplifies and standardizes input and links the codes together in various ways. For any criticality safety evaluation, correlations of the calculational methods are made with experiment to establish bias. Occasionally subcritical experiments are performed expressly to provide benchmarks. Calculated subcritical limits contain an adequate but not excessive margin to allow for uncertainty in the bias. The final step in any criticality safety evaluation is the writing of a report describing the calculations and justifying the margin.

  16. Rapid Airplane Parametric Input Design (RAPID)

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1995-01-01

    RAPID is a methodology and software system to define a class of airplane configurations and directly evaluate surface grids, volume grids, and grid sensitivity on and about the configurations. A distinguishing characteristic which separates RAPID from other airplane surface modellers is that the output grids and grid sensitivity are directly applicable in CFD analysis. A small set of design parameters and grid control parameters govern the process which is incorporated into interactive software for 'real time' visual analysis and into batch software for the application of optimization technology. The computed surface grids and volume grids are suitable for a wide range of Computational Fluid Dynamics (CFD) simulation. The general airplane configuration has wing, fuselage, horizontal tail, and vertical tail components. The double-delta wing and tail components are manifested by solving a fourth order partial differential equation (PDE) subject to Dirichlet and Neumann boundary conditions. The design parameters are incorporated into the boundary conditions and therefore govern the shapes of the surfaces. The PDE solution yields a smooth transition between boundaries. Surface grids suitable for CFD calculation are created by establishing an H-type topology about the configuration and incorporating grid spacing functions in the PDE equation for the lifting components and the fuselage definition equations. User specified grid parameters govern the location and degree of grid concentration. A two-block volume grid about a configuration is calculated using the Control Point Form (CPF) technique. The interactive software, which runs on Silicon Graphics IRIS workstations, allows design parameters to be continuously varied and the resulting surface grid to be observed in real time. The batch software computes both the surface and volume grids and also computes the sensitivity of the output grid with respect to the input design parameters by applying the precompiler tool ADIFOR to the grid generation program. The output of ADIFOR is a new source code containing the old code plus expressions for derivatives of specified dependent variables (grid coordinates) with respect to specified independent variables (design parameters). The RAPID methodology and software provide a means of rapidly defining numerical prototypes, grids, and grid sensitivity of a class of airplane configurations. This technology and software is highly useful for CFD research for preliminary design and optimization processes.

  17. Status and Opportunities for Improving the Consistency of Technical Reference Manuals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jayaweera, Tina; Velonis, Aquila; Haeri, Hossein

    Across the United States, energy-efficiency program administrators rely on Technical Reference Manuals (TRMs) as sources for calculations and deemed savings values for specific, well-defined efficiency measures. TRMs play an important part in energy efficiency program planning by providing a common and consistent source for calculation of ex ante and often ex post savings. They thus help reduce energy-efficiency resource acquisition costs by obviating the need for extensive measurement and verification and lower performance risk for program administrators and implementation contractors. This paper considers the benefits of establishing region-wide or national TRMs and considers the challenges of such an undertaking due to the difficulties in comparing energy savings across jurisdictions. We argue that greater consistency across TRMs in the approaches used to determine deemed savings values, with more transparency about assumptions, would allow better comparisons in savings estimates across jurisdictions as well as improve confidence in reported efficiency measure savings. To support this thesis, we review approaches for the calculation of savings for select measures in TRMs currently in use in 17 jurisdictions. The review reveals differences in the saving methodologies, technical assumptions, and input variables used for estimating deemed savings values. These differences are described and their implications are summarized, using four common energy-efficiency measures as examples. Recommendations are then offered for establishing a uniform approach for determining deemed savings values.

  18. Computational fluid dynamic modeling of a medium-sized surface mine blasthole drill shroud

    PubMed Central

    Zheng, Y.; Reed, W.R.; Zhou, L.; Rider, J.P.

    2016-01-01

    The Pittsburgh Mining Research Division of the U.S. National Institute for Occupational Safety and Health (NIOSH) recently developed a series of models using computational fluid dynamics (CFD) to study airflows and respirable dust distribution associated with a medium-sized surface blasthole drill shroud with a dry dust collector system. Previously run experiments conducted in NIOSH’s full-scale drill shroud laboratory were used to validate the models. The setup values in the CFD models were calculated from experimental data obtained from the drill shroud laboratory and measurements of test material particle size. Subsequent simulation results were compared with the experimental data for several test scenarios, including 0.14 m3/s (300 cfm) and 0.24 m3/s (500 cfm) bailing airflow with 2:1, 3:1 and 4:1 dust collector-to-bailing airflow ratios. For the 2:1 and 3:1 ratios, the calculated dust concentrations from the CFD models were within the 95 percent confidence intervals of the experimental data. This paper describes the methodology used to develop the CFD models, to calculate the model input and to validate the models based on the experimental data. Problem regions were identified and revealed by the study. The simulation results could be used for future development of dust control methods for a surface mine blasthole drill shroud. PMID:27932851

  19. Quantifying the relative contributions of different solute carriers to aggregate substrate transport

    PubMed Central

    Taslimifar, Mehdi; Oparija, Lalita; Verrey, Francois; Kurtcuoglu, Vartan; Olgac, Ufuk; Makrides, Victoria

    2017-01-01

    Determining the contributions of different transporter species to overall cellular transport is fundamental for understanding the physiological regulation of solutes. We calculated the relative activities of Solute Carrier (SLC) transporters using the Michaelis-Menten equation and global fitting to estimate the normalized maximum transport rate for each transporter (Vmax). The input data were the normalized measured uptake of the essential neutral amino acid (AA) L-leucine (Leu) from concentration-dependence assays performed using Xenopus laevis oocytes. Our methodology was verified by calculating Leu and L-phenylalanine (Phe) data in the presence of competitive substrates and/or inhibitors. Among 9 potentially expressed endogenous X. laevis oocyte Leu transporter species, activities of only the uniporters SLC43A2/LAT4 (and/or SLC43A1/LAT3) and the sodium symporter SLC6A19/B0AT1 were required to account for total uptake. Furthermore, Leu and Phe uptake by heterologously expressed human SLC6A14/ATB0,+ and SLC43A2/LAT4 was accurately calculated. This versatile systems biology approach is useful for analyses where the kinetics of each active protein species can be represented by the Hill equation. Furthermore, it is applicable even in the absence of protein expression data. It could potentially be applied, for example, to quantify drug transporter activities in target cells to improve specificity. PMID:28091567
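
    The global-fitting step can be illustrated with the hedged sketch below (synthetic uptake data and assumed Km values; not the authors' oocyte measurements): total uptake is modelled as a sum of Michaelis-Menten terms, and a least-squares fit returns the normalised Vmax, i.e. the relative contribution, of each transporter.

      import numpy as np
      from scipy.optimize import curve_fit

      KM = {"LAT4": 50.0, "B0AT1": 600.0}       # assumed affinities in micromolar

      def total_uptake(conc, vmax_lat4, vmax_b0at1):
          return (vmax_lat4 * conc / (KM["LAT4"] + conc)
                  + vmax_b0at1 * conc / (KM["B0AT1"] + conc))

      conc = np.array([10.0, 30.0, 100.0, 300.0, 1000.0, 3000.0])   # Leu concentrations (uM)
      rng = np.random.default_rng(4)
      uptake = total_uptake(conc, 0.4, 0.6) * (1.0 + 0.05 * rng.standard_normal(conc.size))

      (vmax_lat4, vmax_b0at1), _ = curve_fit(total_uptake, conc, uptake, p0=[0.5, 0.5])
      print(vmax_lat4, vmax_b0at1)              # normalised Vmax values, i.e. relative contributions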

  20. Projected changes to growth and mortality of Hawaiian corals over the next 100 years

    USGS Publications Warehouse

    Hoeke, R.K.; Jokiel, P.L.; Buddemeier, R.W.; Brainard, R.E.

    2011-01-01

    Background: Recent reviews suggest that the warming and acidification of ocean surface waters predicted by most accepted climate projections will lead to mass mortality and declining calcification rates of reef-building corals. This study investigates the use of modeling techniques to quantitatively examine rates of coral cover change due to these effects. Methodology/Principal Findings: Broad-scale probabilities of change in shallow-water scleractinian coral cover in the Hawaiian Archipelago for years 2000-2099 A.D. were calculated assuming a single middle-of-the-road greenhouse gas emissions scenario. These projections were based on ensemble calculations of a growth and mortality model that used sea surface temperature (SST), atmospheric carbon dioxide (CO2), observed coral growth (calcification) rates, and observed mortality linked to mass coral bleaching episodes as inputs. SST and CO2 predictions were derived from the World Climate Research Programme (WCRP) multi-model dataset, statistically downscaled with historical data. Conclusions/Significance: The model calculations illustrate a practical approach to systematic evaluation of climate change effects on corals, and also show the effect of uncertainties in current climate predictions and in coral adaptation capabilities on estimated changes in coral cover. Despite these large uncertainties, this analysis quantitatively illustrates that a large decline in coral cover is highly likely in the 21st Century, but that there are significant spatial and temporal variances in outcomes, even under a single climate change scenario.

  1. Uniting Cheminformatics and Chemical Theory To Predict the Intrinsic Aqueous Solubility of Crystalline Druglike Molecules

    PubMed Central

    2014-01-01

    We present four models of solution free-energy prediction for druglike molecules utilizing cheminformatics descriptors and theoretically calculated thermodynamic values. We make predictions of solution free energy using physics-based theory alone and using machine learning/quantitative structure–property relationship (QSPR) models. We also develop machine learning models where the theoretical energies and cheminformatics descriptors are used as combined input. These models are used to predict solvation free energy. While direct theoretical calculation does not give accurate results in this approach, machine learning is able to give predictions with a root mean squared error (RMSE) of ∼1.1 log S units in a 10-fold cross-validation for our Drug-Like-Solubility-100 (DLS-100) dataset of 100 druglike molecules. We find that a model built using energy terms from our theoretical methodology as descriptors is marginally less predictive than one built on Chemistry Development Kit (CDK) descriptors. Combining both sets of descriptors allows a further but very modest improvement in the predictions. However, in some cases, this is a statistically significant enhancement. These results suggest that there is little complementarity between the chemical information provided by these two sets of descriptors, despite their different sources and methods of calculation. Our machine learning models are also able to predict the well-known Solubility Challenge dataset with an RMSE value of 0.9–1.0 log S units. PMID:24564264
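
    A minimal sketch of the evaluation protocol named above: 10-fold cross-validated RMSE, in log S units, for a regression model built on molecular descriptors. The random matrix stands in for the DLS-100 descriptors and measured log S values, and the learner is an illustrative choice rather than the paper's QSPR model.

    ```python
    # Sketch: 10-fold cross-validated RMSE (in log S units) for a solubility
    # model built on cheminformatics descriptors. Data and learner are synthetic
    # stand-ins, not the DLS-100 dataset or the published models.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import KFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 20))                 # 100 molecules x 20 descriptors
    y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=100)   # synthetic log S

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    cv = KFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv,
                             scoring="neg_root_mean_squared_error")
    print(f"10-fold CV RMSE: {-scores.mean():.2f} log S units")
    ```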

  2. Evaluation of Tsunami Run-Up on Coastal Areas at Regional Scale

    NASA Astrophysics Data System (ADS)

    González, M.; Aniel-Quiroga, Í.; Gutiérrez, O.

    2017-12-01

    Tsunami hazard assessment is tackled by means of numerical simulations, giving as a result the areas flooded inland by the tsunami wave. To do this, some input data are required, e.g., the high-resolution topobathymetry of the study area, the earthquake focal mechanism parameters, etc. The computational cost of these kinds of simulations is still excessive. An important restriction on the elaboration of large-scale maps at national or regional scale is the reconstruction of high-resolution topobathymetry in the coastal zone. An alternative and traditional method consists of the application of empirical-analytical formulations to calculate run-up at several coastal profiles (e.g., Synolakis, 1987), combined with numerical simulations offshore without including coastal inundation. In this case, the numerical simulations are faster but limitations are added, as the coastal bathymetric profiles are very simply idealized. In this work, we present a complementary methodology based on a hybrid numerical model formed by two models that were coupled ad hoc for this work: a non-linear shallow water equations model (NLSWE) for the offshore part of the propagation and a Volume of Fluid model (VOF) for the areas near the coast and inland, applying each numerical scheme where it better reproduces the tsunami wave. The run-up of a tsunami scenario is obtained by applying the coupled model to an ad hoc numerical flume. To design this methodology, hundreds of worldwide topobathymetric profiles have been parameterized using 5 parameters (2 depths and 3 slopes). In addition, tsunami waves have also been parameterized by their height and period. As an application of the numerical flume methodology, the parameterized coastal profiles and tsunami waves have been combined to build a populated database of run-up calculations. The combination was tackled by means of numerical simulations in the numerical flume. The result is a tsunami run-up database that considers real profile shapes, realistic tsunami waves, and optimized numerical simulations. This database allows the calculation of the run-up of any new tsunami wave by interpolation on the database, in a short period of time, based on the tsunami wave characteristics provided as an output of the NLSWE model along the coast of a large-scale domain (regional or national scale).
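
    A minimal sketch of the final interpolation step, assuming a precomputed run-up table keyed only by wave height and period (the full methodology also parameterizes the coastal profile with two depths and three slopes). The table values are synthetic placeholders, not results from the numerical flume.

    ```python
    # Sketch: interpolate run-up from a precomputed database keyed by tsunami
    # wave height and period. The database values are synthetic.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    heights = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # m, offshore wave height
    periods = np.array([300, 600, 1200, 2400])          # s, wave period
    runup_table = np.array([[0.8, 1.0, 1.3, 1.6],       # hypothetical run-up (m),
                            [1.5, 1.9, 2.4, 2.9],       # shape (heights, periods)
                            [2.8, 3.5, 4.4, 5.2],
                            [5.1, 6.3, 7.8, 9.1],
                            [9.0, 11.0, 13.5, 15.8]])

    interp = RegularGridInterpolator((heights, periods), runup_table)
    print("Estimated run-up:", interp([[2.7, 900]])[0], "m")
    ```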

  3. Implementation of Recommendations from the One System Comparative Evaluation of the Hanford Tank Farms and Waste Treatment Plant Safety Bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, Richard L.; Niemi, Belinda J.; Paik, Ingle K.

    2013-11-07

    A Comparative Evaluation was conducted by an Expert Review Team for the One System Integrated Project Team to compare the safety bases for the Hanford Waste Treatment and Immobilization Plant Project (WTP) and the Tank Operations Contract (TOC) (i.e., Tank Farms). The overarching purpose of the evaluation was to facilitate effective integration between the WTP and TOC safety bases. It was to provide One System management with an objective evaluation of identified differences in safety basis process requirements, guidance, direction, procedures, and products (including safety controls, key safety basis inputs and assumptions, and consequence calculation methodologies) between WTP and TOC. The evaluation identified 25 recommendations (Opportunities for Integration). The resolution of these recommendations resulted in 16 implementation plans. The completion of these implementation plans will help ensure consistent safety bases for WTP and TOC along with consistent safety basis processes, procedures, and analyses, and should increase the likelihood of a successful startup of the WTP. This early integration will result in long-term cost savings and significant operational improvements. In addition, the implementation plans lead to the development of eight new safety analysis methodologies that can be used at other U.S. Department of Energy (US DOE) complex sites where URS Corporation is involved.

  4. 40 CFR 96.142 - CAIR NOX allowance allocations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the 3 highest amounts of the unit's adjusted control period heat input for 2000 through 2004, with the adjusted control period heat input for each year calculated as follows: (A) If the unit is coal-fired... CAIR NOX Allowance Allocations § 96.142 CAIR NOX allowance allocations. (a)(1) The baseline heat input...

  5. Field measurement of moisture-buffering model inputs for residential buildings

    DOE PAGES

    Woods, Jason; Winkler, Jon

    2016-02-05

    Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term—the moisture sorption into the materials. We validated this method with laboratory measurements, which we used to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.
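
    A minimal sketch of the residual bookkeeping described above: the sorption term is whatever moisture the measured terms do not account for. The sign conventions, term names, and numbers below are illustrative assumptions, not the paper's measured quantities.

    ```python
    # Sketch of the whole-house moisture balance used to back out the one
    # unmeasured term, sorption into materials, as the residual of measured terms.
    def moisture_sorption_rate(humidification_in, infiltration_net, condensate_out,
                               air_storage_change):
        """All terms in kg/h of water vapor.

        humidification_in : moisture injected to create the square-wave RH profile
        infiltration_net  : net moisture gain from infiltration/ventilation
        condensate_out    : moisture removed as air-conditioner condensate
        air_storage_change: change in moisture stored in the house air
        """
        return (humidification_in + infiltration_net
                - condensate_out - air_storage_change)

    print(moisture_sorption_rate(1.20, 0.15, 0.40, 0.05), "kg/h into materials")
    ```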

  6. A New Local Bipolar Autoassociative Memory Based on External Inputs of Discrete Recurrent Neural Networks With Time Delay.

    PubMed

    Zhou, Caigen; Zeng, Xiaoqin; Luo, Chaomin; Zhang, Huaguang

    In this paper, local bipolar auto-associative memories are presented based on discrete recurrent neural networks with a class of gain type activation function. The weight parameters of neural networks are acquired by a set of inequalities without the learning procedure. The global exponential stability criteria are established to ensure the accuracy of the restored patterns by considering time delays and external inputs. The proposed methodology is capable of effectively overcoming spurious memory patterns and achieving memory capacity. The effectiveness, robustness, and fault-tolerant capability are validated by simulated experiments.

  7. Multi-scale landslide hazard assessment: Advances in global and regional methodologies

    NASA Astrophysics Data System (ADS)

    Kirschbaum, Dalia; Peters-Lidard, Christa; Adler, Robert; Hong, Yang

    2010-05-01

    The increasing availability of remotely sensed surface data and precipitation provides a unique opportunity to explore how smaller-scale landslide susceptibility and hazard assessment methodologies may be applicable at larger spatial scales. This research first considers an emerging satellite-based global algorithm framework, which evaluates how the landslide susceptibility and satellite-derived rainfall estimates can forecast potential landslide conditions. An analysis of this algorithm using a newly developed global landslide inventory catalog suggests that forecasting errors are geographically variable due to improper weighting of surface observables, resolution of the current susceptibility map, and limitations in the availability of landslide inventory data. These methodological and data limitation issues can be more thoroughly assessed at the regional level, where available higher resolution landslide inventories can be applied to empirically derive relationships between surface variables and landslide occurrence. The regional empirical model shows improvement over the global framework in advancing near real-time landslide forecasting efforts; however, there are many uncertainties and assumptions surrounding such a methodology that decrease the functionality and utility of this system. This research seeks to improve upon this initial concept by exploring the potential opportunities and methodological structure needed to advance larger-scale landslide hazard forecasting and make it more of an operational reality. Sensitivity analysis of the surface and rainfall parameters in the preliminary algorithm indicates that surface data resolution and the interdependency of variables must be more appropriately quantified at local and regional scales. Additionally, integrating available surface parameters must be approached in a more theoretical, physically-based manner to better represent the physical processes underlying slope instability and landslide initiation. Several rainfall infiltration and hydrological flow models have been developed to model slope instability at small spatial scales. This research investigates the potential of applying a more quantitative hydrological model to larger spatial scales, utilizing satellite and surface data inputs that are obtainable over different geographic regions. Due to the significant role that data and methodological uncertainties play in the effectiveness of landslide hazard assessment outputs, the methodology and data inputs are considered within an ensemble uncertainty framework in order to better resolve the contribution and limitations of model inputs and to more effectively communicate the model skill for improved landslide hazard assessment.

  8. Thermal and mass implications of magmatic evolution in the Lassen volcanic region, California, and minimum constraints on basalt influx to the lower crust

    USGS Publications Warehouse

    Guffanti, M.; Clynne, M.A.; Muffler, L.J.P.

    1996-01-01

    We have analyzed the heat and mass demands of a petrologic model of basalt-driven magmatic evolution in which variously fractionated mafic magmas mix with silicic partial melts of the lower crust. We have formulated steady state heat budgets for two volcanically distinct areas in the Lassen region: the large, late Quaternary, intermediate to silicic Lassen volcanic center and the nearby, coeval, less evolved Caribou volcanic field. At Caribou volcanic field, heat provided by cooling and fractional crystallization of 52 km3 of basalt is more than sufficient to produce 10 km3 of rhyolitic melt by partial melting of lower crust. Net heat added by basalt intrusion at Caribou volcanic field is equivalent to an increase in lower crustal heat flow of ~7 mW m-2, indicating that the field is not a major crustal thermal anomaly. Addition of cumulates from fractionation is offset by removal of erupted partial melts. A minimum basalt influx of 0.3 km3 (km2 Ma)-1 is needed to supply Caribou volcanic field. Our methodology does not fully account for an influx of basalt that remains in the crust as derivative intrusives. On the basis of comparison to deep heat flow, the input of basalt could be ~3 to 7 times the amount we calculate. At Lassen volcanic center, at least 203 km3 of mantle-derived basalt is needed to produce 141 km3 of partial melt and drive the volcanic system. Partial melting mobilizes lower crustal material, augmenting the magmatic volume available for eruption at Lassen volcanic center; thus the erupted volume of 215 km3 exceeds the calculated basalt input of 203 km3. The minimum basalt input of 1.6 km3 (km2 Ma)-1 is >5 times the minimum influx to the Caribou volcanic field. Basalt influx high enough to sustain considerable partial melting, coupled with locally high extension rate, is a crucial factor in development of Lassen volcanic center; in contrast, Caribou volcanic field has failed to develop into a large silicic center primarily because basalt supply there has been insufficient.
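
    A minimal sketch of the heat-budget bookkeeping implied above: heat released by cooling and partially crystallizing a basalt volume is compared with the heat required to produce a volume of crustal partial melt. The densities, heat capacities, latent heats, temperature intervals, and crystallized fraction are round illustrative numbers, not the study's inputs.

    ```python
    # Sketch: heat supplied by cooling/crystallizing basalt vs. heat required to
    # partially melt lower crust. All thermal properties are rough assumptions.
    RHO = 2800.0        # kg/m^3
    CP = 1.0e3          # J/(kg K)
    L_CRYST = 4.0e5     # J/kg, latent heat released on crystallization
    L_FUSION = 4.0e5    # J/kg, latent heat required for crustal melting
    KM3 = 1.0e9         # m^3 per km^3

    def heat_from_basalt(volume_km3, delta_T, crystallized_fraction):
        m = volume_km3 * KM3 * RHO
        return m * (CP * delta_T + L_CRYST * crystallized_fraction)   # joules

    def heat_to_melt_crust(volume_km3, delta_T_heatup):
        m = volume_km3 * KM3 * RHO
        return m * (CP * delta_T_heatup + L_FUSION)                   # joules

    supplied = heat_from_basalt(52.0, delta_T=400.0, crystallized_fraction=0.5)
    required = heat_to_melt_crust(10.0, delta_T_heatup=200.0)
    print(f"Heat supplied {supplied:.2e} J vs required {required:.2e} J")
    ```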

  9. CALiPER Exploratory Study: Accounting for Uncertainty in Lumen Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergman, Rolf; Paget, Maria L.; Richman, Eric E.

    2011-03-31

    With a well-defined and shared understanding of uncertainty in lumen measurements, testing laboratories can better evaluate their processes, contributing to greater consistency and credibility of lighting testing, a key component of the U.S. Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) program. Reliable lighting testing is a crucial underlying factor contributing toward the success of many energy-efficient lighting efforts, such as the DOE GATEWAY demonstrations, Lighting Facts Label, ENERGY STAR® energy efficient lighting programs, and many others. Uncertainty in measurements is inherent to all testing methodologies, including photometric and other lighting-related testing. Uncertainty exists for all equipment, processes, and systems of measurement in individual as well as combined ways. A major issue with testing and the resulting accuracy of the tests is the uncertainty of the complete process. Individual equipment uncertainties are typically identified, but their relative value in practice and their combined value with other equipment and processes in the same test are elusive concepts, particularly for complex types of testing such as photometry. The total combined uncertainty of a measurement result is important for repeatable and comparative measurements for light emitting diode (LED) products in comparison with other technologies as well as competing products. This study provides a detailed and step-by-step method for determining uncertainty in lumen measurements, working closely with related standards efforts and key industry experts. This report uses the structure proposed in the Guide to the Expression of Uncertainty in Measurement (GUM) for evaluating and expressing uncertainty in measurements. The steps of the procedure are described and a spreadsheet format adapted for integrating sphere and goniophotometric uncertainty measurements is provided for entering parameters, ordering the information, calculating intermediate values and, finally, obtaining expanded uncertainties. Using this basis and examining each step of the photometric measurement and calibration methods, mathematical uncertainty models are developed. Determination of estimated values of input variables is discussed. Guidance is provided for the evaluation of the standard uncertainties of each input estimate, covariances associated with input estimates and the calculation of the result measurements. With this basis, the combined uncertainty of the measurement results and, finally, the expanded uncertainty can be determined.
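
    A minimal sketch of the GUM-style combination step for uncorrelated inputs: each standard uncertainty is weighted by its sensitivity coefficient, combined in quadrature, and expanded with a coverage factor. The component names and values are illustrative, not entries from the CALiPER spreadsheet.

    ```python
    # Sketch of GUM-style uncertainty combination, assuming uncorrelated inputs:
    # u_c = sqrt(sum (c_i * u_i)^2), expanded uncertainty U = k * u_c.
    import math

    # (sensitivity coefficient c_i, relative standard uncertainty u_i)
    components = {
        "photometer calibration": (1.00, 0.006),
        "sphere spatial non-uniformity": (1.00, 0.004),
        "self-absorption correction": (1.00, 0.003),
        "stray light": (1.00, 0.002),
    }

    u_combined = math.sqrt(sum((c * u) ** 2 for c, u in components.values()))
    U_expanded = 2.0 * u_combined          # coverage factor k = 2 (~95 %)

    print(f"Combined relative uncertainty: {u_combined:.4f}")
    print(f"Expanded uncertainty (k=2):    {U_expanded:.4f}")
    ```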

  10. Multi-criteria decision analysis using hydrological indicators for decision support - a conceptual framework.

    NASA Astrophysics Data System (ADS)

    Butchart-Kuhlmann, Daniel; Kralisch, Sven; Meinhardt, Markus; Fleischer, Melanie

    2017-04-01

    Assessing the quantity and quality of water available in water-stressed environments under various potential climate and land-use changes is necessary for good water and environmental resources management and governance. Within the region covered by the Southern African Science Service Centre for Climate Change and Adaptive Land Management (SASSCAL) project, such areas are common. One goal of the SASSCAL project is to develop and provide an integrated decision support system (DSS) with which decision makers (DMs) within a given catchment can obtain objective information regarding potential changes in water flow quantity and timing. The SASSCAL DSS builds upon existing data storage and distribution capability, through the SASSCAL Information System (IS), as well as the J2000 hydrological model. Using output from validated J2000 models, the SASSCAL DSS incorporates the calculation of a range of hydrological indicators based upon Indicators of Hydrological Alteration/Environmental Flow Components (IHA/EFC) calculated for a historic time series (pre-impact) and a set of model simulations based upon a selection of possible climate and land-use change scenarios (post-impact). These indicators, obtained using the IHA software package, are then used as input for a multi-criteria decision analysis (MCDA) undertaken using the open source diviz software package. The results of these analyses will provide DMs with an indication as to how various hydrological indicators within a catchment may be altered under different future scenarios, as well as providing a ranking of the scenarios according to different DM preferences. Scenarios are represented through a combination of model input data and parameter settings in J2000, and preferences are represented through criteria weighting in the MCDA. Here, the methodology is presented and applied to the J2000 Luanginga model results using a set of hypothetical decision maker preference values as input for an MCDA based on the PROMETHEE II outranking method. Future work on the SASSCAL DSS will entail automation of this process, as well as its application to other hydrological models and land-use and/or climate change scenarios.
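
    A minimal sketch of a PROMETHEE II net-flow ranking of scenarios scored on hydrological indicators. The scenario names, indicator values, weights, and the simple "usual" preference function are illustrative assumptions, not the SASSCAL DSS configuration (which uses the diviz package).

    ```python
    # Sketch: PROMETHEE II ranking of scenarios from weighted pairwise preferences.
    import numpy as np

    scenarios = ["baseline", "climate_A", "landuse_B"]
    criteria_maximize = [True, False, True]     # e.g. low-flow magnitude, flood count, ...
    scores = np.array([[10.0, 3.0, 0.70],
                       [ 8.0, 5.0, 0.65],
                       [ 9.0, 2.0, 0.60]])
    weights = np.array([0.5, 0.3, 0.2])

    def usual_preference(d):
        return 1.0 if d > 0 else 0.0

    n = len(scenarios)
    pi = np.zeros((n, n))                       # aggregated preference indices
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            for j, maximize in enumerate(criteria_maximize):
                d = scores[a, j] - scores[b, j]
                if not maximize:
                    d = -d
                pi[a, b] += weights[j] * usual_preference(d)

    phi_plus = pi.sum(axis=1) / (n - 1)         # positive outranking flow
    phi_minus = pi.sum(axis=0) / (n - 1)        # negative outranking flow
    net_flow = phi_plus - phi_minus
    for name, phi in sorted(zip(scenarios, net_flow), key=lambda t: -t[1]):
        print(f"{name}: net flow {phi:+.3f}")
    ```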

  11. Sensitivity analysis of the FEMA HAZUS-MH MR4 Earthquake Model using seismic events affecting King County Washington

    NASA Astrophysics Data System (ADS)

    Neighbors, C.; Noriega, G. R.; Caras, Y.; Cochran, E. S.

    2010-12-01

    HAZUS-MH MR4 (HAZards U.S. Multi-Hazard Maintenance Release 4) is risk-estimation software developed by FEMA to calculate potential losses due to natural disasters. Federal, state, regional, and local governments use the HAZUS-MH Earthquake Model for earthquake risk mitigation, preparedness, response, and recovery planning (FEMA, 2003). In this study, we examine several parameters used by the HAZUS-MH Earthquake Model methodology to understand how modifying the user-defined settings affects ground motion analysis, seismic risk assessment and earthquake loss estimates. This analysis focuses on both shallow crustal and deep intraslab events in the American Pacific Northwest. Specifically, the historic 1949 Mw 6.8 Olympia, 1965 Mw 6.6 Seattle-Tacoma and 2001 Mw 6.8 Nisqually normal fault intraslab events and scenario large-magnitude Seattle reverse fault crustal events are modeled. Inputs analyzed include variations of deterministic event scenarios combined with hazard maps and USGS ShakeMaps. This approach utilizes the capacity of the HAZUS-MH Earthquake Model to define landslide- and liquefaction-susceptibility hazards with local groundwater level and slope stability information. Where ShakeMap inputs are not used, events are run in combination with NEHRP soil classifications to determine site amplification effects. The earthquake component of HAZUS-MH applies a series of empirical ground motion attenuation relationships developed from source parameters of both regional and global historical earthquakes to estimate strong ground motion. Ground motion and resulting ground failure due to earthquakes are then used to calculate direct physical damage for general building stock, essential facilities, and lifelines, including transportation systems and utility systems. Earthquake losses are expressed in structural, economic and social terms. Where available, comparisons between recorded earthquake losses and HAZUS-MH earthquake losses are used to determine how region coordinators can most effectively utilize their resources for earthquake risk mitigation. This study is being conducted in collaboration with King County, WA officials to determine the best model inputs necessary to generate robust HAZUS-MH models for the Pacific Northwest.

  12. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.

  13. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
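
    A minimal sketch of the core pattern described above: a quadratic response surface with a two-factor interaction serves as a cheap surrogate inside a Monte Carlo loop, propagating input uncertainty into a dispersed regression-rate estimate. The coefficients, input distributions, and uncertainty levels are illustrative, not the study's fitted values.

    ```python
    # Sketch: response-surface surrogate + Monte Carlo propagation of input
    # uncertainty into a dispersed output distribution.
    import numpy as np

    def regression_rate(ox_fraction, flux):
        """Hypothetical fitted surface: r = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2."""
        b0, b1, b2, b12, b11 = 0.8, 0.5, 0.02, 0.01, -0.3
        return (b0 + b1 * ox_fraction + b2 * flux
                + b12 * ox_fraction * flux + b11 * ox_fraction ** 2)

    rng = np.random.default_rng(1)
    N = 100_000
    ox = rng.normal(loc=0.6, scale=0.02, size=N)     # mixture variable with uncertainty
    flux = rng.normal(loc=25.0, scale=1.0, size=N)   # operating condition with uncertainty

    r = regression_rate(ox, flux)
    print(f"mean {r.mean():.3f}, 2.5-97.5% band "
          f"[{np.percentile(r, 2.5):.3f}, {np.percentile(r, 97.5):.3f}] (arbitrary units)")
    ```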

  14. The NBS Energy Model Assessment project: Summary and overview

    NASA Astrophysics Data System (ADS)

    Gass, S. I.; Hoffman, K. L.; Jackson, R. H. F.; Joel, L. S.; Saunders, P. B.

    1980-09-01

    The activities and technical reports for the project are summarized. The reports cover: assessment of the documentation of the Midterm Oil and Gas Supply Modeling System; analysis of the model methodology; characteristics of the input and other supporting data; statistical procedures undergirding construction of the model and sensitivity of the outputs to variations in input; as well as guidelines and recommendations for the role of these in model building and for developing procedures for their evaluation.

  15. A radiation model for calculating atmospheric corrections to remotely sensed infrared measurements, version 2

    NASA Technical Reports Server (NTRS)

    Boudreau, R. D.

    1973-01-01

    A numerical model is developed which calculates the atmospheric corrections to infrared radiometric measurements due to absorption and emission by water vapor, carbon dioxide, and ozone. The corrections due to aerosols are not accounted for. The transmission functions for water vapor, carbon dioxide, and ozone are given. The model requires as input the vertical distribution of temperature and water vapor as determined by a standard radiosonde. The vertical distribution of carbon dioxide is assumed to be constant. The vertical distribution of ozone is an average of observed values. The model also requires as input the spectral response function of the radiometer and the nadir angle at which the measurements were made. A listing of the FORTRAN program is given with details for its use and examples of input and output listings. Calculations for four model atmospheres are presented.

  16. Methodological studies on the VVER-440 control assembly calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hordosy, G.; Kereszturi, A.; Maraczy, C.

    1995-12-31

    The control assembly regions of VVER-440 reactors are represented by 2-group albedo matrices in the global calculations of the KARATE code system. Some methodological aspects of calculating albedo matrices with the COLA transport code are presented. Illustrations are given of how these matrices depend on the relevant parameters describing the boron steel and steel regions of the control assemblies. The calculation of the response matrix for a node consisting of two parts filled with different materials is discussed.

  17. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    NASA Astrophysics Data System (ADS)

    Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.

    2009-12-01

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.
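
    A minimal sketch of the standard Lorentzian used to represent GDR photo-absorption cross sections, sigma(E) = sigma0 * E^2 * G^2 / ((E^2 - E0^2)^2 + E^2 * G^2). The peak parameters below are round illustrative numbers, not RIPL-3 entries.

    ```python
    # Sketch: standard Lorentzian parameterization of a GDR photo-absorption
    # cross section, evaluated on a coarse energy grid.
    import numpy as np

    def gdr_lorentzian(E, sigma0, E0, gamma):
        """E, E0, gamma in MeV; sigma0 in mb."""
        return sigma0 * (E * gamma) ** 2 / ((E**2 - E0**2) ** 2 + (E * gamma) ** 2)

    E = np.linspace(5.0, 25.0, 5)
    print(gdr_lorentzian(E, sigma0=300.0, E0=14.0, gamma=4.5))   # mb
    ```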

  18. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capote, R.; Herman, M.; Oblozinsky, P.

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.

  19. RIPL-Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capote, R.; Herman, M.

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.

  20. Optimizing Force Deployment and Force Structure for the Rapid Deployment Force

    DTIC Science & Technology

    1984-03-01

    Analysis ... 97; Experimental Design ... 99; IX. Use of a Flexible Response Surface ... 102; Selection of a ... sets were designed ... programming methodology, where the required system ... is input and the model optimizes the number, type, cargo, an... "to obtain new computer outputs" (Ref 38:23). The methodology can be used with any decision model, linear or nonlinear. Experimental Design: Since the

  1. A normative price for a manufactured product: The SAMICS methodology. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.

    1979-01-01

    A summary for the Solar Array Manufacturing Industry Costing Standards report contains a discussion of capabilities and limitations, a non-technical overview of the methodology, and a description of the input data which must be collected. It also describes the activities that were and are being taken to ensure validity of the results and contains an up-to-date bibliography of related documents.

  2. A strategy for developing a launch vehicle system for orbit insertion: Methodological aspects

    NASA Astrophysics Data System (ADS)

    Klyushnikov, V. Yu.; Kuznetsov, I. I.; Osadchenko, A. S.

    2014-12-01

    The article addresses methodological aspects of a development strategy to design a launch vehicle system for orbit insertion. The development and implementation of the strategy are broadly outlined. An analysis is provided of the criterial base and input data needed to define the main requirements for the launch vehicle system. Approaches are suggested for solving individual problems in working out the launch vehicle system development strategy.

  3. Re-Engineering the Stomatopod Eye, Nature’s Most Comprehensive Visual Sensor

    DTIC Science & Technology

    2017-02-22

    • polarisation and colour processing by the brain of stomatopods • Two new methodologies introduced, polarisation distance and intuitive polarisation display ... combination of our current state of knowledge and fresh intellectual and methodological input to the project, we aimed to explain the complexity of ... achieve an optimal reflective silvery camouflage by controlling the non-polarizing properties of the skin (Jordan et al 2012, 2013, 2014, Roberts et

  4. Self-Calibration and Optimal Response in Intelligent Sensors Design Based on Artificial Neural Networks

    PubMed Central

    Rivera, José; Carrillo, Mariano; Chacón, Mario; Herrera, Gilberto; Bojorquez, Gilberto

    2007-01-01

    The development of smart sensors involves the design of reconfigurable systems capable of working with different input sensors. Reconfigurable systems ideally should spend the least possible amount of time in their calibration. An autocalibration algorithm for intelligent sensors should be able to fix major problems such as offset, variation of gain and lack of linearity as accurately as possible. This paper describes a new autocalibration methodology for nonlinear intelligent sensors based on artificial neural networks, ANN. The methodology involves analysis of several network topologies and training algorithms. The proposed method was compared against the piecewise and polynomial linearization methods. Method comparison was carried out using different numbers of calibration points and several nonlinearity levels of the input signal. This paper also shows that the proposed method turned out to have better overall accuracy than the other two methods. Besides experimentation results and analysis of the complete study, the paper describes the implementation of the ANN in a microcontroller unit, MCU. In order to illustrate the method's capability to build autocalibration and reconfigurable systems, a temperature measurement system was designed and tested. The proposed method is an improvement over classic autocalibration methodologies because it impacts on the design process of intelligent sensors, autocalibration methodologies and their associated factors, like time and cost.
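
    A minimal sketch of the idea: learn the inverse response of a nonlinear sensor with a small neural network and compare it against a polynomial linearization. The simulated sensor (offset, gain error, quadratic nonlinearity) and the network size are illustrative assumptions, not the paper's experimental setup or topology study.

    ```python
    # Sketch: ANN-based autocalibration (inverse-response learning) vs. a
    # polynomial linearization, on a simulated nonlinear temperature sensor.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    true_temp = np.linspace(0, 100, 200)                    # reference temperatures
    raw = 2.0 + 0.95 * true_temp + 0.002 * true_temp**2     # offset, gain error, nonlinearity
    raw += rng.normal(scale=0.1, size=raw.size)             # measurement noise

    ann = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                     random_state=0))
    ann.fit(raw.reshape(-1, 1), true_temp)                  # learn the inverse response

    poly = np.poly1d(np.polyfit(raw, true_temp, deg=2))     # polynomial linearization

    test_raw = np.array([[30.0], [80.0]])
    print("ANN calibration:       ", ann.predict(test_raw))
    print("Polynomial calibration:", poly(test_raw.ravel()))
    ```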

  5. 76 FR 71431 - Civil Penalty Calculation Methodology

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-17

    ... DEPARTMENT OF TRANSPORTATION Federal Motor Carrier Safety Administration Civil Penalty Calculation... is currently evaluating its civil penalty methodology. Part of this evaluation includes a forthcoming... civil penalties. UFA takes into account the statutory penalty factors under 49 U.S.C. 521(b)(2)(D). The...

  6. Prioritising and planning of urban stormwater treatment in the Alna watercourse in Oslo.

    PubMed

    Nordeidet, B; Nordeide, T; Astebøl, S O; Hvitved-Jacobsen, T

    2004-12-01

    The Oslo municipal Water and Sewage Works (VAV) intends to improve the water quality in the Alna watercourse, in particular with regard to biological diversity. In order to reduce existing discharges of polluted urban stormwater, a study has been carried out to rank subcatchment areas in descending order of magnitude and to assess possible measures. An overall ranking methodology was developed in order to identify and select the most suitable subcatchment areas for further assessment studies (74 subcatchment/drainage areas). The municipality's comprehensive geographical information system (GIS) was applied as a base for the ranking. A weighted ranking based on three selected parameters was chosen from several major influencing factors, namely total yearly discharge (kg pollution/year), specific pollution discharge (kg/area/year) and existing stormwater system (pipe lengths/area). Results show that the 15 highest-ranked catchment areas accounted for 70% of the total calculated pollution load of heavy metals. The highest ranked areas are strongly influenced by three major highways. Based on the results from similar field studies, it would be possible to remove 75-85% of total solids and about 50-80% of heavy metals using wet detention ponds as Best Available Technology (BAT). Based on the final ranking, two subcatchment areas were selected for further practical assessment of possible measures. VAV plans to use wet detention ponds, in combination with other measures when relevant, to treat the urban runoff. Using calculated loading and aerial photographs (all done in the same GIS environment), a preliminary sketch design and location of ponds were prepared. The resulting GIS methodology for urban stormwater management will be used as input to a holistic and long-term planning process for the management of the watercourse, taking into account future urban development and other pollution sources.
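
    A minimal sketch of the weighted three-parameter ranking: each criterion is normalized, weighted, and summed into a score used to order the subcatchments. The catchment values and weights below are illustrative; the study derived its inputs from the municipal GIS for 74 drainage areas.

    ```python
    # Sketch: weighted ranking of subcatchments on three pollution-related criteria.
    import numpy as np

    names = ["A1", "A2", "A3", "A4"]
    total_load = np.array([120.0, 45.0, 300.0, 80.0])     # kg pollution / year
    specific_load = np.array([2.0, 1.1, 4.5, 0.9])        # kg / area / year
    pipe_density = np.array([0.8, 0.3, 0.6, 0.2])         # pipe length / area (proxy)
    weights = np.array([0.5, 0.3, 0.2])

    criteria = np.column_stack([total_load, specific_load, pipe_density])
    normalized = criteria / criteria.max(axis=0)          # scale each criterion to [0, 1]
    score = normalized @ weights

    for name, s in sorted(zip(names, score), key=lambda t: -t[1]):
        print(f"{name}: weighted score {s:.2f}")
    ```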

  7. Application of an integrated flight/propulsion control design methodology to a STOVL aircraft

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Mattern, Duane L.

    1991-01-01

    Results are presented from the application of an emerging Integrated Flight/Propulsion Control (IFPC) design methodology to a Short Take Off and Vertical Landing (STOVL) aircraft in transition flight. The steps in the methodology consist of designing command shaping prefilters to provide the overall desired response to pilot command inputs. A previously designed centralized controller is first validated for the integrated airframe/engine plant used. This integrated plant is derived from a different model of the engine subsystem than the one used for the centralized controller design. The centralized controller is then partitioned in a decentralized, hierarchical structure comprising of airframe lateral and longitudinal subcontrollers and an engine subcontroller. Command shaping prefilters from the pilot control effector inputs are then designed and time histories of the closed loop IFPC system response to simulated pilot commands are compared to desired responses based on handling qualities requirements. Finally, the propulsion system safety and nonlinear limited protection logic is wrapped around the engine subcontroller and the response of the closed loop integrated system is evaluated for transients that encounter the propulsion surge margin limit.
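
    A minimal sketch of a command-shaping prefilter as a discrete first-order lag applied to a pilot step input, so the commanded response follows a desired time constant. The sample rate and time constant are illustrative placeholders, not the STOVL design values or the actual prefilter structure used in the study.

    ```python
    # Sketch: discrete first-order command-shaping prefilter applied to a pilot step.
    import numpy as np

    dt = 0.02            # s, sample period (assumed)
    tau = 0.5            # s, desired command time constant (assumed)
    alpha = dt / (tau + dt)

    pilot_cmd = np.ones(250)              # unit step held for 5 s
    shaped = np.zeros_like(pilot_cmd)
    for k in range(1, len(pilot_cmd)):
        # y[k] = y[k-1] + alpha * (u[k] - y[k-1])  (discrete first-order lag)
        shaped[k] = shaped[k - 1] + alpha * (pilot_cmd[k] - shaped[k - 1])

    print("shaped command at t=0.5 s:", round(shaped[25], 3))   # ~0.63 for tau = 0.5 s
    ```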

  8. SPENVIS Implementation of End-of-Life Solar Cell Calculations Using the Displacement Damage Dose Methodology

    NASA Technical Reports Server (NTRS)

    Walters, Robert; Summers, Geoffrey P.; Warner, Jeffrey H.; Messenger, Scott; Lorentzen, Justin R.; Morton, Thomas; Taylor, Stephen J.; Evans, Hugh; Heynderickx, Daniel; Lei, Fan

    2007-01-01

    This paper presents a method for using the SPENVIS on-line computational suite to implement the displacement damage dose (D(sub d)) methodology for calculating end-of-life (EOL) solar cell performance for a specific space mission. This paper builds on our previous work that has validated the D(sub d) methodology against both measured space data [1,2] and calculations performed using the equivalent fluence methodology developed by NASA JPL [3]. For several years, the space solar community has considered general implementation of the D(sub d) method, but no computer program exists to enable this implementation. In a collaborative effort, NRL, NASA and OAI have produced the Solar Array Verification and Analysis Tool (SAVANT) under NASA funding, but this program has not progressed beyond the beta-stage [4]. The SPENVIS suite with the Multi Layered Shielding Simulation Software (MULASSIS) contains all of the necessary components to implement the Dd methodology in a format complementary to that of SAVANT [5]. NRL is currently working with ESA and BIRA to include the Dd method of solar cell EOL calculations as an integral part of SPENVIS. This paper describes how this can be accomplished.
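
    A minimal sketch of the final step of a displacement damage dose (Dd) calculation: once the mission Dd is known, end-of-life performance follows a single characteristic degradation curve. The curve form P/P0 = 1 - C*log10(1 + Dd/Dx) and the fitting constants below are illustrative assumptions, not qualified cell data or the SPENVIS/MULASSIS implementation.

    ```python
    # Sketch: remaining factor from displacement damage dose via an assumed
    # characteristic degradation curve, then EOL power from BOL power.
    import math

    def remaining_factor(dd, C=0.2, Dx=1.0e9):
        """dd and Dx in MeV/g; C and Dx are cell-technology fitting constants (assumed)."""
        return 1.0 - C * math.log10(1.0 + dd / Dx)

    mission_dd = 5.0e10                 # MeV/g, hypothetical mission total
    bol_pmax = 300.0                    # W/m^2, hypothetical beginning-of-life power
    print(f"EOL power: {bol_pmax * remaining_factor(mission_dd):.1f} W/m^2")
    ```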

  9. Assessment of Integrated Pedestrian Protection Systems with Autonomous Emergency Braking (AEB) and Passive Safety Components.

    PubMed

    Edwards, Mervyn; Nathanson, Andrew; Carroll, Jolyon; Wisch, Marcus; Zander, Oliver; Lubbe, Nils

    2015-01-01

    Autonomous emergency braking (AEB) systems fitted to cars for pedestrians have been predicted to offer substantial benefit. On this basis, consumer rating programs, for example the European New Car Assessment Programme (Euro NCAP), are developing rating schemes to encourage fitment of these systems. One of the questions that needs to be answered to do this fully is how the assessment of the speed reduction offered by the AEB is integrated with the current assessment of the passive safety for mitigation of pedestrian injury. Ideally, this should be done on a benefit-related basis. The objective of this research was to develop a benefit-based methodology for assessment of integrated pedestrian protection systems with AEB and passive safety components. The method should include weighting procedures to ensure that it represents injury patterns from accident data and replicates an independently estimated benefit of AEB. A methodology has been developed to calculate the expected societal cost of pedestrian injuries, assuming that all pedestrians in the target population (i.e., pedestrians impacted by the front of a passenger car) are impacted by the car being assessed, taking into account the impact speed reduction offered by the car's AEB (if fitted) and the passive safety protection offered by the car's frontal structure. For rating purposes, the cost for the assessed car is normalized by comparing it to the cost calculated for a reference car. The speed reductions measured in AEB tests are used to determine the speed at which each pedestrian in the target population will be impacted. Injury probabilities for each impact are then calculated using the results from Euro NCAP pedestrian impactor tests and injury risk curves. These injury probabilities are converted into cost using "harm"-type costs for the body regions tested. These costs are weighted and summed. Weighting factors were determined using accident data from Germany and Great Britain and an independently estimated AEB benefit. German and Great Britain versions of the methodology are available. The methodology was used to assess cars with good, average, and poor Euro NCAP pedestrian ratings, in combination with a current AEB system. The fitment of a hypothetical A-pillar airbag was also investigated. It was found that the decrease in casualty injury cost achieved by fitting an AEB system was approximately equivalent to that achieved by increasing the passive safety rating from poor to average. Because the assessment was influenced strongly by the level of head protection offered in the scuttle and windscreen area, a hypothetical A-pillar airbag showed high potential to reduce overall casualty cost. A benefit-based methodology for assessment of integrated pedestrian protection systems with AEB has been developed and tested. It uses input from AEB tests and Euro NCAP passive safety tests to give an integrated assessment of the system performance, which includes consideration of effects such as the change in head impact location caused by the impact speed reduction given by the AEB.
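
    A minimal sketch of the expected-cost bookkeeping: each casualty in the target population is "impacted" at a speed reduced by the AEB, an injury probability is read from a risk curve, converted to a harm-type cost, weighted, summed, and normalized by a reference car. The risk curve, costs, weights, and speed reduction are illustrative assumptions, not the German or Great Britain calibrations.

    ```python
    # Sketch: benefit-based rating cost for an assessed car with AEB, normalized
    # by the same cost computed for a reference car without AEB.
    import math

    def injury_probability(speed_kph, alpha=-6.0, beta=0.15):
        """Hypothetical logistic risk curve for a serious injury."""
        return 1.0 / (1.0 + math.exp(-(alpha + beta * speed_kph)))

    def expected_cost(casualty_speeds, aeb_speed_reduction, harm_cost, weights):
        total = 0.0
        for v, w in zip(casualty_speeds, weights):
            v_impact = max(0.0, v - aeb_speed_reduction)   # AEB reduces impact speed
            total += w * injury_probability(v_impact) * harm_cost
        return total

    speeds = [20, 30, 40, 50, 60]            # kph, casualty speeds from accident data
    weights = [0.30, 0.30, 0.20, 0.15, 0.05] # accident-data weighting (assumed)
    assessed = expected_cost(speeds, aeb_speed_reduction=10.0,
                             harm_cost=250_000, weights=weights)
    reference = expected_cost(speeds, aeb_speed_reduction=0.0,
                              harm_cost=250_000, weights=weights)
    print(f"Normalized rating cost: {assessed / reference:.2f}")
    ```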

  10. A Reexamination of the Emergy Input to a System from the Wind.

    EPA Science Inventory

    The wind energy absorbed in the global boundary layer (GBL, 900 mb surface) is the basis for calculating the wind emergy input for any system on the Earth’s surface. Estimates of the wind emergy input to a system depend on the amount of wind energy dissipated, which can have a ra...

  11. 40 CFR 97.142 - CAIR NOX allowance allocations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... heat input for each year calculated as follows: (A) If the unit is coal-fired during the year, the unit... the first such 5 years. (2)(i) A unit's control period heat input, and a unit's status as coal-fired... Allocations § 97.142 CAIR NOX allowance allocations. (a)(1) The baseline heat input (in mmBtu) used with...

  12. Estimating the costs of psychiatric hospital services at a public health facility in Nigeria.

    PubMed

    Ezenduka, Charles; Ichoku, Hyacinth; Ochonma, Ogbonnia

    2012-09-01

    Information on the cost of mental health services in Africa is very limited even though mental health disorders represent a significant public health concern, in terms of health and economic impact. Cost analysis is important for planning and for efficiency in the provision of hospital services. The study estimated the total and unit costs of psychiatric hospital services to guide policy and psychiatric hospital management efficiency in Nigeria. The study was exploratory and analytical, examining 2008 data. A standard costing methodology based on an ingredient approach was adopted, combining a top-down method with a step-down approach to allocate resources (overhead and indirect costs) to the final cost centers. Total and unit cost items related to the treatment of psychiatric patients (including the costs of personnel, overhead and annualised costs of capital items) were identified and measured on the basis of outpatients' visits, inpatients' days and inpatients' admissions. The exercise reflected the input-output process of hospital services, where inputs were measured in terms of resource utilisation and output measured by activities carried out at both the outpatient and inpatient departments. In the estimation process, total costs were calculated at every cost center/department and divided by a measure of corresponding patient output to produce the average cost per output. This followed a stepwise process of first allocating the direct costs of overhead to the intermediate and final cost centers and from intermediate cost centers to final cost centers for the calculation of total and unit costs. Costs were calculated from the perspective of the healthcare facility, and converted to US Dollars at the 2008 exchange rate. Personnel constituted the greatest resource input in all departments, averaging 80% of total hospital cost, reflecting the mix of capital and recurrent inputs. Cost per inpatient day, at $56, was equivalent to 1.4 times the cost per outpatient visit at $41, while cost per emergency visit was about two times the cost per outpatient visit. The cost of one psychiatric inpatient admission averaged $3,675, including the costs of drugs and laboratory services, which was equivalent to the cost of 90 outpatients' visits. Cost of drugs was about 4.4% of the total costs and each prescription averaged $7.48. The male ward was the most expensive cost center. Levels of subsidization for inpatient services were over 90%, while ancillary services were not subsidized, hence full cost recovery. The hospital costs were driven by personnel, which reflected the mix of inputs that relied most on technical manpower. The unit cost estimates are significantly higher than the upper limit range for low income countries based on the WHO-CHOICE estimates. Findings suggest a scope for improving efficiency of resource use, given the high proportion of fixed costs, which indicates excess capacity. Adequate research is needed for effective comparisons and valid assessment of efficiency in psychiatric hospital services in Africa. The unit cost estimates will be useful in making projections for a total psychiatric hospital package and as a basis for determining the cost of specific neuropsychiatric cases.
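
    A minimal sketch of the allocation-and-unit-cost arithmetic: overhead cost-centre totals are distributed to final cost centres in proportion to an allocation basis, then unit costs are the allocated totals divided by activity volumes. The figures and allocation shares are illustrative, not the hospital's 2008 accounts, and the full step-down method cascades through intermediate cost centres as well.

    ```python
    # Sketch: simplified overhead allocation to final cost centres and unit costs.
    overhead = {"administration": 120_000.0, "laundry/maintenance": 40_000.0}
    final_centres = {
        # name: [direct cost ($), allocation share, annual output, output unit]
        "outpatient":  [200_000.0, 0.40, 12_000, "visits"],
        "male ward":   [300_000.0, 0.35,  9_000, "inpatient days"],
        "female ward": [250_000.0, 0.25,  8_000, "inpatient days"],
    }

    total_overhead = sum(overhead.values())
    for name, (direct, share, output, unit) in final_centres.items():
        total = direct + share * total_overhead          # allocate overhead
        print(f"{name}: total ${total:,.0f}, "
              f"unit cost ${total / output:.2f} per {unit.rstrip('s')}")
    ```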

  13. Machine learning for toxicity characterization of organic chemical emissions using USEtox database: Learning the structure of the input space.

    PubMed

    Marvuglia, Antonino; Kanevski, Mikhail; Benetto, Enrico

    2015-10-01

    Toxicity characterization of chemical emissions in Life Cycle Assessment (LCA) is a complex task which usually proceeds via multimedia (fate, exposure and effect) models attached to models of dose-response relationships to assess the effects on the target. Different models and approaches do exist, but all require a vast amount of data on the properties of the chemical compounds being assessed, which are hard to collect or hardly publicly available (especially for thousands of less common or newly developed chemicals), therefore hampering the assessment in LCA in practice. An example is USEtox, a consensual model for the characterization of human toxicity and freshwater ecotoxicity. This paper places itself in a line of research aiming at providing a methodology to reduce the number of input parameters necessary to run multimedia fate models, focusing in particular on the application of the USEtox toxicity model. By focusing on USEtox, two main goals are pursued in this paper: 1) performing an extensive exploratory analysis (using dimensionality reduction techniques) of the input space constituted by the substance-specific properties, with the aim of detecting particular patterns in the data manifold and estimating the dimension of the subspace in which the data manifold actually lies; and 2) exploring the application of a set of linear models, based on partial least squares (PLS) regression, as well as a nonlinear model (general regression neural network, GRNN), in search of an automatic strategy for selecting the most informative variables according to the modelled output (USEtox factor). After extensive analysis, the intrinsic dimension of the input manifold has been identified as between three and four. The variables selected as most informative may vary according to the output modelled and the model used, but for the toxicity factors modelled in this paper the input variables selected as most informative are coherent with prior expectations based on scientific knowledge of toxicity factor modelling. The outcomes of the analysis are thus promising for the future application of the approach to other portions of the model affected by important data gaps, e.g., the calculation of human health effect factors.
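
    The two analysis steps described above (estimating the intrinsic dimension of the input space and ranking variables against a modelled output) can be sketched with standard tools. The sketch below uses scikit-learn PCA and PLS regression on synthetic data; it does not reproduce the USEtox substance dataset or the paper's GRNN model.

```python
# Illustrative sketch on synthetic data:
# (1) estimate the intrinsic dimension of the input space with PCA,
# (2) rank input variables by their absolute PLS regression coefficients.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_substances, n_props = 200, 8
latent = rng.normal(size=(n_substances, 3))            # data actually live on ~3 dimensions
mixing = rng.normal(size=(3, n_props))
X = latent @ mixing + 0.05 * rng.normal(size=(n_substances, n_props))
y = latent @ np.array([1.5, -0.8, 0.3]) + 0.1 * rng.normal(size=n_substances)

# (1) intrinsic dimension: number of components explaining, say, 95% of the variance
pca = PCA().fit(X)
intrinsic_dim = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.95) + 1)
print("estimated intrinsic dimension:", intrinsic_dim)

# (2) variable ranking from the magnitude of the PLS regression coefficients
pls = PLSRegression(n_components=intrinsic_dim).fit(X, y)
ranking = np.argsort(-np.abs(pls.coef_.ravel()))
print("most informative input variables (indices):", ranking[:4])
```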

  14. Development of size reduction equations for calculating power input for grinding pine wood chips using hammer mill

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naimi, Ladan J.; Collard, Flavien; Bi, Xiaotao

    Size reduction is an unavoidable operation for preparing biomass for biofuels and bioproduct conversion. Yet, there is considerable uncertainty in the power input requirement and the uniformity of ground biomass. Considerable gains are possible if the required power input for a size reduction ratio is estimated accurately. In this research, three well-known mechanistic equations attributed to Rittinger, Kick, and Bond for predicting the energy input for grinding pine wood chips were tested against experimental grinding data. Prior to testing, samples of pine wood chips were conditioned to 11.7% (wet basis) moisture content. The wood chips were successively ground in a hammer mill using screen sizes of 25.4 mm, 10 mm, 6.4 mm, and 3.2 mm. The input power and the flow of material into the grinder were recorded continuously. The recorded power input vs. mean particle size showed that the Rittinger equation had the best fit to the experimental data. The ground particle sizes were 4 to 7 times smaller than the size of the installed screen. The geometric mean size of particles was calculated using two methods: (1) Tyler sieves and particle size analysis, and (2) the Sauter mean diameter calculated from the ratio of volume to surface, estimated from measured length and width. The two mean diameters agreed well, pointing to the fact that either mechanical sieving or particle imaging can be used to characterize particle size. In conclusion, specific energy input to the hammer mill increased from 1.4 kWh t-1 (5.2 J g-1) for the large 25.4-mm screen to 25 kWh t-1 (90.4 J g-1) for the small 3.2-mm screen.
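
    The three size-reduction laws named above have standard textbook forms, sketched below with feed size x_f and product size x_p. The constants C_R, C_K and W_i are placeholders for illustration, not the values fitted to the pine-chip data, and the Bond form is written in its usual micrometre convention.

```python
# Standard forms of the Rittinger, Kick and Bond size-reduction laws.
# E is specific energy, x_f and x_p are feed and product sizes in mm; the constants
# are illustrative placeholders, not fitted values from the study above.
import math

def rittinger(x_f, x_p, C_R=10.0):
    """Energy proportional to the new surface area created."""
    return C_R * (1.0 / x_p - 1.0 / x_f)

def kick(x_f, x_p, C_K=5.0):
    """Energy proportional to the logarithm of the size-reduction ratio."""
    return C_K * math.log(x_f / x_p)

def bond(x_f, x_p, W_i=12.0):
    """Bond's law, conventionally written with sizes in micrometres."""
    return 10.0 * W_i * (1.0 / math.sqrt(x_p * 1000) - 1.0 / math.sqrt(x_f * 1000))

feed, product = 25.4, 3.2  # mm, matching the largest and smallest screens above
for law in (rittinger, kick, bond):
    print(f"{law.__name__}: {law(feed, product):.2f} (arbitrary energy units)")
```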

  15. Development of size reduction equations for calculating power input for grinding pine wood chips using hammer mill

    DOE PAGES

    Naimi, Ladan J.; Collard, Flavien; Bi, Xiaotao; ...

    2016-01-05

    Size reduction is an unavoidable operation for preparing biomass for biofuels and bioproduct conversion. Yet, there is considerable uncertainty in the power input requirement and the uniformity of ground biomass. Considerable gains are possible if the required power input for a size reduction ratio is estimated accurately. In this research, three well-known mechanistic equations attributed to Rittinger, Kick, and Bond for predicting the energy input for grinding pine wood chips were tested against experimental grinding data. Prior to testing, samples of pine wood chips were conditioned to 11.7% (wet basis) moisture content. The wood chips were successively ground in a hammer mill using screen sizes of 25.4 mm, 10 mm, 6.4 mm, and 3.2 mm. The input power and the flow of material into the grinder were recorded continuously. The recorded power input vs. mean particle size showed that the Rittinger equation had the best fit to the experimental data. The ground particle sizes were 4 to 7 times smaller than the size of the installed screen. The geometric mean size of particles was calculated using two methods: (1) Tyler sieves and particle size analysis, and (2) the Sauter mean diameter calculated from the ratio of volume to surface, estimated from measured length and width. The two mean diameters agreed well, pointing to the fact that either mechanical sieving or particle imaging can be used to characterize particle size. In conclusion, specific energy input to the hammer mill increased from 1.4 kWh t-1 (5.2 J g-1) for the large 25.4-mm screen to 25 kWh t-1 (90.4 J g-1) for the small 3.2-mm screen.

  16. Aeroacoustic Codes for Rotor Harmonic and BVI Noise. CAMRAD.Mod1/HIRES: Methodology and Users' Manual

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.; Brooks, Thomas F.; Burley, Casey L.; Jolly, J. Ralph, Jr.

    1998-01-01

    This document details the methodology and use of the CAMRAD.Mod1/HIRES codes, which were developed at NASA Langley Research Center for the prediction of helicopter harmonic and Blade-Vortex Interaction (BVI) noise. CAMRAD.Mod1 is a substantially modified version of the performance/trim/wake code CAMRAD. High-resolution blade loading is determined in post-processing by HIRES and an associated indicial aerodynamics code. Extensive capabilities of importance to noise prediction accuracy are documented, including a new multi-core tip vortex roll-up wake model, higher harmonic and individual blade control, tunnel and fuselage correction input, diagnostic blade motion input, and interfaces for acoustic and CFD aerodynamics codes. Modifications and new code capabilities are documented with examples. A users' job preparation guide and listings of variables and namelists are given.

  17. USGS assessment of water and proppant requirements and water production associated with undiscovered petroleum in the Bakken and Three Forks Formations

    USGS Publications Warehouse

    Haines, Seth S.; Varela, Brian; Hawkins, Sarah J.; Gianoutsos, Nicholas J.; Tennyson, Marilyn E.

    2017-01-01

    The U.S. Geological Survey (USGS) has conducted an assessment of water and proppant requirements, and water production volumes, associated with possible future production of undiscovered petroleum resources in the Bakken and Three Forks Formations, Williston Basin, USA. This water and proppant assessment builds directly from the 2013 USGS petroleum assessment for the Bakken and Three Forks Formations, and it has been conducted using a new water and proppant assessment methodology that builds from the established USGS methodology for assessment of undiscovered petroleum in continuous reservoirs. We determined the assessment input values through extensive analysis of available data on per-well water and proppant use for hydraulic fracturing, including trends over time and space. We determined other assessment inputs through analysis of regional water-production trends.

  18. Sizing the science data processing requirements for EOS

    NASA Technical Reports Server (NTRS)

    Wharton, Stephen W.; Chang, Hyo D.; Krupp, Brian; Lu, Yun-Chi

    1991-01-01

    The methodology used in the compilation and synthesis of baseline science requirements associated with the 30 + EOS (Earth Observing System) instruments and over 2,400 EOS data products (both output and required input) proposed by EOS investigators is discussed. A brief background on EOS and the EOS Data and Information System (EOSDIS) is presented, and the approach is outlined in terms of a multilayer model. The methodology used to compile, synthesize, and tabulate requirements within the model is described. The principal benefit of this approach is the reduction of effort needed to update the analysis and maintain the accuracy of the science data processing requirements in response to changes in EOS platforms, instruments, data products, processing center allocations, or other model input parameters. The spreadsheets used in the model provide a compact representation, thereby facilitating review and presentation of the information content.

  19. Microbial Communities Model Parameter Calculation for TSPA/SR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. Jolley

    2001-07-16

    This calculation has several purposes. First, the calculation reduces the information contained in ''Committed Materials in Repository Drifts'' (BSC 2001a) to usable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c), with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs free energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic (second-order) regression relationships that are used in the energy-limiting calculations of the potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current ''In-Drift Microbial Communities Model'' revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in Section 6.5.2.3 of the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.
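
    The quadratic (second-order) regression relationships mentioned above can be fitted with a few lines of standard numerical code, as sketched below. The temperature and ΔG values are synthetic placeholders, not the qualified MING parameter set.

```python
# Sketch of fitting a second-order (quadratic) regression of the kind used to correct
# Gibbs free energy for near-field temperature changes. Data points are hypothetical.
import numpy as np

temperature_C = np.array([25.0, 40.0, 60.0, 80.0, 95.0])
delta_G = np.array([-120.0, -116.5, -111.0, -104.5, -99.0])  # hypothetical kJ/mol values

# numpy.polyfit returns coefficients [a, b, c] for a*T**2 + b*T + c
a, b, c = np.polyfit(temperature_C, delta_G, deg=2)
print(f"dG(T) ~ {a:.4e}*T^2 + {b:.4e}*T + {c:.4f}")

# Evaluate the fitted regression at an intermediate temperature
T = 70.0
print("predicted dG at 70 C:", a * T**2 + b * T + c)
```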

  20. Designing insulation for cryogenic ducts

    NASA Astrophysics Data System (ADS)

    Love, C. C.

    1984-03-01

    It is pointed out that the great temperature difference between the outside of a cryogenic duct and the liquified gas it carries can cause a high heat input unless blocked by a high thermal resistance. High thermal resistance for lines needing maximum insulation is provided by metal vacuum jackets. Low-density foam is satisfactory in cases in which higher heat input can be tolerated. Attention is given to the heat transfer through a duct vacuum jacket, the calculation of heat input and the exterior surface's steady-state temperature for various thicknesses of insulation, the calculation of the heat transfer through gimbal jackets, and design specifications regarding the allowable pressure rise in the jacket's annular space.
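
    For the foam-insulated case, the heat-input calculation reduces to steady radial conduction through a cylindrical shell, Q' = 2*pi*k*(T_out - T_in)/ln(r_out/r_in) per unit length. The sketch below uses illustrative values, not design figures from the article.

```python
# Steady-state heat input per unit length through cylindrical foam insulation
# on a cryogenic duct. All numbers are illustrative, not design specifications.
import math

k_foam = 0.03          # W/(m K), low-density foam (typical order of magnitude)
T_outer = 300.0        # K, ambient-side surface temperature
T_inner = 90.0         # K, roughly liquid-oxygen temperature
r_duct = 0.05          # m, duct outer radius

for thickness in (0.01, 0.02, 0.04):   # m of insulation
    r_out = r_duct + thickness
    q_per_m = 2 * math.pi * k_foam * (T_outer - T_inner) / math.log(r_out / r_duct)
    print(f"insulation {thickness*100:.0f} cm: heat input {q_per_m:.1f} W per metre of duct")
```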

  1. Computational thermochemistry: Automated generation of scale factors for vibrational frequencies calculated by electronic structure model chemistries

    NASA Astrophysics Data System (ADS)

    Yu, Haoyu S.; Fiedler, Lucas J.; Alecu, I. M.; Truhlar, Donald G.

    2017-01-01

    We present a Python program, FREQ, for determining the optimal scale factors for harmonic vibrational frequencies, fundamental vibrational frequencies, and zero-point vibrational energies obtained from electronic structure calculations. The program utilizes a previously published scale factor optimization model (Alecu et al., 2010) to efficiently obtain all three scale factors from a set of computed harmonic vibrational frequencies. In order to obtain the three scale factors, the user only needs to provide the zero-point energies of 15 or 6 selected molecules. If the user has access to the Gaussian 09 or Gaussian 03 program, we provide the option to run the program by entering the keywords for a given method and basis set in Gaussian 09 or Gaussian 03. Four other Python programs, input.py, input6, pbs.py, and pbs6.py, are also provided for generating Gaussian 09 or Gaussian 03 input and PBS files. The program can also be used with data from any other electronic structure package. A manual describing how to use this program is included in the code package.
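
    The basic least-squares idea behind frequency scale factors is the ratio lambda = sum(w_i * v_i) / sum(w_i**2), where w_i are computed harmonic frequencies and v_i are reference values. The sketch below shows only this elementary form; the published FREQ program implements the more elaborate Alecu et al. (2010) scheme, and the frequencies used here are placeholders.

```python
# Minimal sketch of the basic least-squares frequency scale factor; not the FREQ
# program itself. All frequencies below are hypothetical placeholders.
import numpy as np

computed = np.array([3050.0, 1720.0, 1150.0, 980.0])    # cm^-1, computed harmonic values
reference = np.array([2930.0, 1660.0, 1120.0, 955.0])   # cm^-1, reference benchmarks

scale = float(np.sum(computed * reference) / np.sum(computed**2))
rms = float(np.sqrt(np.mean((scale * computed - reference) ** 2)))
print(f"optimal scale factor: {scale:.4f}, RMS residual: {rms:.1f} cm^-1")
```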

  2. Pitot tube calculations with a TI-59

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, K.

    Industrial plant and stack analysis dictates that flow measurements in ducts be accurate. This is usually accomplished by running a traverse with a pitot tube across the duct or flue. A traverse is a series of measurements taken at predetermined points across the duct. The values of these measurements are converted into point flow rates and averaged. A program for the Texas Instruments TI-59 programmable calculator follows. The program will perform calculations for an unlimited number of test points, both with the standard (combined impact type) pitot tube and the S-type (combined reverse type). The type of tube is selected by inputting an indicating value that triggers a flag in the program. To use the standard pitot tube, a 1 is input into key E. When the S-type is used, a zero is input into key E. The program output will note if the S-type has been used. Since most process systems are not at standard conditions (32°F, 1 atm), the program takes this into account.
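
    The point-velocity arithmetic such a traverse program performs follows the standard relation v = C_p * sqrt(2*dp/rho), with C_p near 1.0 for a standard pitot tube and roughly 0.84 for an S-type. The sketch below uses illustrative traverse readings and gas properties, not values from the TI-59 program.

```python
# Sketch of a pitot-tube traverse calculation: point velocities from velocity pressure,
# averaged over the traverse, then multiplied by duct area for volumetric flow.
# Readings, density and area are illustrative placeholders.
import math

def point_velocity(dp_pa, rho, cp):
    """Velocity (m/s) from velocity pressure dp (Pa), gas density rho, tube coefficient cp."""
    return cp * math.sqrt(2.0 * dp_pa / rho)

readings_pa = [95.0, 110.0, 120.0, 118.0, 105.0, 90.0]   # velocity pressures at traverse points
rho_gas = 0.95            # kg/m^3 at actual stack conditions (not standard conditions)
duct_area = 1.2           # m^2
tube_coefficient = 0.84   # S-type; use ~1.0 for a standard pitot tube

velocities = [point_velocity(dp, rho_gas, tube_coefficient) for dp in readings_pa]
v_avg = sum(velocities) / len(velocities)
print(f"average velocity: {v_avg:.2f} m/s, flow: {v_avg * duct_area:.2f} m^3/s")
```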

  3. Monte Carlo Calculation of Thermal Neutron Inelastic Scattering Cross Section Uncertainties by Sampling Perturbed Phonon Spectra

    NASA Astrophysics Data System (ADS)

    Holmes, Jesse Curtis

    Nuclear data libraries provide fundamental reaction information required by nuclear system simulation codes. The inclusion of data covariances in these libraries allows the user to assess uncertainties in system response parameters as a function of uncertainties in the nuclear data. Formats and procedures are currently established for representing covariances for various types of reaction data in ENDF libraries. This covariance data is typically generated utilizing experimental measurements and empirical models, consistent with the method of parent data production. However, ENDF File 7 thermal neutron scattering library data is, by convention, produced theoretically through fundamental scattering physics model calculations. Currently, there is no published covariance data for ENDF File 7 thermal libraries. Furthermore, no accepted methodology exists for quantifying or representing uncertainty information associated with this thermal library data. The quality of thermal neutron inelastic scattering cross section data can be of high importance in reactor analysis and criticality safety applications. These cross sections depend on the material's structure and dynamics. The double-differential scattering law, S(alpha, beta), tabulated in ENDF File 7 libraries contains this information. For crystalline solids, S(alpha, beta) is primarily a function of the material's phonon density of states (DOS). Published ENDF File 7 libraries are commonly produced by calculation and processing codes, such as the LEAPR module of NJOY, which utilize the phonon DOS as the fundamental input for inelastic scattering calculations to directly output an S(alpha, beta) matrix. To determine covariances for the S(alpha, beta) data generated by this process, information about uncertainties in the DOS is required. The phonon DOS may be viewed as a probability density function of atomic vibrational energy states that exist in a material. Probable variation in the shape of this spectrum may be established that depends on uncertainties in the physics models and methodology employed to produce the DOS. Through Monte Carlo sampling of perturbations from the reference phonon spectrum, an S(alpha, beta) covariance matrix may be generated. In this work, density functional theory and lattice dynamics in the harmonic approximation are used to calculate the phonon DOS for hexagonal crystalline graphite. This form of graphite is used as an example material for the purpose of demonstrating procedures for analyzing, calculating and processing thermal neutron inelastic scattering uncertainty information. Several sources of uncertainty in thermal neutron inelastic scattering calculations are examined, including sources which cannot be directly characterized through a description of the phonon DOS uncertainty, and their impacts are evaluated. Covariances for hexagonal crystalline graphite S(alpha, beta) data are quantified by coupling the standard methodology of LEAPR with a Monte Carlo sampling process. The mechanics of efficiently representing and processing this covariance information is also examined. Finally, with appropriate sensitivity information, it is shown that an S(alpha, beta) covariance matrix can be propagated to generate covariance data for integrated cross sections, secondary energy distributions, and coupled energy-angle distributions. This approach enables a complete description of thermal neutron inelastic scattering cross section uncertainties which may be employed to improve the simulation of nuclear systems.
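
    The covariance-by-sampling step described above is, in outline, ordinary Monte Carlo statistics: perturb the input spectrum, recompute the output quantity, and form the sample covariance of the outputs. The schematic below uses a toy forward model in place of the LEAPR S(alpha, beta) calculation; the spectrum and perturbation model are illustrative only.

```python
# Schematic of generating an output covariance matrix by Monte Carlo sampling of
# perturbed input spectra. A toy forward model stands in for the physics code.
import numpy as np

rng = np.random.default_rng(42)
energy_grid = np.linspace(0.01, 0.2, 30)                      # eV
reference_dos = np.exp(-((energy_grid - 0.08) / 0.04) ** 2)   # toy phonon DOS shape

def forward_model(dos):
    """Stand-in for the physics code: maps a DOS to a small output vector."""
    return np.array([dos.sum(), (dos * energy_grid).sum(), (dos * energy_grid**2).sum()])

n_samples = 2000
outputs = np.empty((n_samples, 3))
for i in range(n_samples):
    # correlated, few-percent perturbation of the reference spectrum
    perturbation = 1.0 + 0.05 * rng.normal() + 0.03 * rng.normal(size=energy_grid.size)
    outputs[i] = forward_model(reference_dos * perturbation)

covariance = np.cov(outputs, rowvar=False)   # 3x3 sample covariance of the outputs
print(covariance)
```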

  4. Localized Elf Propagation Anomalies.

    DTIC Science & Technology

    1985-06-01

    the above three SPEs. Those important auxiliary data are used as inputs to air-chemistry codes to calculate the electron and ion density height ... the horizontal magnetic intensity H is given by equation (1), where A depends on the antenna moment, frequency ... those rates are input to air-chemistry equations to obtain height profiles of electron and ion densities. Calculation of ion-pair production rates

  5. Comparison of calculation methods for estimating annual carbon stock change in German forests under forest management in the German greenhouse gas inventory.

    PubMed

    Röhling, Steffi; Dunger, Karsten; Kändler, Gerald; Klatt, Susann; Riedel, Thomas; Stümer, Wolfgang; Brötz, Johannes

    2016-12-01

    The German greenhouse gas inventory in the land use change sector strongly depends on national forest inventory data. As these data were collected periodically, in 1987, 2002, 2008 and 2012, the time series of emissions shows several "jumps" due to biomass stock change, especially between 2001 and 2002 and between 2007 and 2008, while within the periods the emissions appear constant due to the application of periodic average emission factors. This does not reflect inter-annual variability in the time series, which would be expected because the drivers of carbon stock change fluctuate between years. Therefore additional data, which are available on an annual basis, should be introduced into the calculation of the emission inventories in order to obtain more plausible time series. This article explores the possibility of introducing an annual rather than a periodic approach to calculating emission factors with the given data and thus smoothing the trajectory of the time series for emissions from forest biomass. Two approaches are introduced to estimate annual changes derived from periodic data: the so-called logging factor method and the growth factor method. The logging factor method incorporates annual logging data to project annual values from periodic values. This is less complex to implement than the growth factor method, which additionally incorporates growth data into the calculations. Calculation of the input variables is based on sound statistical methodologies and periodically collected data that cannot be altered. Thus a discontinuous trajectory of the emissions over time remains, even after the adjustments. It is intended to adopt this approach in German greenhouse gas reporting in order to meet the request for annually adjusted values.

  6. Notice of Data Availability for Federal Implementation Plans To Reduce Interstate Transport of Fine Particulate Matter and Ozone: Request for Comment (76 FR 1109)

    EPA Pesticide Factsheets

    This NODA requests public comment on two alternative allocation methodologies for existing units, on the unit-level allocations calculated using those alternative methodologies, on the data supporting the calculations, and on any resulting implications.

  7. 40 CFR Table Nn-1 to Subpart Hh of... - Default Factors for Calculation Methodology 1 of This Subpart

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Calculation Methodology 1 of This Subpart Fuel Default high heating value factor Default CO2 emission factor (kg CO2/MMBtu) Natural Gas 1.028 MMBtu/Mscf 53.02 Propane 3.822 MMBtu/bbl 61.46 Normal butane 4.242...

  8. 40 CFR Table Nn-1 to Subpart Hh of... - Default Factors for Calculation Methodology 1 of This Subpart

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Calculation Methodology 1 of This Subpart Fuel Default high heating value factor Default CO2 emission factor (kg CO2/MMBtu) Natural Gas 1.028 MMBtu/Mscf 53.02 Propane 3.822 MMBtu/bbl 61.46 Normal butane 4.242...

  9. 40 CFR Table Nn-1 to Subpart Hh of... - Default Factors for Calculation Methodology 1 of This Subpart

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Calculation Methodology 1 of This Subpart Fuel Default high heating value factor Default CO2 emission factor (kg CO2/MMBtu) Natural Gas 1.028 MMBtu/Mscf 53.02 Propane 3.822 MMBtu/bbl 61.46 Normal butane 4.242...

  10. Antitheft container for instruments

    NASA Technical Reports Server (NTRS)

    Kerley, J. J., Jr.

    1979-01-01

    Antitheft container is used to prevent theft of calculators, portable computers, and other small instruments. Container design is simple and flexible enough to allow easy access to display or input systems of instruments, while not interfering with power input to device.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryan, Charles R.; Weck, Philippe F.; Vaughn, Palmer

    Report RWEV-REP-001, Analysis of Postclosure Groundwater Impacts for a Geologic Repository for the Disposal of Spent Nuclear Fuel and High Level Radioactive Waste at Yucca Mountain, Nye County, Nevada was issued by the DOE in 2009 and is currently being updated. Sandia National Laboratories (SNL) provided support for the original document, performing calculations and extracting data from the Yucca Mountain Performance Assessment Model that were used as inputs to the contaminant transport and dose calculations by Jason Associates Corporation, the primary developers of the DOE report. The inputs from SNL were documented in LSA-AR-037, Inputs to Jason Associates Corporation in Support of the Postclosure Repository Supplemental Environmental Impact Statement. To support the updating of the original Groundwater Impacts document, SNL has reviewed the inputs provided in LSA-AR-037 to verify that they are current and appropriate for use. The results of that assessment are documented here.

  12. Multiple-Input Subject-Specific Modeling of Plasma Glucose Concentration for Feedforward Control.

    PubMed

    Kotz, Kaylee; Cinar, Ali; Mei, Yong; Roggendorf, Amy; Littlejohn, Elizabeth; Quinn, Laurie; Rollins, Derrick K

    2014-11-26

    The ability to accurately develop subject-specific input causation models for blood glucose concentration (BGC) over large input sets can have a significant impact on tightening control for insulin-dependent diabetes. More specifically, for Type 1 diabetics (T1Ds), it can lead to an effective artificial pancreas (i.e., an automatic control system that delivers exogenous insulin) under extreme changes in critical disturbances. These disturbances include food consumption, activity variations, and physiological stress changes. Thus, this paper presents a free-living, outpatient, multiple-input modeling method for BGC with strong causation attributes that is stable and guards against overfitting, providing an effective modeling approach for feedforward control (FFC). This approach is a Wiener block-oriented methodology, which has unique attributes for meeting the critical requirements of effective, long-term FFC.

  13. IJS procedure for RELAP5 to TRACE input model conversion using SNAP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prosek, A.; Berar, O. A.

    2012-07-01

    The TRAC/RELAP Advanced Computational Engine (TRACE), an advanced, best-estimate reactor systems code developed by the U.S. Nuclear Regulatory Commission, comes with a graphical user interface called the Symbolic Nuclear Analysis Package (SNAP). Considerable effort has been devoted in the past to developing RELAP5 input decks. The purpose of this study is to demonstrate the Institut 'Josef Stefan' (IJS) procedure for converting the RELAP5 input model of the BETHSY facility to a TRACE input model. The IJS conversion procedure consists of eleven steps and is based on the use of SNAP. For calculations of the selected BETHSY 6.2TC test, RELAP5/MOD3.3 Patch 4 and TRACE V5.0 Patch 1 were used. The selected BETHSY 6.2TC test was a 15.24 cm equivalent-diameter horizontal cold leg break in the reference pressurized water reactor without high-pressure and low-pressure safety injection. The application of the IJS procedure for conversion of the BETHSY input model showed that it is important to perform the steps in the proper sequence. The overall calculated results obtained with TRACE using the converted RELAP5 model were close to the experimental data and comparable to the RELAP5/MOD3.3 calculations. Therefore it can be concluded that the proposed IJS conversion procedure was successfully demonstrated on the BETHSY integral test facility input model. (authors)

  14. Optimization of a GO2/GH2 Impinging Injector Element

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar

    2001-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) impinging injector element. The unlike impinging element, a fuel-oxidizer- fuel (F-O-F) triplet, is optimized in terms of design variables such as fuel pressure drop, (Delta)P(sub f), oxidizer pressure drop, (Delta)P(sub o), combustor length, L(sub comb), and impingement half-angle, alpha, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 163 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface which includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio. Finally, specific variable weights are further increased to illustrate the high marginal cost of realizing the last increment of injector performance and thruster weight.
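
    The composite-response step (combining several responses through desirability functions) follows the standard idea of mapping each response to [0, 1] and taking a weighted geometric mean. The sketch below is a generic illustration of that idea, not the method i implementation, and all response values and limits are hypothetical.

```python
# Generic sketch of combining several responses via desirability functions.
# Each response is mapped to [0, 1]; the composite is a weighted geometric mean.
import numpy as np

def desirability_larger_is_better(y, low, high):
    """0 below 'low', 1 above 'high', linear in between."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

def desirability_smaller_is_better(y, low, high):
    return np.clip((high - y) / (high - low), 0.0, 1.0)

def composite(desirabilities, weights):
    d = np.asarray(desirabilities, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.prod(d ** (w / w.sum())))   # weighted geometric mean

# Hypothetical candidate design: ERE (maximize), wall heat flux and weight (minimize)
d_ere = desirability_larger_is_better(0.97, low=0.90, high=0.99)
d_qw = desirability_smaller_is_better(35.0, low=20.0, high=60.0)
d_w = desirability_smaller_is_better(1.1, low=0.8, high=1.5)

print("composite desirability:", composite([d_ere, d_qw, d_w], weights=[2.0, 1.0, 1.0]))
```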

  15. An approach for delineating drinking water wellhead protection areas at the Nile Delta, Egypt.

    PubMed

    Fadlelmawla, Amr A; Dawoud, Mohamed A

    2006-04-01

    In Egypt, production has a high priority. To this end, protecting the quality of the groundwater, specifically when it is used for drinking water, and delineating protection areas around drinking water wellheads for strict land-use restrictions are essential. Delineation methods are numerous; nonetheless, the uniqueness of the hydrogeological, institutional and social conditions in the Nile Delta region dictates a customized approach. The analysis of the hydrological conditions and land ownership in the Nile Delta indicates the need for an accurate methodology. On the other hand, attempting to calculate the wellhead protection areas around each of the drinking wells (more than 1,500) requires data, human resources, and time that exceed the capabilities of the groundwater management agency. Accordingly, a combination of two methods (simplified variable shapes and numerical modeling) was adopted. Sensitivity analyses carried out using hypothetical modeling conditions identified the pumping rate, clay thickness, hydraulic gradient, vertical conductivity of the clay, and the hydraulic conductivity as the most significant parameters in determining the dimensions of the wellhead protection areas (WHPAs). Tables of WHPA dimensions were calculated using synthetic modeling conditions representing the most common ranges of the significant parameters. Specific WHPA dimensions can be calculated by interpolation, utilizing the produced tables along with the operational and hydrogeological conditions for the well under consideration. In order to simplify the interpolation of the appropriate WHPA dimensions from the calculated tables, an interactive computer program was written. The program accepts real-time data for the significant parameters as its input and gives the appropriate WHPA dimensions as its output.

  16. Significance of stress transfer in time-dependent earthquake probability calculations

    USGS Publications Warehouse

    Parsons, T.

    2005-01-01

    A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? Data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant earthquake probability change? To answer that question, the effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point-process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress change to stressing rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus, revision to earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.

  17. Assessment of Antarctic Ice-Sheet Mass Balance Estimates: 1992 - 2009

    NASA Technical Reports Server (NTRS)

    Zwally, H. Jay; Giovinetto, Mario B.

    2011-01-01

    Published mass balance estimates for the Antarctic Ice Sheet (AIS) lie between approximately +50 and -250 Gt/year for 1992 to 2009, which span a range equivalent to 15% of the annual mass input and 0.8 mm/year Sea Level Equivalent (SLE). Two estimates from radar-altimeter measurements of elevation change by European Remote-sensing Satellites (ERS) (+28 and -31 Gt/year) lie in the upper part, whereas estimates from the Input-minus-Output Method (IOM) and the Gravity Recovery and Climate Experiment (GRACE) lie in the lower part (-40 to -246 Gt/year). We compare the various estimates, discuss the methodology used, and critically assess the results. Although recent reports of large and accelerating rates of mass loss from GRACE-based studies cite agreement with IOM results, our evaluation does not support that conclusion. We find that the extrapolation used in the published IOM estimates for the 15% of the periphery for which discharge velocities are not observed gives twice the rate of discharge per unit of associated ice-sheet area compared with the 85% of faster-moving parts. Our calculations show that the published extrapolation overestimates the ice discharge by 282 Gt/year compared to our assumption that the slower-moving areas have 70% as much discharge per unit area as the faster-moving parts. Also, published data on the time series of discharge velocities and accumulation/precipitation do not support mass output increases or input decreases with time, respectively. Our modified IOM estimate, using the 70% discharge assumption and substituting input from a field-data compilation for input from an atmospheric model over 6% of the area, gives a loss of only 13 Gt/year (versus 136 Gt/year) for the period around 2000. Two ERS-based estimates, our modified IOM, and a GRACE-based estimate for observations within 1992 to 2005 lie in a narrowed range of +27 to -40 Gt/year, which is about 3% of the annual mass input and only 0.2 mm/year SLE. Our preferred estimate for 1992-2001 is -47 Gt/year for West Antarctica, +16 Gt/year for East Antarctica, and -31 Gt/year overall (+0.1 mm/year SLE), not including part of the Antarctic Peninsula (1.07% of the AIS area).

  18. Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes

    NASA Astrophysics Data System (ADS)

    Hirsch, Damian; Gharib, Morteza

    2016-11-01

    Active Flow Control (AFC) is an emerging technology which aims at enhancing the aerodynamic performance of flight vehicles (i.e., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (i.e., airplane incremental lift) demands a higher input fluidic requirement (i.e., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. In the literature two main approaches can be found, which both have their own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model that can be applied to most jet designs (i.e., steady or sweeping jets). The model-incorporated assumptions will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight to the AFC technology and its physical limitations. Supported by Boeing.
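
    A common definition of the input momentum coefficient in the AFC literature is C_mu = (m_dot * U_jet) / (q_inf * S_ref), with q_inf = 0.5 * rho * U_inf^2. The sketch below evaluates this conventional form with illustrative numbers; the paper's simplified model, which avoids measuring the jet velocity directly, is not reproduced here.

```python
# Sketch of the conventional momentum-coefficient definition
#   C_mu = (m_dot * U_jet) / (q_inf * S_ref),  q_inf = 0.5 * rho * U_inf**2.
# All values are illustrative placeholders.
rho_air = 1.225        # kg/m^3
U_inf = 50.0           # m/s free-stream velocity
S_ref = 0.5            # m^2 reference area
m_dot = 0.05           # kg/s total actuator mass flow
U_jet = 180.0          # m/s effective jet exit velocity

q_inf = 0.5 * rho_air * U_inf**2
C_mu = (m_dot * U_jet) / (q_inf * S_ref)
print(f"C_mu = {C_mu:.4f}")
```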

  19. Sources of oxygen flux in groundwater during induced bank filtration at a site in Berlin, Germany

    NASA Astrophysics Data System (ADS)

    Kohfahl, Claus; Massmann, Gudrun; Pekdeger, Asaf

    2009-05-01

    The microbial degradation of pharmaceuticals found in surface water used for artificial recharge is strongly dependent on redox conditions of the subsurface. Furthermore the durability of production wells may decrease considerably with the presence of oxygen and ferrous iron due to the precipitation of trivalent iron oxides and subsequent clogging. Field measurements are presented for oxygen at a bank filtration site in Berlin, Germany, along with simplified calculations of different oxygen pathways into the groundwater. For a two-dimensional vertical cross-section, oxygen input has been calculated for six scenarios related to different water management strategies. Calculations were carried out in order to assess the amount of oxygen input due to (1) the infiltration of oxic lake water, (2) air entrapment as a result of water table oscillations, (3) diffusive oxygen flux from soil air and (4) infiltrating rainwater. The results show that air entrapment and infiltrating lake water during winter constitute by far the most important mechanism of oxygen input. Oxygen input by percolating rainwater and by diffusive delivery of oxygen in the gas phase is negligible. The results exemplify the importance of well management as a determining factor for water oscillations and redox conditions during artificial recharge.

  20. 19 CFR 351.407 - Calculation of constructed value and cost of production.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...

  1. 19 CFR 351.407 - Calculation of constructed value and cost of production.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...

  2. 19 CFR 351.407 - Calculation of constructed value and cost of production.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...

  3. 19 CFR 351.407 - Calculation of constructed value and cost of production.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...

  4. 19 CFR 351.407 - Calculation of constructed value and cost of production.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... production. (See section 773(f) of the Act.) (b) Determination of value under the major input rule. For purposes of section 773(f)(3) of the Act, the Secretary normally will determine the value of a major input... to the affiliated person for the major input; (2) The amount usually reflected in sales of the major...

  5. Sensitivity of potential evapotranspiration and simulated flow to varying meteorological inputs, Salt Creek watershed, DuPage County, Illinois

    USGS Publications Warehouse

    Whitbeck, David E.

    2006-01-01

    The Lamoreux Potential Evapotranspiration (LXPET) Program computes potential evapotranspiration (PET) using inputs from four different meteorological sources: temperature, dewpoint, wind speed, and solar radiation. PET and the same four meteorological inputs are used with precipitation data in the Hydrological Simulation Program-Fortran (HSPF) to simulate streamflow in the Salt Creek watershed, DuPage County, Illinois. Streamflows from HSPF are routed with the Full Equations (FEQ) model to determine water-surface elevations. Consequently, variations in meteorological inputs have potential to propagate through many calculations. Sensitivity of PET to variation was simulated by increasing the meteorological input values by 20, 40, and 60 percent and evaluating the change in the calculated PET. Increases in temperatures produced the greatest percent changes, followed by increases in solar radiation, dewpoint, and then wind speed. Additional sensitivity of PET was considered for shifts in input temperatures and dewpoints by absolute differences of ±10, ±20, and ±30 degrees Fahrenheit (°F). Again, changes in input temperatures produced the greatest differences in PET. Sensitivity of streamflow simulated by HSPF was evaluated for 20-percent increases in meteorological inputs. These simulations showed that increases in temperature produced the greatest change in flow. Finally, peak water-surface elevations for nine storm events were compared among unmodified meteorological inputs and inputs with values predicted 6, 24, and 48 hours preceding the simulated peak. Results of this study can be applied to determine how errors specific to a hydrologic system will affect computations of system streamflow and water-surface elevations.

  6. Ethical Dilemmas for the Computational Linguist in the Business World.

    ERIC Educational Resources Information Center

    McCallum-Bayliss, Heather

    1993-01-01

    Reports on a computer application in which collaboration did not precede project design. Important project parameters established without author input presented ethical dilemmas in balancing contract obligations and methodological rigor. (Author/CK)

  7. Satellite Vibration Testing: Angle optimisation method to Reduce Overtesting

    NASA Astrophysics Data System (ADS)

    Knight, Charly; Remedia, Marcello; Aglietti, Guglielmo S.; Richardson, Guy

    2018-06-01

    Spacecraft overtesting is a long-running problem, and the main focus of most attempts to reduce it has been to adjust the base vibration input (i.e. notching). Instead, this paper examines testing alternatives for secondary structures (equipment) coupled to the main structure (satellite) when they are tested separately. Even if the vibration source is applied along one of the orthogonal axes at the base of the coupled system (satellite plus equipment), the dynamics of the system, and potentially the interface configuration, mean the vibration at the interface may not occur along a single axis, much less along the corresponding orthogonal axis of the base excitation. This paper proposes an alternative testing methodology in which the testing of a piece of equipment occurs at an offset angle. This Angle Optimisation method may involve multiple tests, each with an altered input direction, allowing the best match between all specified equipment system responses and those of the coupled-system tests. An optimisation process compares the calculated equipment RMS values for a range of inputs with the maximum coupled-system RMS values and is used to find the optimal testing configuration for the given parameters. A case study was performed to find the best testing angles to match the acceleration responses of the centre of mass and the sum of interface forces for all three axes, as well as the von Mises stress for an element by a fastening point. The Angle Optimisation method resulted in RMS values and PSD responses that were much closer to those of the coupled system when compared with traditional testing. The optimum testing configuration resulted in an overall average error significantly smaller than that of the traditional method. Crucially, this case study shows that the optimum test campaign could be a single equipment-level test, as opposed to the traditional three orthogonal-direction tests.

  8. Advances in Estimating Methane Emissions from Enteric Fermentation

    NASA Astrophysics Data System (ADS)

    Kebreab, E.; Appuhamy, R.

    2016-12-01

    Methane from enteric fermentation of livestock is the largest contributor to agricultural GHG emissions. The quantification of methane emissions from livestock on a global scale relies on prediction models because measurements require specialized equipment and may be expensive. Most countries use a fixed number (kg methane/year) or calculate emissions as a proportion of energy intake in national inventories. However, diet composition significantly regulates enteric methane production in addition to total feed intake, and it is thus a main target in formulating mitigation options. The two current methodologies are not able to assess mitigation options; therefore, new estimation methods that can take feed composition into account are required. The availability of information on livestock production systems has increased substantially, enabling the development of more detailed methane prediction models. A limited number of process-based models have been developed that represent the biological relationships in methane production; however, these require extensive inputs and specialized software that may not be easily available. Empirical models may provide a better alternative in practical situations due to their lower input requirements. Several models have been developed in the last 10 years, but none of them works equally well across all regions of the world. The more successful models, particularly in North America, require three major inputs: feed (or energy) intake, and the fiber and fat concentrations of the diet. Given the significant variability of emissions within regions, models that are able to capture regional variability in feed intake and diet composition perform best in model evaluation with independent data. The utilization of such models may reduce uncertainties associated with the prediction of methane emissions and allow a better examination and representation of policies regulating emissions from cattle.

  9. Life Cycle Assessment of Mixed Municipal Solid Waste: Multi-input versus multi-output perspective.

    PubMed

    Fiorentino, G; Ripa, M; Protano, G; Hornsby, C; Ulgiati, S

    2015-12-01

    This paper analyses four strategies for managing Mixed Municipal Solid Waste (MMSW) in terms of their environmental impacts and potential advantages by means of the Life Cycle Assessment (LCA) methodology. To this aim, both a multi-input and a multi-output approach are applied to evaluate the effect of these perspectives on selected impact categories. The analyzed management options include direct landfilling with energy recovery (S-1), Mechanical-Biological Treatment (MBT) followed by Waste-to-Energy (WtE) conversion (S-2), a combination of an innovative MBT/MARSS (Material Advanced Recovery Sustainable Systems) process and landfill disposal (S-3), and finally a combination of the MBT/MARSS process with WtE conversion (S-4). The MARSS technology, developed within a European LIFE PLUS framework and currently implemented at pilot-plant scale, is an innovative MBT plant whose main goal is to yield a Renewable Refined Biomass Fuel (RRBF) to be used for combined heat and power (CHP) production under the regulations enforced for biomass-based plants instead of Waste-to-Energy systems, for increased environmental performance. The four scenarios are characterized by different resource investments for plant and infrastructure construction and different quantities of matter, heat and electricity recovery and recycling. Results, calculated per unit mass of waste treated and per unit exergy delivered, under both multi-input and multi-output LCA perspectives, point to improved performance for scenarios characterized by increased matter and energy recovery. Although none of the investigated scenarios is capable of providing the best performance in all the analyzed impact categories, scenario S-4 shows the best LCA results in the human toxicity and freshwater eutrophication categories, i.e. the ones with the highest impacts in all waste management processes.

  10. NREL Improves Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2012-01-01

    This technical highlight describes NREL research to develop Building Energy Simulation Test for Existing Homes (BESTEST-EX) to increase the quality and accuracy of energy analysis tools for the building retrofit market. Researchers at the National Renewable Energy Laboratory (NREL) have developed a new test procedure to increase the quality and accuracy of energy analysis tools for the building retrofit market. The Building Energy Simulation Test for Existing Homes (BESTEST-EX) is a test procedure that enables software developers to evaluate the performance of their audit tools in modeling energy use and savings in existing homes when utility bills are available for model calibration. Similar to NREL's previous energy analysis tests, such as HERS BESTEST and other BESTEST suites included in ANSI/ASHRAE Standard 140, BESTEST-EX compares software simulation findings to reference results generated with state-of-the-art simulation tools such as EnergyPlus, SUNREL, and DOE-2.1E. The BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX includes building physics and utility bill calibration test cases. The diagram illustrates the utility bill calibration test cases. Participants are given input ranges and synthetic utility bills. Software tools use the utility bills to calibrate key model inputs and predict energy savings for the retrofit cases. Participant energy savings predictions using calibrated models are compared to NREL predictions using state-of-the-art building energy simulation programs.

  11. Reconstruction of an input function from a dynamic PET water image using multiple tissue curves

    NASA Astrophysics Data System (ADS)

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro

    2016-08-01

    Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H₂¹⁵O or C¹⁵O₂, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H₂¹⁵O PET image as a completely non-invasive approach. Our technique consisted of using a formula to express the input from a tissue curve with a rate constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences of the reproduced inputs expressed by the extracted tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 29) and was compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs were well reproduced against the measured ones. The difference between the calculated CBF values obtained using the two methods was small, at around <8%, and the calculated CBF values showed a tight correlation (r = 0.97). The simulation showed that errors associated with the assumed parameters were <10%, and that the optimal number of tissue curves to be used was around 500. Our results demonstrate that an IDIF can be reconstructed directly from tissue curves obtained through H₂¹⁵O PET imaging. This suggests the possibility of using a completely non-invasive technique to assess CBF in patho-physiological studies.

  12. Sediment residence times constrained by uranium-series isotopes: A critical appraisal of the comminution approach

    NASA Astrophysics Data System (ADS)

    Handley, Heather K.; Turner, Simon; Afonso, Juan C.; Dosseto, Anthony; Cohen, Tim

    2013-02-01

    Quantifying the rates of landscape evolution in response to climate change is inhibited by the difficulty of dating the formation of continental detrital sediments. We present uranium isotope data for Cooper Creek palaeochannel sediments from the Lake Eyre Basin in semi-arid South Australia in order to attempt to determine the formation ages and hence residence times of the sediments. To calculate the amount of recoil loss of 234U, a key input parameter used in the comminution approach, we use two suggested methods (weighted geometric and surface area measurement with an incorporated fractal correction) and typical assumed input parameter values found in the literature. The calculated recoil loss factors and comminution ages are highly dependent on the method of recoil loss factor determination used and the chosen assumptions. To appraise the ramifications of the assumptions inherent in the comminution age approach and determine individual and combined comminution age uncertainties associated to each variable, Monte Carlo simulations were conducted for a synthetic sediment sample. Using a reasonable associated uncertainty for each input factor and including variations in the source rock and measured (234U/238U) ratios, the total combined uncertainty on comminution age in our simulation (for both methods of recoil loss factor estimation) can amount to ±220-280 ka. The modelling shows that small changes in assumed input values translate into large effects on absolute comminution age. To improve the accuracy of the technique and provide meaningful absolute comminution ages, much tighter constraints are required on the assumptions for input factors such as the fraction of α-recoil lost 234Th and the initial (234U/238U) ratio of the source material. In order to be able to directly compare calculated comminution ages produced by different research groups, the standardisation of pre-treatment procedures, recoil loss factor estimation and assumed input parameter values is required. We suggest a set of input parameter values for such a purpose. Additional considerations for calculating comminution ages of sediments deposited within large, semi-arid drainage basins are discussed.
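
    The Monte Carlo propagation of input uncertainties described above can be sketched with a commonly used form of the comminution-age equation, in which the measured (234U/238U) activity ratio decays from the source-rock ratio toward the recoil-controlled value (1 - f): t = -ln((A_m - (1 - f)) / (A_0 - (1 - f))) / lambda_234. This form and all central values and uncertainties below are illustrative assumptions, not the Cooper Creek data or the paper's preferred parameterisation.

```python
# Monte Carlo propagation of uncertainties through an assumed form of the
# comminution-age equation. All inputs are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)
lam_234 = 2.82e-6                     # 1/yr, decay constant of 234U
n = 100_000

A_meas = rng.normal(0.940, 0.005, n)  # measured (234U/238U) activity ratio
A_0 = rng.normal(1.000, 0.010, n)     # assumed source-rock ratio
f = rng.normal(0.10, 0.03, n)         # recoil loss factor

valid = (A_meas > 1 - f) & (A_0 > A_meas)          # keep physically meaningful draws
t = -np.log((A_meas[valid] - (1 - f[valid])) / (A_0[valid] - (1 - f[valid]))) / lam_234

print(f"median age: {np.median(t)/1e3:.0f} ka, "
      f"68% interval: {np.percentile(t, 16)/1e3:.0f}-{np.percentile(t, 84)/1e3:.0f} ka")
```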

  13. NEWTONP - CUMULATIVE BINOMIAL PROGRAMS

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, NEWTONP, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), can be used independently of one another. NEWTONP can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. NEWTONP calculates the probability p required to yield a given system reliability V for a k-out-of-n system. It can also be used to determine the Clopper-Pearson confidence limits (either one-sided or two-sided) for the parameter p of a Bernoulli distribution. NEWTONP can determine Bayesian probability limits for a proportion (if the beta prior has positive integer parameters). It can determine the percentiles of incomplete beta distributions with positive integer parameters. It can also determine the percentiles of F distributions and the median plotting positions in probability plotting. NEWTONP is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. NEWTONP is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The NEWTONP program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. NEWTONP was developed in 1988.
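
    The core numerical task described above (finding the component probability p that yields a target k-out-of-n system reliability V) can be sketched with Newton's method on the cumulative binomial. The Python sketch below is an independent illustration of that calculation, not a port of the C program.

```python
# Find the component reliability p such that a k-out-of-n system meets a target
# reliability V, using Newton's method on the cumulative binomial.
from math import comb

def system_reliability(p, k, n):
    """P(at least k of n components work) for component reliability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def reliability_derivative(p, k, n):
    """d/dp of the k-out-of-n reliability (derivative of the regularized incomplete beta)."""
    return n * comb(n - 1, k - 1) * p**(k - 1) * (1 - p)**(n - k)

def required_p(v_target, k, n, p0=0.9, tol=1e-12, max_iter=100):
    p = p0
    for _ in range(max_iter):
        step = (system_reliability(p, k, n) - v_target) / reliability_derivative(p, k, n)
        p -= step
        p = min(max(p, 1e-12), 1 - 1e-12)   # keep the iterate inside (0, 1)
        if abs(step) < tol:
            break
    return p

# Example: component reliability needed for a 2-out-of-3 system to reach V = 0.999
p_req = required_p(0.999, k=2, n=3)
print(f"required component reliability: {p_req:.6f}")
print("check, system reliability:", system_reliability(p_req, 2, 3))
```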

  14. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

    Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, the MCMC sampling entails a large number of model calls, and could easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, and hence the MCMC sampling can be efficiently performed. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and used to run representative simulations to generate training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to sensitivity analysis, insensitive parameters are screened out of Bayesian inversion of the MODFLOW model, further saving computing efforts. The posterior probability distribution of input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
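
    The workflow can be sketched as: run the expensive model a modest number of times, fit a fast surrogate, and let the MCMC sampler call only the surrogate. Since BMARS is not available in common Python libraries, the toy example below substitutes a gradient-boosting regressor as the surrogate and a hand-rolled Metropolis sampler; the "high-fidelity" model, parameters, and observation are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for BMARS

rng = np.random.default_rng(1)

# Stand-in "high-fidelity" model: head at one well vs. two input parameters
def expensive_model(theta):
    k, r = theta  # hypothetical log-conductivity and recharge parameters
    return 50.0 + 4.0 * k - 3.0 * r + 0.5 * k * r

# 1) Training set from a modest number of model runs
X_train = rng.uniform(-1, 1, size=(200, 2))
y_train = np.array([expensive_model(t) for t in X_train])
surrogate = GradientBoostingRegressor().fit(X_train, y_train)

# 2) Metropolis sampling against the cheap surrogate likelihood
obs, sigma = 51.0, 0.5                      # observed head and its error
def log_post(theta):
    if np.any(np.abs(theta) > 1):           # uniform prior on [-1, 1]^2
        return -np.inf
    resid = obs - surrogate.predict(theta.reshape(1, -1))[0]
    return -0.5 * (resid / sigma) ** 2

theta, lp = np.zeros(2), log_post(np.zeros(2))
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
print("posterior mean:", np.mean(chain, axis=0))
```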

  15. A Reexamination of the Emergy Input to a System from the ...

    EPA Pesticide Factsheets

    The wind energy absorbed in the global boundary layer (GBL, 900 mb surface) is the basis for calculating the wind emergy input for any system on the Earth’s surface. Estimates of the wind emergy input to a system depend on the amount of wind energy dissipated, which can have a range of magnitudes for a given velocity depending on surface drag and atmospheric stability at the location and time period under study. In this study, we develop a method to consider this complexity in estimating the emergy input to a system from the wind. A new calculation of the transformity of the wind energy dissipated in the GBL (900 mb surface) based on general models of atmospheric circulation in the planetary boundary layer (PBL, 100 mb surface) is presented and expressed on the 12.0E+24 seJ y-1 geobiosphere baseline to complete the information needed to calculate the emergy input from the wind to the GBL of any system. The average transformity of wind energy dissipated in the GBL (below 900 mb) was 1241±650 sej J-1. The analysis showed that the transformity of the wind varies over the course of a year such that summer processes may require a different wind transformity than processes occurring with a winter or annual time boundary. This is a paper in the proceedings of Emergy Synthesis 9, thus it will be available online for those interested in this subject. The paper describes a new and more accurate way to estimate the wind energy input to any system. It also has a new cal

  16. Modular Exposure Disaggregation Methodologies for Catastrophe Modelling using GIS and Remotely-Sensed Data

    NASA Astrophysics Data System (ADS)

    Foulser-Piggott, R.; Saito, K.; Spence, R.

    2012-04-01

    Loss estimates produced by catastrophe models are dependent on the quality of the input data, including both the hazard and exposure data. Currently, some of the exposure data input into a catastrophe model is aggregated over an area and therefore an estimate of the risk in this area may have a low level of accuracy. In order to obtain a more detailed and accurate loss estimate, it is necessary to have higher resolution exposure data. However, high resolution exposure data is not commonly available worldwide and therefore methods to infer building distribution and characteristics at higher resolution from existing information must be developed. This study is focussed on the development of disaggregation methodologies for exposure data which, if implemented in current catastrophe models, would lead to improved loss estimates. The new methodologies developed for disaggregating exposure data make use of GIS, remote sensing and statistical techniques. The main focus of this study is on earthquake risk, however the methods developed are modular so that they may be applied to different hazards. A number of different methods are proposed in order to be applicable to different regions of the world which have different amounts of data available. The new methods give estimates of both the number of buildings in a study area and a distribution of building typologies, as well as a measure of the vulnerability of the building stock to hazard. For each method, a way to assess and quantify the uncertainties in the methods and results is proposed, with particular focus on developing an index to enable input data quality to be compared. The applicability of the methods is demonstrated through testing for two study areas, one in Japan and the second in Turkey, selected because of the occurrence of recent and damaging earthquake events. The testing procedure is to use the proposed methods to estimate the number of buildings damaged at different levels following a scenario earthquake event. This enables the results of the models to be compared with real data and the relative performance of the different methodologies to be evaluated. A sensitivity analysis is also conducted for two main reasons. Firstly, to determine the key input variables in the methodology that have the most significant impact on the resulting loss estimate. Secondly, to enable the uncertainty in the different approaches to be quantified and therefore provide a range of uncertainty in the loss estimates.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Jing-Jy; Flood, Paul E.; LePoire, David

    In this report, the results generated by RESRAD-RDD version 2.01 are compared with those produced by RESRAD-RDD version 1.7 for different scenarios with different sets of input parameters. RESRAD-RDD version 1.7 is spreadsheet-driven, performing calculations with Microsoft Excel spreadsheets. RESRAD-RDD version 2.01 revamped version 1.7 by using command-driven programs designed with Visual Basic.NET to direct calculations with data saved in a Microsoft Access database, and re-facing the graphical user interface (GUI) to provide more flexibility and choices in guideline derivation. Because version 1.7 and version 2.01 perform the same calculations, the comparison of their results serves as verification of both versions. The verification covered calculation results for 11 radionuclides included in both versions: Am-241, Cf-252, Cm-244, Co-60, Cs-137, Ir-192, Po-210, Pu-238, Pu-239, Ra-226, and Sr-90. At first, all nuclide-specific data used in both versions were compared to ensure that they are identical. Then generic operational guidelines and measurement-based radiation doses or stay times associated with a specific operational guideline group were calculated with both versions using different sets of input parameters, and the results obtained with the same set of input parameters were compared. A total of 12 sets of input parameters were used for the verification, and the comparison was performed for each operational guideline group, from A to G, sequentially. The verification shows that RESRAD-RDD version 1.7 and RESRAD-RDD version 2.01 generate almost identical results; the slight differences could be attributed to differences in numerical precision with Microsoft Excel and Visual Basic.NET. RESRAD-RDD version 2.01 allows the selection of different units for use in reporting calculation results. Results in SI units were obtained and compared with the base results (in traditional units) used for comparison with version 1.7. The comparison shows that RESRAD-RDD version 2.01 correctly reports calculation results in the unit specified in the GUI.

  18. Propellant Mass Fraction Calculation Methodology for Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Monk, Timothy S.

    2009-01-01

    Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between competing launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of a generic launch vehicle. This includes fundamental methods of pmf calculation which consider only the loaded propellant and the inert mass of the vehicle, more involved methods which consider the residuals and any other unusable propellant remaining in the vehicle, and other calculations which exclude large mass quantities such as the installed engine mass. Finally, a historic comparison is made between launch vehicles on the basis of the differing calculation methodologies.
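
    As an illustration of how the competing conventions diverge, the small function below computes pmf under a few of the variants the paper describes (loaded propellant and inert mass only, residuals treated as inert, installed engine mass excluded). The stage masses in the example are hypothetical.

```python
def pmf(loaded_propellant, inert_mass, residuals=0.0, engine_mass=0.0,
        exclude_engine=False):
    """Propellant mass fraction under a few common conventions.

    loaded_propellant : total propellant loaded at liftoff
    residuals         : unusable propellant counted as inert mass
    exclude_engine    : if True, remove installed engine mass from the inert total
    """
    usable = loaded_propellant - residuals
    inert = inert_mass + residuals
    if exclude_engine:
        inert -= engine_mass
    return usable / (usable + inert)

# Hypothetical stage: 100 t loaded propellant, 12 t inert, 0.8 t residuals, 3 t engines
print(round(pmf(100.0, 12.0), 4))                              # simplest convention
print(round(pmf(100.0, 12.0, residuals=0.8), 4))               # residuals counted as inert
print(round(pmf(100.0, 12.0, 0.8, 3.0, exclude_engine=True), 4))
```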

  19. The springs of Lake Pátzcuaro: chemistry, salt-balance, and implications for the water balance of the lake

    USGS Publications Warehouse

    Bischoff, James L.; Israde-Alcántara, Isabel; Garduno-Monroy, Victor H.; Shanks, Wayne C.

    2004-01-01

    Lake Pátzcuaro, the center of the ancient Tarascan civilization located in the Mexican altiplano west of the city of Morelia, has neither river input nor outflow. The relatively constant lake-salinity over the past centuries indicates the lake is in chemical steady state. Springs of the south shore constitute the primary visible input to the lake, so influx and discharge must be via sub-lacustrine ground water. The authors report on the chemistry and stable isotope composition of the springs, deeming them representative of ground-water input. The springs are dominated by Ca, Mg and Na, whereas the lake is dominated by Na. Combining these results with previously published precipitation/rainfall measurements on the lake, the authors calculate the chemical evolution from spring water to lake water, and also calculate a salt balance of the ground-water-lake system. Comparing Cl and δ18O compositions in the springs and lake water indicates that 75-80% of the spring water is lost evaporatively during evolution toward lake composition. During evaporation Ca and Mg are lost from the water by carbonate precipitation. Each liter of spring water discharging into the lake precipitates about 18.7 mg of CaCO3. Salt balance calculations indicate that ground water input to the lake is 85.9×10⁶ m³/a and ground water discharge from the lake is 23.0×10⁶ m³/a. Thus, the discharge is about 27% of the input, with the rest balanced by evaporation. A calculation of time to reach steady-state ab initio indicates that the Cl concentration of the present day lake would be reached in about 150 a. © 2004 Elsevier Ltd. All rights reserved.
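
    The coupled water and chloride balances behind these flux estimates can be written down in a few lines. The sketch below solves the steady-state system Q_in = Q_out + E and Q_in * Cl_spring = Q_out * Cl_lake; the concentrations and net evaporation volume used are hypothetical round numbers chosen only to show that the outflow/inflow ratio lands near the ~27% reported above.

```python
def groundwater_fluxes(cl_spring, cl_lake, net_evaporation):
    """Steady-state Cl balance for a closed lake fed and drained by ground water.

    Water balance:  Q_in = Q_out + E
    Salt balance:   Q_in * cl_spring = Q_out * cl_lake
    (evaporation removes water but no chloride)
    """
    q_in = net_evaporation * cl_lake / (cl_lake - cl_spring)
    q_out = net_evaporation * cl_spring / (cl_lake - cl_spring)
    return q_in, q_out

# Hypothetical values: spring Cl 8 mg/L, lake Cl 30 mg/L, net evaporation 63e6 m3/a
q_in, q_out = groundwater_fluxes(8.0, 30.0, 63.0e6)
print(f"inflow {q_in/1e6:.1f}e6 m3/a, outflow {q_out/1e6:.1f}e6 m3/a, "
      f"outflow/inflow = {q_out/q_in:.2f}")
```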

  20. CUMBIN - CUMULATIVE BINOMIAL PROGRAMS

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating if the probability that any one is operating is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
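
    The central quantity CUMBIN evaluates is simply the upper tail of a binomial distribution; a direct Python transliteration of that sum (not the original C code) is shown below.

```python
from math import comb

def k_out_of_n_reliability(p, k, n):
    """P(at least k of n independent components work), each with reliability p."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

# Example: 2-out-of-3 system with component reliability 0.95
print(k_out_of_n_reliability(0.95, 2, 3))   # -> 0.99275
```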

  1. Multivariate Formation Pressure Prediction with Seismic-derived Petrophysical Properties from Prestack AVO inversion and Poststack Seismic Motion Inversion

    NASA Astrophysics Data System (ADS)

    Yu, H.; Gu, H.

    2017-12-01

    A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion, and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and utilizes a trace-by-trace multivariate regression analysis on seismic-derived petrophysical properties to calibrate model parameters in order to make accurate predictions with higher resolution in both vertical and lateral directions. With the prestack time migration velocity as the initial velocity model, an AVO inversion was first applied to the prestack dataset to obtain high-resolution, higher-frequency seismic velocity to be used as the velocity input for seismic pressure prediction, and the density dataset to calculate accurate Overburden Pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov Chain Monte Carlo simulation. Both structural variability and similarity of seismic waveform are used to incorporate well log data to characterize the variability of the property to be obtained. In this research, porosity and shale volume are first interpreted on well logs, and then combined with poststack seismic data using SMI to build porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert velocity, porosity and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and then the coefficients in the multivariate prediction model are determined in a trace-by-trace multivariate regression analysis on the petrophysical data. The coefficients are used to convert velocity, porosity and shale volume datasets to effective stress and then to calculate formation pressure with OBP. Application of the proposed methodology to a research area in the East China Sea has proved that the method can bridge the gap between seismic and well log pressure prediction and give predicted pressure values close to pressure measurements from well testing.
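
    The final step of the workflow (effective stress regressed from seismic-derived attributes, then pore pressure as overburden minus effective stress) can be sketched generically. The functional form, calibration data, and coefficients below are illustrative placeholders, not the multivariate model actually used in the paper.

```python
import numpy as np

# Hypothetical calibration data from wells in the normally compacted interval:
# velocity (m/s), porosity (frac), shale volume (frac), and effective stress (MPa)
v   = np.array([2200.0, 2500.0, 2800.0, 3100.0, 3400.0])
phi = np.array([0.32, 0.28, 0.24, 0.20, 0.17])
vsh = np.array([0.55, 0.50, 0.45, 0.40, 0.35])
sig = np.array([12.0, 16.0, 21.0, 27.0, 33.0])

# Illustrative multivariate model: ln(sigma_eff) = c0 + c1*v + c2*phi + c3*vsh
A = np.column_stack([np.ones_like(v), v, phi, vsh])
coef, *_ = np.linalg.lstsq(A, np.log(sig), rcond=None)

def pore_pressure(vel, por, sh, obp):
    """Pore pressure = overburden pressure - predicted effective stress (MPa)."""
    sigma_eff = np.exp(coef @ np.array([1.0, vel, por, sh]))
    return obp - sigma_eff

print(round(pore_pressure(2650.0, 0.26, 0.48, 45.0), 1))  # MPa, illustrative trace
```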

  2. 78 FR 25116 - Self-Regulatory Organizations; Chicago Mercantile Exchange Inc.; Notice of Filing of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-29

    ... Liquidity Factor of CME's CDS Margin Methodology April 23, 2013. Pursuant to Section 19(b)(1) of the... additions; bracketed text indicates deletions. * * * * * CME CDS Liquidity Margin Factor Calculation Methodology The Liquidity Factor will be calculated as the sum of two components: (1) A concentration charge...

  3. 77 FR 77160 - Self-Regulatory Organizations; Chicago Mercantile Exchange Inc.; Notice of Filing of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-31

    ... Liquidity Factor of CME's CDS Margin Methodology December 21, 2012. Pursuant to Section 19(b)(1) of the.... * * * * * CME CDS Liquidity Margin Factor Calculation Methodology The Liquidity Factor will be calculated as the... Liquidity Factor using the current Gross Notional Function with the following modifications: (1) the...

  4. Wood texture classification by fuzzy neural networks

    NASA Astrophysics Data System (ADS)

    Gonzaga, Adilson; de Franca, Celso A.; Frere, Annie F.

    1999-03-01

    The majority of scientific papers focusing on wood classification for pencil manufacturing take into account defects and visual appearance. Traditional methodologies are based on texture analysis by co-occurrence matrix, by image modeling, or by tonal measures over the plate surface. In this work, we propose to classify plates of wood without biological defects such as insect holes, nodes, and cracks by analyzing their texture. In this methodology we divide the plate image into several rectangular windows or local areas and reduce the number of gray levels. From each local area, we compute the histogram of differences and extract texture features, giving them as input to a Local Neuro-Fuzzy Network (LNN). These features are taken from the histogram of differences instead of the image pixels because of their better performance and illumination independence. Among several candidate features (mean, contrast, second moment, entropy, and IDN), the last three showed better results for network training. Each LNN output is taken as input to a Partial Neuro-Fuzzy Network (PNFN) that classifies a pencil region on the plate. Finally, the outputs from the PNFN are taken as input to a global fuzzy logic stage that performs the plate classification. Each pencil classification within the plate is done taking each quality index into account.
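
    A small sketch of the feature-extraction step: second moment, entropy, and an inverse-difference (IDN-like) measure computed from the histogram of horizontal gray-level differences of one local window. The window size, number of gray levels, and the exact IDN formula are assumptions made for illustration.

```python
import numpy as np

def texture_features(window, levels=32):
    """Angular second moment, entropy, and inverse-difference features from the
    histogram of horizontal gray-level differences of a local window (0-255 input)."""
    g = np.floor(window.astype(float) / 256.0 * levels).astype(int)  # reduce gray levels
    diff = np.abs(g[:, 1:] - g[:, :-1]).ravel()                      # neighbor differences
    hist = np.bincount(diff, minlength=levels).astype(float)
    p = hist / hist.sum()                                            # histogram of differences
    nz = p[p > 0]
    second_moment = np.sum(p ** 2)
    entropy = -np.sum(nz * np.log2(nz))
    idn = np.sum(p / (1.0 + np.arange(levels)))                      # one common IDN variant
    return second_moment, entropy, idn

# Example: features of a random 64x64 local area (stand-in for a plate window)
rng = np.random.default_rng(0)
print(texture_features(rng.integers(0, 256, size=(64, 64))))
```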

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Jason; Winkler, Jon

    Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term: the moisture sorption into the materials. We validated this method with laboratory measurements, which we used to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.

  6. Development of an Expert Judgement Elicitation and Calibration Methodology for Risk Analysis in Conceptual Vehicle Design

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Keating, Charles; Conway, Bruce; Chytka, Trina

    2004-01-01

    A comprehensive expert-judgment elicitation methodology to quantify input parameter uncertainty and analysis tool uncertainty in a conceptual launch vehicle design analysis has been developed. The ten-phase methodology seeks to obtain expert judgment opinion for quantifying uncertainties as a probability distribution so that multidisciplinary risk analysis studies can be performed. The calibration and aggregation techniques presented as part of the methodology are aimed at improving individual expert estimates, and provide an approach to aggregate multiple expert judgments into a single probability distribution. The purpose of this report is to document the methodology development and its validation through application to a reference aerospace vehicle. A detailed summary of the application exercise, including calibration and aggregation results is presented. A discussion of possible future steps in this research area is given.

  7. A nephron-based model of the kidneys for macro-to-micro α-particle dosimetry

    NASA Astrophysics Data System (ADS)

    Hobbs, Robert F.; Song, Hong; Huso, David L.; Sundel, Margaret H.; Sgouros, George

    2012-07-01

    Targeted α-particle therapy is a promising treatment modality for cancer. Due to the short path-length of α-particles, the potential efficacy and toxicity of these agents is best evaluated by microscale dosimetry calculations instead of whole-organ, absorbed fraction-based dosimetry. Yet time-integrated activity (TIA), the necessary input for dosimetry, can still only be quantified reliably at the organ or macroscopic level. We describe a nephron- and cellular-based kidney dosimetry model for α-particle radiopharmaceutical therapy, more suited to the short range and high linear energy transfer of α-particle emitters, which takes as input kidney or cortex TIA and through a macro to micro model-based methodology assigns TIA to micro-level kidney substructures. We apply a geometrical model to provide nephron-level S-values for a range of isotopes allowing for pre-clinical and clinical applications according to the medical internal radiation dosimetry (MIRD) schema. We assume that the relationship between whole-organ TIA and TIA apportioned to microscale substructures as measured in an appropriate pre-clinical mammalian model also applies to the human. In both, the pre-clinical and the human model, microscale substructures are described as a collection of simple geometrical shapes akin to those used in the Cristy-Eckerman phantoms for normal organs. Anatomical parameters are taken from the literature for a human model, while murine parameters are measured ex vivo. The murine histological slides also provide the data for volume of occupancy of the different compartments of the nephron in the kidney: glomerulus versus proximal tubule versus distal tubule. Monte Carlo simulations are run with activity placed in the different nephron compartments for several α-particle emitters currently under investigation in radiopharmaceutical therapy. The S-values were calculated for the α-emitters and their descendants between the different nephron compartments for both the human and murine models. The renal cortex and medulla S-values were also calculated and the results compared to traditional absorbed fraction calculations. The nephron model enables a more optimal implementation of treatment and is a critical step in understanding toxicity for human translation of targeted α-particle therapy. The S-values established here will enable a MIRD-type application of α-particle dosimetry for α-emitters, i.e. measuring the TIA in the kidney (or renal cortex) will provide meaningful and accurate nephron-level dosimetry.
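
    The MIRD-schema bookkeeping that the nephron S-values feed into is a weighted sum: the absorbed dose to a target substructure is the sum over source regions of the TIA assigned to that source times the S-value from source to target. The dictionary below is a toy illustration with made-up numbers, not the paper's murine or human S-values.

```python
# Minimal MIRD-style dose tally: D(target) = sum over sources of TIA(source) * S(target <- source)
# The TIA values and S-values below are placeholders, not results from the paper.
s_values = {  # Gy per (Bq s), hypothetical
    ("glomerulus", "glomerulus"): 2.0e-9,
    ("glomerulus", "proximal_tubule"): 4.0e-10,
    ("proximal_tubule", "proximal_tubule"): 1.5e-9,
    ("proximal_tubule", "glomerulus"): 3.0e-10,
}
tia = {"glomerulus": 5.0e6, "proximal_tubule": 2.0e7}  # Bq s apportioned from organ TIA

def absorbed_dose(target):
    """Absorbed dose to one nephron compartment from all source compartments."""
    return sum(tia[src] * s_values.get((target, src), 0.0) for src in tia)

for region in ("glomerulus", "proximal_tubule"):
    print(region, f"{absorbed_dose(region):.3e} Gy")
```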

  8. Real time flight simulation methodology

    NASA Technical Reports Server (NTRS)

    Parrish, E. A.; Cook, G.; Mcvey, E. S.

    1977-01-01

    Substitutional methods for digitization, input signal-dependent integrator approximations, and digital autopilot design were developed. The software framework of a simulator design package is described. Included are subroutines for iterative designs of simulation models and a rudimentary graphics package.

  9. Models of Educational Attainment: A Theoretical and Methodological Critique

    ERIC Educational Resources Information Center

    Byrne, D. S.; And Others

    1973-01-01

    Uses cluster analysis techniques to show that egalitarian policies in secondary education coupled with high financial inputs have measurable payoffs in higher attainment rates, based on Max Weber's notion of "power" within a community. (Author/JM)

  10. Establishing survey validity and reliability for American Indians through "think aloud" and test-retest methods.

    PubMed

    Hauge, Cindy Horst; Jacobs-Knight, Jacque; Jensen, Jamie L; Burgess, Katherine M; Puumala, Susan E; Wilton, Georgiana; Hanson, Jessica D

    2015-06-01

    The purpose of this study was to use a mixed-methods approach to determine the validity and reliability of measurements used within an alcohol-exposed pregnancy prevention program for American Indian women. To develop validity, content experts provided input into the survey measures, and a "think aloud" methodology was conducted with 23 American Indian women. After revising the measurements based on this input, a test-retest was conducted with 79 American Indian women who were randomized to complete either the original measurements or the new, modified measurements. The test-retest revealed that some of the questions performed better for the modified version, whereas others appeared to be more reliable for the original version. The mixed-methods approach was a useful methodology for gathering feedback on survey measurements from American Indian participants and in indicating specific survey questions that needed to be modified for this population. © The Author(s) 2015.

  11. Proposing integrated Shannon's entropy-inverse data envelopment analysis methods for resource allocation problem under a fuzzy environment

    NASA Astrophysics Data System (ADS)

    Çakır, Süleyman

    2017-10-01

    In this study, a two-phase methodology for resource allocation problems under a fuzzy environment is proposed. In the first phase, the imprecise Shannon's entropy method and the acceptability index are suggested, for the first time in the literature, to select input and output variables to be used in the data envelopment analysis (DEA) application. In the second step, an interval inverse DEA model is executed for resource allocation in a short run. In an effort to exemplify the practicality of the proposed fuzzy model, a real case application has been conducted involving 16 cement firms listed in Borsa Istanbul. The results of the case application indicated that the proposed hybrid model is a viable procedure to handle input-output selection and resource allocation problems under fuzzy conditions. The presented methodology can also lend itself to different applications such as multi-criteria decision-making problems.
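
    For the crisp (non-fuzzy) case, the Shannon entropy weighting that drives the variable-selection phase reduces to a few lines: normalize each candidate variable across firms, compute its entropy, and weight variables by their diversification degree 1 - e. The data matrix below is hypothetical, and the paper's imprecise/interval extension is not reproduced here.

```python
import numpy as np

def entropy_weights(X):
    """Crisp Shannon entropy weights for candidate input/output variables.

    X : (m firms) x (n variables) matrix of non-negative data. Variables with
    more dispersion across firms (lower entropy) receive larger weights."""
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    P = X / X.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)      # entropy of each variable
    d = 1.0 - e                              # degree of diversification
    return d / d.sum()

# Hypothetical data: 4 firms x 3 candidate variables
X = [[120, 30, 0.80], [150, 28, 0.90], [90, 35, 0.70], [200, 31, 0.95]]
print(entropy_weights(X).round(3))
```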

  12. Optimisation Of Cutting Parameters Of Composite Material Laser Cutting Process By Taguchi Method

    NASA Astrophysics Data System (ADS)

    Lokesh, S.; Niresh, J.; Neelakrishnan, S.; Rahul, S. P. Deepak

    2018-03-01

    The aim of this work is to develop a laser cutting process model that can predict the relationship between the process input parameters and the resulting surface roughness and kerf width characteristics. The research is based on Design of Experiments (DOE) analysis. Response Surface Methodology (RSM), one of the most practical and effective techniques for developing a process model, is used in this work. Although RSM has been used to optimize laser processes, this research investigates laser cutting of materials such as composite wood (veneer) to determine the best laser cutting conditions using the RSM process. The input parameters evaluated are focal length, power supply and cutting speed, with the output responses being kerf width, surface roughness and temperature. To efficiently optimize and customize the kerf width and surface roughness characteristics, a laser cutting process model using the Taguchi L9 orthogonal array methodology is proposed.
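
    A minimal sketch of the Taguchi step, under the usual conventions: assign the three factors to columns of the standard L9(3^4) orthogonal array, convert a response measured in each run to a smaller-the-better signal-to-noise ratio, and pick the level of each factor with the best mean S/N. The nine roughness values are invented for illustration.

```python
import numpy as np

# Standard L9 (3^4) orthogonal array; only the first three columns are used here
L9 = np.array([[1,1,1,1],[1,2,2,2],[1,3,3,3],
               [2,1,2,3],[2,2,3,1],[2,3,1,2],
               [3,1,3,2],[3,2,1,3],[3,3,2,1]])

factors = ["focal_length", "power", "cutting_speed"]
# Hypothetical surface-roughness results (um) for the nine runs
roughness = np.array([4.1, 3.6, 3.9, 3.2, 2.8, 3.5, 3.0, 2.6, 2.9])

# Smaller-the-better signal-to-noise ratio for each run
sn = -10.0 * np.log10(roughness ** 2)

# Mean S/N per factor level: the level with the highest mean S/N is preferred
for j, name in enumerate(factors):
    means = [sn[L9[:, j] == level].mean() for level in (1, 2, 3)]
    best = int(np.argmax(means)) + 1
    print(f"{name}: level means {np.round(means, 2)}, best level {best}")
```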

  13. [Inventory of regional surface nutrient balance and policy recommendations in China].

    PubMed

    Chen, Min-Peng; Chen, Ji-Ning

    2007-06-01

    By applying the OECD surface soil nitrogen balance methodology, a framework, methodology and database for the nutrient balance budget in China are established to evaluate the impact of nutrient balance on agricultural production and the water environment. Results show that the nitrogen and phosphorus surpluses in China are 640×10⁴ t and 98×10⁴ t respectively, and the nitrogen and phosphorus surplus intensities are 16.56 kg/hm² and 2.53 kg/hm² respectively. Because of the striking spatial differences in nutrient balance across the country, China faces the dual challenge of managing both nutrient surpluses and nutrient deficits. Chemical fertilizer and livestock manure are the best targets for nutrient surplus management because of their marked contributions to nutrient input. However, it is not cost-effective to implement uniform management for all regions since their nutrient input structures vary considerably.

  14. DiffPy-CMI-Python libraries for Complex Modeling Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billinge, Simon; Juhas, Pavol; Farrow, Christopher

    2014-02-01

    Software to manipulate and describe crystal and molecular structures and set up structural refinements from multiple experimental inputs. Calculation and simulation of structure derived physical quantities. Library for creating customized refinements of atomic structures from available experimental and theoretical inputs.

  15. ANL/RBC: A computer code for the analysis of Rankine bottoming cycles, including system cost evaluation and off-design performance

    NASA Technical Reports Server (NTRS)

    Mclennan, G. A.

    1986-01-01

    This report describes, and is a User's Manual for, a computer code (ANL/RBC) which calculates cycle performance for Rankine bottoming cycles extracting heat from a specified source gas stream. The code calculates cycle power and efficiency and the sizes for the heat exchangers, using tabular input of the properties of the cycle working fluid. An option is provided to calculate the costs of system components from user defined input cost functions. These cost functions may be defined in equation form or by numerical tabular data. A variety of functional forms have been included for these functions and they may be combined to create very general cost functions. An optional calculation mode can be used to determine the off-design performance of a system when operated away from the design-point, using the heat exchanger areas calculated for the design-point.

  16. Adaptive Architectures for Effects Based Operations

    DTIC Science & Technology

    2006-08-12

    [Figure 3: One-Point Crossover] System Architectures Lab, Aug-06. 6.4. ECAD-EA Methodology. The previous two... that accomplishes this task is termed ECAD-EA (Effective Courses of Action Determination Using Evolutionary Algorithms). Besides a completely... items are given below followed by their explanations, while Figure 4 shows the inputs and outputs of the ECAD-EA methodology in the form of a block

  17. Space system operations and support cost analysis using Markov chains

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Dean, Edwin B.; Moore, Arlene A.; Fairbairn, Robert E.

    1990-01-01

    This paper evaluates the use of Markov chain process in probabilistic life cycle cost analysis and suggests further uses of the process as a design aid tool. A methodology is developed for estimating operations and support cost and expected life for reusable space transportation systems. Application of the methodology is demonstrated for the case of a hypothetical space transportation vehicle. A sensitivity analysis is carried out to explore the effects of uncertainty in key model inputs.
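
    If the degradation and maintenance states are modeled as an absorbing Markov chain, the expected life and expected operations-and-support cost follow from the fundamental matrix N = (I - Q)^(-1). A toy two-state example with invented transition probabilities and per-period costs is shown below; it is a sketch of the general technique, not the paper's vehicle model.

```python
import numpy as np

# Transient states: 0 = operational, 1 = in depot maintenance; absorbing: retired.
# Q holds transition probabilities among transient states per mission period
# (hypothetical values for a reusable vehicle).
Q = np.array([[0.90, 0.08],
              [0.60, 0.30]])
cost = np.array([1.0, 6.0])   # O&S cost per period in each transient state ($M)

N = np.linalg.inv(np.eye(2) - Q)        # fundamental matrix: expected state visits
expected_life = N.sum(axis=1)           # expected periods before retirement
expected_cost = N @ cost                # expected O&S cost until retirement

print("starting operational: life =", round(expected_life[0], 1),
      "periods, cost = $", round(expected_cost[0], 1), "M")
```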

  18. 40 CFR 75.75 - Additional ozone season calculation procedures for special circumstances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Additional ozone season calculation... § 75.75 Additional ozone season calculation procedures for special circumstances. (a) The owner or operator of a unit that is required to calculate ozone season heat input for purposes of providing data...

  19. 40 CFR 75.75 - Additional ozone season calculation procedures for special circumstances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Additional ozone season calculation... § 75.75 Additional ozone season calculation procedures for special circumstances. (a) The owner or operator of a unit that is required to calculate ozone season heat input for purposes of providing data...

  20. 40 CFR 75.75 - Additional ozone season calculation procedures for special circumstances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Additional ozone season calculation... § 75.75 Additional ozone season calculation procedures for special circumstances. (a) The owner or operator of a unit that is required to calculate ozone season heat input for purposes of providing data...

  1. 40 CFR 75.75 - Additional ozone season calculation procedures for special circumstances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Additional ozone season calculation... § 75.75 Additional ozone season calculation procedures for special circumstances. (a) The owner or operator of a unit that is required to calculate ozone season heat input for purposes of providing data...

  2. 40 CFR 75.75 - Additional ozone season calculation procedures for special circumstances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Additional ozone season calculation... § 75.75 Additional ozone season calculation procedures for special circumstances. (a) The owner or operator of a unit that is required to calculate ozone season heat input for purposes of providing data...

  3. Rationale choosing interval of a piecewise-constant approximation of input rate of non-stationary queue system

    NASA Astrophysics Data System (ADS)

    Korelin, Ivan A.; Porshnev, Sergey V.

    2018-01-01

    The paper demonstrates the possibility of calculating the characteristics of the flow of visitors passing through checkpoints at venues hosting mass events. The mathematical model is based on a non-stationary queuing system (NQS) in which the dependence of the request input rate on time is described by a function chosen so that its properties resemble the real arrival-rate profiles of visitors entering a stadium for football matches. A piecewise-constant approximation of this function is used when performing statistical modeling of the NQS. The authors calculated how the queue length and the waiting time for service (time in queue) depend on time for different arrival laws. The time required to service the entire queue and the number of visitors entering the stadium by the beginning of the match were also calculated. We found how the macroscopic quantitative characteristics of the NQS depend on the number of averaging sections of the input rate.
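
    A small simulation illustrating the idea: a piecewise-constant lambda(t) stands in for the visitor arrival-rate profile, non-homogeneous Poisson arrivals are generated by thinning, and a bank of identical checkpoint lanes serves them first-come-first-served. All rates, the number of lanes, and the time horizon are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def lam(t):
    """Piecewise-constant approximation of the visitor arrival rate (1/min):
    quiet early, a surge before kickoff, then a tail (hypothetical values)."""
    if t < 60:
        return 5.0
    if t < 100:
        return 40.0
    return 10.0

T, LAM_MAX, SERVERS, MU = 120.0, 40.0, 8, 1.2   # horizon (min), lanes, service rate

# Non-homogeneous Poisson arrivals by thinning
t, arrivals = 0.0, []
while True:
    t += rng.exponential(1.0 / LAM_MAX)
    if t > T:
        break
    if rng.uniform() < lam(t) / LAM_MAX:
        arrivals.append(t)

# FIFO multi-server queue: track when each checkpoint lane becomes free
free_at = np.zeros(SERVERS)
waits = []
for a in arrivals:
    i = int(np.argmin(free_at))                 # earliest-available lane
    start = max(a, free_at[i])
    waits.append(start - a)
    free_at[i] = start + rng.exponential(1.0 / MU)

print(f"{len(arrivals)} visitors, mean wait {np.mean(waits):.2f} min, "
      f"max wait {np.max(waits):.2f} min")
```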

  4. Application of the DG-1199 methodology to the ESBWR and ABWR.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinich, Donald A.; Gauntt, Randall O.; Walton, Fotini

    2010-09-01

    Appendix A-5 of Draft Regulatory Guide DG-1199 'Alternative Radiological Source Term for Evaluating Design Basis Accidents at Nuclear Power Reactors' provides guidance - applicable to RADTRAD MSIV leakage models - for scaling containment aerosol concentration to the expected steam dome concentration in order to preserve the simplified use of the Accident Source Term (AST) in assessing containment performance under assumed design basis accident (DBA) conditions. In this study Economic Simplified Boiling Water Reactor (ESBWR) and Advanced Boiling Water Reactor (ABWR) RADTRAD models are developed using the DG-1199, Appendix A-5 guidance. The models were run using RADTRAD v3.03. Low Population Zone (LPZ), control room (CR), and worst-case 2-hr Exclusion Area Boundary (EAB) doses were calculated and compared to the relevant accident dose criteria in 10 CFR 50.67. For the ESBWR, the dose results were all lower than the MSIV leakage doses calculated by General Electric/Hitachi (GEH) in their licensing technical report. There are no comparable ABWR MSIV leakage doses; however, it should be noted that the ABWR doses are lower than the ESBWR doses. In addition, sensitivity cases were evaluated to ascertain the influence/importance of key input parameters/features of the models.

  5. Using Participatory Approach to Improve Availability of Spatial Data for Local Government

    NASA Astrophysics Data System (ADS)

    Kliment, T.; Cetl, V.; Tomič, H.; Lisiak, J.; Kliment, M.

    2016-09-01

    Nowadays, the availability of authoritative geospatial features of various data themes is becoming wider on global, regional and national levels. The reason is the existence of legislative frameworks for public sector information and related spatial data infrastructure implementations, and the emergence of support for initiatives such as open data and big data, ensuring that online geospatial information is made available to the digital single market, entrepreneurs and public bodies at both national and local levels. However, authoritative reference spatial data linking the geographic representation of properties to their owners are still missing at an appropriate level of quantity and quality, even though these data represent a fundamental input for local governments regarding the register of buildings used for property tax calculations, identification of illegal buildings, etc. We propose a methodology to improve this situation by applying the principles of participatory GIS and VGI used to collect observations, update authoritative datasets and verify the newly developed datasets of areas of buildings used to calculate property tax rates issued to their owners. The case study was performed within the district of the City of Požega in eastern Croatia in the summer of 2015 and resulted in a total of 16072 updated and newly identified objects made available online for quality verification by citizens using open source geospatial technologies.

  6. Multiagency Urban Search Experiment Detector and Algorithm Test Bed

    NASA Astrophysics Data System (ADS)

    Nicholson, Andrew D.; Garishvili, Irakli; Peplow, Douglas E.; Archer, Daniel E.; Ray, William R.; Swinney, Mathew W.; Willis, Michael J.; Davidson, Gregory G.; Cleveland, Steven L.; Patton, Bruce W.; Hornback, Donald E.; Peltz, James J.; McLean, M. S. Lance; Plionis, Alexander A.; Quiter, Brian J.; Bandstra, Mark S.

    2017-07-01

    In order to provide benchmark data sets for radiation detector and algorithm development, a particle transport test bed has been created using experimental data as model input and validation. A detailed radiation measurement campaign at the Combined Arms Collective Training Facility in Fort Indiantown Gap, PA (FTIG), USA, provides sample background radiation levels for a variety of materials present at the site (including cinder block, gravel, asphalt, and soil) using long dwell high-purity germanium (HPGe) measurements. In addition, detailed light detection and ranging data and ground-truth measurements inform model geometry. This paper describes the collected data and the application of these data to create background and injected source synthetic data for an arbitrary gamma-ray detection system using particle transport model detector response calculations and statistical sampling. In the methodology presented here, HPGe measurements inform model source terms while detector response calculations are validated via long dwell measurements using 2"×4"×16" NaI(Tl) detectors at a variety of measurement points. A collection of responses, along with sampling methods and interpolation, can be used to create data sets to gauge radiation detector and algorithm (including detection, identification, and localization) performance under a variety of scenarios. Data collected at the FTIG site are available for query, filtering, visualization, and download at muse.lbl.gov.

  7. Investigation of Patient-Specific Cerebral Aneurysm using Volumetric PIV, CFD, and In Vitro PC-MRI

    NASA Astrophysics Data System (ADS)

    Brindise, Melissa; Dickerhoff, Ben; Saloner, David; Rayz, Vitaliy; Vlachos, Pavlos

    2017-11-01

    4D PC-MRI is a modality capable of providing time-resolved velocity fields in cerebral aneurysms in vivo. The MRI-measured velocities and subsequent hemodynamic parameters such as wall shear stress, and oscillatory shear index, can help neurosurgeons decide a course of treatment for a patient, e.g. whether to treat or monitor the aneurysm. However, low spatiotemporal resolution, limited velocity dynamic range, and inherent noise of PC-MRI velocity fields can have a notable effect on subsequent calculations, and should be investigated. In this work, we compare velocity fields obtained with 4D PC-MRI, computational fluid dynamics (CFD) and volumetric particle image velocimetry (PIV), using a patient-specific model of a basilar tip aneurysm. The same in vitro model is used for all three modalities and flow input parameters are controlled. In vivo, PC-MRI data was also acquired for this patient and used for comparison. Specifically, we investigate differences in the resulting velocity fields and biases in subsequent calculations. Further, we explore the effect these errors may have on assessment of the aneurysm progression and seek to develop corrective algorithms and other methodologies that can be used to improve the accuracy of hemodynamic analysis in clinical setting.

  8. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Various length filters are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots of error versus filter length are produced, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
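
    The inverse-filter deconvolution described above can be sketched in a few lines: take the DFT of the system response, form its reciprocal, inverse-transform that to obtain the inverse filter, and convolve it with the data. The sketch below uses the full-length filter with a small regularization guard rather than the truncated filter lengths studied in the report, and the Gaussian response and peak-type input are synthetic.

```python
import numpy as np

def inverse_filter_deconvolve(data, response, eps=1e-6):
    """Deconvolve by applying an inverse filter computed as the inverse DFT of
    the reciprocal of the DFT of the system response. eps guards against
    division by near-zero spectral values."""
    n = len(data)
    H = np.fft.rfft(response, n)
    H_safe = np.where(np.abs(H) < eps, eps, H)
    inv_filter = np.fft.irfft(1.0 / H_safe, n)                 # the inverse filter
    # circular convolution of the data with the inverse filter, done via FFTs
    return np.fft.irfft(np.fft.rfft(data, n) * np.fft.rfft(inv_filter, n), n)

# Synthetic, noise-free test: peak-type input blurred by a narrow Gaussian response
x = np.linspace(-10.0, 10.0, 201)
response = np.exp(-x**2 / 0.5)
response /= response.sum()
truth = np.exp(-((x - 1.0) ** 2) / 2.0)
data = np.convolve(truth, response)                            # blurred data
recovered = inverse_filter_deconvolve(data, response)[: len(truth)]
print(f"max reconstruction error: {np.max(np.abs(recovered - truth)):.2e}")
```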

  9. Computational designing and screening of solid materials for CO2 capture

    NASA Astrophysics Data System (ADS)

    Duan, Yuhua

    In this presentation, we will update our progress on computational designing and screening of solid materials for CO2 capture. By combining thermodynamic database mining with first principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology to identify the most promising CO2 sorbent candidates from the vast array of possible solid materials has been proposed and validated at NETL. The advantage of this method is that it identifies the thermodynamic properties of the CO2 capture reaction as a function of temperature and pressure without any experimental input beyond crystallographic structural information of the solid phases involved. The calculated thermodynamic properties of different classes of solid materials versus temperature and pressure changes were further used to evaluate the equilibrium properties for the CO2 adsorption/desorption cycles. According to the requirements imposed by the pre- and post-combustion technologies and based on our calculated thermodynamic properties for the CO2 capture reactions by the solids of interest, we were able to identify only those solid materials for which lower capture energy costs are expected at the desired working conditions. In addition, we present a simulation scheme to increase and decrease the turnover temperature (Tt) of solid CO2 capture reactions by mixing in other solids. Our results also show that some solid sorbents can serve as bi-functional materials: CO2 sorbent and CO oxidation catalyst. Such dual functionality could be used for removing both CO and CO2 after water-gas-shift to obtain pure H2.

  10. Consistency Between SC#21REF Solar XUV Energy Input and the 1973 Pioneer 10 Observations of the Jovian Photoelectron Excited H2 Airglow

    NASA Technical Reports Server (NTRS)

    Gangopadhyay, P.; Ogawa, H. S.; Judge, D. L.

    1988-01-01

    It has been suggested in the literature that the F74113 solar spectrum for the solar minimum condition needs to be modified to explain the production of photoelectrons in the Earth's atmosphere. We have studied here the effect of another solar minimum spectrum, SC#21REF, on the Jovian upper atmosphere emissions and we have compared the predicted photoelectron excited H2 airglow with the 1973 Pioneer 10 observations, analyzed according to the methodology of Shemansky and Judge (1988). In this model calculation we find that in 1973, the Jovian H2 band emissions can be accounted for almost entirely by photoelectron excitation, if the preflight calibration of the Pioneer 10 ultraviolet photometer is adopted. If the SC#21REF flux shortward of 250 Å is multiplied by 2 as proposed by Richards and Torr (1988) then the Pioneer 10 calibration and/or the airglow model used must be modified in order to have a self-consistent set of observations.

  11. Specialized minimal PDFs for optimized LHC calculations.

    PubMed

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Rojo, Juan

    2016-01-01

    We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct these SM-PDFs in such a way that sets corresponding to different input processes can be combined without losing information, specifically as regards their correlations, and that they are robust upon smooth variations of the kinematic cuts. The proposed strategy never discards information, so that the SM-PDF sets can be enlarged by the addition of new processes, until the prior PDF set is eventually recovered for a large enough set of processes. We illustrate the method by producing SM-PDFs tailored to Higgs, top-quark pair, and electroweak gauge boson physics, and we determine that, when the PDF4LHC15 combined set is used as the prior, around 11, 4, and 11 Hessian eigenvectors, respectively, are enough to fully describe the corresponding processes.

  12. Blast and Shock Mitigation Through the Use of Advanced Materials

    NASA Astrophysics Data System (ADS)

    Bartyczak, Susan; Edgerton, Lauren; Mock, Willis

    2017-06-01

    The dynamic response to low amplitude blast waves of four viscoelastic materials has been investigated: Dragonshield BCTM and three polyurea formulations (P1000, P650, and a P250/1000 blend). A 40-mm-bore gas gun was used as a shock tube to generate planar blast waves, ranging from 1 to 2 bars, that impacted instrumented target assemblies mounted on the gas gun muzzle. Each target assembly consisted of a viscoelastic material sample sandwiched between two gauge assemblies for measuring wave velocity and input/output stresses. Each gauge assembly consisted of one polyvinylidene fluoride (PVDF) stress gauge sandwiched between two 3.25 inch diameter 6061-T6 aluminum discs. Impedance matching techniques were used on the stress measurements to calculate the stresses on the front and back of the samples. The shock velocity-particle velocity relationship, stress-particle velocity relationship, and blast attenuation for each material were determined. The experimental technique, analysis methodology, and results will be presented.

  13. Control advances for achieving the ITER baseline scenario on KSTAR

    NASA Astrophysics Data System (ADS)

    Eidietis, N. W.; Barr, J.; Hahn, S. H.; Humphreys, D. A.; in, Y. K.; Jeon, Y. M.; Lanctot, M. J.; Mueller, D.; Walker, M. L.

    2017-10-01

    Control methodologies developed to enable successful production of ITER baseline scenario (IBS) plasmas on the superconducting KSTAR tokamak are presented: decoupled vertical control (DVC), real-time feedforward (rtFF) calculation, and multi-input multi-output (MIMO) X-point control. DVC provides fast vertical control with the in-vessel control coils (IVCC) while sharing slow vertical control with the poloidal field (PF) coils to avoid IVCC saturation. rtFF compensates for inaccuracies in offline PF current feedforward programming, allowing reduction or removal of integral gain (and its detrimental phase lag) from the shape controller. Finally, MIMO X-point control provides accurate positioning of the X-point despite low controllability due to the large distance between coils and plasma. Combined, these techniques enabled achievement of IBS parameters (q95 = 3.2, βN = 2) with a scaled ITER shape on KSTAR. n =2 RMP response displays a strong dependence upon this shaping. Work supported by the US DOE under Award DE-SC0010685 and the KSTAR project.

  14. Impact Assessment and Environmental Evaluation of Various Ammonia Production Processes

    NASA Astrophysics Data System (ADS)

    Bicer, Yusuf; Dincer, Ibrahim; Vezina, Greg; Raso, Frank

    2017-05-01

    In the current study, conventional resources-based ammonia generation routes are comparatively studied through a comprehensive life cycle assessment. The selected ammonia generation options range from mostly used steam methane reforming to partial oxidation of heavy oil. The chosen ammonia synthesis process is the most common commercially available Haber-Bosch process. The essential energy input for the methods are used from various conventional resources such as coal, nuclear, natural gas and heavy oil. Using the life cycle assessment methodology, the environmental impacts of selected methods are identified and quantified from cradle to gate. The life cycle assessment outcomes of the conventional resources based ammonia production routes show that nuclear electrolysis-based ammonia generation method yields the lowest global warming and climate change impacts while the coal-based electrolysis options bring higher environmental problems. The calculated greenhouse gas emission from nuclear-based electrolysis is 0.48 kg CO2 equivalent while it is 13.6 kg CO2 per kg of ammonia for coal-based electrolysis method.

  15. Impact Assessment and Environmental Evaluation of Various Ammonia Production Processes.

    PubMed

    Bicer, Yusuf; Dincer, Ibrahim; Vezina, Greg; Raso, Frank

    2017-05-01

    In the current study, conventional resources-based ammonia generation routes are comparatively studied through a comprehensive life cycle assessment. The selected ammonia generation options range from mostly used steam methane reforming to partial oxidation of heavy oil. The chosen ammonia synthesis process is the most common commercially available Haber-Bosch process. The essential energy input for the methods are used from various conventional resources such as coal, nuclear, natural gas and heavy oil. Using the life cycle assessment methodology, the environmental impacts of selected methods are identified and quantified from cradle to gate. The life cycle assessment outcomes of the conventional resources based ammonia production routes show that nuclear electrolysis-based ammonia generation method yields the lowest global warming and climate change impacts while the coal-based electrolysis options bring higher environmental problems. The calculated greenhouse gas emission from nuclear-based electrolysis is 0.48 kg CO2 equivalent while it is 13.6 kg CO2 per kg of ammonia for coal-based electrolysis method.

  16. Fast and optimized methodology to generate road traffic emission inventories and their uncertainties

    NASA Astrophysics Data System (ADS)

    Blond, N.; Ho, B. Q.; Clappier, A.

    2012-04-01

    Road traffic emissions are one of the main sources of air pollution in cities. They are also one of the main sources of uncertainty in the air quality numerical models used to forecast and define abatement strategies. Until now, the available models for generating road traffic emissions have always required considerable effort, money and time. This inhibits decisions to preserve air quality, especially in developing countries where road traffic emissions are changing very fast. In this research, we developed a new model designed to produce road traffic emission inventories quickly. This model, called EMISENS, combines the well-known top-down and bottom-up approaches and forces them to be coherent. A Monte Carlo methodology is included for computing emission uncertainties and the contribution of each input parameter to the overall uncertainty. This paper presents the EMISENS model and a demonstration of its capabilities through an application over the Strasbourg region (Alsace), France. The same input data as collected for the Circul'air model (using a bottom-up approach), which has been applied for many years to forecast and study air pollution by the Alsatian air quality agency ASPA, are used to evaluate the impact of several simplifications that a user could make. These experiments make it possible to review older methodologies and to evaluate EMISENS results when few input data are available to produce emission inventories, as in developing countries where assumptions need to be made. We show that the same average fraction of mileage driven with a cold engine can be used for all the cells of the study domain and that one emission factor could replace both cold and hot emission factors.
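
    The Monte Carlo uncertainty step can be illustrated with a deliberately simplified emission estimate for a single pollutant and vehicle class: total emissions are vehicle-kilometres times a mileage-weighted mix of cold- and hot-engine emission factors, with every input drawn from an assumed distribution. All central values and spreads below are hypothetical, and the formula is a simplification of what EMISENS actually computes.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Simplified emission estimate for one pollutant and one vehicle class:
# E = VKT * [beta * EF_cold + (1 - beta) * EF_hot]
# All central values and uncertainties below are hypothetical.
vkt     = rng.normal(1.2e9, 0.1e9, n)      # vehicle-km travelled per year
beta    = rng.normal(0.25, 0.05, n)        # fraction of mileage with a cold engine
ef_hot  = rng.normal(0.20, 0.04, n)        # g/km, hot-engine emission factor
ef_cold = rng.normal(0.55, 0.10, n)        # g/km, cold-engine emission factor

E = vkt * (beta * ef_cold + (1.0 - beta) * ef_hot) / 1e6   # tonnes/yr

# Total uncertainty and a crude indication of each input's influence
print(f"emissions: {E.mean():.0f} +/- {E.std():.0f} t/yr")
for name, x in [("VKT", vkt), ("beta", beta), ("EF_hot", ef_hot), ("EF_cold", ef_cold)]:
    print(name, "correlation with output:", round(float(np.corrcoef(x, E)[0, 1]), 2))
```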

  17. Fiscal transfers based on inputs or outcomes? Lessons from the Twelfth and Thirteenth Finance Commission in India.

    PubMed

    Fan, Victoria Y; Iyer, Smriti; Kapur, Avani; Mahbub, Rifaiyat; Mukherjee, Anit

    2018-01-01

    There is limited empirical evidence about the efficacy of fiscal transfers for a specific purpose, including for health which represents an important source of funds for the delivery of public services especially in large populous countries such as India. To examine two distinct methodologies for allocating specific-purpose centre-to-state transfers, one using an input-based formula focused on equity and the other using an outcome-based formula focused on performance. We examine the Twelfth Finance Commission (12FC)'s use of Equalization Grants for Health (EGH) as an input-based formula and the Thirteenth Finance Commission (13FC)'s use of Incentive Grants for Health (IGH) as an outcome-based formula. We simulate and replicate the allocation of these two transfer methodologies and examine the consequences of these fiscal transfer mechanisms. The EGH placed conditions for releasing funds, but states varied in their ability to meet those conditions, and hence their allocations varied, eg, Madhya Pradesh received 100% and Odisha 67% of its expected allocation. Due to the design of the IGH formula, IGH allocations were unequally distributed and highly concentrated in 4 states (Manipur, Sikkim, Tamil Nadu, Nagaland), which received over half the national IGH allocation. The EGH had limited impact in achieving equalization, whereas the IGH rewards were concentrated in states which were already doing better. Greater transparency and accountability of centre-to-state allocations and specifically their methodologies are needed to ensure that allocation objectives are aligned to performance. © 2017 The Authors. The International Journal of Health Planning and Management published by John Wiley & Sons Ltd.

  18. Reconstruction of input functions from a dynamic PET image with sequential administration of 15O2 and C15O2 for noninvasive and ultra-rapid measurement of CBF, OEF, and CMRO2.

    PubMed

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Hiroyuki; Yamamoto, Yuka; Hatakeyama, Tetsuhiro; Nishiyama, Yoshihiro

    2018-05-01

    CBF, OEF, and CMRO2 images can be quantitatively assessed using PET. Their calculation requires arterial input functions, which require an invasive procedure. The aim of the present study was to develop a non-invasive approach with image-derived input functions (IDIFs) using an image from an ultra-rapid 15O2 and C15O2 protocol. Our technique consists of using a formula to express the input from a tissue curve with rate constants. For multiple tissue curves, the rate constants were estimated so as to minimize the differences between the inputs derived from the multiple tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as an IDIF. The method was tested in human subjects (n = 24). The estimated IDIFs reproduced the measured ones well. The differences between the CBF, OEF, and CMRO2 values calculated by the non-invasive and invasive methods were small (<10%), and the values showed tight correlations (r = 0.97). The simulation showed that errors associated with the assumed parameters were less than ∼10%. Our results demonstrate that IDIFs can be reconstructed from tissue curves, suggesting the possibility of using a non-invasive technique to assess CBF, OEF, and CMRO2.

  19. Input Shaping to Reduce Solar Array Structural Vibrations

    NASA Technical Reports Server (NTRS)

    Doherty, Michael J.; Tolson, Robert J.

    1998-01-01

    Structural vibrations induced by actuators can be minimized using input shaping. Input shaping is a feedforward method in which actuator commands are convolved with shaping functions to yield a shaped set of commands. These commands are designed to perform the maneuver while minimizing the residual structural vibration. In this report, input shaping is extended to stepper motor actuators. As a demonstration, an input-shaping technique based on pole-zero cancellation was used to modify the Solar Array Drive Assembly (SADA) actuator commands for the Lewis satellite. A series of impulses were calculated as the ideal SADA output for vibration control. These impulses were then discretized for use by the SADA stepper motor actuator and simulated actuator outputs were used to calculate the structural response. The effectiveness of input shaping is limited by the accuracy of the knowledge of the modal frequencies. Assuming perfect knowledge resulted in significant vibration reduction. Errors of 10% in the modal frequencies caused notably higher levels of vibration. Controller robustness was improved by incorporating additional zeros in the shaping function. The additional zeros did not require increased performance from the actuator. Despite the identification errors, the resulting feedforward controller reduced residual vibrations to the level of the exactly modeled input shaper and well below the baseline cases. These results could be easily applied to many other vibration-sensitive applications involving stepper motor actuators.
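    The record's shaper was derived by pole-zero cancellation for the Lewis SADA; the sketch below instead shows the standard two-impulse zero-vibration (ZV) shaper, a common textbook variant, to illustrate how a command is convolved with shaping impulses computed from an assumed modal frequency and damping ratio. The frequency, damping, and update interval are made-up values.

      # Generic two-impulse zero-vibration (ZV) input shaper, an illustrative
      # stand-in for the pole-zero-cancellation shaper described in the report.
      import numpy as np

      def zv_shaper(f_hz, zeta, dt):
          """Return the shaper impulse sequence sampled at interval dt."""
          wd = 2 * np.pi * f_hz * np.sqrt(1 - zeta**2)     # damped natural frequency
          K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
          amps = np.array([1.0, K]) / (1.0 + K)            # impulse amplitudes sum to 1
          times = np.array([0.0, np.pi / wd])              # second impulse at half period
          idx = np.round(times / dt).astype(int)
          shaper = np.zeros(idx[-1] + 1)
          shaper[idx] = amps
          return shaper

      dt = 0.001                                       # s, command update interval (assumed)
      shaper = zv_shaper(f_hz=1.5, zeta=0.02, dt=dt)   # assumed first flexible mode

      step_cmd = np.ones(5000)                         # unshaped command
      shaped_cmd = np.convolve(step_cmd, shaper)[:len(step_cmd)]
      print(shaped_cmd[:5], shaped_cmd[-1])            # steps up at the second impulse time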

  20. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.

  1. Machine Learning Classification of Heterogeneous Fields to Estimate Physical Responses

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Akhriev, A.; Alzate, C.; Zhuk, S.

    2017-12-01

    The promise of machine learning to enhance physics-based simulation is examined here using the transient pressure response to a pumping well in a heterogeneous aquifer. 10,000 random fields of log10 hydraulic conductivity (K) are created and conditioned on a single K measurement at the pumping well. Each K-field is used as input to a forward simulation of drawdown (pressure decline). The differential equations governing groundwater flow to the well serve as a non-linear transform of the input K-field to an output drawdown field. The results are stored and the data set is split into training and testing sets for classification. A Euclidean distance measure between any two fields is calculated and the resulting distances between all pairs of fields define a similarity matrix. Similarity matrices are calculated for both input K-fields and the resulting drawdown fields at the end of the simulation. The similarity matrices are then used as input to spectral clustering to determine groupings of similar input and output fields. Additionally, the similarity matrix is used as input to multi-dimensional scaling to visualize the clustering of fields in lower dimensional spaces. We examine the ability to cluster both input K-fields and output drawdown fields separately with the goal of identifying K-fields that create similar drawdowns and, conversely, given a set of simulated drawdown fields, identify meaningful clusters of input K-fields. Feature extraction based on statistical parametric mapping provides insight into what features of the fields drive the classification results. The final goal is to successfully classify input K-fields into the correct output class, and also, given an output drawdown field, be able to infer the correct class of input field that created it.
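    The clustering step described in this record (Euclidean distances between fields, a similarity matrix, then spectral clustering) can be sketched as below. The data are random stand-ins for the flattened K-fields, and the RBF kernel width and cluster count are assumptions, not the study's settings.

      # Illustrative sketch of the clustering step: Euclidean distances between
      # flattened fields -> RBF similarity matrix -> spectral clustering.
      import numpy as np
      from scipy.spatial.distance import pdist, squareform
      from sklearn.cluster import SpectralClustering

      rng = np.random.default_rng(1)
      fields = rng.normal(size=(200, 32 * 32))            # stand-in for 200 flattened K-fields

      D = squareform(pdist(fields, metric="euclidean"))   # pairwise distance matrix
      sigma = np.median(D)                                # assumed kernel width
      S = np.exp(-(D / sigma) ** 2)                       # similarity matrix

      labels = SpectralClustering(
          n_clusters=4, affinity="precomputed", random_state=0
      ).fit_predict(S)
      print(np.bincount(labels))                          # cluster sizes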

  2. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
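    For context, the small numerical sketch below contrasts the two existing approaches mentioned in this record, classical and inverse regression, on a synthetic calibration data set; the paper's reversed inverse regression itself is not reproduced, and all data values are invented.

      # Classical vs. inverse regression for a univariate linear calibration
      # (the paper's reversed inverse regression is not reproduced here).
      import numpy as np

      rng = np.random.default_rng(2)
      x_std = np.linspace(0, 10, 11)                                # reference standards
      y_obs = 2.0 + 0.5 * x_std + rng.normal(0, 0.05, x_std.size)   # instrument readings

      # Classical: fit y = a + b*x, then invert for an unknown.
      b, a = np.polyfit(x_std, y_obs, 1)
      y_new = 4.6                                    # new measurement to calibrate
      x_classical = (y_new - a) / b

      # Inverse: regress x directly on y, then predict.
      d, c = np.polyfit(y_obs, x_std, 1)
      x_inverse = c + d * y_new

      print(f"classical: {x_classical:.3f}, inverse: {x_inverse:.3f}")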

  3. Carbon dioxide fluid-flow modeling and injectivity calculations

    USGS Publications Warehouse

    Burke, Lauri

    2011-01-01

    These results were used to classify subsurface formations into three permeability classifications for the probabilistic calculations of storage efficiency and containment risk of the U.S. Geological Survey geologic carbon sequestration assessment methodology. This methodology is currently in use to determine the total carbon dioxide containment capacity of the onshore and State waters areas of the United States.

  4. User's Guide to Handlens - A Computer Program that Calculates the Chemistry of Minerals in Mixtures

    USGS Publications Warehouse

    Eberl, D.D.

    2008-01-01

    HandLens is a computer program, written in Excel macro language, that calculates the chemistry of minerals in mineral mixtures (for example, in rocks, soils, and sediments) for related samples from inputs of quantitative mineralogy and chemistry. For best results, the related samples should contain minerals having the same chemical compositions; that is, the samples should differ only in the proportions of minerals present. This manual describes how to use the program, discusses the theory behind its operation, and presents test results of the program's accuracy. Required input for HandLens includes quantitative mineralogical data, obtained, for example, by RockJock analysis of X-ray diffraction (XRD) patterns, and quantitative chemical data, obtained, for example, by X-ray fluorescence (XRF) analysis of the same samples. Other quantitative data, such as sample depth, temperature, and surface area, also can be entered. The minerals present in the samples are selected from a list, and the program is started. The results of the calculation include: (1) a table of linear coefficients of determination (r2 values) that relate pairs of input data (for example, Si versus quartz weight percents); (2) a utility for plotting all input data, either as pairs of variables or as sums of up to eight variables; (3) a table that presents the calculated chemical formulae for minerals in the samples; (4) a table that lists the calculated concentrations of major, minor, and trace elements in the various minerals; and (5) a table that presents chemical formulae for the minerals that have been corrected for possible systematic errors in the mineralogical and/or chemical analyses. In addition, the program contains a method for testing the assumption of constant chemistry of the minerals within a sample set.
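    HandLens itself is an Excel macro, but the underlying idea, that bulk chemistry of related samples is approximately the mineral weight fractions times per-mineral element concentrations, can be sketched with a least-squares fit as below. The mineral fractions, element values, and noise level are synthetic placeholders, not data from the manual.

      # Sketch of the idea behind HandLens-style calculations: for related samples,
      # bulk chemistry ~= mineral fractions times per-mineral element concentrations,
      # so the latter can be recovered by (non-negative) least squares.
      import numpy as np
      from scipy.optimize import nnls

      # Hypothetical inputs: weight fractions of 3 minerals in 6 samples (from XRD)
      # and bulk SiO2 wt% of the same samples (from XRF).
      mineral_frac = np.array([
          [0.60, 0.30, 0.10],
          [0.50, 0.35, 0.15],
          [0.70, 0.20, 0.10],
          [0.40, 0.45, 0.15],
          [0.55, 0.25, 0.20],
          [0.65, 0.15, 0.20],
      ])
      sio2_bulk = mineral_frac @ np.array([99.0, 60.0, 45.0])   # synthetic "truth"
      sio2_bulk += np.random.default_rng(3).normal(0, 0.5, 6)   # analytical noise

      # Non-negative least squares recovers the SiO2 content of each mineral.
      sio2_per_mineral, _ = nnls(mineral_frac, sio2_bulk)
      print(np.round(sio2_per_mineral, 1))   # ~[99, 60, 45]

      # r^2 between an element and a mineral, as in the program's first output table:
      r = np.corrcoef(sio2_bulk, mineral_frac[:, 0])[0, 1]
      print(f"r^2(SiO2, mineral 1) = {r**2:.2f}")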

  5. Quantitative methods to direct exploration based on hydrogeologic information

    USGS Publications Warehouse

    Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.

    2006-01-01

    Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
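    The FOSM step used by these QDE approaches amounts to the matrix product "output covariance = J · C_input · Jᵀ", with J the model sensitivities. The sketch below shows that arithmetic only; the sensitivities and input covariance are random stand-ins, not MODFLOW-2000 output.

      # Generic FOSM propagation: C_head = J @ C_input @ J.T, where J holds model
      # sensitivities (here random stand-ins for the MODFLOW-2000 sensitivity output).
      import numpy as np

      rng = np.random.default_rng(4)
      n_heads, n_params = 5, 3

      J = rng.normal(size=(n_heads, n_params))   # d(head_i)/d(param_j), stand-in values

      # Input covariance: log-transmissivity variances with assumed spatial correlation.
      sd = np.array([0.30, 0.25, 0.40])
      corr = np.array([[1.0, 0.6, 0.2],
                       [0.6, 1.0, 0.5],
                       [0.2, 0.5, 1.0]])
      C_input = np.outer(sd, sd) * corr

      C_head = J @ C_input @ J.T                 # first-order output covariance
      head_sd = np.sqrt(np.diag(C_head))         # standard deviation of each head
      print(np.round(head_sd, 3))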

  6. A third-order class-D amplifier with and without ripple compensation

    NASA Astrophysics Data System (ADS)

    Cox, Stephen M.; du Toit Mouton, H.

    2018-06-01

    We analyse the nonlinear behaviour of a third-order class-D amplifier, and demonstrate the remarkable effectiveness of the recently introduced ripple compensation (RC) technique in reducing the audio distortion of the device. The amplifier converts an input audio signal to a high-frequency train of rectangular pulses, whose widths are modulated according to the input signal (pulse-width modulation) and employs negative feedback. After determining the steady-state operating point for constant input and calculating its stability, we derive a small-signal model (SSM), which yields in closed form the transfer function relating (infinitesimal) input and output disturbances. This SSM shows how the RC technique is able to linearise the small-signal response of the device. We extend this SSM through a fully nonlinear perturbation calculation of the dynamics of the amplifier, based on the disparity in time scales between the pulse train and the audio signal. We obtain the nonlinear response of the amplifier to a general audio signal, avoiding the linearisation inherent in the SSM; we thereby more precisely quantify the reduction in distortion achieved through RC. Finally, simulations corroborate our theoretical predictions and illustrate the dramatic deterioration in performance that occurs when the amplifier is operated in an unstable regime. The perturbation calculation is rather general, and may be adapted to quantify the way in which other nonlinear negative-feedback pulse-modulated devices track a time-varying input signal that slowly modulates the system parameters.
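    The record's analysis concerns a third-order feedback amplifier and is not reproduced here; the minimal sketch below only illustrates the pulse-width-modulation step it starts from, comparing a slow audio tone with a fast triangular carrier so that the pulse widths track the signal. The tone, carrier frequency, and sample rate are assumed values.

      # Minimal naturally-sampled PWM illustration (not the paper's amplifier model):
      # a slow audio tone compared with a fast triangular carrier yields a
      # rectangular pulse train whose widths track the audio signal.
      import numpy as np
      from scipy.signal import sawtooth

      fs = 2_000_000                      # simulation sample rate, Hz
      t = np.arange(0, 0.002, 1 / fs)     # 2 ms of signal

      audio = 0.8 * np.sin(2 * np.pi * 1000 * t)               # 1 kHz input tone
      carrier = sawtooth(2 * np.pi * 250_000 * t, width=0.5)   # 250 kHz triangle

      pwm = np.where(audio > carrier, 1.0, -1.0)               # comparator output

      # The audio is recovered (approximately) by low-pass filtering the pulses;
      # here a crude per-carrier-period average of the duty cycle is printed.
      duty = 0.5 * (pwm.reshape(-1, 8).mean(axis=1) + 1)
      print(duty[:5])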

  7. GREENSCOPE: A Method for Modeling Chemical Process ...

    EPA Pesticide Factsheets

    Current work within the U.S. Environmental Protection Agency’s National Risk Management Research Laboratory is focused on the development of a method for modeling chemical process sustainability. The GREENSCOPE methodology, defined for the four bases of Environment, Economics, Efficiency, and Energy, can evaluate processes with over a hundred different indicators. These indicators provide a means for realizing the principles of green chemistry and green engineering in the context of sustainability. Development of the methodology has centered around three focal points. One is a taxonomy of impacts that describe the indicators and provide absolute scales for their evaluation. The setting of best and worst limits for the indicators allows the user to know the status of the process under study in relation to understood values. Thus, existing or imagined processes can be evaluated according to their relative indicator scores, and process modifications can strive towards realizable targets. A second area of focus is in advancing definitions of data needs for the many indicators of the taxonomy. Each of the indicators requires specific data for its calculation. Values needed and data sources have been identified. These needs can be mapped according to the information source (e.g., input stream, output stream, external data, etc.) for each of the bases. The user can visualize data-indicator relationships on the way to choosing selected ones for evalua
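    The best/worst-limit scoring described here reduces to a simple normalization of each indicator onto an absolute scale. The sketch below shows that generic arithmetic; the limits and indicator values are illustrative assumptions, not GREENSCOPE's published equations or data.

      # Generic best/worst normalization of a sustainability indicator (illustrative,
      # not GREENSCOPE's published equations): 100% = best-case limit, 0% = worst case.
      def indicator_score(actual, best, worst):
          """Return the indicator score as a percentage of the best-worst range."""
          score = 100.0 * (actual - worst) / (best - worst)
          return max(0.0, min(100.0, score))   # clamp to the defined scale

      # Hypothetical mass-efficiency indicator: best = 100% yield, worst = 20%.
      print(indicator_score(actual=78.0, best=100.0, worst=20.0))   # -> 72.5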

  8. Solar tower cavity receiver aperture optimization based on transient optical and thermo-hydraulic modeling

    NASA Astrophysics Data System (ADS)

    Schöttl, Peter; Bern, Gregor; van Rooyen, De Wet; Heimsath, Anna; Fluri, Thomas; Nitz, Peter

    2017-06-01

    A transient simulation methodology for cavity receivers for Solar Tower Central Receiver Systems with molten salt as heat transfer fluid is described. Absorbed solar radiation is modeled with ray tracing and a sky discretization approach to reduce computational effort. Solar radiation re-distribution in the cavity as well as thermal radiation exchange are modeled based on view factors, which are also calculated with ray tracing. An analytical approach is used to represent convective heat transfer in the cavity. Heat transfer fluid flow is simulated with a discrete tube model, where the boundary conditions at the outer tube surface mainly depend on inputs from the previously mentioned modeling aspects. A specific focus is put on the integration of optical and thermo-hydraulic models. Furthermore, aiming point and control strategies are described, which are used during the transient performance assessment. Eventually, the developed simulation methodology is used for the optimization of the aperture opening size of a PS10-like reference scenario with cavity receiver and heliostat field. The objective function is based on the cumulative gain of one representative day. Results include optimized aperture opening size, transient receiver characteristics and benefits of the implemented aiming point strategy compared to a single aiming point approach. Future work will include annual simulations, cost assessment and optimization of a larger range of receiver parameters.

  9. Subsea release of oil from a riser: an ecological risk assessment.

    PubMed

    Nazir, Muddassir; Khan, Faisal; Amyotte, Paul; Sadiq, Rehan

    2008-10-01

    This study illustrates a newly developed methodology, as a part of the U.S. EPA ecological risk assessment (ERA) framework, to predict exposure concentrations in a marine environment due to underwater release of oil and gas. It combines the hydrodynamics of underwater blowout, weathering algorithms, and multimedia fate and transport to measure the exposure concentration. Naphthalene and methane are used as surrogate compounds for oil and gas, respectively. Uncertainties are accounted for in multimedia input parameters in the analysis. The 95th percentile of the exposure concentration (EC(95%)) is taken as the representative exposure concentration for the risk estimation. A bootstrapping method is utilized to characterize EC(95%) and associated uncertainty. The toxicity data of 19 species available in the literature are used to calculate the 5th percentile of the predicted no observed effect concentration (PNEC(5%)) by employing the bootstrapping method. The risk is characterized by transforming the risk quotient (RQ), which is the ratio of EC(95%) to PNEC(5%), into a cumulative risk distribution. This article describes a probabilistic basis for the ERA, which is essential from risk management and decision-making viewpoints. Two case studies of underwater oil and gas mixture release, and oil release with no gaseous mixture are used to show the systematic implementation of the methodology, elements of ERA, and the probabilistic method in assessing and characterizing the risk.
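    The bootstrapping and risk-quotient steps described in this record can be sketched as below: resample exposure concentrations for EC(95%), resample species toxicity values for PNEC(5%), and form the RQ distribution. All data values are synthetic placeholders, not the study's naphthalene or methane results.

      # Bootstrap sketch of the risk-characterization step: EC(95%) from simulated
      # exposure concentrations, PNEC(5%) from species toxicity data, and their
      # ratio RQ. Data values are synthetic placeholders.
      import numpy as np

      rng = np.random.default_rng(5)
      exposure = rng.lognormal(mean=-2.0, sigma=0.8, size=500)   # µg/L, model output
      toxicity = rng.lognormal(mean=1.0, sigma=1.0, size=19)     # µg/L, 19 species

      n_boot = 2000
      rq = np.empty(n_boot)
      for i in range(n_boot):
          ec95 = np.percentile(rng.choice(exposure, exposure.size, replace=True), 95)
          pnec5 = np.percentile(rng.choice(toxicity, toxicity.size, replace=True), 5)
          rq[i] = ec95 / pnec5

      print(f"median RQ = {np.median(rq):.3f}, P(RQ > 1) = {(rq > 1).mean():.2f}")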

  10. Stakeholder Meetings on Black Carbon from Diesel Sources in the Russian Arctic

    EPA Pesticide Factsheets

    From January 28-February 1, 2013, EPA and its partners held meetings in Murmansk and Moscow with key Russian stakeholders to gather input into the project’s emissions inventory methodologies and potential pilot project ideas.

  11. A distorted-wave methodology for electron-ion impact excitation - Calculation for two-electron ions

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.; Temkin, A.

    1977-01-01

    A distorted-wave program is being developed for calculating the excitation of few-electron ions by electron impact. It uses the exchange approximation to represent the exact initial-state wavefunction in the T-matrix expression for the excitation amplitude. The program has been implemented for excitation of the 2¹,³S and 2¹,³P states of two-electron ions. Some of the astrophysical applications of these cross sections as well as the motivation and requirements of the calculational methodology are discussed.

  12. Aeolian controls of soil geochemistry and weathering fluxes in high-elevation ecosystems of the Rocky Mountains, Colorado

    USGS Publications Warehouse

    Lawrence, Corey R.; Reynolds, Richard L.; Ketterer, Michael E.; Neff, Jason C.

    2013-01-01

    When dust inputs are large or have persisted for long periods of time, the signature of dust additions is often apparent in soils. The influence of dust will be greatest where the geochemical composition of dust is distinct from local sources of soil parent material. In this study the influence of dust accretion on soil geochemistry is quantified for two different soils from the San Juan Mountains of southwestern Colorado, USA. At both study sites, dust is enriched in several trace elements relative to local rock, especially Cd, Cu, Pb, and Zn. Mass-balance calculations that do not explicitly account for dust inputs indicate the accumulation of some elements in soil beyond what can be explained by weathering of local rock. Most observed elemental enrichments are explained by accounting for the long-term accretion of dust, based on modern isotopic and geochemical estimates. One notable exception is Pb, which based on mass-balance calculations and isotopic measurements may have an additional source at one of the study sites. These results suggest that dust is a major factor influencing the development of soil in these settings and is also an important control of soil weathering fluxes. After accounting for dust inputs in mass-balance calculations, Si weathering fluxes from San Juan Mountain soils are within the range observed for other temperate systems. Comparing dust inputs with mass-balance-based flux estimates suggests dust could account for as much as 50–80% of total long-term chemical weathering fluxes. These results support the notion that dust inputs may sustain chemical weathering fluxes even in relatively young continental settings. Given the widespread input of far-traveled dust, the weathering of dust is likely an important and underappreciated aspect of the global weathering engine.
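    A simplified version of the kind of dust-aware mass balance described here is sketched below: compare the element inventory delivered by an assumed long-term dust flux with the excess observed in the soil column relative to local rock. All numbers are illustrative assumptions, not the San Juan Mountain data.

      # Simplified check of whether long-term dust accretion can explain an element's
      # enrichment in soil (illustrative numbers, not the authors' site data).
      dust_flux_g_m2_yr = 10.0        # assumed modern dust deposition rate
      duration_yr = 10_000.0          # assumed accumulation time
      pb_dust_ppm = 60.0              # Pb concentration in dust
      pb_rock_ppm = 15.0              # Pb concentration in local rock
      pb_soil_ppm = 25.0              # measured Pb concentration in soil
      soil_mass_g_m2 = 3.0e5          # ~30 cm of soil at 1 g/cm^3

      # Dust-derived Pb inventory vs. the excess Pb observed in the soil column.
      pb_from_dust = dust_flux_g_m2_yr * duration_yr * pb_dust_ppm * 1e-6      # g/m^2
      pb_excess = soil_mass_g_m2 * (pb_soil_ppm - pb_rock_ppm) * 1e-6          # g/m^2

      print(f"dust-derived Pb: {pb_from_dust:.1f} g/m^2, observed excess: {pb_excess:.1f} g/m^2")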

  13. Aeolian controls of soil geochemistry and weathering fluxes in high-elevation ecosystems of the Rocky Mountains, Colorado

    NASA Astrophysics Data System (ADS)

    Lawrence, Corey R.; Reynolds, Richard L.; Ketterer, Michael E.; Neff, Jason C.

    2013-04-01

    When dust inputs are large or have persisted for long periods of time, the signature of dust additions is often apparent in soils. The influence of dust will be greatest where the geochemical composition of dust is distinct from local sources of soil parent material. In this study the influence of dust accretion on soil geochemistry is quantified for two different soils from the San Juan Mountains of southwestern Colorado, USA. At both study sites, dust is enriched in several trace elements relative to local rock, especially Cd, Cu, Pb, and Zn. Mass-balance calculations that do not explicitly account for dust inputs indicate the accumulation of some elements in soil beyond what can be explained by weathering of local rock. Most observed elemental enrichments are explained by accounting for the long-term accretion of dust, based on modern isotopic and geochemical estimates. One notable exception is Pb, which based on mass-balance calculations and isotopic measurements may have an additional source at one of the study sites. These results suggest that dust is a major factor influencing the development of soil in these settings and is also an important control of soil weathering fluxes. After accounting for dust inputs in mass-balance calculations, Si weathering fluxes from San Juan Mountain soils are within the range observed for other temperate systems. Comparing dust inputs with mass-balance-based flux estimates suggests dust could account for as much as 50-80% of total long-term chemical weathering fluxes. These results support the notion that dust inputs may sustain chemical weathering fluxes even in relatively young continental settings. Given the widespread input of far-traveled dust, the weathering of dust is likely an important and underappreciated aspect of the global weathering engine.

  14. Large-Signal Klystron Simulations Using KLSC

    NASA Astrophysics Data System (ADS)

    Carlsten, B. E.; Ferguson, P.

    1997-05-01

    We describe a new, 2-1/2 dimensional, klystron-simulation code, KLSC. This code has a sophisticated input cavity model for calculating the klystron gain with arbitrary input cavity matching and tuning, and is capable of modeling coupled output cavities. We will discuss the input and output cavity models, and present simulation results from a high-power, S-band design. We will use these results to explore tuning issues with coupled output cavities.

  15. Broadband Heating Rate Profile Project (BBHRP) - SGP ripbe370mcfarlane

    DOE Data Explorer

    Riihimaki, Laura; Shippert, Timothy

    2014-11-05

    The objective of the ARM Broadband Heating Rate Profile (BBHRP) Project is to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Required inputs to BBHRP include surface albedo and profiles of atmospheric state (temperature, humidity), gas concentrations, aerosol properties, and cloud properties. In the past year, the Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to combine all of the input properties needed for BBHRP into a single gridded input file. Additionally, an interface between the RIPBE input file and the RRTM was developed using the new ARM integrated software development environment (ISDE) and effort was put into developing quality control (qc) flags and provenance information on the BBHRP output files so that analysis of the output would be more straightforward. This new version of BBHRP, sgp1bbhrpripbeC1.c1, uses the RIPBE files as input to RRTM, and calculates broadband SW and LW fluxes and heating rates at 1-min resolution using the independent column approximation. The vertical resolution is 45 m in the lower and middle troposphere to match the input cloud properties, but is at coarser resolution in the upper atmosphere. Unlike previous versions, the vertical grid is the same for both clear-sky and cloudy-sky calculations.

  16. Broadband Heating Rate Profile Project (BBHRP) - SGP 1bbhrpripbe1mcfarlane

    DOE Data Explorer

    Riihimaki, Laura; Shippert, Timothy

    2014-11-05

    The objective of the ARM Broadband Heating Rate Profile (BBHRP) Project is to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Required inputs to BBHRP include surface albedo and profiles of atmospheric state (temperature, humidity), gas concentrations, aerosol properties, and cloud properties. In the past year, the Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to combine all of the input properties needed for BBHRP into a single gridded input file. Additionally, an interface between the RIPBE input file and the RRTM was developed using the new ARM integrated software development environment (ISDE) and effort was put into developing quality control (qc) flags and provenance information on the BBHRP output files so that analysis of the output would be more straightforward. This new version of BBHRP, sgp1bbhrpripbeC1.c1, uses the RIPBE files as input to RRTM, and calculates broadband SW and LW fluxes and heating rates at 1-min resolution using the independent column approximation. The vertical resolution is 45 m in the lower and middle troposphere to match the input cloud properties, but is at coarser resolution in the upper atmosphere. Unlike previous versions, the vertical grid is the same for both clear-sky and cloudy-sky calculations.

  17. Broadband Heating Rate Profile Project (BBHRP) - SGP ripbe1mcfarlane

    DOE Data Explorer

    Riihimaki, Laura; Shippert, Timothy

    2014-11-05

    The objective of the ARM Broadband Heating Rate Profile (BBHRP) Project is to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Required inputs to BBHRP include surface albedo and profiles of atmospheric state (temperature, humidity), gas concentrations, aerosol properties, and cloud properties. In the past year, the Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to combine all of the input properties needed for BBHRP into a single gridded input file. Additionally, an interface between the RIPBE input file and the RRTM was developed using the new ARM integrated software development environment (ISDE) and effort was put into developing quality control (qc) flags and provenance information on the BBHRP output files so that analysis of the output would be more straightforward. This new version of BBHRP, sgp1bbhrpripbeC1.c1, uses the RIPBE files as input to RRTM, and calculates broadband SW and LW fluxes and heating rates at 1-min resolution using the independent column approximation. The vertical resolution is 45 m in the lower and middle troposphere to match the input cloud properties, but is at coarser resolution in the upper atmosphere. Unlike previous versions, the vertical grid is the same for both clear-sky and cloudy-sky calculations.

  18. Do Community Based Initiatives foster sustainability transitions? Towards a unique Environmental Impact Assessment.

    NASA Astrophysics Data System (ADS)

    Martellozzo, Federico; Hendrickson, Cary; Gozdowska, Iga; Groß, Helge; Henderson, Charles; Reusser, Dominik

    2015-04-01

    Active participation in Community Based Initiatives (CBIs) is a spreading phenomenon that has reached a significant magnitude, and in some cases CBIs are also thought to have catalysed social and technological innovation, thus contributing to the global transition to a low-carbon economy. Generally speaking, CBIs are grassroots initiatives with broad sustainability foci that promote a plethora of activities such as alternative transportation, urban gardening, renewable energy implementation, waste regeneration/reduction, etc. Some argue that such practices, fostered by bottom-up activities rather than top-down policies, represent an effective countermeasure to alleviate global environmental change and to foster a societal transition towards sustainability. However, thus far most empirical research rests mainly on anecdotal evidence, and little work has been done to quantitatively assess CBIs' "environmental impacts" (EI) or their carbon footprints using comparative methodologies. The main aim of this research is to frame a methodology to assess CBIs' EIs univocally, which is crucial to understanding their role in the societal sustainability transition. However, to do so, three main caveats need to be addressed: first, some CBIs do not directly produce tangible measurable outputs, nor have an intelligibly defined set of inputs (e.g. CBIs focusing on environmental education and awareness raising). Thus, calculating their "indirect" EI may represent an intricate puzzle that is very much open to subjective interpretation. Second, CBIs' practices are heterogeneous, and therefore existing methodologies for comparing their EIs are neither straightforward nor efficient, also given the lack of available data. Third, another issue closely related to the previous one is a general lack of consensus among existing impact-assessment frameworks for certain practices (e.g. composting). A potential way of estimating a CBI's EI is a standard Carbon Accounting assessment where all possible sources and inputs are assessed in terms of reduced EI in the conversion and production of outputs. However, this is a very complex and time-consuming task for which data availability issues abound. Alternatively, the EI per unit of output of each CBI can be evaluated and compared with the equivalent from a standard counterfactual in a sort of Comparative Carbon Accounting fashion. This results in an EI assessment (EIA) that is not activity-specific and can reasonably be used for wide-spectrum comparison regardless of a CBI's predominant activity. This paper first theoretically frames the obstacles to be overcome in conceptualizing a meaningful EI assessment. Second, context variables such as conversion factors and counterfactuals are established for numerous European CBIs in various countries (the latter were mapped by the TESS-Transition FP7 Project). Third, an original EI indicator for CBIs based on a Comparative Carbon Accounting methodology is proposed and tested. Finally, some preliminary findings from the application of this methodology to the investigated CBIs are presented, and a potential comparison of these preliminary results with some of the planetary boundaries is discussed. While we are aware that several caveats still need to be further explored and addressed, this novel application of a comparative methodology offers much to the existing literature on CBIs' impact assessment.

  19. Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Reid, Terry; Schifer, Nicholas; Briggs, Maxwell

    2011-01-01

    Past methods of predicting net heat input needed to be validated. The validation effort pursued several paths, including improving model inputs, using test hardware to provide validation data, and validating high-fidelity models. The validation test hardware provided a direct measurement of net heat input for comparison to predicted values. The predicted value of net heat input was 1.7 percent less than the measured value, and initial calculations of measurement uncertainty were 2.1 percent (under review). Lessons learned during the validation effort were incorporated into the convertor modeling approach, which improved predictions of convertor efficiency.

  20. 78 FR 72862 - Wooden Bedroom Furniture From the People's Republic of China: Notice of Court Decision Not in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-04

    ... Redetermination I regarding the surrogate value for the input poly foam,\\4\\ which the Court sustained in Home... inputs, poly foam, and the calculation of the surrogate financial ratios, constitutes a final decision of...

  1. cluster trials v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, John; Castillo, Andrew

    2016-09-21

    This software contains a set of python modules – input, search, cluster, analysis; these modules read input files containing spatial coordinates and associated attributes which can be used to perform nearest neighbor search (spatial indexing via kdtree), cluster analysis/identification, and calculation of spatial statistics for analysis.
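    The record only lists the module names, so the sketch below illustrates the described workflow (read coordinates, nearest-neighbor search via a k-d tree, then cluster identification) with standard libraries; it is not the packaged "cluster trials" code, and the data and parameters are invented.

      # Sketch of the workflow the record describes (not the packaged code): build a
      # k-d tree for nearest-neighbor search, then identify clusters of points.
      import numpy as np
      from scipy.spatial import cKDTree
      from sklearn.cluster import DBSCAN

      rng = np.random.default_rng(6)
      points = np.vstack([rng.normal(0, 0.5, (50, 3)),      # two synthetic clusters
                          rng.normal(5, 0.5, (50, 3))])

      tree = cKDTree(points)
      dist, idx = tree.query(points, k=2)        # nearest neighbor of every point
      print("mean nearest-neighbor distance:", dist[:, 1].mean())

      labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(points)
      print("clusters found:", len(set(labels) - {-1}))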

  2. 7 CFR 1424.4 - General eligibility rules.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... OF AGRICULTURE LOANS, PURCHASES, AND OTHER OPERATIONS BIOENERGY PROGRAM § 1424.4 General eligibility.... (d) For producers not purchasing raw commodity inputs, the production must equal or exceed that amount of production that would be calculated using the raw commodity inputs and the conversion factor...

  3. Applications of information theory, genetic algorithms, and neural models to predict oil flow

    NASA Astrophysics Data System (ADS)

    Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto

    2009-07-01

    This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF) proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship among the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that have the necessary information to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
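    The paper's XEF and GA-based JCE minimization are not reproduced here; the sketch below only illustrates the general idea of scoring candidate input lags with an information-theoretic criterion, using mutual information as a stand-in, on a synthetic non-linear system.

      # Illustrative stand-in for information-theoretic lag selection (not the paper's
      # XEF/JCE method): score candidate lags of an input signal against the output
      # with mutual information.
      import numpy as np
      from sklearn.feature_selection import mutual_info_regression

      rng = np.random.default_rng(7)
      n = 2000
      u = rng.normal(size=n)                        # input signal
      y = np.zeros(n)
      for k in range(3, n):                         # output depends on lags 1 and 3
          y[k] = 0.6 * u[k - 1] - 0.4 * u[k - 3] ** 2 + 0.05 * rng.normal()

      max_lag = 6
      X = np.column_stack([u[max_lag - L : n - L] for L in range(1, max_lag + 1)])
      target = y[max_lag:]

      mi = mutual_info_regression(X, target, random_state=0)
      for L, score in enumerate(mi, start=1):
          print(f"lag {L}: MI = {score:.3f}")       # lags 1 and 3 should stand out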

  4. Waste Package Component Design Methodology Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D.C. Mecham

    2004-07-12

    This Executive Summary provides an overview of the methodology being used by the Yucca Mountain Project (YMP) to design waste packages and ancillary components. This summary information is intended for readers with general interest, but also provides technical readers a general framework surrounding a variety of technical details provided in the main body of the report. The purpose of this report is to document and ensure appropriate design methods are used in the design of waste packages and ancillary components (the drip shields and emplacement pallets). The methodology includes identification of necessary design inputs, justification of design assumptions, and use of appropriate analysis methods and computational tools. This design work is subject to 'Quality Assurance Requirements and Description'. The document is primarily intended for internal use and technical guidance for a variety of design activities. It is recognized that a wide audience, including project management, the U.S. Department of Energy (DOE), the U.S. Nuclear Regulatory Commission, and others, is interested to various levels of detail in the design methods; the report therefore covers a wide range of topics at varying levels of detail. Due to the preliminary nature of the design, readers can expect to encounter varied levels of detail in the body of the report. It is expected that technical information used as input to design documents will be verified and taken from the latest versions of reference sources given herein. This revision of the methodology report has evolved with changes in the waste package, drip shield, and emplacement pallet designs over many years and may be further revised as the design is finalized. Different components and analyses are at different stages of development. Some parts of the report are detailed, while other less detailed parts are likely to undergo further refinement. The design methodology is intended to provide designs that satisfy the safety and operational requirements of the YMP. Four waste package configurations have been selected to illustrate the application of the methodology during the licensing process. These four configurations are the 21-pressurized water reactor absorber plate waste package (21-PWRAP), the 44-boiling water reactor waste package (44-BWR), the 5 defense high-level radioactive waste (HLW) DOE spent nuclear fuel (SNF) codisposal short waste package (5-DHLWDOE SNF Short), and the naval canistered SNF long waste package (Naval SNF Long). Design work for the other six waste packages will be completed at a later date using the same design methodology. These include the 24-boiling water reactor waste package (24-BWR), the 21-pressurized water reactor control rod waste package (21-PWRCR), the 12-pressurized water reactor waste package (12-PWR), the 5 defense HLW DOE SNF codisposal long waste package (5-DHLWDOE SNF Long), the 2 defense HLW DOE SNF codisposal waste package (2-MC012-DHLW), and the naval canistered SNF short waste package (Naval SNF Short). This report is only part of the complete design description. Other reports related to the design include the design reports, the waste package system description documents, manufacturing specifications, and numerous documents for the many detailed calculations. The relationships between this report and other design documents are shown in Figure 1.

  5. A study of start-up characteristics of a potassium heat pipe from the frozen state

    NASA Technical Reports Server (NTRS)

    Jang, Jong Hoon

    1992-01-01

    The start-up characteristics of a potassium heat pipe were studied both analytically and experimentally. The heat pipe was tested in a vacuum chamber using the radiation heat transfer mode. The transition temperature calculated for potassium was then compared with the experimental results of the heat pipe with various heat inputs. These results show that the heat pipe was inactive until it reached the transition temperature. In addition, during the start-up period, the evaporator experienced dry-out with a heat input smaller than the capillary limit calculated at steady state. However, when the working fluid at the condenser was completely melted, the evaporator was rewetted without external aid. The start-up period was significantly reduced with a large heat input.

  6. Modeling the Ionosphere-Thermosphere Response to a Geomagnetic Storm Using Physics-based Magnetospheric Energy Input: OpenGGCM-CTIM Results

    NASA Technical Reports Server (NTRS)

    Connor, Hyunju K.; Zesta, Eftyhia; Fedrizzi, Mariangel; Shi, Yong; Raeder, Joachim; Codrescu, Mihail V.; Fuller-Rowell, Tim J.

    2016-01-01

    The magnetosphere is a major source of energy for the Earth's ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input. Thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Global General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that is preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-earth-orbit satellite observations and with the model results of Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at approx. 400 km altitude in the high-latitude dayside regions in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is in some sense a much superior model to CTIM, it misses these localized enhancements. Unlike the CTIPe empirical input models, OpenGGCM-CTIM more faithfully produces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset, which in turn effectively heats the thermosphere and causes the neutral density increase at 400 km altitude.

  7. SURVIAC Bulletin: RPG Encounter Modeling, Vol 27, Issue 1, 2012

    DTIC Science & Technology

    2012-01-01

    return a probability of hit (PHIT) for the scenario. In the model, PHIT depends on the presented area of the targeted system and a set of errors infl...simplifying assumptions, is data-driven, and uses simple yet proven methodologies to determine PHIT. The inputs to THREAT describe the target, the RPG, and...Point on 2-D Representation of a CH-47 The determination of PHIT by THREAT is performed using one of two possible methodologies. The first is a

  8. Formalization and Validation of an SADT Specification Through Executable Simulation in VHDL

    DTIC Science & Technology

    1991-12-01

    be found in (39, 40, 41). One recent summary of the SADT methodology was written by Marca and McGowan in 1988 (32). SADT is a methodology to provide...that is required. Also, the presence of "all" inputs and controls may not be needed for the activity to proceed. Marca and McGowan (32) describe a...diagrams which describe a complete system. Marca and McGowan define an SADT Model as: "a collection of carefully coordinated descriptions, starting from a

  9. BIREFRINGENT FILTER MODEL

    NASA Technical Reports Server (NTRS)

    Cross, P. L.

    1994-01-01

    Birefringent filters are often used as line-narrowing components in solid state lasers. The Birefringent Filter Model program generates a stand-alone model of a birefringent filter for use in designing and analyzing a birefringent filter. It was originally developed to aid in the design of solid state lasers to be used on aircraft or spacecraft to perform remote sensing of the atmosphere. The model is general enough to allow the user to address problems such as temperature stability requirements, manufacturing tolerances, and alignment tolerances. The input parameters for the program are divided into 7 groups: 1) general parameters which refer to all elements of the filter; 2) wavelength related parameters; 3) filter, coating and orientation parameters; 4) input ray parameters; 5) output device specifications; 6) component related parameters; and 7) transmission profile parameters. The program can analyze a birefringent filter with up to 12 different components, and can calculate the transmission and summary parameters for multiple passes as well as a single pass through the filter. The Jones matrix, which is calculated from the input parameters of Groups 1 through 4, is used to calculate the transmission. Output files containing the calculated transmission or the calculated Jones' matrix as a function of wavelength can be created. These output files can then be used as inputs for user written programs. For example, to plot the transmission or to calculate the eigen-transmittances and the corresponding eigen-polarizations for the Jones' matrix, write the appropriate data to a file. The Birefringent Filter Model is written in Microsoft FORTRAN 2.0. The program format is interactive. It was developed on an IBM PC XT equipped with an 8087 math coprocessor, and has a central memory requirement of approximately 154K. Since Microsoft FORTRAN 2.0 does not support complex arithmetic, matrix routines for addition, subtraction, and multiplication of complex, double precision variables are included. The Birefringent Filter Model was written in 1987.
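    The original program is FORTRAN and is not reproduced here; the minimal Jones-calculus sketch below only illustrates the kind of calculation it performs, the wavelength-dependent transmission of one birefringent plate between parallel polarizers. The plate birefringence, thickness, orientation, and wavelengths are assumed values.

      # Minimal Jones-calculus sketch (not the FORTRAN program itself): transmission
      # of one birefringent plate between parallel polarizers versus wavelength.
      import numpy as np

      def waveplate(delta, theta):
          """Jones matrix of a retarder with retardance delta, fast axis at theta."""
          c, s = np.cos(theta), np.sin(theta)
          R = np.array([[c, -s], [s, c]])
          W = np.array([[np.exp(-1j * delta / 2), 0],
                        [0, np.exp(1j * delta / 2)]])
          return R @ W @ R.T

      polarizer_x = np.array([[1, 0], [0, 0]])
      dn, thickness = 0.0091, 2.0e-3          # birefringence and plate thickness (m), assumed
      theta = np.deg2rad(45)                  # fast axis at 45 degrees to the polarizer

      for lam in np.linspace(1.00e-6, 1.10e-6, 5):
          delta = 2 * np.pi * dn * thickness / lam
          J = polarizer_x @ waveplate(delta, theta) @ polarizer_x
          E_out = J @ np.array([1.0, 0.0])    # x-polarized input field
          print(f"{lam*1e9:.0f} nm: T = {np.abs(E_out[0])**2:.3f}")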

  10. Reliable and accurate point-based prediction of cumulative infiltration using soil readily available characteristics: A comparison between GMDH, ANN, and MLR

    NASA Astrophysics Data System (ADS)

    Rahmati, Mehdi

    2017-08-01

    Developing accurate and reliable pedo-transfer functions (PTFs) to predict soil non-readily available characteristics is one of the topics of greatest concern in soil science, and selecting appropriate predictors is a crucial factor in PTF development. The group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure to select the most essential PTF input variables, but also results in more accurate and reliable estimates than other commonly applied methodologies. Therefore, the current research aimed to apply GMDH, in comparison with multivariate linear regression (MLR) and artificial neural networks (ANN), to develop several PTFs to predict soil cumulative infiltration point-wise at specific time intervals (0.5-45 min) using soil readily available characteristics (RACs). In this regard, soil infiltration curves as well as several soil RACs, including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field saturated (θfs) water contents, were measured at 134 different points in the Lighvan watershed, northwest of Iran. Then, applying the GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, results showed that the PTFs developed by the GMDH and MLR procedures using all soil RACs including Ks resulted in more accurate (with E values of 0.673-0.963) and reliable (with CV values lower than 11 percent) predictions of cumulative infiltration at different specific time steps. In contrast, the ANN procedure had lower accuracy (with E values of 0.356-0.890) and reliability (with CV values up to 50 percent) compared to GMDH and MLR. The results also revealed that excluding Ks from the input variable list caused around a 30 percent decrease in PTF accuracy for all applied procedures. However, it seems that excluding Ks resulted in more practical PTFs, especially in the case of the GMDH network, because the remaining input variables are less time-consuming to measure than Ks. In general, it is concluded that GMDH provides more accurate and reliable estimates of cumulative infiltration (a non-readily available soil characteristic) with a minimum set of input variables (2-4 input variables) and can be a promising strategy for modeling soil infiltration, combining the advantages of the ANN and MLR methodologies.
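    GMDH itself is not widely packaged in Python, so the sketch below only contrasts an MLR and a small ANN pedo-transfer function on synthetic predictors, scored with the Nash-Sutcliffe efficiency E used in this record. The predictors, target relationship, and network size are illustrative assumptions, not the Lighvan data or the paper's models.

      # Sketch contrasting an MLR and a small ANN pedo-transfer function on synthetic
      # data (GMDH is not included here); models are scored with Nash-Sutcliffe E.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(8)
      n = 300
      X = rng.uniform(size=(n, 4))                    # stand-ins for clay, sand, Db, OC
      y = 5 + 8 * X[:, 0] - 3 * X[:, 1] ** 2 + 2 * X[:, 2] * X[:, 3] \
          + rng.normal(0, 0.5, n)                     # synthetic cumulative infiltration

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      def nash_sutcliffe(obs, sim):
          return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      mlr = LinearRegression().fit(X_tr, y_tr)
      ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                         random_state=0).fit(X_tr, y_tr)

      print("MLR E =", round(nash_sutcliffe(y_te, mlr.predict(X_te)), 3))
      print("ANN E =", round(nash_sutcliffe(y_te, ann.predict(X_te)), 3))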

  11. The HHS-HCC Risk Adjustment Model for Individual and Small Group Markets under the Affordable Care Act

    PubMed Central

    Kautter, John; Pope, Gregory C; Ingber, Melvin; Freeman, Sara; Patterson, Lindsey; Cohen, Michael; Keenan, Patricia

    2014-01-01

    Beginning in 2014, individuals and small businesses are able to purchase private health insurance through competitive Marketplaces. The Affordable Care Act (ACA) provides for a program of risk adjustment in the individual and small group markets in 2014 as Marketplaces are implemented and new market reforms take effect. The purpose of risk adjustment is to lessen or eliminate the influence of risk selection on the premiums that plans charge. The risk adjustment methodology includes the risk adjustment model and the risk transfer formula. This article is the second of three in this issue of the Review that describe the Department of Health and Human Services (HHS) risk adjustment methodology and focuses on the risk adjustment model. In our first companion article, we discuss the key issues and choices in developing the methodology. In this article, we present the risk adjustment model, which is named the HHS-Hierarchical Condition Categories (HHS-HCC) risk adjustment model. We first summarize the HHS-HCC diagnostic classification, which is the key element of the risk adjustment model. Then the data and methods, results, and evaluation of the risk adjustment model are presented. Fifteen separate models are developed. For each age group (adult, child, and infant), a model is developed for each cost sharing level (platinum, gold, silver, and bronze metal levels, as well as catastrophic plans). Evaluation of the risk adjustment models shows good predictive accuracy, both for individuals and for groups. Lastly, this article provides examples of how the model output is used to calculate risk scores, which are an input into the risk transfer formula. Our third companion paper describes the risk transfer formula. PMID:25360387
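    The article's risk scores are sums of model coefficients for an enrollee's demographic cell and diagnosed condition categories, with hierarchies suppressing less severe conditions. The sketch below shows that mechanics only; the coefficients, category names, and hierarchy are made-up placeholders, not the published HHS-HCC factors.

      # Illustrative risk-score calculation in the spirit of an HCC model (coefficients
      # and category names are made up, not the published HHS-HCC factors).
      DEMO_COEF = {"adult_F_35_44": 0.35, "adult_M_35_44": 0.33}
      HCC_COEF = {"HCC_diabetes_w_complications": 0.60,
                  "HCC_diabetes_wo_complications": 0.30,
                  "HCC_asthma": 0.25}
      # Hierarchy: the more severe condition suppresses the less severe one.
      HIERARCHY = {"HCC_diabetes_w_complications": ["HCC_diabetes_wo_complications"]}

      def risk_score(demo_cell, hccs):
          hccs = set(hccs)
          for parent, children in HIERARCHY.items():
              if parent in hccs:
                  hccs -= set(children)
          return DEMO_COEF[demo_cell] + sum(HCC_COEF[h] for h in hccs)

      print(risk_score("adult_F_35_44",
                       ["HCC_diabetes_w_complications",
                        "HCC_diabetes_wo_complications",
                        "HCC_asthma"]))   # -> 0.35 + 0.60 + 0.25 = 1.20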

  12. The threshold bootstrap clustering: a new approach to find families or transmission clusters within molecular quasispecies.

    PubMed

    Prosperi, Mattia C F; De Luca, Andrea; Di Giambenedetto, Simona; Bracciale, Laura; Fabbiani, Massimiliano; Cauda, Roberto; Salemi, Marco

    2010-10-25

    Phylogenetic methods produce hierarchies of molecular species, inferring knowledge about taxonomy and evolution. However, there is not yet a consensus methodology that provides a crisp partition of taxa, desirable when considering the problem of intra/inter-patient quasispecies classification or infection transmission event identification. We introduce the threshold bootstrap clustering (TBC), a new methodology for partitioning molecular sequences that does not require phylogenetic tree estimation. The TBC is an incremental partition algorithm, inspired by the stochastic Chinese restaurant process, and takes advantage of resampling techniques and models of sequence evolution. TBC uses as input a multiple alignment of molecular sequences and its output is a crisp partition of the taxa into an automatically determined number of clusters. By varying initial conditions, the algorithm can produce different partitions. We describe a procedure that selects a prime partition among a set of candidate ones and calculates a measure of cluster reliability. TBC was successfully tested for the identification of human immunodeficiency virus type 1 and hepatitis C virus subtypes, and compared with previously established methodologies. It was also evaluated in the problem of HIV-1 intra-patient quasispecies clustering, and for transmission cluster identification, using a set of sequences from patients with known transmission event histories. TBC has been shown to be effective for the subtyping of HIV and HCV, and for identifying intra-patient quasispecies. To some extent, the algorithm was also able to infer clusters corresponding to events of infection transmission. The computational complexity of TBC is quadratic in the number of taxa, lower than other established methods; in addition, TBC has been enhanced with a measure of cluster reliability. The TBC can be useful to characterise molecular quasispecies in a broad context.

  13. eVerdEE: a web-based screening life-cycle assessment tool for European small and medium-sized enterprises

    NASA Astrophysics Data System (ADS)

    Naldesi, Luciano; Buttol, Patrizia; Masoni, Paolo; Misceo, Monica; Sára, Balázs

    2004-12-01

    "eLCA" is a European Commission financed project aimed at realising "On line green tools and services for Small and Medium-sized Enterprises (SMEs)". Knowledge and use of Life Cycle Assessment (LCA) by SMEs are strategic to introduce the Integrated Product Policy (IPP) in Europe, but methodology simplification is needed. LCA requires a large amount of validated general and sector specific data. Since their availability and cost can be insuperable barriers for SMEs, pre-elaborated data/meta-data, use of standards and low cost solutions are required. Within the framework of the eLCA project an LCA software - eVerdEE - based on a simplified methodology and specialised for SMEs has been developed. eVerdEE is a web-based tool with some innovative features. Its main feature is the adaptation of ISO 14040 requirements to offer easy-to-handle functions with solid scientific bases. Complex methodological problems, such as the system boundaries definition, the data quality estimation and documentation, the choice of impact categories, are simplified according to the SMEs" needs. Predefined "Goal and Scope definition" and "Inventory" forms, a user-friendly and well structured procedure are time and cost-effective. The tool is supported by a database containing pre-elaborated environmental indicators of substances and processes for different impact categories. The impact assessment is calculated automatically by using the user"s input and the database values. The results have different levels of interpretation in order to identify the life cycle critical points and the improvement options. The use of a target plot allows the direct comparison of different design alternatives.

  14. Alcohol and Traffic Safety.

    ERIC Educational Resources Information Center

    Dickman, Frances Baker, Ed.

    1988-01-01

    Seven papers discuss current issues and applied social research concerning alcohol traffic safety. Prevention, policy input, methodology, planning strategies, anti-drinking/driving programs, social-programmatic orientations of Mothers Against Drunk Driving, Kansas Driving Under the Influence Law, New Jersey Driving While Impaired Programs,…

  15. Seeded Fault Bearing Experiments: Methodology and Data Acquisition

    DTIC Science & Technology

    2011-06-01

    Only fragments of the abstract are recoverable: the instrumentation uses an integral electronics piezoelectric (IEPE) transducer, and constant-current-biased transducers require AC coupling for the output signal. The remaining text is glossary residue (I/O, input/output; IEPE, integral electronics piezoelectric; LCD, liquid crystal display; P&D, Prognostics and Diagnostics; RMS, root mean square).

  16. Mobility Research for Future Vehicles: A Methodology to Create a Unified Trade-Off Environment for Advanced Aerospace Vehicle

    DTIC Science & Technology

    2018-01-31

    Only table-of-contents fragments are recoverable in place of the abstract: ...Language for SeBBAS; 2.4.3 Running SeBBAS Algorithm in MATLAB; ...Input File Error Checking; 4.4.3 Running...; 6.2 5-Blade Rotor System Investigation.

  17. Summary for Stakeholder Meetings on Black Carbon from Diesel Sources in the Russian Arctic

    EPA Pesticide Factsheets

    From January 28 to February 1, 2013, EPA and its partners held meetings in Murmansk and Moscow with key Russian stakeholders to gather input on the project’s emissions inventory methodologies and potential pilot project ideas.

  18. Uncertainties in Emissions Inputs for Near-Road Assessments

    EPA Science Inventory

    Emissions, travel demand, and dispersion models are all needed to obtain temporally and spatially resolved pollutant concentrations. Current methodology combines these three models in a bottom-up approach based on hourly traffic and emissions estimates, and hourly dispersion conc...

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detilleux, Michel; Centner, Baudouin

    The paper describes different methodologies and tools developed in-house by Tractebel Engineering to facilitate the engineering work to be carried out, especially in the framework of decommissioning projects. Three examples of tools with their corresponding results are presented: - The LLWAA-DECOM code, a software tool developed for the radiological characterization of contaminated systems and equipment. The code constitutes a specific module of more general software that was originally developed to characterize radioactive waste streams in order to be able to declare the radiological inventory of critical nuclides, in particular difficult-to-measure radionuclides, to the Authorities. In the case of LLWAA-DECOM, deposited activities inside contaminated equipment (piping, tanks, heat exchangers...) and scaling factors between nuclides, at any given time of the decommissioning time schedule, are calculated on the basis of physical characteristics of the systems and of operational parameters of the nuclear power plant. This methodology was applied to assess decommissioning costs of Belgian NPPs, to characterize the primary system of Trino NPP in Italy, to characterize the equipment of miscellaneous circuits of Ignalina NPP and of Kozloduy unit 1, and to calculate remaining dose rates around equipment in preparation for decommissioning activities; - The VISIMODELLER tool, a user-friendly CAD interface developed to ease the introduction of lay-out areas in a software package named VISIPLAN. VISIPLAN is a 3D dose rate assessment tool for ALARA work planning, developed by the Belgian Nuclear Research Centre SCK.CEN. Both tools were used for projects such as the steam generator replacements in Belgian NPPs and the preparation of the decommissioning of units 1 and 2 of Kozloduy NPP; - The DBS software, developed to manage the different kinds of activities that are part of the general time schedule of a decommissioning project. For each activity, when relevant, algorithms estimate, on the basis of local inputs, the radiological exposure of the operators (collective and individual doses), the production of primary, secondary and tertiary waste and its characterization, the production of conditioned waste, the release of effluents,... and enable the calculation and presentation (histograms) of the global results for all activities together. An example of application in the framework of the Ignalina decommissioning project is given. (authors)
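    As an illustration of the kind of calculation attributed to LLWAA-DECOM above (deposited activities and nuclide scaling factors at any point in the decommissioning schedule), the following Python sketch decay-corrects two nuclide activities and tracks their ratio over time. The choice of Co-60 as the key nuclide, Ni-63 as the difficult-to-measure nuclide, and the initial activities are illustrative assumptions; the half-lives are approximate literature values.

```python
# Minimal sketch of decay-corrected activities and nuclide scaling factors.
# Initial activities and the nuclide pairing are illustrative assumptions;
# half-lives are approximate literature values.
import math

HALF_LIFE_Y = {"Co-60": 5.27, "Ni-63": 100.1}   # years (approximate)

def decayed_activity(a0, nuclide, t_years):
    """Activity after t_years of decay: A(t) = A0 * exp(-ln2 * t / T_half)."""
    lam = math.log(2.0) / HALF_LIFE_Y[nuclide]
    return a0 * math.exp(-lam * t_years)

def scaling_factor(a0_dtm, dtm, a0_key, key, t_years):
    """Ratio of a difficult-to-measure nuclide to the key nuclide at time t."""
    return decayed_activity(a0_dtm, dtm, t_years) / decayed_activity(a0_key, key, t_years)

# Hypothetical deposited activities at shutdown (Bq per component)
a_co60, a_ni63 = 4.0e9, 1.0e9
for t in (0, 5, 10, 20):   # years into the decommissioning schedule
    sf = scaling_factor(a_ni63, "Ni-63", a_co60, "Co-60", t)
    print(f"t = {t:2d} y  Co-60 = {decayed_activity(a_co60, 'Co-60', t):.2e} Bq  "
          f"Ni-63/Co-60 scaling factor = {sf:.3f}")
```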

  20. Methodological reporting of randomized trials in five leading Chinese nursing journals.

    PubMed

    Shi, Chunhu; Tian, Jinhui; Ren, Dan; Wei, Hongli; Zhang, Lihuan; Wang, Quan; Yang, Kehu

    2014-01-01

    Randomized controlled trials (RCTs) are not always well reported, especially in terms of their methodological descriptions. This study aimed to investigate the adherence of methodological reporting to CONSORT and to explore associated trial-level variables in the Chinese nursing care field. In June 2012, we identified RCTs published in five leading Chinese nursing journals and included trials with details of randomization methods. The quality of methodological reporting was measured through the methods section of the CONSORT checklist, and the overall CONSORT methodological items score was calculated and expressed as a percentage. We also hypothesized that some general and methodological characteristics were associated with reporting quality and conducted a regression on these data to explore the association. The descriptive and regression statistics were calculated via SPSS 13.0. In total, 680 RCTs were included. The overall CONSORT methodological items score was 6.34 ± 0.97 (Mean ± SD). No RCT reported descriptions and changes in "trial design," changes in "outcomes" and "implementation," or descriptions of the similarity of interventions for "blinding." Poor reporting was found in detailing the "settings of participants" (13.1%), "type of randomization sequence generation" (1.8%), calculation methods of "sample size" (0.4%), explanation of any interim analyses and stopping guidelines for "sample size" (0.3%), "allocation concealment mechanism" (0.3%), additional analyses in "statistical methods" (2.1%), and targeted subjects and methods of "blinding" (5.9%). More than 50% of trials described randomization sequence generation, the eligibility criteria of "participants," "interventions," and definitions of the "outcomes" and "statistical methods." The regression analysis found that publication year and intention-to-treat (ITT) analysis were weakly associated with CONSORT score. The completeness of methodological reporting of RCTs in the Chinese nursing care field is poor, especially with regard to the reporting of trial design, changes in outcomes, sample size calculation, allocation concealment, blinding, and statistical methods.

  1. 40 CFR 98.247 - Records that must be retained.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Tier 4 Calculation Methodology in § 98.37. (b) If you comply with the mass balance methodology in § 98... with § 98.243(c)(4). (2) Start and end times and calculated carbon contents for time periods when off... determining carbon content of feedstock or product. (3) A part of the monitoring plan required under § 98.3(g...

  2. Using a Programmable Calculator to Teach Theophylline Pharmacokinetics.

    ERIC Educational Resources Information Center

    Closson, Richard Grant

    1981-01-01

    A calculator program for a Texas Instruments Model 59 to predict serum theophylline concentrations is described. The program accommodates the input of multiple dose times at irregular intervals, clearance changes due to concurrent patient diseases, and age less than 17 years. The calculations for five hypothetical patients are given. (Author/MLW)
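    The original TI-59 program is not reproduced in the record. As a hedged illustration of the same idea, the following Python sketch superposes one-compartment, first-order-absorption contributions from doses given at irregular times, with a clearance adjusted for a concurrent disease; all parameter values are illustrative assumptions, not clinical guidance.

```python
# Minimal sketch of multiple-dose serum concentration prediction with a
# one-compartment model and first-order absorption. Doses may occur at
# irregular times; contributions from each dose are superposed.
import math

def conc_single_dose(dose_mg, t_h, cl_l_per_h, vd_l, ka_per_h=1.0, f=1.0):
    """Concentration (mg/L) at t_h hours after a single oral dose."""
    if t_h < 0:
        return 0.0
    ke = cl_l_per_h / vd_l                    # elimination rate constant (1/h)
    ka = ka_per_h                             # absorption rate constant (1/h)
    coeff = f * dose_mg * ka / (vd_l * (ka - ke))
    return coeff * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

def conc_multiple_doses(doses, t_h, cl_l_per_h, vd_l, **kw):
    """Superpose contributions of doses given as (time_h, dose_mg) pairs."""
    return sum(conc_single_dose(d, t_h - td, cl_l_per_h, vd_l, **kw)
               for td, d in doses)

# Hypothetical patient: Vd = 0.5 L/kg * 60 kg, baseline CL = 0.04 L/h/kg,
# halved for a concurrent disease (an illustrative adjustment only).
vd = 0.5 * 60
cl = 0.04 * 60 * 0.5
doses = [(0.0, 300.0), (7.5, 200.0), (19.0, 300.0)]   # irregular intervals
for t in (2, 8, 12, 24):
    print(f"t = {t:2d} h  predicted C = {conc_multiple_doses(doses, t, cl, vd):.1f} mg/L")
```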

  3. Description, Usage, and Validation of the MVL-15 Modified Vortex Lattice Analysis Capability

    NASA Technical Reports Server (NTRS)

    Ozoroski, Thomas A.

    2015-01-01

    MVL-15 is the most recent version of the Modified Vortex-Lattice (MVL) code developed within the Aerodynamics Systems Analysis Branch (ASAB) at NASA LaRC. The term "modified" refers to the primary modification of the core vortex-lattice methodology: inclusion of viscous aerodynamics tables that are linked to the linear solution via iterative processes. The inclusion of the viscous aerodynamics inherently converts MVL-15 from a purely analytic linearized method to a semi-empirical blend which retains the rapid execution speed of the linearized method while empirically characterizing the section aerodynamics at all spanwise lattice points. The modification provides a means to assess non-linear effects on lift that occur at angles of attack near stall, and provides a means to determine the drag associated with the application of design strategies for lift augmentation such as the use of flaps or blowing. The MVL-15 code is applicable to the analyses of aircraft aerodynamics during cruise, but it is most advantageously applied to the analysis of aircraft operating in various high-lift configurations. The MVL methodology has been conceived and implemented previously: the initial concept version was delivered to the ASAB in 2001 (van Dam, C.), subsequently revised (Gelhausen, P. and Ozoroski, T. 2002 / AVID Inc., Gelhausen, P., and Roberts, M. 2004), and then overhauled (Ozoroski, T., Hahn, A. 2008). The latest version, MVL-15, has been refined to provide analysis transparency and enhanced to meet the analysis requirements of the Environmentally Responsible Aviation (ERA) Project. Each revision has been implemented with reasonable success. Separate applications of the methodology are in use, including a similar in-house capability developed by Olson, E. that is tailored for structural and acoustics analyses. A central premise of the methodology is that viscous aerodynamic data can be associated with analytic inviscid aerodynamic results at each spanwise wing section, thereby providing a pathway to map viscous data to the inviscid results. However, a number of factors can undermine analysis consistency during various stages of this process. For example, it should be expected that the final airplane lift curve and drag polar results depend strongly on the geometry and aerodynamics of the airfoil section; however, flap deflections and flap chord extensions change the local reference geometry of the input airfoil, the airplane wing, the tabulated non-dimensional viscous aerodynamics, and the spanwise links between the linear and the viscous aerodynamics. These changes also affect the bound circulation and, therefore, the calculation and integration of the induced angle of attack and induced drag. MVL-15 is configured to ensure these types of challenges are properly addressed. This report is a comprehensive manual describing the theory, use, and validation of the MVL-15 analysis tool. Section 3 summarizes theoretical, procedural, and characteristic features of MVL-15, and includes a list of the files required to set up, execute, and summarize an analysis. Section 4, Section 5, Section 6, and Section 7 together comprise the User's Guide portion of this report. The MVL-15 input and output files are described in Section 4 and Section 5, respectively; the descriptions are supplemented with example files and information about the file formats, parameter definitions, and typical parameter values. Section 6 describes the Wing Geometry Setup Utility and the 2d-Variants Utility files that simplify setting up a consistent set of MVL-15 geometry and aerodynamics input parameters and input files. Section 7 describes the use of the 3d-Results Presentation Utility file that can be used to automatically create summary tables and charts from the MVL-15 output files. Section 8 documents the Validation Results of an extensive and varied validation test matrix, including results of an airplane analysis representative of the ERA Program. A start-to-finish example of the airplane analysis procedure is described in Section 7.
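    The central premise quoted above, mapping tabulated viscous section data onto inviscid spanwise results, can be illustrated with a much-reduced sketch. The Python fragment below caps an assumed inviscid spanwise lift distribution at a tabulated viscous cl_max, interpolates section drag from a 2-D polar, and integrates span loads; the polar, geometry, and flow conditions are invented, and the iterative re-solution that MVL-15 performs is omitted.

```python
# Minimal sketch of the viscous/inviscid mapping idea described above: section
# results from a linear (inviscid) spanwise solution are associated with
# tabulated 2-D viscous airfoil data. This is not MVL-15 (no vortex lattice,
# no iterative re-solution); all numbers are invented for illustration.
import numpy as np

# Hypothetical 2-D viscous polar for one airfoil section: cl vs cd, plus cl_max.
polar_cl = np.array([0.0, 0.4, 0.8, 1.2, 1.5])
polar_cd = np.array([0.008, 0.009, 0.012, 0.020, 0.045])
cl_max = polar_cl[-1]

def map_viscous(cl_inviscid):
    """Cap the inviscid section cl at the viscous cl_max and look up cd."""
    cl = np.minimum(cl_inviscid, cl_max)
    cd = np.interp(cl, polar_cl, polar_cd)
    return cl, cd

# Invented spanwise stations (semi-span, m), chords (m), and an inviscid cl
# distribution that exceeds cl_max near the root at high angle of attack.
y = np.linspace(0.0, 15.0, 31)
chord = np.linspace(3.0, 1.2, 31)
cl_inv = 1.6 * np.sqrt(np.clip(1.0 - (y / 15.0) ** 2, 0.0, 1.0))

cl_visc, cd_visc = map_viscous(cl_inv)
q, s_ref = 0.5 * 1.225 * 60.0**2, np.trapz(chord, y) * 2.0   # dynamic pressure, wing area
L = 2.0 * q * np.trapz(cl_visc * chord, y)                   # both semi-spans
D_profile = 2.0 * q * np.trapz(cd_visc * chord, y)
print(f"CL = {L / (q * s_ref):.3f}   CD_profile = {D_profile / (q * s_ref):.4f}")
```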

  4. Design and numerical evaluation of full-authority flight control systems for conventional and thruster-augmented helicopters employed in NOE operations

    NASA Technical Reports Server (NTRS)

    Perri, Todd A.; Mckillip, R. M., Jr.; Curtiss, H. C., Jr.

    1987-01-01

    The development and methodology are presented for full-authority implicit model-following and explicit model-following optimal controllers for use on helicopters operating in the nap-of-the-Earth (NOE) environment. Pole placement, input-output frequency response, and step input response were used to evaluate handling qualities performance. The pilot was provided with velocity-command inputs. A mathematical/computational trajectory optimization method was employed to evaluate the ability of each controller to fly NOE maneuvers. The method determines the optimal swashplate and thruster input histories from the helicopter's dynamics and the prescribed geometry and desired flying qualities of the maneuver. Three maneuvers were investigated for both the implicit and explicit controllers with and without auxiliary propulsion installed: pop-up/dash/descent, bob-up at 40 knots, and glideslope. The explicit controller proved to be superior to the implicit controller in performance and ease of design.
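    As a hedged illustration of one of the two design approaches named above, the following Python sketch sets up a textbook implicit model-following cost, in which the difference between the plant state rate and the rate of a desired reference model is penalized, and solves it as an LQR problem with a cross-weighting term; the plant, reference model, and weights are small invented examples, not the helicopter models used in the study.

```python
# Minimal sketch of an implicit model-following LQR design. The cost
# J = ∫ (xdot - Am x)' Qm (xdot - Am x) + u' R u dt reduces to an LQR
# problem with weights Q, Reff and cross term S as formed below.
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state plant (e.g. a crude pitch-rate / pitch-attitude pair).
A = np.array([[-0.5, 1.0],
              [ 0.0, -0.2]])
B = np.array([[0.0],
              [1.5]])

# Desired (reference-model) dynamics the closed loop should imitate.
Am = np.array([[-2.0, 1.0],
               [-4.0, -3.0]])

Qm = np.diag([10.0, 10.0])        # weight on the rate-matching error
R = np.array([[1.0]])             # weight on control usage

dA = A - Am
Q = dA.T @ Qm @ dA                # state weight
S = dA.T @ Qm @ B                 # state/control cross weight
Reff = R + B.T @ Qm @ B           # effective control weight

P = solve_continuous_are(A, B, Q, Reff, s=S)
K = np.linalg.solve(Reff, B.T @ P + S.T)

print("State-feedback gain K =", K)
print("Closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
print("Reference-model eigenvalues:", np.linalg.eigvals(Am))
```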

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, J.E.; Roussin, R.W.; Gilpin, H.

    A version of the CRAC2 computer code applicable for use in analyses of consequences and risks of reactor accidents in case work for environmental statements has been implemented for use on the Nuclear Regulatory Commission Data General MV/8000 computer system. Input preparation is facilitated through the use of an interactive computer program which operates on an IBM personal computer. The resulting CRAC2 input deck is transmitted to the MV/8000 by using an error-free file transfer mechanism. To facilitate the use of CRAC2 at NRC, relevant background material on input requirements and model descriptions has been extracted from four reports: "Calculations of Reactor Accident Consequences," Version 2, NUREG/CR-2326 (SAND81-1994); "CRAC2 Model Descriptions," NUREG/CR-2552 (SAND82-0342); "CRAC Calculations for Accident Sections of Environmental Statements," NUREG/CR-2901 (SAND82-1693); and "Sensitivity and Uncertainty Studies of the CRAC2 Computer Code," NUREG/CR-4038 (ORNL-6114). When this background information is combined with instructions on the input processor, this report provides a self-contained guide for preparing CRAC2 input data with a specific orientation toward applications on the MV/8000. 8 refs., 11 figs., 10 tabs.

  6. Top-down methodology for human factors research

    NASA Technical Reports Server (NTRS)

    Sibert, J.

    1983-01-01

    User-computer interaction as a conversation is discussed. The design of user interfaces, which depends on viewing communications between a user and the computer as a conversation, is presented. This conversation includes inputs to the computer (outputs from the user), outputs from the computer (inputs to the user), and the sequencing in both time and space of those outputs and inputs. The conversation is viewed from the user's side. Two languages are modeled: the one with which the user communicates with the computer and the language where communication flows from the computer to the user. Both languages exist on three levels: the semantic, the syntactic, and the lexical. It is suggested that natural languages can also be considered in these terms.

  7. Regional robust stabilisation and domain-of-attraction estimation for MIMO uncertain nonlinear systems with input saturation

    NASA Astrophysics Data System (ADS)

    Azizi, S.; Torres, L. A. B.; Palhares, R. M.

    2018-01-01

    The regional robust stabilisation by means of linear time-invariant state feedback control for a class of uncertain MIMO nonlinear systems with parametric uncertainties and control input saturation is investigated. The nonlinear systems are described in a differential algebraic representation and the regional stability is handled considering the largest ellipsoidal domain-of-attraction (DOA) inside a given polytopic region in the state space. A novel set of sufficient Linear Matrix Inequality (LMI) conditions with new auxiliary decision variables are developed aiming to design less conservative linear state feedback controllers with corresponding larger DOAs, by considering the polytopic description of the saturated inputs. A few examples are presented showing favourable comparisons with recently published similar control design methodologies.
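    The paper's LMI conditions for the differential-algebraic representation are not reproduced here. As a simpler, hedged illustration of ellipsoidal DOA estimation under input saturation, the sketch below maximizes the volume of an invariant ellipsoid that stays inside the region where a fixed linear feedback does not saturate; the system matrices, gain, and saturation level are invented, and cvxpy with a conic solver (e.g. SCS) is assumed to be available.

```python
# Minimal sketch of an ellipsoidal domain-of-attraction (DOA) estimate for a
# saturated linear state-feedback loop: the ellipsoid {x : x' Q^{-1} x <= 1}
# is required to be contractively invariant and to lie where |Kx| <= u_max,
# so the saturation never activates inside it.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [1.0, -0.5]])          # open-loop unstable example
B = np.array([[0.0],
              [1.0]])
K = np.array([[3.0, 2.0]])           # a stabilizing gain, chosen by hand
u_max = 1.0                          # input saturation level
Acl = A - B @ K

n = A.shape[0]
Q = cp.Variable((n, n), symmetric=True)      # Q = P^{-1}
eps = 1e-6
constraints = [
    Q >> eps * np.eye(n),                              # Q positive definite
    Acl @ Q + Q @ Acl.T << -eps * np.eye(n),           # invariance / contraction
    K @ Q @ K.T <= u_max ** 2,                         # |Kx| <= u_max on the ellipsoid
]
prob = cp.Problem(cp.Maximize(cp.log_det(Q)), constraints)  # maximize ellipsoid volume
prob.solve()

P = np.linalg.inv(Q.value)
print("Ellipsoid matrix P:\n", P)
print("Closed-loop eigenvalues:", np.linalg.eigvals(Acl))
```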

  8. High dynamic range charge measurements

    DOEpatents

    De Geronimo, Gianluigi

    2012-09-04

    A charge amplifier for use in radiation sensing includes an amplifier, at least one switch, and at least one capacitor. The switch selectively couples the input of the switch to one of at least two voltages. The capacitor is electrically coupled in series between the input of the amplifier and the input of the switch. The capacitor is electrically coupled to the input of the amplifier without a switch coupled therebetween. A method of measuring charge in radiation sensing includes selectively diverting charge from an input of an amplifier to an input of at least one capacitor by selectively coupling an output of the at least one capacitor to one of at least two voltages. The input of the at least one capacitor is operatively coupled to the input of the amplifier without a switch coupled therebetween. The method also includes calculating a total charge based on a sum of the amplified charge and the diverted charge.

  9. Processor design optimization methodology for synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Wren, Bill; Tarleton, Norman G.; Symosek, Peter F.

    1997-06-01

    Architecture optimization requires numerous inputs, from hardware to software specifications. The task of varying these input parameters to obtain an optimal system architecture with regard to cost, specified performance, and method of upgrade considerably increases development cost, owing to the infinitude of events, most of which cannot be defined by any simple enumeration or set of inequalities. We address the use of a PC-based tool using genetic algorithms to optimize the architecture for an avionics synthetic vision system, specifically a passive millimeter wave system implementation.
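    As a toy illustration of the genetic-algorithm approach described above (not the PC-based tool itself), the sketch below encodes candidate architectures as tuples of discrete design choices and evolves them toward low cost subject to a throughput requirement; the design space and the cost and performance models are entirely invented.

```python
# Toy genetic algorithm over a discrete architecture design space.
import random

rng = random.Random(1)

# Hypothetical design space: (number of processors, clock in MHz, memory in MB)
CHOICES = [(1, 2, 4, 8), (100, 200, 400), (64, 128, 256, 512)]
REQUIRED_THROUGHPUT = 900.0     # arbitrary processing-rate target

def random_individual():
    return tuple(rng.choice(axis) for axis in CHOICES)

def throughput(ind):
    procs, clock, mem = ind
    return procs * clock * (1.0 if mem >= 256 else 0.7)   # made-up model

def cost(ind):
    procs, clock, mem = ind
    return 500 * procs + 2 * clock + 0.5 * mem            # made-up model

def fitness(ind):
    # Penalize architectures that miss the performance target.
    penalty = max(0.0, REQUIRED_THROUGHPUT - throughput(ind)) * 10.0
    return -(cost(ind) + penalty)                          # higher is better

def crossover(a, b):
    return tuple(rng.choice(pair) for pair in zip(a, b))

def mutate(ind, rate=0.2):
    return tuple(rng.choice(CHOICES[i]) if rng.random() < rate else g
                 for i, g in enumerate(ind))

pop = [random_individual() for _ in range(30)]
for _ in range(40):                                        # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)
print("best architecture:", best, "cost:", cost(best), "throughput:", throughput(best))
```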

  10. Recommended System Design for the Occupational Health Management Information System (OHMIS). Volume 1.

    DTIC Science & Technology

    1983-04-01

    Management Information System (OHMIS). The system design includes: detailed function data flows for each of the core data processing functions of OHMIS, in the form of input/processing/output algorithms; detailed descriptions of the inputs and outputs; performance specifications of OHMIS; resources required to develop and operate OHMIS (Vol II). In addition, the report provides a summary of the rationale used to develop the recommended system design, a description of the methodology used to develop the recommended system design, and a review of existing

  11. Regionalization of meso-scale physically based nitrogen modeling outputs to the macro-scale by the use of regression trees

    NASA Astrophysics Data System (ADS)

    Künne, A.; Fink, M.; Kipka, H.; Krause, P.; Flügel, W.-A.

    2012-06-01

    In this paper, a method is presented to estimate excess nitrogen on large scales while considering single-field processes. The approach was implemented by using the physically based model J2000-S to simulate the nitrogen balance as well as the hydrological dynamics within meso-scale test catchments. The model input data, the parameterization, the results and a detailed system understanding were used to generate the regression tree models with GUIDE (Loh, 2002). For each landscape type in the federal state of Thuringia, a regression tree was calibrated and validated using the model data and results of excess nitrogen from the test catchments. Hydrological parameters such as precipitation and evapotranspiration were also used to predict excess nitrogen by the regression tree model; these therefore had to be calculated and regionalized for the state of Thuringia as well. Here the model J2000g was used to simulate the water balance on the macro scale. With the regression trees, excess nitrogen was regionalized for each landscape type of Thuringia. The approach allows calculation of the potential nitrogen input into the streams of the drainage area. The results show that the applied methodology was able to transfer the detailed model results of the meso-scale catchments to the entire state of Thuringia with low computing time without losing the detailed knowledge from the nitrogen transport modeling. This was validated with modeling results from Fink (2004) in a catchment lying in the regionalization area; the regionalized and modeled excess nitrogen show 94% agreement. The study was conducted within the framework of a project in collaboration with the Thuringian Environmental Ministry, whose overall aim was to assess the effect of agro-environmental measures regarding load reduction in the water bodies of Thuringia to fulfill the requirements of the European Water Framework Directive (Bäse et al., 2007; Fink, 2006; Fink et al., 2007).
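    The study used GUIDE for the regression trees; as a hedged stand-in, the sketch below trains scikit-learn's DecisionTreeRegressor on synthetic meso-scale predictors and "regionalizes" by applying the calibrated tree to new units where only the predictors are available. All data and the predictor set are invented for illustration.

```python
# Minimal sketch of regionalization by regression tree: train on detailed
# (meso-scale) model output, then apply to predictors available everywhere.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
n = 500                                   # synthetic response units from test catchments

precip = rng.uniform(450, 1100, n)        # precipitation, mm/yr
aet = rng.uniform(300, 650, n)            # actual evapotranspiration, mm/yr
arable = rng.uniform(0.0, 1.0, n)         # arable land share

# Synthetic "detailed model" excess nitrogen (kg N/ha/yr): more arable land and
# more percolation (precip - aet) -> more excess N, plus noise.
excess_n = 5 + 40 * arable + 0.03 * np.clip(precip - aet, 0, None) + rng.normal(0, 3, n)

X = np.column_stack([precip, aet, arable])
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=20).fit(X, excess_n)

# "Regionalization": apply the calibrated tree to units outside the test
# catchments, where only water-balance predictors are available.
X_new = np.array([[700.0, 480.0, 0.8],
                  [950.0, 600.0, 0.2]])
print("predicted excess N [kg N/ha/yr]:", tree.predict(X_new).round(1))
```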

  12. Sign Language Studies with Chimpanzees and Children.

    ERIC Educational Resources Information Center

    Van Cantfort, Thomas E.; Rimpau, James B.

    1982-01-01

    Reviews methodologies of sign language studies with chimpanzees and compares major findings of those studies with studies of human children. Considers relevance of input conditions for language acquisition, evidence used to demonstrate linguistic achievements, and application of rigorous testing procedures in developmental psycholinguistics.…

  13. Assessment of Consumer Health Education Needs of DeWitt MEDDAC, Fort Belvoir, Virginia.

    DTIC Science & Technology

    1975-03-01

    current patient education programs; to determine educational methodologies used for current patient education programs; to determine resources, both...technological and personnel, used for current patient education programs; to systematically identify local consumer health education needs from input by

  14. Financial crisis, virtual carbon in global value chains, and the importance of linkage effects. The Spain-china case.

    PubMed

    López, Luis-Antonio; Arce, Guadalupe; Zafrilla, Jorge

    2014-01-01

    Trade has a disproportionate environmental impact, while the international fragmentation of production promotes different patterns of intermediate inputs and final goods. Therefore, we decompose the balance of domestic embodied emissions in trade (BDEET) to assess it. We find that Spain ran a significant emissions deficit with China between 2005 and 2011. The Global Financial Crisis of 2008 reduced Spanish imports of pollution-intensive inputs from China and slightly improved the BDEET. China primarily exports indirect virtual carbon, representing 86% of the total, especially from the Production of electricity, gas, and water sector. These linkage effects in China indicate that post-Kyoto agreements must focus not only on traded goods but also on the environmental efficiency of all domestic production chains. The methodology proposed allows us to identify the agents responsible for this trade in both Spain and China, namely the sectors importing intermediate inputs (Construction and Transport equipment) and industries and consumers importing final goods (Textiles, Other manufactures, Computers, and Machinery). The relevant sectoral uncertainties found when we compare the results for BDEET and emissions embodied in bilateral trade (BEET) lead us to recommend the former methodology to evaluate the implications of environmental and energy policy for different industries and agents.
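    The BDEET decomposition itself is not reproduced here, but the environmentally extended input-output calculation underlying embodied-emissions accounting can be sketched in a few lines: emissions embodied in a final-demand vector y are f (I - A)^{-1} y, with A the technical-coefficient matrix and f the direct emission intensities. The two-sector figures below are invented.

```python
# Minimal sketch of embodied emissions via the Leontief inverse.
import numpy as np

A = np.array([[0.15, 0.25],      # technical coefficients (inputs per unit output)
              [0.20, 0.10]])
f = np.array([0.9, 0.3])         # direct CO2 intensity (kg CO2 per unit output)
y_imports = np.array([50.0, 120.0])   # final goods imported from the partner country

L = np.linalg.inv(np.eye(2) - A)      # Leontief inverse: total requirements
x = L @ y_imports                     # total output driven by those imports
embodied = f @ x                      # emissions embodied in the imported bundle

direct = f @ y_imports                # emissions if only the last production stage counted
print(f"embodied CO2: {embodied:.1f} kg (direct share {direct/embodied:.0%}, "
      f"indirect/virtual share {1 - direct/embodied:.0%})")
```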

  15. UPIOM: a new tool of MFA and its application to the flow of iron and steel associated with car production.

    PubMed

    Nakamura, Shinichiro; Kondo, Yasushi; Matsubae, Kazuyo; Nakajima, Kenichi; Nagasaka, Tetsuya

    2011-02-01

    Identification of the flow of materials and substances associated with a product system provides useful information for Life Cycle Analysis (LCA), and contributes to extending the scope of complementarity between LCA and Materials Flow Analysis/Substances Flow Analysis (MFA/SFA), the two major tools of industrial ecology. This paper proposes a new methodology based on input-output analysis for identifying the physical input-output flow of individual materials that is associated with the production of a unit of given product, the unit physical input-output by materials (UPIOM). While the Sankey diagram has been a standard tool for the visualization of MFA/SFA, with an increase in the complexity of the flows under consideration, which will be the case when economy-wide intersectoral flows of materials are involved, the Sankey diagram may become too complex for effective visualization. An alternative way to visually represent material flows is proposed which makes use of triangulation of the flow matrix based on degrees of fabrication. The proposed methodology is applied to the flow of pig iron and iron and steel scrap that are associated with the production of a passenger car in Japan. Its usefulness to identify a specific MFA pattern from the original IO table is demonstrated.

  16. 39 CFR 3010.12 - Contents of notice of rate adjustment.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... information must be supported by workpapers in which all calculations are shown and all input values, including all relevant CPI-U values, are listed with citations to the original sources. (2) A schedule... input values, including current rates, new rates, and billing determinants, are listed with citations to...

  17. 47 CFR 73.1820 - Station log.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... values): (A) Common point current. (B) When the operating power is determined by the indirect method, the efficiency factor F and either the product of the final amplifier input voltage and current or the calculated antenna input power. See § 73.51(e). (C) Antenna monitor phase or phase deviation indications. (D) Antenna...
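    For context, the indirect method referenced in the excerpt takes antenna input power as the product of the final amplifier input voltage, the input current, and the efficiency factor F; a worked example with illustrative numbers:

```latex
% Worked example of the indirect-method power calculation referenced above;
% the voltage, current, and efficiency factor are illustrative only.
\[
  P_{\text{ant}} = E_p \times I_p \times F
                 = 3000\,\text{V} \times 0.5\,\text{A} \times 0.70
                 = 1050\,\text{W}
\]
```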

  18. 40 CFR 98.113 - Calculating GHG emissions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... EAFs using the carbon mass balance procedure specified in paragraphs (b)(2)(i) and (b)(2)(ii) of this section. (i) For each EAF, determine the annual mass of carbon in each carbon-containing input and output... section. Carbon-containing input materials include carbon electrodes and carbonaceous reducing agents. If...
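    As a hedged illustration of the carbon mass-balance procedure the excerpt refers to, the Python sketch below nets the carbon contained in inputs against the carbon in outputs and converts the difference to CO2 with the 44/12 molecular-weight ratio; the material list and carbon contents are invented, and the actual 40 CFR 98.113 equations and units should be used for reporting.

```python
# Minimal sketch of a carbon mass-balance CO2 estimate for an EAF.
# (mass in metric tons, carbon content as a mass fraction; values invented)
inputs = {
    "carbon electrodes":            (120.0, 0.99),
    "carbonaceous reducing agents": (800.0, 0.85),
    "scrap and other inputs":       (90000.0, 0.004),
}
outputs = {
    "steel produced": (85000.0, 0.002),
    "slag and dust":  (4000.0, 0.001),
}

def carbon_tons(items):
    """Total carbon mass in a set of materials."""
    return sum(mass * c for mass, c in items.values())

co2_tons = (carbon_tons(inputs) - carbon_tons(outputs)) * 44.0 / 12.0
print(f"annual CO2 from the mass balance: {co2_tons:.0f} t")
```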

  19. 40 CFR 98.113 - Calculating GHG emissions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... EAFs using the carbon mass balance procedure specified in paragraphs (b)(2)(i) and (b)(2)(ii) of this section. (i) For each EAF, determine the annual mass of carbon in each carbon-containing input and output... section. Carbon-containing input materials include carbon electrodes and carbonaceous reducing agents. If...

  20. 40 CFR 98.113 - Calculating GHG emissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... EAFs using the carbon mass balance procedure specified in paragraphs (b)(2)(i) and (b)(2)(ii) of this section. (i) For each EAF, determine the annual mass of carbon in each carbon-containing input and output... section. Carbon-containing input materials include carbon electrodes and carbonaceous reducing agents. If...
