Sample records for input parameters needed

  1. Evaluation of Advanced Stirling Convertor Net Heat Input Correlation Methods Using a Thermal Standard

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell; Schifer, Nicholas

    2011-01-01

    Test hardware was used to validate net heat prediction models. Problem: net heat input cannot be measured directly during operation, yet it is a key parameter needed to predict convertor efficiency: Efficiency = Electrical Power Output (measured) divided by Net Heat Input (calculated). Efficiency is used to compare convertor designs and trade technology advantages for mission planning.

  2. Development of EnergyPlus Utility to Batch Simulate Building Energy Performance on a National Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valencia, Jayson F.; Dirks, James A.

    2008-08-29

    EnergyPlus is a simulation program that requires a large number of details to fully define and model a building. Hundreds or even thousands of lines in a text file are needed to run an EnergyPlus simulation, depending on the size of the building. Manually creating these files is a time-consuming process that is not practical when creating input files for the thousands of buildings needed to simulate national building energy performance. To streamline the creation of EnergyPlus input files, two methods were created to work in conjunction with the National Renewable Energy Laboratory (NREL) Preprocessor; this reduced the hundreds of inputs needed to define a building in EnergyPlus to a small set of high-level parameters. The first method uses Java routines to perform all of the preprocessing on a Windows machine, while the second method carries out all of the preprocessing on a Linux cluster using an in-house utility called Generalized Parametrics (GPARM). A comma-delimited (CSV) input file is created to define the high-level parameters for any number of buildings. Each method then takes this CSV file and uses the data entered for each parameter to populate an extensible markup language (XML) file used by the NREL Preprocessor to automatically prepare EnergyPlus input data files (idf) using automatic building routines and macro templates. Using the Linux utility "make", the idf files can then be run automatically through the Linux cluster, and the desired data from each building can be aggregated into one table for analysis. Creating a large number of EnergyPlus input files makes it possible to batch simulate building energy performance and scale the results to national energy consumption estimates.
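
    The CSV-driven expansion described above is easy to picture in miniature. The Python sketch below (standing in for the Java/GPARM tooling of the abstract) expands one CSV row of high-level parameters per building into a drastically simplified idf file; the template fields and column names are hypothetical.

    ```python
    import csv
    from pathlib import Path

    # Hypothetical, drastically simplified idf fragment; a real EnergyPlus input
    # file defines hundreds of objects (geometry, schedules, HVAC, ...).
    IDF_TEMPLATE = """Building,
      {name},          !- Name
      {north_axis},    !- North Axis {{deg}}
      City;            !- Terrain
    """

    def batch_generate(csv_path: str, out_dir: str) -> None:
        """Expand one CSV row of high-level parameters per building into an idf file."""
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):  # header row supplies the parameter names
                text = IDF_TEMPLATE.format(name=row["name"], north_axis=row["north_axis"])
                (out / f"{row['name']}.idf").write_text(text)
    ```

    A shell loop or make rule, as in the abstract, can then run EnergyPlus over every generated idf and aggregate the outputs.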

  3. Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT

    NASA Astrophysics Data System (ADS)

    Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang

    2015-03-01

    In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose that result from using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, and their effects on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ±0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (<2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer than the true value by 4 cm, dose errors of up to 160% were found. The results answer an important question: to what level of accuracy must each input parameter be determined in order to obtain accurate organ dose results?
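
    The one-input-at-a-time error-injection design can be reproduced in a few lines. In this hedged sketch, a toy closed-form function stands in for the full Monte Carlo transport calculation, and the reference values and sensitivity coefficients are invented for illustration; only the study design (perturb one input, compare organ dose per CTDIvol against the reference) comes from the abstract.

    ```python
    def organ_dose_per_ctdivol(hvl_mm_al, beam_width_mm, center_offset_cm, extra_scan_cm):
        """Toy stand-in for a full Monte Carlo dose calculation (illustration only)."""
        return (1.2 * (1 + 0.10 * (hvl_mm_al - 7.0))
                    * (1 - 0.004 * (beam_width_mm - 38.4))
                    * (1 - 0.04 * abs(center_offset_cm))
                    * (1 + 0.40 * max(extra_scan_cm, 0.0)))

    reference = dict(hvl_mm_al=7.0, beam_width_mm=38.4, center_offset_cm=0.0, extra_scan_cm=0.0)
    truth = organ_dose_per_ctdivol(**reference)

    # Purposefully introduce an error into one input at a time, as in the study design.
    for param, error in [("hvl_mm_al", 0.5), ("beam_width_mm", 3.0),
                         ("center_offset_cm", 5.0), ("extra_scan_cm", 4.0)]:
        perturbed = dict(reference, **{param: reference[param] + error})
        dose = organ_dose_per_ctdivol(**perturbed)
        print(f"{param:>17s}: {100 * (dose - truth) / truth:+.1f}% error in dose per CTDIvol")
    ```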

  4. Department of Defense meteorological and environmental inputs to aviation systems

    NASA Technical Reports Server (NTRS)

    Try, P. D.

    1983-01-01

    Recommendations based on need, cost, and the achievement of flight safety are offered, and the re-evaluation of weather parameters needed for safe landing operations, leading to reliable and consistent automated observation capabilities, is considered.

  5. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. This work focuses on the techniques required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
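
    The optimal estimator is the conditional mean E[q|φ], and the irreducible error is the residual variance about it. The synthetic sketch below illustrates the histogram technique the abstract critiques: the bin-wise conditional mean over a two-parameter input space is computed, and the estimated irreducible error is seen to drift with bin count (the true value is 0.01 by construction). The test function is an assumption; the ANN and MARS alternatives are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    phi = rng.uniform(0.0, 1.0, (n, 2))            # two model input parameters
    q = np.sin(2 * np.pi * phi[:, 0]) * phi[:, 1]  # deterministic part of the target
    q += 0.1 * rng.normal(size=n)                  # true irreducible error = 0.1**2 = 0.01

    def irreducible_error(nbins):
        """Histogram technique: approximate E[q | phi] by bin-wise means."""
        idx0 = np.minimum((phi[:, 0] * nbins).astype(int), nbins - 1)
        idx1 = np.minimum((phi[:, 1] * nbins).astype(int), nbins - 1)
        flat = idx0 * nbins + idx1
        sums = np.bincount(flat, weights=q, minlength=nbins * nbins)
        counts = np.maximum(np.bincount(flat, minlength=nbins * nbins), 1)
        cond_mean = (sums / counts)[flat]
        return np.mean((q - cond_mean) ** 2)

    for nbins in (4, 16, 64):
        # coarse bins inflate the estimate; very fine bins overfit the bin means
        print(nbins, irreducible_error(nbins))
    ```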

  6. Toward Scientific Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2007-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.

  7. Computing the modal mass from the state space model in combined experimental-operational modal analysis

    NASA Astrophysics Data System (ADS)

    Cara, Javier

    2016-05-01

    Modal parameters comprise natural frequencies, damping ratios, modal vectors and modal masses. In a theoretical framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (as in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations needed to compute natural frequencies, damping ratios and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has received little attention in the technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
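
    For readers unfamiliar with the eigenvalue route from a state-space model to modal parameters, a minimal single-degree-of-freedom sketch is given below: natural frequency and damping ratio follow from the eigenvalues of the continuous-time state matrix. This is textbook material rather than the paper's derivation, and the modal-mass computation (which additionally needs the input and output matrices) is deliberately omitted.

    ```python
    import numpy as np

    # Continuous-time state matrix of a 1-DOF system: m x'' + c x' + k x = u
    m, c, k = 2.0, 0.8, 500.0
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])

    lam = np.linalg.eigvals(A)
    lam = lam[np.imag(lam) > 0]            # keep one of each complex-conjugate pair
    omega = np.abs(lam)                    # natural frequency (rad/s)
    zeta = -np.real(lam) / np.abs(lam)     # damping ratio
    print(omega, np.sqrt(k / m))           # both ~15.81 rad/s
    print(zeta, c / (2 * np.sqrt(k * m)))  # both ~0.0126
    ```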

  8. Desktop Application Program to Simulate Cargo-Air-Drop Tests

    NASA Technical Reports Server (NTRS)

    Cuthbert, Peter

    2009-01-01

    The DSS Application is a computer program comprising a Windows version of the UNIX-based Decelerator System Simulation (DSS) coupled with an Excel front end. The DSS is an executable code that simulates the dynamics of airdropped cargo from first motion in an aircraft through landing. The bare DSS is difficult to use; the front end makes it easy to use. All inputs to the DSS, control of execution of the DSS, and postprocessing and plotting of outputs are handled in the front end. The front end is graphics-intensive. The Excel software provides the graphical elements without need for additional programming. Categories of input parameters are divided into separate tabbed windows. Pop-up comments describe each parameter. An error-checking software component evaluates combinations of parameters and alerts the user if an error results. Case files can be created from inputs, making it possible to build cases from previous ones. Simulation output is plotted in 16 charts displayed on a separate worksheet, enabling plotting of multiple DSS cases with flight-test data. Variables assigned to each plot can be changed. Selected input parameters can be edited from the plot sheet for quick sensitivity studies.

  9. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
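
    A stripped-down version of the sampling loop reads as follows. The linear-quadratic response and the uncertainty magnitudes are assumptions for illustration, and the first-order index S_i = Var(E[Y|X_i])/Var(Y) is estimated here by simple binning, which is one common variance-based estimator, not necessarily the one used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 10**5  # the abstract quotes 10^4 to 10^6 runs

    # Hypothetical LQ-type response at dose d: effect = alpha*d + beta*d**2
    d = 2.0
    alpha = rng.normal(0.10, 0.03, N)  # ~30% relative uncertainty
    beta = rng.normal(0.05, 0.01, N)   # ~20% relative uncertainty
    effect = alpha * d + beta * d * d

    def first_order(x, y, nbins=50):
        """S_i = Var(E[Y|X_i]) / Var(Y), estimated by binning on X_i."""
        order = np.argsort(x)
        bin_means = np.array([b.mean() for b in np.array_split(y[order], nbins)])
        return bin_means.var() / y.var()

    print("S_alpha =", first_order(alpha, effect))
    print("S_beta  =", first_order(beta, effect))  # additive model: the two sum to ~1
    ```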

  10. Local Sensitivity of Predicted CO 2 Injectivity and Plume Extent to Model Inputs for the FutureGen 2.0 site

    DOE PAGES

    Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...

    2014-12-31

    Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and the monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the response of the output in percent, to rank the impact of model inputs on outputs. The uncertainty of an input with higher sensitivity has larger impacts on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We propose a local sensitivity coefficient method and apply it to the FutureGen 2.0 Site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs, and 3/4 of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbour layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of the composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
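
    A one-at-a-time finite-difference version of the local-sensitivity idea fits in a few lines; the model function, input vector, and 1% step below are hypothetical, and the paper's exact LSC definition and scaling may differ.

    ```python
    import numpy as np

    def lsc(model, x0, rel_step=0.01):
        """Percent change in output per 1% perturbation of each input,
        one input at a time (local, finite-difference sensitivity)."""
        y0 = model(x0)
        coeffs = []
        for i in range(len(x0)):
            x = x0.copy()
            x[i] *= 1.0 + rel_step
            coeffs.append(100.0 * (model(x) - y0) / y0)
        return np.array(coeffs)

    # Hypothetical injectivity-like response to [permeability, pressure, saturation]
    model = lambda x: x[0] ** 0.8 * x[1] / (1.0 + x[2])
    s = lsc(model, np.array([50.0, 2.0, 0.3]))
    print(s, "composite:", s.sum())  # rank inputs; summing gives a composite sensitivity
    ```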

  11. THE ECONOMICS OF REPROCESSING vs DIRECT DISPOSAL OF SPENT NUCLEAR FUEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Bunn; Steve Fetter; John P. Holdren

    This report assesses the economics of reprocessing versus direct disposal of spent nuclear fuel. The breakeven uranium price at which reprocessing spent nuclear fuel from existing light-water reactors (LWRs) and recycling the resulting plutonium and uranium in LWRs would become economic is assessed, using central estimates of the costs of different elements of the nuclear fuel cycle (and other fuel cycle input parameters), for a wide range of potential reprocessing prices. Sensitivity analysis is performed, showing that the conclusions reached are robust across a wide range of input parameters. The contribution of direct disposal or reprocessing and recycling to electricity cost is also assessed. The choice of particular central estimates and ranges for the input parameters of the fuel cycle model is justified through a review of the relevant literature. The impact of different fuel cycle approaches on the volume needed for geologic repositories is briefly discussed, as are the issues surrounding the possibility of performing separations and transmutation on spent nuclear fuel to reduce the need for additional repositories. A similar analysis is then performed of the breakeven uranium price at which deploying fast neutron breeder reactors would become competitive compared with a once-through fuel cycle in LWRs, for a range of possible differences in capital cost between LWRs and fast neutron reactors. Sensitivity analysis is again provided, as are an analysis of the contribution to electricity cost and a justification of the choices of central estimates and ranges for the input parameters. The equations used in the economic model are derived and explained in an appendix. Another appendix assesses the quantities of uranium likely to be recoverable worldwide in the future at a range of different possible future prices.

  12. Generative Representations for Evolving Families of Designs

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2003-01-01

    Since typical evolutionary design systems encode only a single artifact with each individual, each time the objective changes a new set of individuals must be evolved. When this objective varies in a way that can be parameterized, a more general method is to use a representation in which a single individual encodes an entire class of artifacts. In addition to saving time by eliminating the need for multiple evolutionary runs, the evolution of parameter-controlled designs can create families of artifacts with the same style and a reuse of parts between members of the family. In this paper an evolutionary design system is described which uses a generative representation to encode families of designs. Because a generative representation is an algorithmic encoding of a design, its input parameters are a way to control aspects of the design it generates. By evaluating individuals multiple times with different input parameters, the evolutionary design system creates individuals in which the input parameters control specific aspects of a design. This system is demonstrated on two design substrates: neural networks which solve the 3/5/7-parity problem, and three-dimensional tables of varying heights.

  13. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    NASA Astrophysics Data System (ADS)

    Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.

    2009-12-01

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.

  14. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capote, R.; Herman, M.; Oblozinsky, P.

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.

  15. RIPL-Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capote, R.; Herman, M.; Capote,R.

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.

  16. Using Natural Language to Enhance Mission Effectiveness

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Meszaros, Erica

    2016-01-01

    The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for professional-related activities. The driving function of this research is allowing a non-UAV pilot, an operator, to define and manage a mission. This paper describes the preliminary usability measures of an interface that allows an operator to define the mission using speech input. An experiment was conducted to begin to assess the efficacy and user acceptance of using voice commands to define a multi-UAV mission and to provide high-level vehicle control commands such as "takeoff." The primary independent variable was input type - voice or mouse. The primary dependent variables consisted of the correctness of the mission parameter inputs and the time needed to make all inputs. Other dependent variables included NASA-TLX workload ratings and subjective ratings on a final questionnaire. The experiment required each subject to fill in an online form containing the information a package dispatcher would need to deliver packages. For each run, subjects typed in a simple numeric package code. They then defined the initial starting position, the delivery location, and the return location using either pull-down menus or voice input. Voice input was accomplished using CMU Sphinx4-5prealpha for speech recognition. They then entered the length of the package. These were the optional fields. The subject had the system "Calculate Trajectory" and then "Takeoff" once the trajectory was calculated. Later, the subject used "Land" to finish the run. After the blocked voice-input and mouse-input runs, subjects completed a NASA-TLX. At the conclusion of all runs, subjects completed a questionnaire asking them about their experience in inputting the mission parameters and starting and stopping the mission using mouse and voice input. In general, the usability of voice commands is acceptable. With a relatively well-defined and simple vocabulary, the operator can input the vast majority of the mission parameters using simple, intuitive voice commands. However, voice input may be more applicable to initial mission specification than to critical commands, such as the need to land immediately, due to time and feedback constraints. It would also be convenient to retrieve relevant mission information using voice input. Therefore, further ongoing research is looking at using intent from operator utterances to provide relevant mission information to the operator. The information displayed will be inferred from the operator's utterances just before key phrases are spoken. Linguistic analysis of the context of verbal communication provides insight into the intended meaning of commonly heard phrases such as "What's it doing now?" Analyzing the semantic sphere surrounding these common phrases enables us to predict the operator's intent and supply the operator's desired information to the interface. This paper also describes preliminary investigations into the generation of the semantic space of UAV operation and the success at providing information to the interface based on the operator's utterances.

  17. Parameter extraction with neural networks

    NASA Astrophysics Data System (ADS)

    Cazzanti, Luca; Khan, Mumit; Cerrina, Franco

    1998-06-01

    In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD (critical dimension) forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent the behavior of the 'physical model' in a virtual environment. After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. This problem is particularly severe because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process. Once the NN has been 'trained,' it is also possible to run the process 'in reverse' and extract the values of the inputs which yield outputs with desired characteristics. Using this method, we can extract optimum values for the parameters and determine the process latitude very quickly.
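
    The forward-then-reverse use of a network can be sketched with scikit-learn. The "process" below is a stand-in analytic function rather than a lithography simulator, and a grid search is the simplest possible inversion strategy.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)

    # Hypothetical process: two input parameters -> one performance metric.
    def process(x):
        return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

    X = rng.uniform(-1, 1, (2000, 2))
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                       random_state=0).fit(X, process(X))

    # "Reverse" use: search the input space for settings yielding a desired output.
    target = 0.8
    g = np.linspace(-1, 1, 200)
    grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    best = grid[np.argmin(np.abs(net.predict(grid) - target))]
    print(f"inputs giving output ~{target}:", best)
    ```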

  18. Hadron production and neutrino beams

    NASA Astrophysics Data System (ADS)

    Guglielmi, A.

    2006-11-01

    The precise measurements of the neutrino mixing parameters in the oscillation experiments at accelerators require new high-intensity and high-purity neutrino beams. Ancillary hadron-production measurements are then needed as inputs to precise calculations of neutrino beams and of atmospheric neutrino fluxes.

  19. Effects of control inputs on the estimation of stability and control parameters of a light airplane

    NASA Technical Reports Server (NTRS)

    Cannaday, R. L.; Suit, W. T.

    1977-01-01

    The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. Consistency was compared by using the ensemble variance and the estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive, but the sequence of rudder input followed by aileron input, or aileron followed by rudder, gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.

  20. Electron-impact excitation of He: Dependence of electron-photon coherence parameters on the principal quantum number

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, P.; Khakoo, M.A.; McConkey, J.W.

    1987-12-01

    Measurements are presented representing a complete set of electron-photon polarization correlation parameters for the excitation of n¹P states of He at an incident energy of 80 eV and an electron scattering angle of 20°. The data support the predictions of a recent theoretical paper that these parameters should exhibit little variation with n. However, disagreement in absolute values between experiment and theory indicates the need for additional theoretical input into the problem.

  1. A Lagrangian Subgridscale Model for Particle Transport Improvement and Application in the Adriatic Sea Using the Navy Coastal Ocean Model

    DTIC Science & Technology

    2006-12-01

    based on input statistical parameters, such as the turbulent velocity fluctuation and correlation time scale, without the need of an underlying ... (Eq. 8: a sum of two squared model-versus-real parameter differences normalized by Tr + Tm), which is zero if the model and real parameters coincide. The correlation coefficient rmc between the ... well correlated with the latter. The parameters estimated from the corrected velocity: Real (top), Model (middle), Corrected (bottom); Tm = 1.5, Gm = 10, Tr ...

  2. Control and optimization system

    DOEpatents

    Xinsheng, Lou

    2013-02-12

    A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  3. System and method for motor parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  4. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
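
    For reference, the 3-2-1-1 multistep named above is four alternating-sign pulses with relative durations 3, 2, 1, 1; a minimal generator is sketched below (sample time, pulse width, and amplitude are arbitrary placeholders).

    ```python
    import numpy as np

    def multistep_3211(dt=0.02, pulse=0.5, amplitude=1.0):
        """3-2-1-1 multistep input: alternating-sign pulses with relative
        durations 3, 2, 1, 1 (pulse = duration of the unit step, seconds)."""
        segments = [(3, +1), (2, -1), (1, +1), (1, -1)]
        u = np.concatenate([np.full(int(n * pulse / dt), s * amplitude)
                            for n, s in segments])
        return np.arange(u.size) * dt, u

    t, u = multistep_3211()
    print(t[-1], u[:5])  # 3.48 s record; first samples at +1
    ```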

  5. FACTORS INFLUENCING TOTAL DIETARY EXPOSURES OF YOUNG CHILDREN

    EPA Science Inventory

    A deterministic model was developed to identify the critical input parameters needed to assess dietary intakes of young children. The model was used as a framework for understanding the important factors in data collection and data analysis. Factors incorporated into the model i...

  6. Variance-based interaction index measuring heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom

    2016-06-01

    This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of Sobol' first-order sensitivity indices. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower-dimensional functions which may then be analyzed separately.
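
    The global, variance-based index itself is not reproduced here, but the underlying notion of interaction (non-additivity) can be illustrated with a local mixed finite difference: if f is additive in x_i and x_j, the cross term vanishes. This sketch conveys only the idea, not the paper's estimator.

    ```python
    import numpy as np

    def interacts(f, x0, i, j, h=1e-3, tol=1e-6):
        """Local non-additivity test: estimate the mixed derivative
        d2f/(dx_i dx_j); it is zero when f is additive in x_i and x_j."""
        e = np.eye(len(x0))
        g = lambda di, dj: f(x0 + h * (di * e[i] + dj * e[j]))
        cross = (g(1, 1) - g(1, -1) - g(-1, 1) + g(-1, -1)) / (4 * h * h)
        return abs(cross) > tol

    f = lambda x: x[0] * x[1] + x[2] ** 2  # x0 and x1 interact; x2 enters additively
    x0 = np.array([0.5, 0.5, 0.5])
    print(interacts(f, x0, 0, 1), interacts(f, x0, 0, 2))  # True False
    ```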

  7. Removing Visual Bias in Filament Identification: A New Goodness-of-fit Measure

    NASA Astrophysics Data System (ADS)

    Green, C.-E.; Cunningham, M. R.; Dawson, J. R.; Jones, P. A.; Novak, G.; Fissel, L. M.

    2017-05-01

    Different combinations of input parameters to filament identification algorithms, such as DisPerSE and FilFinder, produce numerous different output skeletons. The skeletons are a one-pixel-wide representation of the filamentary structure in the original input image. However, these output skeletons may not necessarily be a good representation of that structure. Furthermore, a given skeleton may not be as good a representation as another. Previously, there has been no mathematical "goodness-of-fit" measure to compare output skeletons to the input image. Thus far this has been assessed visually, introducing visual bias. We propose the application of the mean structural similarity index (MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the MSSIM to find the output skeletons that are the most mathematically similar to the original input image (the optimum, or "best," skeletons) for a given algorithm, and independently of the algorithm. This measure makes possible systematic parameter studies, aimed at finding the subset of input parameter values returning optimum skeletons. It can also be applied to the output of non-skeleton-based filament identification algorithms, such as the Hessian matrix method. The MSSIM removes the need to visually examine thousands of output skeletons, and eliminates the visual bias, subjectivity, and limited reproducibility inherent in that process, representing a major improvement upon existing techniques. Importantly, it also allows further automation in the post-processing of output skeletons, which is crucial in this era of "big data."
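
    scikit-image exposes SSIM (averaged over local windows, i.e. the MSSIM) directly, so the proposed ranking reduces to a few lines. The random "image" and threshold-based "skeletons" below are stand-ins; real inputs would be the observed map and the one-pixel-wide skeletons produced by DisPerSE or FilFinder.

    ```python
    import numpy as np
    from skimage.metrics import structural_similarity

    rng = np.random.default_rng(3)
    image = rng.random((128, 128))  # stand-in for the input image

    # Stand-ins for candidate output skeletons from different input parameters.
    skeletons = [(image > t).astype(float) for t in (0.5, 0.7, 0.9)]

    # Rank candidates by mean structural similarity to the input image.
    scores = [structural_similarity(image, s, data_range=1.0) for s in skeletons]
    print("MSSIM scores:", scores, "-> optimum skeleton:", int(np.argmax(scores)))
    ```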

  8. Parameter optimization of a hydrologic model in a snow-dominated basin using a modular Python framework

    NASA Astrophysics Data System (ADS)

    Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.

    2016-12-01

    Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically-based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapo-transpiration, and streamflow in a PRMS model that we applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach enables the user to use a variety of parameter optimization and uncertainty methods or easily define their own, such as Monte Carlo random sampling, uniform sampling, or even optimization methods such as the downhill simplex method or its commonly used, more robust counterpart, shuffled complex evolution.
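
    The value of a common objective-function interface behind interchangeable optimizers is easy to show. In the hedged sketch below, a two-parameter recession curve stands in for a PRMS run, the Nash-Sutcliffe efficiency is the calibration metric, and SciPy's Nelder-Mead (the downhill simplex named above) does the search; Monte Carlo sampling or shuffled complex evolution could be swapped in behind the same interface.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    obs = np.array([1.0, 0.8, 0.65, 0.55, 0.5])  # stand-in observed streamflow

    def simulate(params):
        """Hypothetical two-parameter recession model standing in for a PRMS run."""
        q0, k = params
        return q0 * np.exp(-k * np.arange(obs.size))

    def neg_nse(params):
        sim = simulate(params)
        return -(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

    # Downhill simplex; any sampler or optimizer can sit behind neg_nse instead.
    res = minimize(neg_nse, x0=[0.9, 0.3], method="Nelder-Mead")
    print("calibrated parameters:", res.x, "NSE:", -res.fun)
    ```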

  9. Polynomic nonlinear dynamical systems - A residual sensitivity method for model reduction

    NASA Technical Reports Server (NTRS)

    Yurkovich, S.; Bugajski, D.; Sain, M.

    1985-01-01

    The motivation for using polynomic combinations of system states and inputs to model nonlinear dynamics systems is founded upon the classical theories of analysis and function representation. A feature of such representations is the need to make available all possible monomials in these variables, up to the degree specified, so as to provide for the description of widely varying functions within a broad class. For a particular application, however, certain monomials may be quite superfluous. This paper examines the possibility of removing monomials from the model in accordance with the level of sensitivity displayed by the residuals to their absence. Critical in these studies is the effect of system input excitation, and the effect of discarding monomial terms, upon the model parameter set. Therefore, model reduction is approached iteratively, with inputs redesigned at each iteration to ensure sufficient excitation of remaining monomials for parameter approximation. Examples are reported to illustrate the performance of such model reduction approaches.

  10. Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event

    DOE PAGES

    Strydom, Gerhard

    2013-01-01

    The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
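
    For context, the classical route from simple random sampling to a one-sided 95%/95% tolerance bound is Wilks' formula, by which 59 runs suffice and the sample maximum is the bound. The sketch below uses an invented linear response in the two inputs the study found dominant (decay heat and graphite thermal conductivity); the actual study used SUSA with PEBBED-THERMIX and many more runs.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def peak_fuel_temp(decay_heat, conductivity):
        """Stand-in for a PEBBED-THERMIX DLOFC transient (illustration only)."""
        return 1600.0 + 120.0 * (decay_heat - 1.0) - 90.0 * (conductivity - 1.0)

    # Simple Random Sampling of two normalized uncertain inputs; first-order
    # Wilks: 59 runs give a one-sided 95% tolerance bound at 95% confidence.
    n = 59
    temps = peak_fuel_temp(rng.normal(1.0, 0.05, n), rng.normal(1.0, 0.10, n))
    print("95%/95% upper bound on peak fuel temperature:", temps.max(), "C")
    ```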

  11. Framework for Mapping of Receiver Interference Tolerance Masks to Tolerable Effective Isotropic Radiated Power Levels : GPS ABC Workshop V RTCA Washington, DC October 14, 2016.

    DOT National Transportation Integrated Search

    2016-10-14

    Outline : : Interference Tolerance Mask (ITM) to Effective Isotropic Radiated Power (IRP) for the particular case of a single transmitter : : ITM() to IRP() for the general case of multiple transmitters : : Input parameters needed to solv...

  12. Replacing Fortran Namelists with JSON

    NASA Astrophysics Data System (ADS)

    Robinson, T. E., Jr.

    2017-12-01

    Maintaining a log of input parameters for a climate model is very important to understanding potential causes of answer changes during the development stages. Additionally, since modern Fortran is now interoperable with C, a software infrastructure that accommodates code written in C is necessary. Merging these two separate facets of climate modeling requires a quality control for monitoring changes to input parameters and model defaults that can work with both Fortran and C. JSON will soon replace namelists as the preferred key/value pair input in the GFDL model. By adding a JSON parser written in C into the model, the input can be used by all functions and subroutines in the model, errors can be handled by the model instead of by the internal namelist parser, and the values can be output into a single file that is easily parsable by readily available tools. Input JSON files can handle all of the functionality of a namelist while being portable between C and Fortran. Fortran wrappers using unlimited polymorphism are crucial to allow for simple and compact code which avoids the need for many subroutines contained in an interface. Errors can be handled with more detail by providing information about the location of syntax errors or typos. The output JSON provides a ground truth for the values that the model actually uses, recording not only the values loaded through the input JSON but also any default values that were not included. This kind of quality control on model input is crucial for maintaining reproducibility and understanding any answer changes resulting from changes in the input.
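
    A minimal Python sketch of the quality-control idea, merging user JSON over model defaults, rejecting unknown keys, and writing out the values the model actually uses, is given below; the parameter names and file names are hypothetical, and the GFDL implementation described above lives in C and Fortran rather than Python.

    ```python
    import json

    DEFAULTS = {"dt_atmos": 1800, "do_clouds": True, "radiation_scheme": "rrtmg"}

    def load_input(path):
        """Merge user JSON over defaults and record the values actually used."""
        with open(path) as f:
            user = json.load(f)  # syntax errors raise with line/column detail
        unknown = set(user) - set(DEFAULTS)
        if unknown:
            raise KeyError(f"unrecognized input parameters: {sorted(unknown)}")
        merged = {**DEFAULTS, **user}
        with open("input_as_used.json", "w") as f:
            json.dump(merged, f, indent=2)  # the 'ground truth' log
        return merged
    ```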

  13. FORTRAN program for predicting off-design performance of radial-inflow turbines

    NASA Technical Reports Server (NTRS)

    Wasserbauer, C. A.; Glassman, A. J.

    1975-01-01

    The FORTRAN IV program uses a one-dimensional solution of flow conditions through the turbine along the mean streamline. The program inputs needed are the design-point requirements and turbine geometry. The output includes performance and velocity-diagram parameters over a range of speed and pressure ratio. Computed performance is compared with the experimental data from two radial-inflow turbines and with the performance calculated by a previous computer program. The flow equations, program listing, and input and output for a sample problem are given.

  14. Simulation tests of the optimization method of Hopfield and Tank using neural networks

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.

    1988-01-01

    The method proposed by Hopfield and Tank for using the Hopfield neural network with continuous valued neurons to solve the traveling salesman problem is tested by simulation. Several researchers have apparently been unable to successfully repeat the numerical simulation documented by Hopfield and Tank. However, as suggested to the author by Adams, it appears that the reason for those difficulties is that a key parameter value is reported erroneously (by four orders of magnitude) in the original paper. When a reasonable value is used for that parameter, the network performs generally as claimed. Additionally, a new method of using feedback to control the input bias currents to the amplifiers is proposed and successfully tested. This eliminates the need to set the input currents by trial and error.

  15. Optimization Under Uncertainty for Electronics Cooling Design

    NASA Astrophysics Data System (ADS)

    Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.

    Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as mean and standard deviation in the output quantities, auxiliary data from an uncertainty based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...

  16. Client/server approach to image capturing

    NASA Astrophysics Data System (ADS)

    Tuijn, Chris; Stokes, Earle

    1998-01-01

    The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications, and high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs, which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed, as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we abstract away the specific scanner parameters and define scan jobs by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction between the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third section presents the generic model for input devices as well as the image processing model; the fourth section describes the different forms the scanning applications (or modules) can take. In the last section, we briefly summarize the presented material and point out trends for future development.

  17. A statistical survey of heat input parameters into the cusp thermosphere

    NASA Astrophysics Data System (ADS)

    Moen, J. I.; Skjaeveland, A.; Carlson, H. C.

    2017-12-01

    Based on three winters of observational data, we present those ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements, up to doublings. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wipe scans of 1000 x 500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection footpoint. Use of our data-based parameter inputs can make order-of-magnitude corrections to input data driving thermosphere models, enabling removal of previous twofold drag errors.

  18. Is there a `universal' dynamic zero-parameter hydrological model? Evaluation of a dynamic Budyko model in US and India

    NASA Astrophysics Data System (ADS)

    Patnaik, S.; Biswal, B.; Sharma, V. C.

    2017-12-01

    River flow varies greatly in space and time, and the single biggest challenge for hydrologists and ecologists around the world is the fact that most rivers are either ungauged or poorly gauged. Although it is relatively easy to predict the long-term average flow of a river using the `universal' zero-parameter Budyko model, lack of data hinders short-term flow prediction at ungauged locations using traditional hydrological models, as they require observed flow data for calibration. Flow prediction in ungauged basins thus requires a dynamic 'zero-parameter' hydrological model. One way to achieve this is to regionalize a dynamic hydrological model's parameters; however, a zero-parameter dynamic hydrological model based on regionalization is not `universal'. An alternative attempt was made recently to develop a zero-parameter dynamic model by defining an instantaneous dryness index as a function of antecedent rainfall and solar energy inputs with the help of a decay function and using the original Budyko function. The model was tested first in 63 US catchments and later in 50 Indian catchments. The median Nash-Sutcliffe efficiency (NSE) was found to be close to 0.4 in both cases. Although improvements need to be incorporated before the model can be used for reliable prediction, the main aim of this study was rather to understand hydrological processes. The overall results here seem to suggest that the dynamic zero-parameter Budyko model is `universal.' In other words, natural catchments around the world are strikingly similar to each other in the way they respond to hydrologic inputs; we thus need to focus more on utilizing catchment similarities in hydrological modelling instead of over-parameterizing our models.
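
    A hedged sketch of the general shape of such a model follows: antecedent rainfall and potential evaporation are weighted by an exponential decay function, their ratio gives an instantaneous dryness index, and the classical Budyko curve partitions the supply between evaporation and runoff. The decay constant and synthetic forcing are hypothetical, and the code illustrates the idea rather than reproducing the authors' exact formulation.

        # Hedged sketch of a dynamic Budyko model with a decay-weighted
        # dryness index; forcing series and decay rate are hypothetical.
        import numpy as np

        def budyko(phi):
            # Classical zero-parameter Budyko curve: E/P as a function of
            # the dryness index phi = PET/P.
            return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

        def antecedent(x, lam=0.1):
            # Exponentially decay-weighted accumulation of past inputs.
            out, acc = np.empty_like(x), 0.0
            for t, xt in enumerate(x):
                acc = acc * np.exp(-lam) + xt
                out[t] = acc
            return out

        rng = np.random.default_rng(1)
        days = np.arange(365)
        rain = rng.gamma(0.5, 4.0, days.size)             # daily rainfall [mm]
        pet = 3.0 + 1.5 * np.sin(2 * np.pi * days / 365)  # daily PET [mm]

        P, E0 = antecedent(rain), antecedent(pet)
        phi = np.maximum(E0, 1e-6) / np.maximum(P, 1e-6)  # instantaneous dryness index
        runoff = P * (1.0 - budyko(phi))                  # water not evaporated
        print(f"mean dryness index {phi.mean():.2f}, mean runoff {runoff.mean():.2f} mm")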

  19. Distributed Time-Varying Formation Robust Tracking for General Linear Multiagent Systems With Parameter Uncertainties and External Disturbances.

    PubMed

    Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang

    2017-05-18

    This paper investigates the time-varying formation robust tracking problem for high-order linear multiagent systems with a leader of unknown control input in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space and simultaneously track the state trajectory produced by the leader. First, a time-varying formation robust tracking protocol in a fully distributed form is proposed, utilizing neighborhood state information. With the adaptive updating mechanism, neither global knowledge about the communication topology nor the upper bounds of the parameter uncertainties, external disturbances and leader's unknown input are required in the proposed protocol. Then, in order to determine the control parameters, a four-step algorithm is presented, where feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on Lyapunov-like analysis theory, it is proved that the formation tracking error converges to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.

  20. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to their optimization and space-searching capabilities, genetic algorithms were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
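
    A hedged sketch of the approach follows. Individuals are binary masks over candidate inputs, and fitness is the validation error of a cheap surrogate fit; a linear least-squares fit stands in for the neural network approximator purely to keep the example short, and the data are synthetic.

        # Sketch of GA-based input selection with a linear surrogate fitness.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 12))                    # 12 candidate inputs
        y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=400)
        Xtr, Xva, ytr, yva = X[:300], X[300:], y[:300], y[300:]

        def fitness(mask):
            if not mask.any():
                return -np.inf
            coef, *_ = np.linalg.lstsq(Xtr[:, mask], ytr, rcond=None)
            resid = yva - Xva[:, mask] @ coef
            return -np.mean(resid ** 2) - 0.01 * mask.sum()  # penalize extra inputs

        pop = rng.integers(0, 2, size=(30, 12)).astype(bool)
        for gen in range(40):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[-10:]]          # truncation selection
            children = []
            for _ in range(len(pop)):
                a, b = parents[rng.integers(10)], parents[rng.integers(10)]
                cut = rng.integers(1, 12)
                child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
                flip = rng.random(12) < 0.05                 # bit-flip mutation
                children.append(child ^ flip)
            pop = np.array(children)

        best = pop[np.argmax([fitness(m) for m in pop])]
        print("selected inputs:", np.flatnonzero(best))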

  1. Analysis and Simple Circuit Design of Double Differential EMG Active Electrode.

    PubMed

    Guerrero, Federico Nicolás; Spinelli, Enrique Mario; Haberman, Marcelo Alejandro

    2016-06-01

    In this paper we present an analysis of the voltage amplifier needed for double differential (DD) sEMG measurements and a novel, very simple circuit for implementing DD active electrodes. The three-input amplifier that standalone DD active electrodes require is inherently different from a differential amplifier, and general knowledge about its design is scarce in the literature. First, the figures of merit of the amplifier are defined through a decomposition of its input signal into three orthogonal modes. This analysis reveals a mode containing EMG crosstalk components that the DD electrode should reject. Then, the effect of finite input impedance is analyzed. Because there are three terminals, minimum bounds for interference rejection ratios due to electrode and input impedance unbalances with two degrees of freedom are obtained. Finally, a novel circuit design is presented, comprising only a quad operational amplifier and a few passive components. This design is nearly as simple as the branched electrode and much simpler than the three-instrumentation-amplifier design, while providing robust EMG crosstalk rejection and better input impedance using unity-gain buffers for each electrode input. The interference rejection limits of this input stage are analyzed. An easily replicable implementation of the proposed circuit is described, together with a parameter design guideline to adjust it to specific needs. The electrode is compared with the established alternatives, and sample sEMG signals acquired at different body locations with dry contacts are presented, successfully rejecting interference sources.

  2. Neural integrators for decision making: a favorable tradeoff between robustness and sensitivity

    PubMed Central

    Cain, Nicholas; Barreiro, Andrea K.; Shadlen, Michael

    2013-01-01

    A key step in many perceptual decision tasks is the integration of sensory inputs over time, but fundamental questions remain about how this is accomplished in neural circuits. One possibility is to balance decay modes of membranes and synapses with recurrent excitation. To allow integration over long timescales, however, this balance must be exceedingly precise. The need for fine tuning can be overcome via a “robust integrator” mechanism in which momentary inputs must be above a preset limit to be registered by the circuit. The degree of this limiting embodies a tradeoff between sensitivity to the input stream and robustness against parameter mistuning. Here, we analyze the consequences of this tradeoff for decision-making performance. For concreteness, we focus on the well-studied random dot motion discrimination task and constrain stimulus parameters by experimental data. We show that mistuning feedback in an integrator circuit decreases decision performance, but that the robust integrator mechanism can limit this loss. Intriguingly, even for perfectly tuned circuits with no immediate need for a robustness mechanism, including one often does not impose a substantial penalty on decision-making performance. The implication is that robust integrators may be well suited to subserve the basic function of evidence integration in many cognitive tasks. We develop these ideas using simulations of coupled neural units and the mathematics of sequential analysis. PMID:23446688
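
    The toy simulation below, with hypothetical parameter values, illustrates the mechanism: a mistuned (leaky) integrator accumulates weak, noisy evidence, and a registration threshold discards momentary inputs below a preset limit. Comparing accuracies with the threshold off and on exposes the sensitivity-robustness tradeoff the authors analyze.

        # Toy robust-integrator simulation; all parameter values hypothetical.
        import numpy as np

        rng = np.random.default_rng(2)

        def integrate(evidence, leak, theta):
            x = 0.0
            for e in evidence:
                if abs(e) >= theta:   # robust integrator: ignore weak momentary inputs
                    x += e
                x -= leak * x         # mistuned feedback acts as a leak
            return x

        trials = 2000
        for theta in (0.0, 0.5):      # registration threshold off vs. on
            hits = sum(
                integrate(0.05 + rng.normal(0.0, 1.0, 200), leak=0.02, theta=theta) > 0
                for _ in range(trials)
            )
            print(f"theta = {theta}: accuracy = {hits / trials:.3f}")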

  3. Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1993-01-01

    The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single-pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming, with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple input design capability, with optional inclusion of a constraint that only one control moves at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications of the new technique which demonstrate the quality and expanded capabilities of the input designs it produces. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.

  4. Experience of the JPL Exploratory Data Analysis Team at validating HIRS2/MSU cloud parameters

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph; Haskins, Robert D.; Granger-Gallegos, Stephanie; Pursch, Andrew; Delgenio, Anthony

    1992-01-01

    Validation of the HIRS2/MSU cloud parameters began with the cloud/climate feedback problem. The derived effective cloud amount is less sensitive to surface temperature for higher clouds. This occurs because as the cloud elevation increases, the difference between surface temperature and cloud temperature increases, so only a small change in cloud amount is needed to effect a large change in radiance at the detector. Validating the cloud parameters here means 'developing a quantitative sense for the physical meaning of the measured parameters', by: (1) identifying the assumptions involved in deriving parameters from the measured radiances, (2) testing the input data and derived parameters for statistical error, sensitivity, and internal consistency, and (3) comparing with similar parameters obtained from other sources using other techniques.

  5. Rendering of HDR content on LDR displays: an objective approach

    NASA Astrophysics Data System (ADS)

    Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick

    2015-09-01

    Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is, however, non-trivial, and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.
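
    As a hedged illustration of what objective parameter selection can look like, the sketch below sweeps the key value of a simple Reinhard-style global operator and keeps the value minimizing a proxy objective. The proxy score is a stand-in invented for this example, not the contrast-reversal and naturalness measures of the paper.

        # Sketch of objective TMO parameter selection with placeholder metrics.
        import numpy as np

        rng = np.random.default_rng(3)
        hdr = rng.lognormal(mean=0.0, sigma=2.0, size=(64, 64))  # synthetic HDR image

        def tone_map(img, a):
            # Reinhard-style global operator with key value a.
            scaled = a * img / np.exp(np.mean(np.log(img + 1e-9)))
            return scaled / (1.0 + scaled)

        def proxy_score(ldr):
            # Placeholder objective: prefer a mid-gray mean (naturalness proxy)
            # while rewarding local contrast (detail-preservation proxy).
            return abs(ldr.mean() - 0.5) - 0.5 * np.std(np.diff(ldr, axis=0))

        keys = np.linspace(0.09, 0.9, 10)
        best_a = min(keys, key=lambda a: proxy_score(tone_map(hdr, a)))
        print(f"selected key value: {best_a:.2f}")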

  6. Application and optimization of input parameter spaces in mass flow modelling: a case study with r.randomwalk and r.ranger

    NASA Astrophysics Data System (ADS)

    Krenn, Julia; Zangerl, Christian; Mergili, Martin

    2017-04-01

    r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces offer a way to move away from discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space so as to bring the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, a new technique is needed. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automate the workflow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). This strategy is best demonstrated for two input parameters but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions for a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist in (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) applying the same strategy to the more complex, dynamic model r.avaflow.
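
    The toy sketch below, with an invented one-dimensional runout model standing in for r.randomwalk, shows the nested subrange idea: each parameter's total range is split into subranges, every subrange combination is run as a small ensemble to build an III pattern, and combinations are ranked by AUROC against an observed impact map.

        # Toy nested-subrange optimization scored by AUROC; the runout model,
        # parameter ranges, and observations are invented placeholders.
        import itertools
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(4)
        x = np.linspace(0.0, 5.0, 200)        # 1-D "terrain" pixels
        observed = x < 2.0                    # observed impact area

        def impacted(mu, md):
            # Placeholder runout model: reach grows with the mass-to-drag
            # ratio md and shrinks with the basal friction coefficient mu.
            return x < md * (1.0 - mu)

        def iii(mu_range, md_range, n=50):
            # Impact indicator index: fraction of sampled parameter
            # combinations predicting an impact on each pixel.
            hits = np.zeros_like(x)
            for _ in range(n):
                hits += impacted(rng.uniform(*mu_range), rng.uniform(*md_range))
            return hits / n

        def subranges(lo, hi, k):
            step = (hi - lo) / k
            return [(lo + i * step, lo + (i + 1) * step) for i in range(k)]

        combos = itertools.product(subranges(0.05, 0.45, 4), subranges(1.0, 5.0, 4))
        best = max(combos, key=lambda c: roc_auc_score(observed, iii(*c)))
        print("best (mu, m/d) subranges:", best)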

  7. Generalized approach to cooling charge-coupled devices using thermoelectric coolers

    NASA Technical Reports Server (NTRS)

    Petrick, S. Walter

    1987-01-01

    This paper is concerned with the use of thermoelectric coolers (TECs) to cool charge-coupled devices (CCDs). Heat inputs to the CCD from the warmer environment are identified, and generalized graphs are used to approximate the major heat inputs. A method of choosing and estimating the power consumption of the TEC is discussed. This method includes the use of TEC performance information supplied by the manufacturer and equations derived from this information. Parameters of the equations are tabulated to enable the reader to use the TEC performance equations for choosing and estimating the power needed for specific TEC applications.

  8. An open, object-based modeling approach for simulating subsurface heterogeneity

    NASA Astrophysics Data System (ADS)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.

    2017-12-01

    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.

  9. MASTOS: Mammography Simulation Tool for design Optimization Studies.

    PubMed

    Spyrou, G; Panayiotakis, G; Tzanakos, G

    2000-01-01

    Mammography is a high quality imaging technique for the detection of breast lesions, which requires dedicated equipment and optimum operation. The design parameters of a mammography unit have to be decided and evaluated before the construction of such a high-cost apparatus. The optimum operational parameters also must be defined well before the real breast examination. MASTOS is a software package, based on Monte Carlo methods, that is designed to be used as a simulation tool in mammography. The input consists of the parameters that have to be specified when using a mammography unit, as well as the parameters specifying the shape and composition of the breast phantom. In addition, the input may specify parameters needed in the design of a new mammographic apparatus. The main output of the simulation is a mammographic image and calculations of various factors that describe the image quality. The Monte Carlo simulation code is PC-based and is driven by an outer shell of a graphical user interface. The entire software package is a simulation tool for mammography and can be applied in basic research and/or in training in the fields of medical physics and biomedical engineering, as well as in the performance evaluation of new designs of mammography units and in the determination of optimum standards for the operational parameters of a mammography unit.

  10. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article addresses the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) when analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range with a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.

  11. Automated system for generation of soil moisture products for agricultural drought assessment

    NASA Astrophysics Data System (ADS)

    Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Drought is a frequently occurring disaster affecting the lives of millions of people across the world every year. Several parameters, indices and models are being used globally to forecast and provide early warning of drought, and to monitor drought for its prevalence, persistence and severity. Since drought is a complex phenomenon, a large number of parameters/indices need to be evaluated to sufficiently address the problem. It is a challenge to generate input parameters from different sources like space-based data, ground data and collateral data in short intervals of time, where there may be limitations in terms of processing power, availability of domain expertise, and specialized models and tools. In this study, an effort has been made to automate the derivation of one of the important parameters in drought studies, viz. soil moisture. A soil water balance bucket model is in vogue for arriving at soil moisture products and is widely popular for its sensitivity to soil conditions and rainfall parameters. This model has been encoded into a "Fish-Bone" architecture using COM technologies and open source libraries for the best possible automation, to fulfill the need for a standard procedure for preparing input parameters and processing routines. The main aim of the system is to provide an operational environment for the generation of soil moisture products, allowing users to concentrate on further enhancements and implementation of these parameters in related areas of research, without re-discovering the established models. Emphasis of the architecture is mainly on available open source libraries for GIS and raster IO operations for different file formats, to ensure that the products can be widely distributed without the burden of any commercial dependencies. Further, the system is automated to the extent of user-free operation if required, with inbuilt chain processing for everyday generation of products at specified intervals. The operational software has inbuilt capabilities to automatically download requisite input parameters like rainfall and potential evapotranspiration (PET) from the respective servers. It can import file formats like .grd, .hdf, .img, generic binary, etc., perform geometric correction and re-project the files to the native projection system. The software takes into account the weather, crop and soil parameters to run the designed soil water balance model. The software also has additional features like time compositing of outputs to generate weekly and fortnightly profiles for further analysis. Other tools to generate "Area Favorable for Crop Sowing" using the daily soil moisture, with a highly customizable parameter interface, have been provided. A whole-India analysis now takes a mere 20 seconds for the generation of soil moisture products, which would normally take one hour per day using commercial software.
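
    The heart of such a system is the daily bucket water balance; a minimal hedged sketch follows, with hypothetical capacity and forcing, in which rainfall fills a soil store, actual evapotranspiration scales potential evapotranspiration by relative wetness, and overflow leaves as runoff.

        # Minimal daily bucket soil-water-balance sketch; values hypothetical.
        import numpy as np

        rng = np.random.default_rng(5)
        rain = rng.gamma(0.4, 10.0, 30)      # daily rainfall [mm]
        pet = np.full(30, 5.0)               # daily potential ET [mm]
        capacity, sm = 150.0, 75.0           # bucket capacity, initial store [mm]

        total_runoff = 0.0
        for p, e in zip(rain, pet):
            sm += p                                    # rainfall fills the bucket
            sm -= e * (sm / capacity)                  # actual ET scaled by wetness
            total_runoff += max(sm - capacity, 0.0)    # overflow leaves as runoff
            sm = min(sm, capacity)
        print(f"final soil moisture {sm:.1f} mm, total runoff {total_runoff:.1f} mm")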

  12. 76 FR 73652 - Agency Information Collection Activities: Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-29

    ... national and regional levels; and b. Implementation of two surveys to collect the information needed to develop HIV-specific input parameters for the forecasting model, as well as to help address other research questions of the study. HRSA is requesting OMB approval to conduct a HIV clinician survey and a HIV practice...

  13. 77 FR 1495 - Agency Information Collection Activities: Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-10

    ... regional levels; and b. Implementation of two surveys to collect the information needed to develop HIV-specific input parameters for the forecasting model, as well as to help address other research questions of the study. HRSA is requesting OMB approval to conduct a HIV clinician survey and a HIV practice survey...

  14. Blind identification of the kinetic parameters in three-compartment models

    NASA Astrophysics Data System (ADS)

    Riabkov, Dmitri Y.; Di Bella, Edward V. R.

    2004-03-01

    Quantified knowledge of tissue kinetic parameters in regions of the brain and other organs can offer information useful in clinical and research applications. Dynamic medical imaging with injection of a radioactive or paramagnetic tracer can be used for this measurement. The kinetics of some widely used tracers, such as [18F]2-fluoro-2-deoxy-D-glucose, can be described by a three-compartment physiological model. The kinetic parameters of the tissue can be estimated from dynamically acquired images. The feasibility of estimation by blind identification, which does not require knowledge of the blood input, is considered analytically and numerically in this work for the three-compartment type of tissue response. The non-uniqueness of the two-region case for blind identification of kinetic parameters in the three-compartment model is shown; at least three regions are needed for the blind identification to be unique. Numerical results for the accuracy of these blind identification methods under different conditions were considered. Both a separable-variables least-squares (SLS) approach and an eigenvector-based multichannel blind deconvolution algorithm were used; the latter showed poor accuracy. Modifications for non-uniform time sampling were also developed. In addition, another method, which uses a model for the blood input, was compared. Results for the macroparameter K, which reflects the metabolic rate of glucose usage, using three regions with noise showed comparable accuracy for the separable-variables least-squares method and for the input model-based method, and slightly worse accuracy for SLS with the non-uniform sampling modification.
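
    For orientation, the forward model in question is the standard two-tissue (three-compartment) system; the hedged sketch below integrates it by forward Euler with hypothetical rate constants and an idealized blood input, and reports the macroparameter K = K1*k3/(k2+k3) referred to in the abstract. The blind-identification problem is to recover such parameters without knowing the input curve; only the forward model is shown here.

        # Forward Euler sketch of the three-compartment (two-tissue) model;
        # rate constants and the blood input curve are hypothetical.
        import numpy as np

        K1, k2, k3, k4 = 0.10, 0.15, 0.06, 0.002   # rate constants [1/min]
        dt = 0.1                                    # time step [min]
        t = np.arange(0.0, 60.0, dt)
        cp = 5.0 * np.exp(-0.1 * t)                 # idealized blood input

        c1 = c2 = 0.0
        tissue = []
        for cpt in cp:
            dc1 = K1 * cpt - (k2 + k3) * c1 + k4 * c2   # free compartment
            dc2 = k3 * c1 - k4 * c2                     # trapped compartment
            c1, c2 = c1 + dt * dc1, c2 + dt * dc2
            tissue.append(c1 + c2)

        K = K1 * k3 / (k2 + k3)
        print(f"macroparameter K = {K:.4f} 1/min, final tissue signal = {tissue[-1]:.3f}")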

  15. A sensitivity analysis of regional and small watershed hydrologic models

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurate remotely sensed measurements must be to provide inputs to hydrologic models of watersheds within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.

  16. Prescribed-performance fault-tolerant control for feedback linearisable systems with an aircraft application

    NASA Astrophysics Data System (ADS)

    Gao, Gang; Wang, Jinzhi; Wang, Xianghua

    2017-05-01

    This paper investigates fault-tolerant control (FTC) for feedback linearisable systems (FLSs) and its application to an aircraft. To ensure desired transient and steady-state behaviours of the tracking error under actuator faults, the dynamic effect caused by the actuator failures on the error dynamics of a transformed model is analysed, and three control strategies are designed. The first FTC strategy is proposed as a robust controller, which relies on the explicit information about several parameters of the actuator faults. To eliminate the need for these parameters and the input chattering phenomenon, the robust control law is later combined with the adaptive technique to generate the adaptive FTC law. Next, the adaptive control law is further improved to achieve the prescribed performance under more severe input disturbance. Finally, the proposed control laws are applied to an air-breathing hypersonic vehicle (AHV) subject to actuator failures, which confirms the effectiveness of the proposed strategies.

  17. Conditional parametric models for storm sewer runoff

    NASA Astrophysics Data System (ADS)

    Jonsdottir, H.; Nielsen, H. Aa; Madsen, H.; Eliasson, J.; Palsson, O. P.; Nielsen, M. K.

    2007-05-01

    The method of conditional parametric modeling is introduced for flow prediction in a sewage system. It is a well-known fact that in hydrological modeling the response (runoff) to input (precipitation) varies depending on soil moisture and several other factors. Consequently, nonlinear input-output models are needed. The model formulation described in this paper is similar to traditional linear models like the finite impulse response (FIR) and autoregressive exogenous (ARX) models, except that the parameters vary as a function of some external variables. The parameter variation is modeled by local lines, using kernels for local linear regression. As such, the method might be referred to as a nearest neighbor method. The results achieved in this study were compared to results from the conventional linear methods, FIR and ARX. The increase in the coefficient of determination is substantial. Furthermore, the new approach conserves the mass balance better. Hence this new approach looks promising for various hydrological models and analyses.
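
    The sketch below illustrates the conditional parametric idea under stated assumptions: the gain of a linear rainfall-runoff relation varies with an external conditioning variable s (say, a soil moisture proxy), and is recovered at any conditioning point by kernel-weighted least squares. The data are synthetic, and the local fit is simplified relative to the paper's local-line formulation.

        # Kernel-weighted local estimation of a conditionally varying gain;
        # all data and the bandwidth are synthetic/hypothetical.
        import numpy as np

        rng = np.random.default_rng(6)
        n = 500
        u = rng.uniform(0, 10, n)                    # input (rainfall intensity)
        s = rng.uniform(0, 1, n)                     # external conditioning variable
        y = (0.2 + 0.6 * s) * u + 0.1 * rng.normal(size=n)   # gain rises with s

        def local_gain(s0, bandwidth=0.1):
            w = np.exp(-0.5 * ((s - s0) / bandwidth) ** 2)   # Gaussian kernel
            # Weighted least squares of y on [1, u] for points near s = s0.
            A = np.column_stack([np.ones(n), u]) * np.sqrt(w)[:, None]
            b = y * np.sqrt(w)
            coef, *_ = np.linalg.lstsq(A, b, rcond=None)
            return coef[1]

        for s0 in (0.1, 0.5, 0.9):
            print(f"estimated gain at s={s0}: {local_gain(s0):.2f} "
                  f"(true {0.2 + 0.6 * s0:.2f})")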

  18. Remote sensing-aided systems for snow qualification, evapotranspiration estimation, and their application in hydrologic models

    NASA Technical Reports Server (NTRS)

    Korram, S.

    1977-01-01

    The design of general remote sensing-aided methodologies was studied to provide estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares), Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevant and available information types needed in the estimation process are defined. These include Landsat, meteorological satellite, and aircraft imagery, topographic and geologic data, ground truth data, and climatic data from ground stations. A cost-effective multistage sampling approach was employed in the quantification of all the required parameters. Physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use information obtained from aerial and ground data through an appropriate statistical sampling design.

  19. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters [plus supporting information]

    DOE PAGES

    Covey, Curt; Lucas, Donald D.; Tannahill, John; ...

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM's behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT's ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
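
    A compact hedged sketch of the MOAT idea follows: random trajectories step through a unit hypercube one coordinate at a time, and the mean absolute value and spread of the resulting elementary effects rank parameter importance and flag nonlinearity or interaction. The test function is a toy, not CAM.

        # Simplified Morris one-at-a-time (MOAT) screening on a toy model.
        import numpy as np

        rng = np.random.default_rng(7)
        N = 5                                   # number of input parameters

        def model(p):
            # Toy response with one nonlinearly interacting pair (p2, p3).
            return p[0] + 2 * p[1] + 4 * p[2] * p[3]

        delta = 0.25
        effects = [[] for _ in range(N)]
        for _ in range(40):                     # 40 random trajectories
            p = rng.uniform(0, 1 - delta, N)
            y = model(p)
            for i in rng.permutation(N):        # perturb one parameter at a time
                p[i] += delta
                y_new = model(p)
                effects[i].append((y_new - y) / delta)   # elementary effect
                y = y_new

        for i, e in enumerate(effects):
            e = np.array(e)
            # mu* ranks importance; a large sigma flags interaction/nonlinearity.
            print(f"p{i}: mu* = {np.abs(e).mean():.2f}, sigma = {e.std():.2f}")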

  20. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Qing; Wang, Jiang; Yu, Haitao

    Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ markedly: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  1. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    NASA Astrophysics Data System (ADS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-06-01

    Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ markedly: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
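
    For context, a hedged forward sketch of the LIF response model follows: two temporal input parameters (a mean drive and a noise amplitude, hypothetical stand-ins for the two converted Gamma-process characteristics) shape the output spike train whose statistics the reconstruction inverts.

        # Forward sketch of a noisy leaky integrate-and-fire (LIF) neuron;
        # all parameter values are hypothetical.
        import numpy as np

        rng = np.random.default_rng(8)
        dt, tau, v_th, v_reset = 0.1, 10.0, 1.0, 0.0   # [ms], threshold, reset
        mu, sigma = 0.12, 0.35                          # the two input parameters

        v, spikes = 0.0, []
        for step in range(100_000):
            # Leaky integration of the mean drive plus diffusive noise.
            v += (-v + mu * tau) / tau * dt + sigma * np.sqrt(dt) * rng.normal()
            if v >= v_th:
                spikes.append(step * dt)
                v = v_reset

        isi = np.diff(spikes)                           # inter-spike intervals
        print(f"{len(spikes)} spikes, ISI mean {isi.mean():.1f} ms, "
              f"CV {isi.std() / isi.mean():.2f}")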

  2. Display device for indicating the value of a parameter in a process plant

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters, and a component control system (64), which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for the operator to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man-machine interface, and the plant-specific algorithms to proceed in parallel with the fabrication of the panels, the installation of the equipment, and the generic testing thereof.

  3. W3MAMCAT: a world wide web based tool for mammillary and catenary compartmental modeling and expert system distinguishability.

    PubMed

    Russell, Solomon; Distefano, Joseph J

    2006-07-01

    W3MAMCAT is a new web-based and interactive system for building and quantifying the parameters or parameter ranges of n-compartment mammillary and catenary model structures, with input and output in the first compartment, from unstructured multiexponential (sum-of-n-exponentials) models. It handles unidentifiable as well as identifiable models and, as such, provides finite parameter interval solutions for unidentifiable models, whereas direct parameter search programs typically do not. It also tutorially develops the theory of model distinguishability for same-order mammillary versus catenary models, as did its desktop application predecessor MAMCAT+. This includes expert system analysis for distinguishing mammillary from catenary structures, given input and output in similarly numbered compartments. W3MAMCAT provides for universal deployment via the internet and enhanced application error checking. It uses supported Microsoft technologies to form an extensible application framework for maintaining a stable and easily updatable application. Most importantly, anyone, anywhere, is welcome to access it using Internet Explorer 6.0 over the internet for their teaching or research needs. It is available on the Biocybernetics Laboratory website at UCLA: www.biocyb.cs.ucla.edu.

  4. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  5. The impact of 14-nm photomask uncertainties on computational lithography solutions

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian

    2013-04-01

    Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, through a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data; changes in the other variables are speculative, highlighting the need for improved metrology and awareness.

  6. A comparative evaluation of genome assembly reconciliation tools.

    PubMed

    Alhakami, Hind; Mirebrahim, Hamid; Lonardi, Stefano

    2017-05-18

    The majority of eukaryotic genomes are unfinished due to the algorithmic challenges of assembling them. A variety of assembly and scaffolding tools are available, but it is not always obvious which tool or parameters to use for a specific genome size and complexity. It is, therefore, common practice to produce multiple assemblies using different assemblers and parameters, then select the best one for public release. A more compelling approach would allow one to merge multiple assemblies with the intent of producing a higher quality consensus assembly, which is the objective of assembly reconciliation. Several assembly reconciliation tools have been proposed in the literature, but their strengths and weaknesses have never been compared on a common dataset. We fill this need with this work, in which we report on an extensive comparative evaluation of several tools. Specifically, we evaluate contiguity, correctness, coverage, and the duplication ratio of the merged assembly compared to the individual assemblies provided as input. None of the tools we tested consistently improved the quality of the input GAGE and synthetic assemblies. Our experiments show an increase in contiguity in the consensus assembly when the original assemblies already have high quality. In terms of correctness, the quality of the results depends on the specific tool, as well as on the quality and the ranking of the input assemblies. In general, the number of misassemblies ranges from being comparable to the best of the input assembly to being comparable to the worst of the input assembly.

  7. Optimization of multilayer neural network parameters for speaker recognition

    NASA Astrophysics Data System (ADS)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e., to determine whether the voice of an unknown speaker (wanted person) belongs to a group of reference speakers from the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network has been used for speaker identification in this research. An artificial neural network (ANN) needs parameters to be set, such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by the parameter settings. Different roles require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find parameters for the neural network with the highest precision and shortest validation time. The input data of the neural networks are Mel-frequency cepstral coefficients (MFCCs). These parameters describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data were split into training, testing, and validation sets of 70%, 15%, and 15%. The result of the research described in this article is a parameter setting for the multilayer neural network for the four-speaker task.

  8. Fuzzy logic controller optimization

    DOEpatents

    Sepe, Jr., Raymond B; Miller, John Michael

    2004-03-23

    A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.

  9. Machine learning classifiers for glaucoma diagnosis based on classification of retinal nerve fibre layer thickness parameters measured by Stratus OCT.

    PubMed

    Bizios, Dimitrios; Heijl, Anders; Hougaard, Jesper Leth; Bengtsson, Boel

    2010-02-01

    To compare the performance of two machine learning classifiers (MLCs), artificial neural networks (ANNs) and support vector machines (SVMs), with input based on retinal nerve fibre layer thickness (RNFLT) measurements by optical coherence tomography (OCT), on the diagnosis of glaucoma, and to assess the effects of different input parameters. We analysed Stratus OCT data from 90 healthy persons and 62 glaucoma patients. Performance of MLCs was compared using conventional OCT RNFLT parameters plus novel parameters such as minimum RNFLT values, 10th and 90th percentiles of measured RNFLT, and transformations of A-scan measurements. For each input parameter and MLC, the area under the receiver operating characteristic curve (AROC) was calculated. There were no statistically significant differences between ANNs and SVMs. The best AROCs for both ANN (0.982, 95% CI: 0.966-0.999) and SVM (0.989, 95% CI: 0.979-1.0) were based on input of transformed A-scan measurements. Our SVM trained on this input performed better than ANNs or SVMs trained on any of the single RNFLT parameters (p ≤ 0.038). The performance of ANNs and SVMs trained on minimum thickness values and the 10th and 90th percentiles was at least as good as that of ANNs and SVMs with input based on the conventional RNFLT parameters. No differences between ANN and SVM were observed in this study. Both MLCs performed very well, with similar diagnostic performance. Input parameters have a larger impact on diagnostic performance than the type of machine classifier. Our results suggest that parameters based on transformed A-scan thickness measurements of the RNFL, processed by machine classifiers, can improve OCT-based glaucoma diagnosis.

  10. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
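
    The toy sketch below illustrates the multiplier idea, with a random-walk Metropolis sampler standing in for DREAM and a crude algebraic surrogate standing in for MODFLOW: one multiplier m on the reported recharge and one parameter k are jointly sampled against synthetic head observations. Every quantity here is invented for illustration.

        # Toy joint estimation of an input multiplier and a parameter by
        # random-walk Metropolis; surrogate model and data are invented.
        import numpy as np

        rng = np.random.default_rng(9)
        recharge = np.array([1.0, 1.2, 0.8, 1.5])   # reported, uncertain input
        pumping = np.array([0.5, 0.2, 0.7, 0.3])    # assumed known stresses

        def heads(m, k):
            # Crude linear surrogate of a groundwater flow model.
            return (m * recharge - pumping) / k

        obs = heads(0.8, 2.0) + 0.01 * rng.normal(size=4)   # synthetic truth

        def log_post(m, k):
            if not (0.1 < m < 2.0 and 0.1 < k < 5.0):       # uniform priors
                return -np.inf
            return -0.5 * np.sum((obs - heads(m, k)) ** 2) / 0.01 ** 2

        theta = np.array([1.0, 1.0])
        lp = log_post(*theta)
        chain = []
        for _ in range(20_000):
            prop = theta + 0.05 * rng.normal(size=2)        # random-walk proposal
            lp_prop = log_post(*prop)
            if np.log(rng.random()) < lp_prop - lp:         # Metropolis acceptance
                theta, lp = prop, lp_prop
            chain.append(theta.copy())

        burned = np.array(chain[5000:])                     # discard burn-in
        print("posterior mean (m, k):", burned.mean(axis=0).round(2))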

  11. Temporal rainfall estimation using input data reduction and model inversion

    NASA Astrophysics Data System (ADS)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall for poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower-order decomposition structures produced the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of the rainfall time series that simulate streamflow superior to that of a traditional calibration approach demonstrate equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a model data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
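
    A hedged sketch of the dimensionality-reduction step follows, using the PyWavelets package: a synthetic daily rainfall series is decomposed, only the low-order approximation coefficients are retained (these are the quantities one would estimate jointly with the model parameters in the inversion), and the series is rebuilt from them. The wavelet choice and decomposition level are illustrative.

        # DWT-based reduction of a rainfall series; requires PyWavelets (pywt).
        import numpy as np
        import pywt

        rng = np.random.default_rng(10)
        rain = rng.gamma(0.5, 6.0, 256)              # synthetic daily rainfall [mm]

        coeffs = pywt.wavedec(rain, "db4", level=4)
        # Keep only the approximation coefficients; zero the detail bands.
        reduced = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
        recon = pywt.waverec(reduced, "db4")[: rain.size]

        r = np.corrcoef(rain, recon)[0, 1]
        print(f"kept {coeffs[0].size} of {rain.size} values; correlation = {r:.2f}")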

  12. Dual-input two-compartment pharmacokinetic model of dynamic contrast-enhanced magnetic resonance imaging in hepatocellular carcinoma.

    PubMed

    Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun

    2016-04-07

    To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input two-compartment exchange model (2CXM)] in 28 consecutive HCC patients. The well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was applied to observe the correlations among all equivalent parameters. Correlations between tumor size and the pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size, and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (Fp = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively). All equivalent pharmacokinetic parameters, except for ve, were correlated between the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual-input extended Tofts model, ve was significantly less than in the dual-input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model or a dual-input 2CXM) can be used to assess microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.
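
    For readers unfamiliar with the model, a sketch of the dual-input extended Tofts forward calculation: the vascular input is an HPI-weighted blend of arterial and portal-venous concentrations, and the tissue concentration adds a vp-scaled vascular term to a Ktrans-scaled leakage convolution. The input functions, units, and parameter values here are synthetic placeholders:

```python
import numpy as np

t = np.arange(0, 300, 1.0)                 # time axis (arbitrary units)
dt = t[1] - t[0]
Ca = 5.0 * t * np.exp(-t / 30.0) / 30.0    # synthetic arterial input function
Cpv = 3.0 * t * np.exp(-t / 60.0) / 60.0   # synthetic portal-venous input function

def dual_input_extended_tofts(Ktrans, kep, vp, hpi):
    Cp = hpi * Ca + (1.0 - hpi) * Cpv      # dual vascular input, weighted by HPI
    kernel = np.exp(-kep * t)              # efflux impulse response
    leak = Ktrans * np.convolve(Cp, kernel)[: t.size] * dt
    return vp * Cp + leak                  # tissue concentration curve

Ct = dual_input_extended_tofts(Ktrans=0.25, kep=0.8, vp=0.05, hpi=0.66)
```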

  13. On the usage of ultrasound computational models for decision making under ambiguity

    NASA Astrophysics Data System (ADS)

    Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron

    2018-04-01

    Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need to develop quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic, well-defined inputs, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models alone. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantifies the differences using maximum amplitude and power spectral density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.

  14. Parametric analysis of parameters for electrical-load forecasting using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael

    1997-04-01

    Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
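
    The parametric-study protocol reduces to a simple loop: train on a baseline feature set, then retrain with one candidate input added and compare forecast error. The sketch below uses synthetic data and a small scikit-learn network purely as a stand-in:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(2)
n = 2000
temp = rng.normal(20, 8, n)                 # ambient temperature
prev_load = rng.normal(500, 50, n)          # recent total load
day_of_week = rng.integers(0, 7, n)
load = 300 + 5 * temp + 0.4 * prev_load + 15 * (day_of_week >= 5) + rng.normal(0, 10, n)

def mape_for(features):
    X = np.column_stack(features)
    Xtr, Xte, ytr, yte = train_test_split(X, load, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xtr, ytr)
    return mean_absolute_percentage_error(yte, model.predict(Xte))

base = mape_for([temp, prev_load])                       # basic reference input set
with_dow = mape_for([temp, prev_load, day_of_week])      # candidate input added
print(f"baseline MAPE {base:.3f} -> with day-of-week {with_dow:.3f}")
```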

  15. Emulation for probabilistic weather forecasting

    NASA Astrophysics Data System (ADS)

    Cornford, Dan; Barillec, Remi

    2010-05-01

    Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. Producing probabilistic forecasts requires knowledge of the distribution of the model outputs given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward, which we develop in this work, is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output, and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator, in which an optimal linear projection operator is estimated jointly with the other parameters, in the context of simple low-order models such as the Lorenz 40D system. We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the 'ensemble runs', which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space; rather, it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods, sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data-assimilation-like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
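
    A compact sketch of the emulation workflow under stated assumptions (scikit-learn for the Gaussian process, PCA for the output dimension reduction, and a toy function standing in for the expensive simulator):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def simulator(x):
    """Cheap stand-in for an expensive model: maps 2 inputs to a 100-point field."""
    s = np.linspace(0, 1, 100)
    return np.sin(6 * s * x[0]) * np.exp(-s * x[1])

X_train = rng.uniform(0, 2, size=(40, 2))        # design covering the input space
Y_train = np.array([simulator(x) for x in X_train])

pca = PCA(n_components=3).fit(Y_train)           # dimension reduction of the output
Z_train = pca.transform(Y_train)

gps = [GaussianProcessRegressor(kernel=RBF(0.5), normalize_y=True).fit(X_train, Z_train[:, i])
       for i in range(3)]                        # one GP per retained component

# Cheap probabilistic forecasting: sample the input distribution, push through the emulator.
X_new = rng.uniform(0, 2, size=(1000, 2))
Z_new = np.column_stack([gp.predict(X_new) for gp in gps])
Y_new = pca.inverse_transform(Z_new)             # emulated output distribution
```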

  16. Optical sectioning microscopy using two-frame structured illumination and Hilbert-Huang data processing

    NASA Astrophysics Data System (ADS)

    Trusiak, M.; Patorski, K.; Tkaczyk, T.

    2014-12-01

    We propose a fast, simple and experimentally robust method for reconstructing background-rejected optically-sectioned microscopic images using a two-shot structured illumination approach. The innovative data demodulation technique requires two grid-illumination images mutually phase shifted by π (half a grid period), but the precise phase displacement value is not critical. Upon subtraction of the two frames an input pattern with increased grid modulation is computed. The proposed demodulation procedure comprises: (1) two-dimensional data processing based on the enhanced, fast empirical mode decomposition (EFEMD) method for object spatial frequency selection (noise reduction and bias term removal), and (2) calculating the high-contrast optically-sectioned image using the two-dimensional spiral Hilbert transform (HS). The effectiveness of the proposed algorithm is compared with the results obtained for the same input data using conventional structured-illumination (SIM) and HiLo microscopy methods. The input data were collected for studying highly scattering tissue samples in reflectance mode. In comparison with the conventional three-frame SIM technique we need one frame less, and no stringent requirement on the exact phase shift between recorded frames is imposed. The HiLo algorithm outcome is strongly dependent on the set of parameters chosen manually by the operator (cut-off frequencies for low-pass and high-pass filtering and the η parameter value for optically-sectioned image reconstruction), whereas the proposed method is parameter-free. Moreover, the very short processing time required to efficiently demodulate the input pattern makes the proposed method well suited for real-time in-vivo studies. The current implementation completes full processing in 0.25 s on a medium-class PC (Intel i7 2.1 GHz processor and 8 GB RAM). A simple modification, employed to extract only the first two BIMFs with a fixed filter window size, reduces the computing time to 0.11 s (8 frames/s).
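
    A sketch of the two-frame demodulation core, omitting the EFEMD denoising stage and any fringe-orientation correction; the spiral phase factor implements the 2-D (vortex) Hilbert transform used for AM demodulation:

```python
import numpy as np

def spiral_hilbert(img):
    """2-D spiral phase (vortex) Hilbert transform of a zero-mean fringe image."""
    ny, nx = img.shape
    u = np.fft.fftfreq(nx)[None, :]
    v = np.fft.fftfreq(ny)[:, None]
    spiral = np.exp(1j * np.arctan2(v, u))    # spiral phase factor in the Fourier plane
    spiral[0, 0] = 0.0                         # suppress the DC term
    return np.fft.ifft2(spiral * np.fft.fft2(img))

def two_frame_section(i0, ipi):
    # Subtracting the pi-shifted frames doubles the grid modulation and removes
    # the bias; a real pipeline would first denoise with EFEMD, skipped here.
    d = i0.astype(float) - ipi.astype(float)
    q = spiral_hilbert(d)
    return np.sqrt(d**2 + np.abs(q)**2)        # AM demodulation -> sectioned image
```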

  17. Net thrust calculation sensitivity of an afterburning turbofan engine to variations in input parameters

    NASA Technical Reports Server (NTRS)

    Hughes, D. L.; Ray, R. J.; Walton, J. T.

    1985-01-01

    The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
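
    The influence-coefficient computation amounts to perturbing each input by 1.0 percent and recording the relative change in calculated thrust; the thrust expression and nominal values below are hypothetical placeholders, not the F404 calculation methods:

```python
import numpy as np

def net_thrust(p):
    """Hypothetical net thrust calculation from a dict of measured inputs."""
    return p["mass_flow"] * p["exit_velocity"] + p["area"] * (p["p_exit"] - p["p_ambient"])

nominal = {"mass_flow": 60.0, "exit_velocity": 600.0, "area": 0.3,
           "p_exit": 101000.0, "p_ambient": 95000.0}
f0 = net_thrust(nominal)

# Influence coefficient: percent change in thrust per 1.0-percent change in each input.
for name in nominal:
    pert = dict(nominal, **{name: nominal[name] * 1.01})
    print(f"{name:14s} {(net_thrust(pert) - f0) / f0 * 100:+.3f} % per +1 %")
```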

  18. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    NASA Astrophysics Data System (ADS)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. 
The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
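
    A toy version of the Fisher-information comparison: for a first-order RC equivalent-circuit model (a stand-in for the dissertation's models), finite-difference output sensitivities are assembled into the information matrix, and its log-determinant is compared for a constant versus a shaped current input:

```python
import numpy as np

dt, sigma = 1.0, 0.01   # sample period and assumed voltage noise

def simulate_voltage(theta, current):
    """First-order RC equivalent-circuit battery model (toy stand-in)."""
    r0, r1, c1 = theta
    v1, out = 0.0, []
    for i in current:
        v1 += dt * (-v1 / (r1 * c1) + i / c1)
        out.append(3.7 - i * r0 - v1)          # fixed OCV for simplicity
    return np.array(out)

def log_det_fisher(theta, current):
    rows = []
    for k in range(len(theta)):                # central-difference sensitivities
        h = 1e-6 * theta[k]
        dp, dm = theta.copy(), theta.copy()
        dp[k] += h; dm[k] -= h
        rows.append((simulate_voltage(dp, current) - simulate_voltage(dm, current)) / (2 * h))
    S = np.array(rows)
    return np.linalg.slogdet(S @ S.T / sigma**2)[1]

theta = np.array([0.05, 0.02, 2000.0])         # [R0, R1, C1]
t = np.arange(600)
constant = np.full(600, 1.0)
shaped = 1.0 + 0.8 * np.sin(2 * np.pi * t / 60) + 0.5 * np.sin(2 * np.pi * t / 300)
print("constant input:", log_det_fisher(theta, constant))
print("shaped input  :", log_det_fisher(theta, shaped))   # richer input, more information
```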

  19. Post-Launch Calibration and Testing of Space Weather Instruments on GOES-R Satellite

    NASA Technical Reports Server (NTRS)

    Tadikonda, S. K.; Merrow, Cynthia S.; Kronenwetter, Jeffrey A.; Comeyne, Gustave J.; Flanagan, Daniel G.; Todirita, Monica

    2016-01-01

    The Geostationary Operational Environmental Satellite - R (GOES-R) is the first of a series of satellites to be launched, with the first launch scheduled for October 2016. The three instruments - Solar Ultraviolet Imager (SUVI), Extreme ultraviolet and X-ray Irradiance Sensor (EXIS), and Space Environment In-Situ Suite (SEISS) - provide the data needed as inputs for the product updates the National Oceanic and Atmospheric Administration (NOAA) provides to the public. SUVI is a full-disk extreme ultraviolet imager enabling Active Region characterization, filament eruption, and flare detection. EXIS provides inputs to solar backgrounds/events impacting climate models. SEISS provides particle measurements over a wide energy-and-flux range that varies by several orders of magnitude, and these data enable updates to spacecraft charge models for electrostatic discharge. EXIS and SEISS have been tested and calibrated end-to-end in ground test facilities around the United States. Due to the complexity of the SUVI design, data from component tests were used in a model to predict on-orbit performance. The ground tests and model updates provided inputs for designing the on-orbit calibration tests. A series of such tests has been planned for the Post-Launch Testing (PLT) of each of these instruments, and specific parameters have been identified that will be updated in the Ground Processing Algorithms, the on-orbit parameter tables, or both. Some of the SUVI and EXIS calibrations require slewing the instruments off the Sun, while no such maneuvers are needed for SEISS. After a six-month PLT period the GOES-R is expected to be operational. The calibration details are presented in this paper.

  1. TU-H-207A-02: Relative Importance of the Various Factors Influencing the Accuracy of Monte Carlo Simulated CT Dose Index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marous, L; Muryn, J; Liptak, C

    2016-06-15

    Purpose: Monte Carlo simulation is a frequently used technique for assessing patient dose in CT. The accuracy of a Monte Carlo program is often validated using the standard CT dose index (CTDI) phantoms by comparing simulated and measured CTDI100. To achieve good agreement, many input parameters in the simulation (e.g., energy spectrum and effective beam width) need to be determined. However, not all the parameters have equal importance. Our aim was to assess the relative importance of the various factors that influence the accuracy of simulated CTDI100. Methods: A Monte Carlo program previously validated for a clinical CT system was used to simulate CTDI100. For the standard CTDI phantoms (32 and 16 cm in diameter), CTDI100 values from central and four peripheral locations at 70 and 120 kVp were first simulated using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which intentional errors were introduced into the input parameters, the effects of which on simulated CTDI100 were analyzed. Results: At 38.4-mm collimation, errors in effective beam width up to 5.0 mm showed negligible effects on simulated CTDI100 (<1.0%). Likewise, errors in acrylic density of up to 0.01 g/cm^3 resulted in small CTDI100 errors (<2.5%). In contrast, errors in spectral HVL produced more significant effects: slight deviations (±0.2 mm Al) produced errors up to 4.4%, whereas more extreme deviations (±1.4 mm Al) produced errors as high as 25.9%. Lastly, ignoring the CT table introduced errors up to 13.9%. Conclusion: Monte Carlo simulated CTDI100 values are insensitive to errors in effective beam width and acrylic density; however, they are sensitive to errors in spectral HVL. To obtain accurate results, the CT table should not be ignored. This work was supported by a Faculty Research and Development Award from Cleveland State University.

  2. Adaptive fuzzy prescribed performance control for MIMO nonlinear systems with unknown control direction and unknown dead-zone inputs.

    PubMed

    Shi, Wuxi; Luo, Rui; Li, Baoquan

    2017-01-01

    In this study, an adaptive fuzzy prescribed performance control approach is developed for a class of uncertain multi-input and multi-output (MIMO) nonlinear systems with unknown control direction and unknown dead-zone inputs. The properties of symmetric matrices are exploited to design the adaptive fuzzy prescribed performance controller, and a Nussbaum-type function is incorporated in the controller to estimate the unknown control direction. This method has two prominent advantages: it does not require a priori knowledge of the control direction, and only three parameters need to be updated on-line for this MIMO system. It is proved that all the signals in the resulting closed-loop system are bounded and that the tracking errors converge to a small residual set with the prescribed performance bounds. The effectiveness of the proposed approach is validated by simulation results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Distributed adaptive asymptotically consensus tracking control of uncertain Euler-Lagrange systems under directed graph condition.

    PubMed

    Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin

    2017-11-01

    In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to part of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all the closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Bridging the gap between theoretical ecology and real ecosystems: modeling invertebrate community composition in streams.

    PubMed

    Schuwirth, Nele; Reichert, Peter

    2013-02-01

    For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa, dependent on various environmental influence factors, to estimate survival or extinction. Parameter and input uncertainty are propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and the resulting model adaptations will further increase the predictive accuracy of the model.

  5. Evaluation of severe accident risks: Quantification of major input parameters: MAACS (MELCOR Accident Consequence Code System) input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprung, J.L.; Jow, H-N; Rollstin, J.A.

    1990-12-01

    Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.

  6. Stimulated by Novelty? The Role of Psychological Needs and Perceived Creativity

    PubMed Central

    De Jonge, Kiki M. M.; Rietzschel, Eric F.; Van Yperen, Nico W.

    2018-01-01

    In the current research, we aimed to address the inconsistent finding in the brainstorming literature that cognitive stimulation sometimes results from novel input, yet other times from non-novel input. We expected and found, in three experiments, that the strength and valence of this relationship are moderated by people’s psychological needs for structure and autonomy. Specifically, the effect of novel input (vs. non-novel input), through perceived creativity, on cognitive stimulation was stronger for people who were either low in need for structure or high in need for autonomy. Also, when the input people received did not fit their needs, they experienced less psychological cognitive stimulation from this input (i.e., less task enjoyment and feeling more blocked) compared with when they did not receive any input. Hence, to create the ideal circumstances for people to achieve cognitive stimulation when brainstorming, input novelty should be aligned with their psychological needs. PMID:29405847

  7. Battle Damage Modeling

    DTIC Science & Technology

    2010-05-01

    has been an increasing move towards armor systems which are both structural and protection components at the same time. Analysis of material response ... the materials can move. As the FE analysis progresses the component will move while the mesh remains motionless (Figure 4). Individual nodes and cells ... this parameter. This subroutine needs many inputs, such as the speed of sound in the material, the FE mesh size and the safety factor, which prevents ...

  8. Demonstration of UXO-PenDepth for the Estimation of Projectile Penetration Depth

    DTIC Science & Technology

    2010-08-01

    Effects (JTCG/ME) in August 2001. The accreditation process included verification and validation (V&V) by a subject matter expert (SME) other than ... Within UXO-PenDepth, there are three sets of input parameters that are required: impact conditions (Fig. 1a), penetrator properties, and target properties. The impact conditions that need to be defined are projectile orientation and impact velocity. The algorithm has been evaluated against ...

  9. Contaminant Attenuation and Transport Characterization of 200-DV-1 Operable Unit Sediment Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Truex, Michael J.; Szecsody, James E.; Qafoku, Nikolla

    2017-05-15

    A laboratory study was conducted to quantify contaminant attenuation processes and associated contaminant transport parameters that are needed to evaluate transport of contaminants through the vadose zone to the groundwater. The laboratory study information, in conjunction with transport analyses, can be used as input to evaluate the feasibility of Monitored Natural Attenuation and other remedies for the 200-DV-1 Operable Unit at the Hanford Site.

  10. Functional identification of spike-processing neural circuits.

    PubMed

    Lazar, Aurel A; Slutskiy, Yevgeniy B

    2014-02-01

    We introduce a novel approach for a complete functional identification of biophysical spike-processing neural circuits. The circuits considered accept multidimensional spike trains as their input and comprise a multitude of temporal receptive fields and conductance-based models of action potential generation. Each temporal receptive field describes the spatiotemporal contribution of all synapses between any two neurons and incorporates the (passive) processing carried out by the dendritic tree. The aggregate dendritic current produced by a multitude of temporal receptive fields is encoded into a sequence of action potentials by a spike generator modeled as a nonlinear dynamical system. Our approach builds on the observation that during any experiment, an entire neural circuit, including its receptive fields and biophysical spike generators, is projected onto the space of stimuli used to identify the circuit. Employing the reproducing kernel Hilbert space (RKHS) of trigonometric polynomials to describe input stimuli, we quantitatively describe the relationship between underlying circuit parameters and their projections. We also derive experimental conditions under which these projections converge to the true parameters. In doing so, we achieve the mathematical tractability needed to characterize the biophysical spike generator and identify the multitude of receptive fields. The algorithms obviate the need to repeat experiments in order to compute the neurons' rate of response, rendering our methodology of interest to both experimental and theoretical neuroscientists.

  11. Using model order tests to determine sensory inputs in a motion study

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1977-01-01

    In the study of motion effects on tracking performance, a problem of interest is determining what sensory inputs a human uses in controlling the tracking task. In the approach presented here a simple canonical model (PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output-error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives, and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters that have the greatest effect in significantly reducing the loss function are obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
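
    The model-order test can be sketched as follows: regress the operator's output on the error signal, its derivative, and its integral, and ask which additions significantly reduce the loss functional. Data and gains below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0, 60, 0.05)
e = np.sin(0.7 * t) + 0.3 * np.sin(2.1 * t)              # tracking-error signal
e_dot = np.gradient(e, t)
e_int = np.cumsum(e) * 0.05
u = 2.0 * e + 0.8 * e_dot + rng.normal(0, 0.05, t.size)  # operator uses e and its derivative

def sse(columns):
    X = np.column_stack(columns)
    beta, *_ = np.linalg.lstsq(X, u, rcond=None)
    r = u - X @ beta
    return r @ r

# A term that significantly reduces the loss implies that sensory input is
# being used; here the derivative helps, while the integral does not.
print("P  :", sse([e]))
print("PD :", sse([e, e_dot]))
print("PID:", sse([e, e_dot, e_int]))
```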

  12. Machine learning to construct reduced-order models and scaling laws for reactive-transport applications

    NASA Astrophysics Data System (ADS)

    Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.

    2017-12-01

    The efficiency of many hydrogeological applications, such as reactive transport and contaminant remediation, depends largely on the macroscopic mixing occurring in the aquifer. For remediation activities, it is fundamental to enhance and control this mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction and by the heterogeneity and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters for understanding the mixing process is not well studied. This is partly because understanding and quantifying mixing requires multiple runs of high-fidelity numerical simulations for various subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousands of processors. As a result, they may not be feasible for studying the importance and impact of model inputs on mixing. Hence, there is a pressing need to develop computationally efficient models to accurately predict the desired QoIs for remediation and reactive-transport applications. An attractive way to construct computationally efficient models is through reduced-order modeling using machine learning. These approaches can substantially improve our capabilities to model and predict the remediation process. Reduced-Order Models (ROMs) are similar to analytical solutions or lookup tables; however, the method by which ROMs are constructed is different. Here, we present a physics-informed ML framework to construct ROMs based on high-fidelity numerical simulations. First, random forests, the F-test, and mutual information are used to evaluate the importance of model inputs. Second, SVMs are used to construct ROMs based on these inputs. These ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for certain important QoIs such as degree of mixing and product yield. The dependence of scaling-law parameters on model inputs is evaluated using cluster analysis. We demonstrate application of the developed method for model analyses of reactive transport and contaminant remediation at the Los Alamos National Laboratory (LANL) chromium contamination sites. The developed method is directly applicable to analyses of alternative site remediation scenarios.
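
    A minimal scikit-learn sketch of the two-stage framework (input importance ranking, then an SVM-based ROM); the three inputs and the mixing response are synthetic stand-ins for the high-fidelity simulations:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import mutual_info_regression, f_regression
from sklearn.svm import SVR

rng = np.random.default_rng(5)
# Columns: pumping rate, heterogeneity, anisotropy (synthetic stand-ins).
X = rng.uniform(0, 1, size=(500, 3))
mixing = np.tanh(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.02, 500)

# Stage 1: rank the importance of model inputs three ways.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, mixing)
print("RF importances:", rf.feature_importances_)
print("F-test        :", f_regression(X, mixing)[0])
print("mutual info   :", mutual_info_regression(X, mixing))

# Stage 2: fit an SVM reduced-order model on the (important) inputs.
rom = SVR(C=10.0).fit(X, mixing)
print("ROM prediction:", rom.predict(X[:3]))
```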

  13. Modal Parameter Identification of a Flexible Arm System

    NASA Technical Reports Server (NTRS)

    Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard

    1998-01-01

    In this paper an experiment is designed for the modal parameter identification of a flexible arm system. This experiment uses a function generator to provide the input signal and an oscilloscope to record input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of that mode. A least-squares technique is then used to analyze the experimental input/output data to obtain the identified parameters for the mode. The identified results are compared with the analytical model obtained by applying finite element analysis.
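
    The per-mode least-squares step can be sketched as fitting A sin(ωt) + B cos(ωt) to the measured response at each excitation frequency and reading off the response magnitude; the single-mode plant below is a synthetic stand-in for the arm:

```python
import numpy as np

rng = np.random.default_rng(6)
fn, zeta = 4.0, 0.02                       # "true" modal parameters of the toy arm
t = np.arange(0, 8, 0.002)

def response(f_drive):
    w, wn = 2 * np.pi * f_drive, 2 * np.pi * fn
    H = 1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)   # steady-state SDOF FRF
    return np.abs(H) * np.sin(w * t + np.angle(H)) + rng.normal(0, 1e-6, t.size)

# For each sine input near resonance, least-squares fit A*sin + B*cos.
freqs = np.linspace(3.6, 4.4, 17)
mags = []
for f in freqs:
    w = 2 * np.pi * f
    X = np.column_stack([np.sin(w * t), np.cos(w * t)])
    (a, b), *_ = np.linalg.lstsq(X, response(f), rcond=None)
    mags.append(np.hypot(a, b))
print("identified natural frequency ~", freqs[int(np.argmax(mags))], "Hz")
```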

  14. Certification Testing Methodology for Composite Structure. Volume 2. Methodology Development

    DTIC Science & Technology

    1986-10-01

    parameter, sample size and fatigue test duration. The required inputs are: 1. Residual strength Weibull shape parameter (ALPR); 2. Fatigue life Weibull shape parameter (ALPL); 3. Sample size (N); 4. Test duration (T). The corresponding input dialogue in the source reads:

          FORMAT(2X,'PLEASE INPUT STRENGTH ALPHA')
          READ(*,*) ALPR
          ALPRI = 1.0/ALPR
          WRITE(*,2)
        2 FORMAT(2X,'PLEASE INPUT LIFE ALPHA')
          READ(*,*) ALPL
          ALPLI = 1.0/ALPL
          WRITE(*,3)
        3 FORMAT(2X,'PLEASE INPUT SAMPLE SIZE')
          READ(*,*) N
          AN = N
          WRITE(*,4)
        4 FORMAT(2X,'PLEASE INPUT TEST DURATION')
          READ(*,*) T
          RALP = ALPL/ALPR
          ARGR = 1 ...

  15. User's manual for a parameter identification technique. [with options for model simulation for fixed input forcing functions and identification from wind tunnel and flight measurements

    NASA Technical Reports Server (NTRS)

    Kanning, G.

    1975-01-01

    A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.

  16. Multi-Response Optimization of WEDM Process Parameters Using Taguchi Based Desirability Function Analysis

    NASA Astrophysics Data System (ADS)

    Majumder, Himadri; Maity, Kalipada

    2018-03-01

    Shape memory alloy has a unique capability to return to its original shape after physical deformation by applying heat or a thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also obtained using Taguchi’s signal-to-noise ratio. A confirmation test was performed to validate the optimum machining parameter combination, affirming that DFA is a competent approach for selecting optimum input parameters for the desired response quality in WEDM of Ni-Ti shape memory alloy.
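
    A sketch of the desirability calculation on hypothetical response values (not the paper's measurements): each response is mapped to a 0-1 desirability (larger-the-better for cutting speed, smaller-the-better for kerf and roughness) and the runs are ranked by the geometric-mean composite; per-response weights are omitted for brevity:

```python
import numpy as np

# Hypothetical responses for a few WEDM runs:
# columns are cutting speed (maximize), kerf width and roughness (minimize).
runs = np.array([
    [2.1, 0.32, 2.9],
    [2.6, 0.35, 3.4],
    [1.8, 0.29, 2.5],
])

def d_larger(y):   # larger-the-better desirability, min-max scaled
    lo, hi = y.min(), y.max()
    return (y - lo) / (hi - lo)

def d_smaller(y):  # smaller-the-better desirability
    lo, hi = y.min(), y.max()
    return (hi - y) / (hi - lo)

D = (d_larger(runs[:, 0]) * d_smaller(runs[:, 1]) * d_smaller(runs[:, 2])) ** (1 / 3)
print("composite desirability:", D, "-> best run:", int(np.argmax(D)))
```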

  17. User's manual for the HYPGEN hyperbolic grid generator and the HGUI graphical user interface

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Chiu, Ing-Tsau; Buning, Pieter G.

    1993-01-01

    The HYPGEN program is used to generate a 3-D volume grid over a user-supplied single-block surface grid. This is accomplished by solving the 3-D hyperbolic grid generation equations, consisting of two orthogonality relations and one cell volume constraint. In this user manual, the required input files and parameters and the output files are described. Guidelines on how to select the input parameters are given. Illustrated examples are provided showing a variety of topologies and geometries that can be treated. HYPGEN can be used in stand-alone mode as a batch program or it can be called from within the graphical user interface HGUI, which runs on Silicon Graphics workstations. This user manual provides a description of the menus, buttons, sliders, and type-in fields in HGUI for users to enter the parameters needed to run HYPGEN. Instructions are given on how to configure the interface to allow HYPGEN to run either locally or on a faster remote machine through the use of shell scripts on UNIX operating systems. The volume grid generated is copied back to the local machine for visualization using a built-in hook to PLOT3D.

  18. Laboratory Studies on Surface Sampling of Bacillus anthracis Contamination: Summary, Gaps, and Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Amidan, Brett G.; Hu, Rebecca

    2011-11-28

    This report summarizes previous laboratory studies to characterize the performance of methods for collecting, storing/transporting, processing, and analyzing samples from surfaces contaminated by Bacillus anthracis or related surrogates. The focus is on plate culture and count estimates of surface contamination for swab, wipe, and vacuum samples of porous and nonporous surfaces. Summaries of the previous studies and their results were assessed to identify gaps in information needed as inputs to calculate key parameters critical to risk management in biothreat incidents. One key parameter is the number of samples needed to make characterization or clearance decisions with specified statistical confidence. Other key parameters include the ability to calculate, following contamination incidents, the (1) estimates of Bacillus anthracis contamination, as well as the bias and uncertainties in the estimates, and (2) confidence in characterization and clearance decisions for contaminated or decontaminated buildings. Gaps in knowledge and understanding identified during the summary of the studies are discussed and recommendations are given for future studies.

  19. Stochastic empirical loading and dilution model (SELDM) version 1.0.0

    USGS Publications Warehouse

    Granato, Gregory E.

    2013-01-01

    The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from National datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than theoretical physiochemical equations. SELDM is a lumped parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water quality and impervious-fraction data are available.

  20. The series product for gaussian quantum input processes

    NASA Astrophysics Data System (ADS)

    Gough, John E.; James, Matthew R.

    2017-02-01

    We present a theory for connecting quantum Markov components into a network with quantum input processes in a Gaussian state (including thermal and squeezed). One would expect on physical grounds that the connection rules should be independent of the state of the input to the network. To compute statistical properties, we use a version of Wick's theorem involving fictitious vacuum fields (a Fock-space-based representation of the fields); while this aids computation and gives a rigorous formulation, the various representations need not be unitarily equivalent. In particular, a naive application of the connection rules would lead to the wrong answer. We establish the correct interconnection rules and show that, while the quantum stochastic differential equations of motion explicitly display the covariances (thermal and squeezing parameters) of the Gaussian input fields, the Wick-Stratonovich form introduced here leads to a way of writing these equations that does not depend on these covariances and so corresponds to the universal equations written in terms of formal quantum input processes. We show that a wholly consistent theory of quantum open systems in series can be developed in this way and, as required physically, is universal and in particular representation-free.

  1. Design and optimization of input shapers for liquid slosh suppression

    NASA Astrophysics Data System (ADS)

    Aboel-Hassan, Ameen; Arafa, Mustafa; Nassef, Ashraf

    2009-02-01

    The need for fast maneuvering and accurate positioning of flexible structures poses a control challenge. The inherent flexibility of these lightly damped systems creates large, undesirable residual vibrations in response to rapid excitations. Several control approaches have been proposed to tackle this class of problems, of which the input shaping technique is appealing in many aspects. While input shaping has been widely investigated as a means to attenuate residual vibrations in flexible structures, less attention has been devoted to extending its applicability to further applications. The aim of this work is to develop a methodology for applying input shaping techniques to suppress sloshing effects in open moving containers, to facilitate safe and fast point-to-point movements. The liquid behavior is modeled using finite element analysis. The input shaper parameters are optimized to find the commands that result in minimum residual vibration. Other objectives, such as improved robustness, and motion constraints, such as deflection limiting, are also addressed in the optimization scheme. Numerical results are verified on an experimental setup consisting of a small motor-driven water tank undergoing rectilinear motion, while measuring both the tank motion and the free-surface displacement of the water. The results obtained suggest that input shaping is an effective method for liquid slosh suppression.
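
    For a single slosh mode, the classic two-impulse zero-vibration (ZV) shaper can serve as a concrete reference point for the optimized shapers discussed above; the mode frequency and damping below are assumed values:

```python
import numpy as np

def zv_shaper(freq_hz, zeta):
    """Two-impulse zero-vibration (ZV) shaper for one lightly damped mode."""
    wd = 2 * np.pi * freq_hz * np.sqrt(1 - zeta**2)    # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)              # impulse amplitudes (sum to 1)
    times = np.array([0.0, np.pi / wd])                # second impulse at half a period
    return amps, times

amps, times = zv_shaper(freq_hz=1.2, zeta=0.01)        # assumed slosh mode
print("amplitudes:", amps, "times (s):", times)
# Convolving any motion command with these impulses cancels residual slosh at
# the modeled frequency; robustness variants (ZVD, EI) add further impulses.
```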

  2. Broadband Heating Rate Profile Project (BBHRP) - SGP ripbe370mcfarlane

    DOE Data Explorer

    Riihimaki, Laura; Shippert, Timothy

    2014-11-05

    The objective of the ARM Broadband Heating Rate Profile (BBHRP) Project is to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Required inputs to BBHRP include surface albedo and profiles of atmospheric state (temperature, humidity), gas concentrations, aerosol properties, and cloud properties. In the past year, the Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to combine all of the input properties needed for BBHRP into a single gridded input file. Additionally, an interface between the RIPBE input file and the RRTM was developed using the new ARM integrated software development environment (ISDE) and effort was put into developing quality control (qc) flags and provenance information on the BBHRP output files so that analysis of the output would be more straightforward. This new version of BBHRP, sgp1bbhrpripbeC1.c1, uses the RIPBE files as input to RRTM, and calculates broadband SW and LW fluxes and heating rates at 1-min resolution using the independent column approximation. The vertical resolution is 45 m in the lower and middle troposphere to match the input cloud properties, but is at coarser resolution in the upper atmosphere. Unlike previous versions, the vertical grid is the same for both clear-sky and cloudy-sky calculations.

  3. Broadband Heating Rate Profile Project (BBHRP) - SGP 1bbhrpripbe1mcfarlane

    DOE Data Explorer

    Riihimaki, Laura; Shippert, Timothy

    2014-11-05

    The objective of the ARM Broadband Heating Rate Profile (BBHRP) Project is to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Required inputs to BBHRP include surface albedo and profiles of atmospheric state (temperature, humidity), gas concentrations, aerosol properties, and cloud properties. In the past year, the Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to combine all of the input properties needed for BBHRP into a single gridded input file. Additionally, an interface between the RIPBE input file and the RRTM was developed using the new ARM integrated software development environment (ISDE) and effort was put into developing quality control (qc) flags and provenance information on the BBHRP output files so that analysis of the output would be more straightforward. This new version of BBHRP, sgp1bbhrpripbeC1.c1, uses the RIPBE files as input to RRTM, and calculates broadband SW and LW fluxes and heating rates at 1-min resolution using the independent column approximation. The vertical resolution is 45 m in the lower and middle troposphere to match the input cloud properties, but is at coarser resolution in the upper atmosphere. Unlike previous versions, the vertical grid is the same for both clear-sky and cloudy-sky calculations.

  4. Broadband Heating Rate Profile Project (BBHRP) - SGP ripbe1mcfarlane

    DOE Data Explorer

    Riihimaki, Laura; Shippert, Timothy

    2014-11-05

    The objective of the ARM Broadband Heating Rate Profile (BBHRP) Project is to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Required inputs to BBHRP include surface albedo and profiles of atmospheric state (temperature, humidity), gas concentrations, aerosol properties, and cloud properties. In the past year, the Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to combine all of the input properties needed for BBHRP into a single gridded input file. Additionally, an interface between the RIPBE input file and the RRTM was developed using the new ARM integrated software development environment (ISDE) and effort was put into developing quality control (qc) flags and provenance information on the BBHRP output files so that analysis of the output would be more straightforward. This new version of BBHRP, sgp1bbhrpripbeC1.c1, uses the RIPBE files as input to RRTM, and calculates broadband SW and LW fluxes and heating rates at 1-min resolution using the independent column approximation. The vertical resolution is 45 m in the lower and middle troposphere to match the input cloud properties, but is at coarser resolution in the upper atmosphere. Unlike previous versions, the vertical grid is the same for both clear-sky and cloudy-sky calculations.

  5. Modification of the SHABERTH bearing code to incorporate RP-1 and a discussion of the traction model

    NASA Technical Reports Server (NTRS)

    Woods, Claudia M.

    1990-01-01

    Recently developed traction data for Rocket Propellant 1 (RP-1), a hydrocarbon fuel of the kerosene family, was used to develop the parameters needed by the bearing code SHABERTH in order to include RP-1 as a lubricant choice. The procedure for inputting data for a new lubricant choice is reviewed, and the theoretical fluid traction model is discussed. Comparisons are made between experimental traction data and those predicted by SHABERTH for RP-1. All data needed to modify SHABERTH for use with RP-1 as a lubricant are specified.

  6. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

    When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters, and the model, are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner bases and prove useful theoretical properties of these parameter combinations. We prove that a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
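
    A toy sympy illustration of the Gröbner-basis idea (not the paper's biomodel): when the input-output equations expose only certain coefficient combinations, the basis separates the identifiable quantities from the unidentifiable individual parameters:

```python
import sympy as sp

p1, p2, p3, c1, c2 = sp.symbols("p1 p2 p3 c1 c2")

# Toy example: the input-output equation exposes only the coefficients
# c1 = p1*p2 and c2 = p1*p2 + p3, so p1 and p2 are individually unidentifiable.
eqs = [c1 - p1 * p2, c2 - (p1 * p2 + p3)]

G = sp.groebner(eqs, p1, p2, p3, order="lex")
print(G)
# The basis contains p3 - (c2 - c1), showing p3 is identifiable, while only
# the combination p1*p2 (= c1) is pinned down, not p1 and p2 separately.
```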

  7. Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface

    NASA Astrophysics Data System (ADS)

    Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.

    2016-12-01

    Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that also preserve the corresponding physics being simulated in the model. In order to effectively simulate real-world processes, the model's outputs must be close to the observed measurements. To achieve this optimal simulation, input parameters are tuned until we have minimized the objective function, which is the error between the simulation model outputs and the observed measurements. We developed an auxiliary package which serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for the heat flow model, which is commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with one minimum; otherwise, we employ more advanced Dakota methods, such as genetic optimization and mesh-based convergence, to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters within 2% accuracy of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone.

  8. System Identification of X-33 Neural Network

    NASA Technical Reports Server (NTRS)

    Aggarwal, Shiv

    2003-01-01

    The goal of modern flight control research is improved spacecraft survivability. To this end we need a failure detection system on board: if the spacecraft is performing imperfectly, control must be reconfigured, and that requires identification of the parameters of the spacecraft dynamics. Parameter identification of a system is called system identification. We treat the system as a black box which receives some inputs that lead to some outputs. The question is: what kind of parameters for a particular black box can correlate the observed inputs and outputs? Can these parameters help us predict the outputs for a new given set of inputs? This is the basic problem of system identification. The X-33 was intended to have the onboard capability of evaluating its current performance and, if needed, taking corrective measures to adapt to the desired performance. The X-33 combines rocket and aircraft vehicle design characteristics and requires, in general, analytical methods for evaluating its flight performance. Its flight consists of four phases: ascent, transition, entry and TAEM (Terminal Area Energy Management). It spends about 200 seconds in the ascent phase, reaching an altitude of about 180,000 feet and a speed of about Mach 10 to 15. During the transition phase, which lasts only about 30 seconds, its altitude may increase to about 190,000 feet but its speed is reduced to about Mach 9. At the beginning of this phase, the main engine is cut off (MECO) and the control is reconfigured with the help of aerosurfaces (four elevons, two flaps and two rudders) and the reaction control system (RCS). The entry phase brings the altitude of the X-33 down to about 90,000 feet and its speed to about Mach 3; it spends about 250 seconds in this phase. The main engine remains cut off and the vehicle is controlled by complex maneuvers of the aerosurfaces. The last phase, TAEM, lasts about 450 seconds, during which both altitude and speed are reduced to zero. The present attempt, as a start, focuses only on the entry phase. Since the main engine remains cut off in this phase, there is no thrust acting on the system, which considerably simplifies the equations of motion. We introduce another simplification by assuming the system to be linear after some nonlinearities are removed analytically from consideration. Under these assumptions, the problem could be solved by classical statistics using the least-sum-of-squares approach. Instead we chose to use the neural network method, which has many advantages: it is modern, more efficient, and can be adapted to work even when the assumptions are relaxed. In fact, neural networks try to model the human brain and are capable of pattern recognition.
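
    As a minimal illustration of black-box system identification (the classical least-squares route the author contrasts with the neural-network approach), the sketch below fits the matrices of an assumed linear model x[t+1] = A x[t] + B u[t] from synthetic input-output records:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A_true = np.array([[0.95, 0.10], [-0.05, 0.90]])   # synthetic "dynamics"
    B_true = np.array([[0.00], [0.10]])

    # Record input-output (here input-state) histories of the black box.
    T = 200
    x = np.zeros((T + 1, 2))
    u = rng.normal(size=(T, 1))
    for t in range(T):
        x[t + 1] = A_true @ x[t] + B_true @ u[t]

    # Least-squares identification: stack regressors [x[t], u[t]] and solve
    # min || x[t+1] - [A B] [x[t]; u[t]] || over all t.
    Z = np.hstack([x[:-1], u])
    theta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
    A_hat, B_hat = theta[:2].T, theta[2:].T
    print(np.round(A_hat, 3))   # matches A_true
    print(np.round(B_hat, 3))   # matches B_true
    ```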

  9. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    NASA Astrophysics Data System (ADS)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

    A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows joint estimation of the unknown time-invariant parameters of a nonlinear FE model of the structure and the unknown time histories of the input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and of a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating the unknown FE model parameters and unknown input excitations.
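
    A minimal sketch of the joint estimation idea, assuming the filterpy library is available: the unknown stiffness is appended to the state of a toy oscillator and a UKF estimates it from sparse displacement measurements via deterministic sigma-point sampling (no response sensitivities). The real study uses nonlinear FE models, heterogeneous sensor arrays, and jointly estimates the seismic input as well; none of that is reproduced here.

    ```python
    import numpy as np
    from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

    dt = 0.01

    def fx(x, dt):
        # Augmented state [u, v, k]: displacement, velocity, unknown stiffness.
        u, v, k = x
        return np.array([u + dt * v, v + dt * (-k * u - 0.4 * v), k])

    def hx(x):
        return x[:1]                      # sparse output: displacement only

    points = MerweScaledSigmaPoints(n=3, alpha=1e-3, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
    ukf.x = np.array([0.5, 0.0, 5.0])     # initial guess; true stiffness is 9.0
    ukf.P = np.diag([0.1, 0.1, 25.0])
    ukf.Q = np.diag([1e-8, 1e-8, 1e-6])   # parameter modeled as a slow random walk
    ukf.R = np.array([[1e-4]])

    rng = np.random.default_rng(1)
    truth = np.array([0.5, 0.0, 9.0])
    for _ in range(2000):
        truth = fx(truth, dt)             # simulate the "real" structure
        ukf.predict()
        ukf.update(truth[:1] + rng.normal(scale=1e-2, size=1))
    print(ukf.x[2])                       # stiffness estimate, drawn toward 9.0
    ```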

  10. Uncertainty importance analysis using parametric moment ratio functions.

    PubMed

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2014-02-01

    This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed; the emphasis is on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst in achieving a targeted reduction of the model output mean and variance by operating on the variances of model inputs. Unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators, so the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and to verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure to achieve a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
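
    The sketch below illustrates the underlying quantity, the ratio of output variance after and before shrinking one input's variance, by brute-force re-sampling; the paper's contribution is precisely to avoid this re-sampling with single-sample-set estimators. The test function is an arbitrary stand-in.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000

    def model(x1, x2, x3):
        return x1**2 + x1 * np.sin(x2) + 0.5 * x3**3   # arbitrary nonlinear model

    def output_variance(scale1):
        # Re-sample with the standard deviation of input 1 scaled down.
        x1 = rng.normal(0.0, scale1, N)
        x2 = rng.normal(0.0, 1.0, N)
        x3 = rng.normal(0.0, 1.0, N)
        return model(x1, x2, x3).var()

    base = output_variance(1.0)
    for s in (0.75, 0.5, 0.25):
        print(f"std(x1) scaled by {s}: variance ratio = {output_variance(s) / base:.3f}")
    # Reading off this curve tells the analyst how much input-1 variance must be
    # reduced to hit a targeted (e.g. 50%) reduction of the output variance.
    ```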

  11. Assessment of space sensors for ocean pollution monitoring

    NASA Technical Reports Server (NTRS)

    Alvarado, U. R.; Tomiyasu, K.; Gulatsi, R. L.

    1980-01-01

    Several passive and active microwave, as well as passive optical remote sensors, applicable to the monitoring of oil spills and waste discharges at sea, are considered. The discussed types of measurements relate to: (1) spatial distribution and properties of the pollutant, and (2) oceanic parameters needed to predict the movement of the pollutants and their impact upon land. The sensors, operating from satellite platforms at 700-900 km altitudes, are found to be useful in mapping the spread of oil in major oil spills and in addition, can be effective in producing wind and ocean parameters as inputs to oil trajectory and dispersion models. These capabilities can be used in countermeasures.

  12. Towards a data-driven analysis of hadronic light-by-light scattering

    NASA Astrophysics Data System (ADS)

    Colangelo, Gilberto; Hoferichter, Martin; Kubis, Bastian; Procura, Massimiliano; Stoffer, Peter

    2014-11-01

    The hadronic light-by-light contribution to the anomalous magnetic moment of the muon was recently analyzed in the framework of dispersion theory, providing a systematic formalism where all input quantities are expressed in terms of on-shell form factors and scattering amplitudes that are in principle accessible in experiment. We briefly review the main ideas behind this framework and discuss the various experimental ingredients needed for the evaluation of one- and two-pion intermediate states. In particular, we identify processes that in the absence of data for doubly-virtual pion-photon interactions can help constrain parameters in the dispersive reconstruction of the relevant input quantities, the pion transition form factor and the helicity partial waves for γ*γ* → ππ.

  13. Enhancing PTFs with remotely sensed data for multi-scale soil water retention estimation

    NASA Astrophysics Data System (ADS)

    Jana, Raghavendra B.; Mohanty, Binayak P.

    2011-03-01

    Use of remotely sensed data products in the earth science and water resources fields is growing due to the increasingly easy availability of the data. Traditionally, pedotransfer functions (PTFs) employed for soil hydraulic parameter estimation from other easily available data have used basic soil texture and structure information as inputs. Inclusion of surrogate/supplementary data such as topography and vegetation information has shown some improvement in the PTF's ability to estimate more accurate soil hydraulic parameters. Artificial neural networks (ANNs) are a popular tool for PTF development, and are usually applied across matching spatial scales of inputs and outputs. However, different hydrologic, hydro-climatic, and contaminant transport models require input data at different scales, all of which may not be easily available from existing databases. In such a scenario, it becomes necessary to scale the soil hydraulic parameter values estimated by PTFs to suit the model requirements. Also, uncertainties in the predictions need to be quantified to enable users to gauge the suitability of a particular dataset for their applications. Bayesian Neural Networks (BNNs) inherently provide uncertainty estimates for their outputs due to their use of Markov Chain Monte Carlo (MCMC) techniques. In this paper, we present a PTF methodology to estimate soil water retention characteristics, built on a Bayesian framework for training neural networks and utilizing several in situ and remotely sensed datasets jointly. The BNN is also applied across spatial scales to provide fine-scale outputs when trained with coarse-scale data. Our training data inputs include ground/remotely sensed soil texture, bulk density, elevation, and Leaf Area Index (LAI) at 1 km resolution, while similar properties measured at a point scale are used as fine-scale inputs. The methodology was tested in two different hydro-climatic regions. We also tested the effect of varying the support scale of the training data for the BNNs by sequentially aggregating finer-resolution training data to coarser resolutions, and the applicability of the technique to upscaling problems. The BNN outputs are corrected for bias using a non-linear CDF-matching technique. Final results show good promise for the suitability of this Bayesian Neural Network approach for soil hydraulic parameter estimation across spatial scales using ground-, air-, or space-based remotely sensed geophysical parameters. Inclusion of remotely sensed data such as elevation and LAI, in addition to in situ soil physical properties, improved the estimation capabilities of the BNN-based PTF in certain conditions.
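
    Of the pipeline above, the final bias-correction step is easy to sketch: a quantile-mapping (CDF-matching) correction that maps each prediction to the observed value at the same empirical quantile. The data below are synthetic stand-ins, and the exact non-linear CDF-matching variant used by the authors may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    observed = rng.beta(2.0, 5.0, 500)                           # e.g. measured retention values
    predicted = 0.8 * observed + 0.05 + rng.normal(0, 0.01, 500) # biased BNN-like output

    obs_sorted = np.sort(observed)
    pred_sorted = np.sort(predicted)

    def cdf_match(new_predictions):
        # Empirical quantile of each prediction under the predicted CDF...
        q = np.searchsorted(pred_sorted, new_predictions) / len(pred_sorted)
        # ...mapped back through the observed CDF (quantile mapping).
        return np.quantile(obs_sorted, np.clip(q, 0.0, 1.0))

    print(cdf_match(np.array([0.10, 0.30, 0.50])))
    ```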

  14. Thermal sensation prediction by soft computing methodology.

    PubMed

    Jović, Srđan; Arsić, Nebojša; Vilimonović, Jovana; Petković, Dalibor

    2016-12-01

    Thermal comfort in open urban areas depends strongly on environmental factors, so demands for suitable thermal comfort need to be met during urban planning and design. Thermal comfort can be modeled from climatic parameters and other factors; these factors are variable, changing throughout the day and the year. An algorithm is therefore needed to predict thermal comfort from the input variables, and the predictions could be used to plan when urban areas are used. Since this is a highly nonlinear task, a soft computing methodology was applied in this investigation to predict thermal comfort. The main goal was to apply an extreme learning machine (ELM) to forecast physiological equivalent temperature (PET) values. Temperature, pressure, wind speed and irradiance were used as inputs. The prediction results are compared with some benchmark models; based on the results, the ELM can be used effectively to forecast PET. Copyright © 2016 Elsevier Ltd. All rights reserved.
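
    An ELM is simple enough to sketch in full: the hidden layer is random and fixed, and only the output weights are solved, in closed form, by least squares. The four inputs mirror the paper's predictors; the PET-like target below is synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for the four predictors and a PET-like target.
    X = rng.uniform(size=(500, 4))        # temperature, pressure, wind, irradiance
    y = 20 * X[:, 0] - 2 * X[:, 2] + 5 * np.sin(3 * X[:, 3]) + rng.normal(0, 0.1, 500)

    n_hidden = 50
    W = rng.normal(size=(4, n_hidden))    # input weights: random and never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                # hidden-layer activations

    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # only the output layer is fitted

    def predict(X_new):
        return np.tanh(X_new @ W + b) @ beta

    print(predict(X[:3]), y[:3])          # close agreement on training points
    ```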

  15. The effect of changes in space shuttle parameters on the NASA/MSFC multilayer diffusion model predictions of surface HCl concentrations

    NASA Technical Reports Server (NTRS)

    Glasser, M. E.; Rundel, R. D.

    1978-01-01

    A method for formulating these changes into the model input parameters using a preprocessor program run on a programmed data processor was implemented. The results indicate that any changes in the input parameters are small enough to be negligible in comparison to the meteorological inputs and the limitations of the model, and that such changes will not substantially increase the number of meteorological cases for which the model will predict surface hydrogen chloride concentrations exceeding public safety levels.

  16. Impact of clinical input variable uncertainties on ten-year atherosclerotic cardiovascular disease risk using new pooled cohort equations.

    PubMed

    Gupta, Himanshu; Schiros, Chun G; Sharifov, Oleg F; Jain, Apurva; Denney, Thomas S

    2016-08-31

    The recently released American College of Cardiology/American Heart Association (ACC/AHA) guideline recommends the Pooled Cohort equations for evaluating the atherosclerotic cardiovascular risk of individuals. The impact of clinical input variable uncertainties on the estimates of ten-year cardiovascular risk based on the ACC/AHA guidelines is not known. Using the publicly available National Health and Nutrition Examination Survey dataset (2005-2010), we computed maximum and minimum ten-year cardiovascular risks by assuming clinically relevant variation/uncertainty in age (0-1 year) and ±10% variation in total cholesterol, high-density lipoprotein cholesterol, and systolic blood pressure, with the variance of each variable assumed uniformly distributed. We analyzed changes in risk category relative to the actual inputs at the 5% and 7.5% risk limits, as these limits define the thresholds for consideration of drug therapy in the new guidelines. The Pooled Cohort equations for risk estimation were implemented in a custom software package. Based on our input variances, changes in risk category were possible in up to 24% of the population cohort at both the 5% and 7.5% risk boundary limits. This trend was consistently noted across all subgroups except African American males, where most of the cohort had ≥7.5% baseline risk regardless of the variation in the variables. Uncertainties in the input variables can therefore alter risk categorization. The impact of these variances on the ten-year risk needs to be incorporated into the patient/clinician discussion and clinical decision making. Incorporating good clinical practices for the measurement of critical clinical variables, and robust standardization of laboratory parameters to more stringent reference standards, is extremely important for successful implementation of the new guidelines. Furthermore, the ability to customize the risk calculator inputs to better represent unique clinical circumstances specific to individual needs would be highly desirable in future versions of the risk calculator.
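
    A minimal sketch of the uncertainty scan: evaluate the risk equation at the extremes of each input's uncertainty interval and report the spread. `risk_fn` is a hypothetical placeholder, since the actual Pooled Cohort equations use published sex- and race-specific coefficients not reproduced here; evaluating only the interval corners suffices because the placeholder is monotone in each input.

    ```python
    import itertools
    import numpy as np

    def risk_fn(age, total_chol, hdl, sbp):
        # Hypothetical placeholder with the right qualitative behavior; the real
        # Pooled Cohort equations use published sex/race-specific coefficients.
        z = 0.05 * age + 0.01 * total_chol - 0.03 * hdl + 0.02 * sbp - 10.0
        return 1.0 / (1.0 + np.exp(-z))

    def risk_range(age, tc, hdl, sbp):
        # Age uncertain by 0-1 year; the lab/exam inputs by +/-10%.
        intervals = [(0.9 * tc, 1.1 * tc), (0.9 * hdl, 1.1 * hdl), (0.9 * sbp, 1.1 * sbp)]
        risks = [risk_fn(a, t, h, s)
                 for a in (age, age + 1.0)
                 for t, h, s in itertools.product(*intervals)]
        return min(risks), max(risks)

    lo, hi = risk_range(age=55, tc=200, hdl=50, sbp=130)
    print(f"ten-year risk spans {lo:.1%} - {hi:.1%}")   # may straddle 5% or 7.5%
    ```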

  17. Groebner Basis Solutions to Satellite Trajectory Control by Pole Placement

    NASA Astrophysics Data System (ADS)

    Kukelova, Z.; Krsek, P.; Smutny, V.; Pajdla, T.

    2013-09-01

    Satellites play an important role, e.g., in telecommunication, navigation and weather monitoring. Controlling their trajectories is an important problem. In [1], an approach to pole placement for the synthesis of a linear controller was presented. It leads to solving five polynomial equations in nine unknown elements of the state space matrices of a compensator. This is an underconstrained system, and therefore four of the unknown elements need to be considered as free parameters and set to some prior values to obtain a system of five equations in five unknowns. In [1], this system was solved for one chosen set of free parameters with the help of Dixon resultants. In this work, we study and present Groebner basis solutions to this problem of computing a dynamic compensator for the satellite for different combinations of input free parameters. We show that the Groebner basis method for solving systems of polynomial equations leads to very simple solutions for all combinations of free parameters. These solutions require performing only Gauss-Jordan elimination of a small matrix and computing the roots of a single-variable polynomial. The maximum degree of this polynomial is not greater than six in general, and for most combinations of the input free parameters its degree is even lower. [1] B. Palancz. Application of Dixon resultant to satellite trajectory control by pole placement. Journal of Symbolic Computation, Volume 50, March 2013, Pages 79-99, Elsevier.
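
    The solution pattern the authors describe, triangularize with a lex-order Gröbner basis, read off a univariate polynomial, and find its roots, can be sketched on an arbitrary small system (not the compensator equations from the paper):

    ```python
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    eqs = [x*y + z - 2, y*z + x - 3, x*z + y - 1]    # arbitrary small system

    # Lex-order Groebner basis: triangular form with a univariate tail element.
    G = sp.groebner(eqs, x, y, z, order='lex')
    univariate = G.exprs[-1]                          # depends on z alone
    print(sp.Poly(univariate, z).degree())
    for root in sp.Poly(univariate, z).nroots():      # numeric roots of the tail
        print(root)                                   # back-substitute for x and y
    ```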

  18. Effects of climate change on evapotranspiration over the Okavango Delta water resources

    NASA Astrophysics Data System (ADS)

    Moses, Oliver; Hambira, Wame L.

    2018-06-01

    In semi-arid developing countries, many poor people depend on contaminated surface or groundwater resources since they do not have access to safe, centrally supplied water. These water resources are threatened by several factors, including high evapotranspiration rates. In the Okavango Delta region of north-western Botswana, communities facing insufficient centrally supplied water rely mainly on the surface water resources of the Delta. The Delta loses about 98% of its water through evapotranspiration; the remaining 2%, however, sustains the communities that the main-stream water supply cannot adequately serve. To understand the effects of climate change on evapotranspiration over the Okavango Delta water resources, this study analysed trends in the main climatic parameters needed as input variables in evapotranspiration models, using the Mann-Kendall test. Trend analysis is crucial since it reveals the direction of trends in the climatic parameters, which helps determine the effects of climate change on evapotranspiration. The climatic parameters of interest were wind speed, solar radiation and relative humidity, on which very little research has been conducted in the Okavango Delta region. The trend analysis concentrated on wind speed, which had a longer data record than the other two parameters. Generally, statistically significant increasing trends were found, which suggests that climate change is likely to further increase evapotranspiration over the Okavango Delta water resources.
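
    The Mann-Kendall statistic is compact enough to sketch in full; below is the no-ties normal-approximation form applied to a synthetic monthly wind-speed series with an imposed upward trend.

    ```python
    import numpy as np
    from scipy.stats import norm

    def mann_kendall(series):
        x = np.asarray(series)
        n = len(x)
        # S: concordant minus discordant pairs over all i < j.
        s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0      # no-ties variance
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        p = 2 * (1 - norm.cdf(abs(z)))                # two-sided p-value
        return s, z, p

    rng = np.random.default_rng(0)
    wind = 3.0 + 0.01 * np.arange(240) + rng.normal(0, 0.5, 240)  # synthetic monthly means
    print(mann_kendall(wind))   # large positive S, small p: increasing trend
    ```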

  19. Intelligent Machine Learning Approaches for Aerospace Applications

    NASA Astrophysics Data System (ADS)

    Sathyan, Anoop

    Machine learning is a type of artificial intelligence that gives machines or networks the ability to learn from data without being explicitly programmed. There are different kinds of machine learning techniques; this thesis discusses the applications of two of them: genetic fuzzy logic and convolutional neural networks (CNNs). A fuzzy logic system (FLS) is a powerful tool that can be used for a wide variety of applications. An FLS is a universal approximator that reduces the need for complex mathematics, replacing it with expert knowledge of the system to produce an input-output mapping using If-Then rules. Expert knowledge of a system can help in obtaining the parameters for small-scale FLSs, but for larger networks we need sophisticated approaches that can automatically train the network to meet the design requirements. This is where genetic algorithms (GAs) and EVE come into the picture. Both GA and EVE can tune the FLS parameters to minimize a cost function that is designed to meet the requirements of the specific problem. EVE is an artificial intelligence developed by Psibernetix that is trained to tune large-scale FLSs. The parameters of an FLS can include the membership functions and rulebase of the inherent fuzzy inference systems (FISs). The main issue with using a GFS is that the number of parameters in a FIS increases exponentially with the number of inputs, making them increasingly hard to tune. To reduce this issue, the FLSs discussed in this thesis consist of 2-input-1-output FISs in cascade (Chapter 4) or as a layer of parallel FISs (Chapter 7). We have obtained extremely good results using GFSs for different applications at a reduced computational cost compared to other algorithms commonly used to solve the corresponding problems. In this thesis, GFSs have been designed for controlling an inverted double pendulum, a task-allocation problem of clustering targets amongst a set of UAVs, a fire detection problem, and the aircraft conflict resolution problem. During the last decade, CNNs have become increasingly popular in the domain of image and speech processing. CNNs have many more parameters than GFSs and are tuned using the back-propagation algorithm. CNNs typically have hundreds of thousands or even millions of parameters, tuned using common cost functions such as integral squared error, softmax loss, etc. Chapter 5 discusses a classification problem to classify images as humans or not, and Chapter 6 discusses a regression task using a CNN to produce an approximate near-optimal route for the Traveling Salesman Problem (TSP), regarded as one of the most complicated decision-making problems. Both the GFS and CNN are used to develop intelligent systems specific to the application, providing computational efficiency, robustness in the face of uncertainties, and scalability.
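
    The parameter blow-up, and the savings from the cascade, reduce to simple arithmetic: with m membership functions per input, a monolithic n-input FIS needs on the order of m**n rules, while a chain of 2-input-1-output FISs needs about (n-1)*m**2.

    ```python
    def monolithic_rules(n_inputs, mfs_per_input=5):
        # One rule per combination of input membership functions.
        return mfs_per_input ** n_inputs

    def cascaded_rules(n_inputs, mfs_per_input=5):
        # A chain of (n-1) 2-input-1-output FISs, each with m**2 rules.
        return (n_inputs - 1) * mfs_per_input ** 2

    for n in (2, 4, 8):
        print(n, monolithic_rules(n), cascaded_rules(n))
    # 8 inputs: 390625 rules monolithic vs. 175 in cascade, the reduction that
    # keeps genetic tuning of the fuzzy system tractable.
    ```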

  20. Extension of the PC version of VEPFIT with input and output routines running under Windows

    NASA Astrophysics Data System (ADS)

    Schut, H.; van Veen, A.

    1995-01-01

    The fitting program VEPFIT has been extended with applications running under the Microsoft-Windows environment facilitating the input and output of the VEPFIT fitting module. We have exploited the Microsoft-Windows graphical users interface by making use of dialog windows, scrollbars, command buttons, etc. The user communicates with the program simply by clicking and dragging with the mouse pointing device. Keyboard actions are limited to a minimum. Upon changing one or more input parameters the results of the modeling of the S-parameter and Ps fractions versus positron implantation energy are updated and displayed. This action can be considered as the first step in the fitting procedure upon which the user can decide to further adapt the input parameters or to forward these parameters as initial values to the fitting routine. The modeling step has proven to be helpful for designing positron beam experiments.

  1. Informing soil models using pedotransfer functions: challenges and perspectives

    NASA Astrophysics Data System (ADS)

    Pachepsky, Yakov; Romano, Nunzio

    2015-04-01

    Pedotransfer functions (PTFs) are empirical relationships between parameters of soil models and more easily obtainable data on soil properties. PTFs have become an indispensable tool in modeling soil processes. As alternative methods to direct measurements, they bridge the data we have and the data we need by using soil survey and monitoring data to enable modeling for real-world applications. Pedotransfer is extensively used in soil models addressing the most pressing environmental issues. The following is an attempt to provoke a discussion by listing current issues faced in PTF development. 1. As more intricate biogeochemical processes are being modeled, development of PTFs for parameters of those processes becomes essential. 2. Since the equations to express PTF relationships are essentially unknown, there has been a trend to employ highly nonlinear equations, e.g. neural networks, which in theory are flexible enough to simulate any dependence. This, however, comes with the penalty of a large number of coefficients that are difficult to estimate reliably. A preliminary classification applied to PTF inputs, with PTF development for each of the resulting groups, may provide simple, transparent, and more reliable pedotransfer equations. 3. The multiplicity of models, i.e. the presence of several models producing the same output variables, is commonly found in soil modeling, and is a typical feature of the PTF research field. However, PTF intercomparisons are lagging behind PTF development. This is aggravated by the fact that the coefficients of PTFs based on machine-learning methods are usually not reported. 4. The existence of PTFs is the result of some soil processes. Using models of those processes to generate PTFs and, more generally, developing physics-based PTFs remain to be explored. 5. Estimating the variability of soil model parameters becomes increasingly important as the newer modeling technologies, such as data assimilation, ensemble modeling, and model abstraction, become progressively more popular. Variability PTFs rely on the spatio-temporal dynamics of soil variables, and that opens new sources of PTF inputs stemming from technology advances such as monitoring networks, remote and proximal sensing, and omics. 6. Burgeoning PTF development has so far not affected several persisting regional knowledge gaps. Remarkably little effort has been put into PTF development for saline soils, calcareous and gypsiferous soils, peat soils, paddy soils, soils with well-expressed shrink-swell behavior, and soils affected by freeze-thaw cycles. 7. Soils from tropical regions are quite often considered as a pseudo-entity for which a single PTF can be applied. This assumption will not be needed as more regional data are accumulated and analyzed. 8. Other advances in regional PTFs will be possible due to the presence of large databases of region-specific useful PTF inputs such as moisture equivalent, laser diffractometry data, or soil specific surface. 9. Most flux models in soils, be it for water, solutes, gas, or heat, involve parameters that are scale-dependent. Including scale dependencies in PTFs will be critical to improve PTF usability. 10. Another scale-related matter is pedotransfer for coarse-scale soil modeling, for example in weather or climate models. Soil hydraulic parameters in these models cannot be measured, and the efficiency of the pedotransfer can be evaluated only in terms of its utility. There is a pressing need to determine combinations of pedotransfer and upscaling procedures that can lead to the derivation of suitable coarse-scale soil model parameters. 11. The spatial coarse scale often assumes a coarse temporal support, and that may lead to including in PTFs other environmental variables such as topographic, weather, and management attributes. 12. Some PTF inputs are time- or space-dependent, and yet little is known about whether the spatial or temporal structure of PTF outputs is properly predicted from such inputs. 13. Further exploration is needed to use PTFs as a source of hypotheses on and insights into relationships between soil processes and soil composition, as well as between soil structure and soil functioning. PTFs are empirical relationships and their accuracy outside the database used for PTF development is essentially unknown. Therefore they should never be considered as an ultimate source of parameters in soil modeling. Rather, they strive to provide a balance between accuracy and availability. The primary role of PTFs is to assist in modeling for screening and comparative purposes, establishing ranges and/or probability distributions of model parameters, and creating realistic synthetic soil datasets and scenarios. Developing and improving PTFs will remain the mainstream way of packaging data and knowledge for applications of soil modeling.

  2. Aircraft Hydraulic Systems Dynamic Analysis Component Data Handbook

    DTIC Science & Technology

    1980-04-01

    Only fragments of a scanned table of contents and data tables survive in this record: "13. Quincke Tube", "14. Heat Exchanger", "Quincke Tube Input Parameters with Hole Locations", "Prototype Quincke Tube Data", "HSFR Input Data for Pulsco Type Acoustic Filter", and a note that a Quincke tube is a means to dampen acoustic noise at resonance.

  3. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
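
    A minimal sketch of the selection mechanism, with illustrative names rather than the patent's actual code: time each implementation as one input dimension (message size) varies, locate the crossover, and generate a dispatcher that routes calls to the faster variant.

    ```python
    import time

    def alltoall_small(data):
        return [x for x in data]      # toy latency-optimized variant

    def alltoall_large(data):
        time.sleep(1e-5)              # toy bandwidth-optimized variant: fixed setup
        return list(data)             # ...then a cheap bulk copy

    def mean_runtime(impl, size, trials=5):
        payload = [0] * size
        start = time.perf_counter()
        for _ in range(trials):
            impl(payload)
        return (time.perf_counter() - start) / trials

    # Collected performance data over one input dimension (message size).
    sizes = [2**k for k in range(4, 20)]
    crossover = next((s for s in sizes
                      if mean_runtime(alltoall_large, s) < mean_runtime(alltoall_small, s)),
                     sizes[-1])

    def alltoall(data):               # generated dispatcher
        impl = alltoall_large if len(data) >= crossover else alltoall_small
        return impl(data)
    ```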

  4. Global Nitrous Oxide Emissions from Agricultural Soils: Magnitude and Uncertainties Associated with Input Data and Model Parameters

    NASA Astrophysics Data System (ADS)

    Xu, R.; Tian, H.; Pan, S.; Yang, J.; Lu, C.; Zhang, B.

    2016-12-01

    Human activities have caused significant perturbations of the nitrogen (N) cycle, resulting in an increase of about 21% in the atmospheric N2O concentration since the pre-industrial era. This large increase is mainly caused by intensive agricultural activities, including the application of nitrogen fertilizer and the expansion of leguminous crops. Substantial efforts have been made over the last several decades to quantify global and regional N2O emissions from agricultural soils using a wide variety of approaches, such as ground-based observation, atmospheric inversion, and process-based models. However, large uncertainties exist in those estimates as well as in the methods themselves. In this study, we used a coupled biogeochemical model (DLEM) to estimate the magnitude and the spatial and temporal patterns of N2O emissions from global croplands in the past five decades (1961-2012). To estimate the uncertainties associated with input data and model parameters, we implemented a number of simulation experiments with DLEM, accounting for the key parameter values that affect the calculation of N2O fluxes (i.e., maximum nitrification and denitrification rates, N fixation rate, and the adsorption coefficient for soil ammonium and nitrate) and for different sets of input data, including climate, land management practices (i.e., nitrogen fertilizer types, application rates and timings, with/without irrigation), N deposition, and land use and land cover change. This work provides a robust estimate of global N2O emissions from agricultural soils and identifies key gaps and limitations in the existing model and data that need to be investigated in the future.

  5. Suggestions for CAP-TSD mesh and time-step input parameters

    NASA Technical Reports Server (NTRS)

    Bland, Samuel R.

    1991-01-01

    Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.

  6. Unsteady hovering wake parameters identified from dynamic model tests, part 1

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Crews, S. T.

    1977-01-01

    The development of a 4-bladed model rotor that can be excited with a simple eccentric mechanism in progressing and regressing modes, with either harmonic or transient inputs, is reported. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch-stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch-stirring transient inputs, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting the perturbation models from transients are demonstrated through these combined analytical and experimental studies.

  7. iTOUGH2 Universal Optimization Using the PEST Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.

    2010-07-01

    iTOUGH2 (http://www-esd.lbl.gov/iTOUGH2) is a computer program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis [Finsterle, 2007a, b, c]. iTOUGH2 contains a number of local and global minimization algorithms for automatic calibration of a model against measured data, or for the solution of other, more general optimization problems (see, for example, Finsterle [2005]). A detailed residual and estimation uncertainty analysis is conducted to assess the inversion results. Moreover, iTOUGH2 can be used to perform a formal sensitivity analysis, or to conduct Monte Carlo simulations for the examination of prediction uncertainties. iTOUGH2's capabilities are continually enhanced. As the name implies, iTOUGH2 is developed for use in conjunction with the TOUGH2 forward simulator for nonisothermal multiphase flow in porous and fractured media [Pruess, 1991]. However, iTOUGH2 provides FORTRAN interfaces for the estimation of user-specified parameters (see subroutine USERPAR) based on user-specified observations (see subroutine USEROBS). These user interfaces can be invoked to add new parameter or observation types to the standard set provided in iTOUGH2. They can also be linked to non-TOUGH2 models, i.e., iTOUGH2 can be used as a universal optimization code, similar to other model-independent, nonlinear parameter estimation packages such as PEST [Doherty, 2008] or UCODE [Poeter and Hill, 1998]. However, to make iTOUGH2's optimization capabilities available for use with an external code, the user is required to write some FORTRAN code that provides the link between the iTOUGH2 parameter vector and the input parameters of the external code, and between the output variables of the external code and the iTOUGH2 observation vector. While allowing for maximum flexibility, the coding requirement of this approach limits its applicability to those users with FORTRAN coding knowledge. To make iTOUGH2 capabilities accessible to many application models, the PEST protocol [Doherty, 2007] has been implemented into iTOUGH2. This protocol enables communication between the application (which can be a single 'black-box' executable or a script or batch file that calls multiple codes) and iTOUGH2. The concept requires that for the application model: (1) input is provided on one or more ASCII text input files; (2) output is returned to one or more ASCII text output files; (3) the model is run using a system command (executable or script/batch file); and (4) the model runs to completion without any user intervention. For each forward run invoked by iTOUGH2, select parameters cited within the application model input files are then overwritten with values provided by iTOUGH2, and select variables cited within the output files are extracted and returned to iTOUGH2. It should be noted that the core of iTOUGH2, i.e., its optimization routines and related analysis tools, remains unchanged; it is only the communication format between input parameters, the application model, and output variables that is borrowed from PEST. The interface routines have been provided by Doherty [2007]. The iTOUGH2-PEST architecture is shown in Figure 1. This manual contains installation instructions for the iTOUGH2-PEST module, and describes the PEST protocol as well as the input formats needed in iTOUGH2. Examples are provided that demonstrate the use of model-independent optimization and analysis using iTOUGH2.
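
    The protocol reduces to a file-based loop that is easy to sketch; the template marker, file names, executable, and output pattern below are illustrative assumptions, not iTOUGH2-PEST's actual formats.

    ```python
    import re
    import subprocess
    from scipy.optimize import minimize

    # Illustrative template with a parameter marker; the real template and
    # instruction file formats of iTOUGH2-PEST differ.
    TEMPLATE = "permeability  @perm@\nporosity      0.15\n"
    TARGET = 3.2e5                                    # measured pressure at obs-1

    def run_model(perm):
        with open("model.inp", "w") as f:             # (1) write ASCII input
            f.write(TEMPLATE.replace("@perm@", f"{perm:.6e}"))
        subprocess.run(["./flow_model", "model.inp"], check=True)  # (3) black-box run
        text = open("model.out").read()               # (2) read ASCII output
        return float(re.search(r"pressure at obs-1:\s*(\S+)", text).group(1))

    def objective(p):
        return (run_model(p[0]) - TARGET) ** 2        # misfit to the observation

    best = minimize(objective, x0=[1e-13], method='Nelder-Mead')
    print(best.x)
    ```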

  8. Advanced approach to the analysis of a series of in-situ nuclear forward scattering experiments

    NASA Astrophysics Data System (ADS)

    Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel

    2017-03-01

    This study introduces a sequential fitting procedure as a specific approach to nuclear forward scattering (NFS) data evaluation. The principles and usage of this advanced evaluation method are described in detail, and its utilization is demonstrated on NFS in-situ investigations of fast processes. Such experiments frequently consist of hundreds of time spectra which need to be evaluated. The introduced procedure allows the analysis of these experiments and significantly decreases the time needed for the data evaluation. The key contributions of the study are the sequential use of the output fitting parameters of a previous data set as the input parameters for the next data set, and the option to cross-check model suitability by applying the procedure in the ascending and descending directions of the data sets. The described fitting methodology is beneficial for checking model validity and the reliability of the obtained results.
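
    A minimal sketch of the sequential procedure, with a made-up `decay_model` standing in for a real NFS time-spectrum model: the fitted parameters of one spectrum seed the fit of the next, and repeating the sweep in the opposite direction cross-checks the model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay_model(t, amplitude, rate, beat):
        # Stand-in for a real NFS time-spectrum model.
        return amplitude * np.exp(-rate * t) * np.cos(beat * t) ** 2

    def fit_series(time_axis, spectra, p0):
        fits = []
        for counts in spectra:                        # hundreds of spectra in practice
            popt, _ = curve_fit(decay_model, time_axis, counts, p0=p0)
            fits.append(popt)
            p0 = popt                                 # fit of set i seeds set i+1
        return np.array(fits)

    t = np.linspace(0.1, 50, 400)
    rng = np.random.default_rng(0)
    spectra = [decay_model(t, 100, 0.02, 0.30 + 0.001 * i) + rng.normal(0, 0.5, t.size)
               for i in range(20)]                    # slowly evolving parameter

    ascending = fit_series(t, spectra, p0=[90, 0.03, 0.29])
    descending = fit_series(t, spectra[::-1], p0=ascending[-1])[::-1]
    # Agreement between the ascending and descending passes supports model validity.
    print(np.allclose(ascending, descending, rtol=0.1))
    ```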

  9. Tolerance and UQ4SIM: Nimble Uncertainty Documentation and Analysis Software

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2008-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and variabilities is a necessary first step toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. The basic premise of uncertainty markup is to craft a tolerance and tagging mini-language that offers a natural, unobtrusive presentation and does not depend on parsing each type of input file format. Each file is marked up with tolerances and, optionally, associated tags that serve to label the parameters and their uncertainties. The evolution of such a language, often called a Domain Specific Language or DSL, is given in [1], but in final form it parallels tolerances specified on an engineering drawing, e.g., 1 +/- 0.5, 5 +/- 10%, 2 +/- 1o, where % signifies percent and o signifies order of magnitude. Tags, necessary for error propagation, can be added by placing a quotation-mark-delimited tag after the tolerance, e.g., 0.7 +/- 20% 'T_effective'. In addition, tolerances might have different underlying distributions, e.g., Uniform, Normal, or Triangular, or the tolerances may merely be intervals due to lack of knowledge (uncertainty). Finally, to address pragmatic considerations such as older models that require specific number-field formats, C-style format specifiers can be appended to the tolerance, e.g., 1.35 +/- 10U_3.2f. As an example of use, consider figure 1, where a chemical reaction input file has been marked up to include tolerances and tags per table 1. Not only does the technique provide a natural method of specifying tolerances, but it also serves as in situ documentation of model uncertainties. This tolerance language comes with a utility to strip the tolerances (and tags), to provide a path to the nominal model parameter file. And, as shown in [1], having the ability to quickly mark and identify model parameter uncertainties facilitates error propagation, which in turn yields output uncertainties.
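
    A toy parser for the markup, covering the absolute, percent, and order-of-magnitude forms plus an optional tag; the exact grammar is an assumption reconstructed from the examples above, not the tool's actual DSL.

    ```python
    import re

    # Matches forms like "1 +/- 0.5", "5 +/- 10%", "2 +/- 1o", optionally
    # followed by a quoted tag such as 'T_effective' (grammar assumed).
    PATTERN = re.compile(
        r"(?P<value>[-+0-9.eE]+)\s*\+/-\s*(?P<tol>[0-9.]+)(?P<kind>%|o)?"
        r"(?:\s*'(?P<tag>[^']*)')?")

    def parse(text):
        for m in PATTERN.finditer(text):
            value, tol = float(m['value']), float(m['tol'])
            if m['kind'] == '%':                     # percent tolerance
                lo, hi = value * (1 - tol/100), value * (1 + tol/100)
            elif m['kind'] == 'o':                   # order-of-magnitude tolerance
                lo, hi = value / 10**tol, value * 10**tol
            else:                                    # absolute tolerance
                lo, hi = value - tol, value + tol
            yield m['tag'], value, (lo, hi)

    for tag, val, bounds in parse("0.7 +/- 20% 'T_effective'  1 +/- 0.5  2 +/- 1o"):
        print(tag, val, bounds)
    ```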

  10. An extended plasma model for Saturn

    NASA Technical Reports Server (NTRS)

    Richardson, John D.

    1995-01-01

    The Saturn magnetosphere model of Richardson and Sittler (1990) is extended to include the outer magnetosphere. The inner magnetospheric portion of this model is updated based on a recent reanalysis of the plasma data near the Voyager 2 ring plane crossing. The result is an axially symmetric model of the plasma parameters which is designed to provide accurate input for models needing either in situ or line-of-sight data and to be a useful tool for Cassini planning.

  11. Economic Analysis of Redesign Alternatives for the RESFMS Information System

    DTIC Science & Technology

    1992-09-01

    Only fragments of the scanned text survive in this record: they cite Pressman (1992) on varying input parameters to produce variations of output, on lines of code (LOC) being easily counted with "a large body of literature and data predicated on LOC" already existing, and, under the heading "D. Function Points", on function points as an alternative to size metrics such as LOC that measures software "functionality" or "utility".

  12. Characterisation of the Hamamatsu photomultipliers for the KM3NeT Neutrino Telescope

    NASA Astrophysics Data System (ADS)

    Aiello, S.; Akrame, S. E.; Ameli, F.; Anassontzis, E. G.; Andre, M.; Androulakis, G.; Anghinolfi, M.; Anton, G.; Ardid, M.; Aublin, J.; Avgitas, T.; Baars, M.; Bagatelas, C.; Barbarino, G.; Baret, B.; Barrios-Martí, J.; Belias, A.; Berbee, E.; van den Berg, A.; Bertin, V.; Biagi, S.; Biagioni, A.; Biernoth, C.; Bormuth, R.; Boumaaza, J.; Bourret, S.; Bouwhuis, M.; Bozza, C.; Brânzaş, H.; Briukhanova, N.; Bruijn, R.; Brunner, J.; Buis, E.; Buompane, R.; Busto, J.; Calvo, D.; Capone, A.; Caramete, L.; Celli, S.; Chabab, M.; Cherubini, S.; Chiarella, V.; Chiarusi, T.; Circella, M.; Cocimano, R.; Coelho, J. A. B.; Coleiro, A.; Colomer Molla, M.; Coniglione, R.; Coyle, P.; Creusot, A.; Cuttone, G.; D'Onofrio, A.; Dallier, R.; De Sio, C.; Di Palma, I.; Díaz, A. F.; Distefano, C.; Domi, A.; Donà, R.; Donzaud, C.; Dornic, D.; Dörr, M.; Durocher, M.; Eberl, T.; van Eijk, D.; El Bojaddaini, I.; Elsaesser, D.; Enzenhöfer, A.; Ferrara, G.; Fusco, L. A.; Gal, T.; Garufi, F.; Gauchery, S.; Geißelsöder, S.; Gialanella, L.; Giorgio, E.; Giuliante, A.; Gozzini, S. R.; Ruiz, R. Gracia; Graf, K.; Grasso, D.; Grégoire, T.; Grella, G.; Hallmann, S.; van Haren, H.; Heid, T.; Heijboer, A.; Hekalo, A.; Hernández-Rey, J. J.; Hofestädt, J.; Illuminati, G.; James, C. W.; Jongen, M.; Jongewaard, B.; de Jong, M.; de Jong, P.; Kadler, M.; Kalaczyński, P.; Kalekin, O.; Katz, U. F.; Chowdhury, N. R. Khan; Kieft, G.; Kießling, D.; Koffeman, E. N.; Kooijman, P.; Kouchner, A.; Kreter, M.; Kulikovskiy, V.; Lahmann, R.; Le Breton, R.; Leone, F.; Leonora, E.; Levi, G.; Lincetto, M.; Lonardo, A.; Longhitano, F.; Lotze, M.; Loucatos, S.; Maggi, G.; Mańczak, J.; Mannheim, K.; Margiotta, A.; Marinelli, A.; Markou, C.; Martin, L.; Martínez-Mora, J. A.; Martini, A.; Marzaioli, F.; Mele, R.; Melis, K. W.; Migliozzi, P.; Migneco, E.; Mijakowski, P.; Miranda, L. S.; Mollo, C. M.; Morganti, M.; Moser, M.; Moussa, A.; Muller, R.; Musumeci, M.; Nauta, L.; Navas, S.; Nicolau, C. A.; Nielsen, C.; Organokov, M.; Orlando, A.; Panagopoulos, V.; Papalashvili, G.; Papaleo, R.; Păvălaş, G. E.; Pellegrini, G.; Pellegrino, C.; Pérez Romero, J.; Perrin-Terrin, M.; Piattelli, P.; Pikounis, K.; Pisanti, O.; Poirè, C.; Polydefki, G.; Poma, G. E.; Popa, V.; Post, M.; Pradier, T.; Pühlhofer, G.; Pulvirenti, S.; Quinn, L.; Raffaelli, F.; Randazzo, N.; Razzaque, S.; Real, D.; Resvanis, L.; Reubelt, J.; Riccobene, G.; Richer, M.; Rovelli, A.; Salvadori, I.; Samtleben, D. F. E.; Sánchez Losa, A.; Sanguineti, M.; Santangelo, A.; Sapienza, P.; Schermer, B.; Sciacca, V.; Seneca, J.; Sgura, I.; Shanidze, R.; Sharma, A.; Simeone, F.; Sinopoulou, A.; Spisso, B.; Spurio, M.; Stavropoulos, D.; Steijger, J.; Stellacci, S. M.; Strandberg, B.; Stransky, D.; Stüven, T.; Taiuti, M.; Tatone, F.; Tayalati, Y.; Tenllado, E.; Thakore, T.; Timmer, P.; Trovato, A.; Tsagkli, S.; Tzamariudaki, E.; Tzanetatos, D.; Valieri, C.; Vallage, B.; Van Elewyck, V.; Versari, F.; Viola, S.; Vivolo, D.; Volkert, M.; de Waardt, L.; Wilms, J.; de Wolf, E.; Zaborov, D.; Zornoza, J. D.; Zúñiga, J.

    2018-05-01

    The Hamamatsu R12199-02 3-inch photomultiplier tube is the photodetector chosen for the first phase of the KM3NeT neutrino telescope. About 7000 photomultipliers have been characterised for dark count rate, timing spread and spurious pulses. The quantum efficiency, the gain and the peak-to-valley ratio have also been measured for a sub-sample in order to determine parameter values needed as input to numerical simulations of the detector.

  13. Indicator system for advanced nuclear plant control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  14. Indicator system for a process plant control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  15. Console for a nuclear control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  16. Alarm system for a nuclear control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1994-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  17. Method of installing a control room console in a nuclear power plant

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1994-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  18. Advanced nuclear plant control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  19. Advanced nuclear plant control room complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  20. A possible formation channel for blue hook stars in globular cluster - II. Effects of metallicity, mass ratio, tidal enhancement efficiency and helium abundance

    NASA Astrophysics Data System (ADS)

    Lei, Zhenxin; Zhao, Gang; Zeng, Aihua; Shen, Lihua; Lan, Zhongjian; Jiang, Dengkai; Han, Zhanwen

    2016-12-01

    Employing a tidally enhanced stellar wind, we studied the effects of metallicity, the mass ratio of primary to secondary, tidal enhancement efficiency and helium abundance on the formation of blue hook (BHk) stars in binaries in globular clusters (GCs). A total of 28 sets of binary models combining different input parameters are studied. For each set of binary models, we present the range of initial orbital periods needed to produce BHk stars in binaries; all the binary models could produce BHk stars within different ranges of initial orbital periods. We also compared our results with observations in the Teff-log g diagram of the GCs NGC 2808 and ω Cen. Most of the BHk stars in these two GCs lie well within the region predicted by our theoretical models, especially when C/N-enhanced model atmospheres are considered. We found that the mass ratio of primary to secondary and the tidal enhancement efficiency have little effect on the formation of BHk stars in binaries, while metallicity and, especially, helium abundance play important roles. Specifically, as the helium abundance of the binary models increases, the range of initial orbital periods needed to produce BHk stars becomes markedly wider, regardless of the other input parameters adopted. Our results are discussed in relation to recent observations and other theoretical models.

  1. Reactor protection system with automatic self-testing and diagnostic

    DOEpatents

    Gaubatz, Donald C.

    1996-01-01

    A reactor protection system having four divisions, with quad-redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Automatic detection of and discrimination against failed sensors allows the reactor protection system to automatically enter a known state when sensor failures occur. Cross communication of sensor readings allows comparison of four theoretically "identical" values, permitting identification of sensor errors such as drift or malfunction. A diagnostic request for service is issued for errant sensor data. Automated self-test and diagnostic monitoring, from sensor input through output relay logic, virtually eliminates the need for manual surveillance testing. This gives each division the ability to cross-check all divisions and to sense failures of the hardware logic.
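
    The two-level voting logic described in this record can be illustrated with a short sketch. The code below is a hypothetical rendering, not the patented implementation: function names, the setpoint, and the sensor values are all invented, and the 2/3-with-spare vote is reduced to its simplest reading (three active channels, with the fourth substituting for a flagged one).

```python
# Hypothetical sketch of the two-level scram voting described above:
# each division votes 2/3 among healthy sensor channels (with the fourth
# channel acting as a spare), then the hardware logic panel votes 2/4
# across the divisional trip signals.

def divisional_trip(readings, setpoint, failed=()):
    """2/3-with-spare vote within one division; `failed` lists errant sensors."""
    healthy = [r for i, r in enumerate(readings) if i not in failed]
    voters = healthy[:3]                      # three active channels
    return sum(r > setpoint for r in voters) >= 2

def reactor_scram(division_votes):
    """2/4 vote across the four divisions (hardware logic panel)."""
    return sum(division_votes) >= 2

# One drifted sensor (index 1) is excluded, so its spurious reading cannot
# cause a false scram; each chassis sees the same cross-communicated data.
readings = [512.0, 999.9, 520.0, 515.0]
votes = [divisional_trip(readings, 530.0, failed=(1,)) for _ in range(4)]
print(reactor_scram(votes))    # False: no scram from a single failed sensor

votes = [divisional_trip([541.0, 540.0, 538.0, 539.0], 530.0) for _ in range(4)]
print(reactor_scram(votes))    # True: sustained high readings trip all divisions
```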

  2. Reactor protection system with automatic self-testing and diagnostic

    DOEpatents

    Gaubatz, D.C.

    1996-12-17

    A reactor protection system is disclosed having four divisions, with quad-redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Automatic detection of and discrimination against failed sensors allows the reactor protection system to automatically enter a known state when sensor failures occur. Cross communication of sensor readings allows comparison of four theoretically "identical" values, permitting identification of sensor errors such as drift or malfunction. A diagnostic request for service is issued for errant sensor data. Automated self-test and diagnostic monitoring, from sensor input through output relay logic, virtually eliminates the need for manual surveillance testing. This gives each division the ability to cross-check all divisions and to sense failures of the hardware logic. 16 figs.

  3. Sensitivity analysis and nonlinearity assessment of steam cracking furnace process

    NASA Astrophysics Data System (ADS)

    Rosli, M. N.; Sudibyo; Aziz, N.

    2017-11-01

    In this paper, a sensitivity analysis and nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables and to identify the interactions between parameters. The result of the factorial design method is used as a screening tool to reduce the number of parameters and, subsequently, the complexity of the model. It shows that four of the six input parameters are significant. After the screening is completed, step tests are performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.

  4. Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of IMX 101 Components

    DTIC Science & Technology

    2017-05-01

    ERDC/EL TR-17-7, Environmental Security Technology Certification Program (ESTCP), May 2017. Evaluation of Uncertainty in Constituent Input Parameters... Environmental Evaluation and Characterization System (TREECS™) was applied to a groundwater site and a surface water site to evaluate the sensitivity

  5. The second iteration of the Systems Prioritization Method: A systems prioritization and decision-aiding tool for the Waste Isolation Pilot Plant: Volume 2, Summary of technical input and model implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prindle, N.H.; Mendenhall, F.T.; Trauth, K.

    1996-05-01

    The Systems Prioritization Method (SPM) is a decision-aiding tool developed by Sandia National Laboratories (SNL). SPM provides an analytical basis for supporting programmatic decisions for the Waste Isolation Pilot Plant (WIPP) to meet selected portions of the applicable US EPA long-term performance regulations. The first iteration of SPM (SPM-1), the prototype for SPM, was completed in 1994. It served as a benchmark and a test bed for developing the tools needed for the second iteration of SPM (SPM-2). SPM-2, completed in 1995, is intended for programmatic decision making. This is Volume II of the three-volume final report of the second iteration of the SPM. It describes the technical input and model implementation for SPM-2, and presents the SPM-2 technical baseline and the activities, activity outcomes, outcome probabilities, and the input parameters for SPM-2 analysis.

  6. National assessment of geologic carbon dioxide storage resources: data

    USGS Publications Warehouse

    2013-01-01

    In 2012, the U.S. Geological Survey (USGS) completed the national assessment of geologic carbon dioxide storage resources. Its data and results are reported in three publications: the assessment data publication (this report), the assessment results publication (U.S. Geological Survey Geologic Carbon Dioxide Storage Resources Assessment Team, 2013a, USGS Circular 1386), and the assessment summary publication (U.S. Geological Survey Geologic Carbon Dioxide Storage Resources Assessment Team, 2013b, USGS Fact Sheet 2013–3020). This data publication supports the results publication and contains (1) individual storage assessment unit (SAU) input data forms with all input parameters and details on the allocation of the SAU surface land area by State and general land-ownership category; (2) figures representing the distribution of all storage classes for each SAU; (3) a table containing most input data and assessment result values for each SAU; and (4) a pairwise correlation matrix specifying geological and methodological dependencies between SAUs that are needed for aggregation of results.

  7. The impact of 14nm photomask variability and uncertainty on computational lithography solutions

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Tejnil, Edita; Buck, Peter D.; Schulze, Steffen; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian

    2013-09-01

    Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. Many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine via simulation the impact of errors in the representation of photomask properties, including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, and changes in the other variables are postulated, highlighting the need for improved metrology and communication between mask and OPC model experts. The simulations ignore the wafer photoresist model and show the sensitivity of predictions to the various mask-related model inputs. It is shown that the wafer simulations depend strongly on the 1D/2D representation of the mask and, for 3D, that the mask sidewall angle is a very sensitive factor influencing simulated wafer CD results.

  8. Simulation models in population breast cancer screening: A systematic review.

    PubMed

    Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H

    2015-08-01

    The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and provide a direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for qualitative assessment was developed which incorporated model type; input parameters; modeling approach; transparency of input data sources/assumptions; sensitivity analyses and risk of bias; validation; and outcomes. Predicted mortality reduction (MR) and cost-effectiveness (CE) were compared to estimates from meta-analyses of randomized controlled trials (RCTs) and acceptability thresholds. Seven original simulation models were distinguished, all sharing common input parameters. The modeling approach was based on tumor progression (except in one model), with internal and cross validation of the resulting models but without any external validation. Differences in lead times for invasive or non-invasive tumors, and the option for cancers not to progress, were not explicitly modeled. The models tended to overestimate the MR due to screening (11-24%) compared with the 10% MR (95% CI: -2-21%) from optimal RCTs. Only recently have potential harms due to regular breast cancer screening been reported. Most scenarios resulted in acceptable cost-effectiveness estimates given current thresholds. The selected models have been repeatedly applied in various settings to inform decision making, and the critical analysis revealed a high risk of bias in their outcomes. Given the importance of the models, there is a need for externally validated models which use systematic evidence for input data to allow for more critical evaluation of breast cancer screening.

  9. Piloted Parameter Identification Flight Test Maneuvers for Closed Loop Modeling of the F-18 High Alpha Research Vehicle (HARV)

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1996-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the NASA 1A control law. Each maneuver is to be realized by the pilot applying square wave inputs to specific pilot station controls. Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.

  10. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.

  11. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE PAGES

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    2017-10-31

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.

  12. Preliminary investigation of the effects of eruption source parameters on volcanic ash transport and dispersion modeling using HYSPLIT

    NASA Astrophysics Data System (ADS)

    Stunder, B.

    2009-12-01

    Because of the hazardous nature of volcanic ash, atmospheric transport and dispersion (ATD) models are used in real time at Volcanic Ash Advisory Centers to predict the location of airborne volcanic ash at a future time. Transport and dispersion models usually do not include eruption column physics but start with an idealized eruption column. Eruption source parameters (ESP) input to the models typically include column top, eruption start time and duration, volcano latitude and longitude, ash particle size distribution, and total mass emission. An example based on the Okmok, Alaska, eruption of July 12-14, 2008, was used to qualitatively estimate the effect of various model inputs on transport and dispersion simulations using the NOAA HYSPLIT model. Variations included changing the ash column top and bottom, eruption start time and duration, and particle size specifications; simulations with and without gravitational settling; and the effect of different meteorological model data. Graphical ATD model output of ash concentration from the various runs was qualitatively compared. Some parameters, such as eruption duration and ash column depth, had a large effect, while simulations using only small particles or changing the particle shape factor had much less of an effect. Some other variations, such as using only large particles, had a small effect for the first day or so after the eruption and a larger effect on subsequent days. Example probabilistic output will be shown for an ensemble of dispersion model runs with various model inputs. Model output such as this may be useful as a means to account for some of the uncertainties in the model input. To improve volcanic ash ATD models, a reference database for volcanic eruptions is needed, covering many volcanoes. The database should include three major components: (1) eruption source, (2) ash observations, and (3) meteorological analyses. In addition, information on aggregation or other ash particle transformation processes would be useful.

  13. Searching for New Physics via CP Violation in B → ππ

    NASA Astrophysics Data System (ADS)

    London, David; Sinha, Nita; Sinha, Rahul

    We show how B → ππ decays can be used to search for new physics in the b → d flavour-changing neutral current. One needs one piece of theoretical input, which we take to be a prediction for P/T, the ratio of the penguin and tree amplitudes in B_d^0 → π+π−. If present, new physics can be detected over most of the parameter space. If α (φ2) can be obtained independently, measurements of B+ → π+π0 and B_d^0/B̄_d^0 → π0π0 are not even needed.

  14. A Sensitivity Analysis of fMRI Balloon Model.

    PubMed

    Zayane, Chadia; Laleg-Kirati, Taous Meriem

    2015-01-01

    Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurement. Previous studies of the Balloon model have added knowledge in some form, either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need for adding knowledge and choosing certain paradigms, and completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.

  15. Real-air data reduction procedures based on flow parameters measured in the test section of supersonic and hypersonic facilities

    NASA Technical Reports Server (NTRS)

    Miller, C. G., III; Wilder, S. E.

    1972-01-01

    Data-reduction procedures for determining free-stream and post-normal-shock kinetic and thermodynamic quantities are derived. These procedures are applicable to imperfect real air flows in thermochemical equilibrium for temperatures to 15 000 K and a range of pressures from 0.25 N/sq m to 1 GN/sq m. Although derived primarily to meet the immediate needs of the 6-inch expansion tube, these procedures are applicable to any supersonic or hypersonic test facility where combinations of three of the following flow parameters are measured in the test section: (1) stagnation pressure behind the normal shock; (2) free-stream static pressure; (3) stagnation-point heat-transfer rate; (4) free-stream velocity; (5) stagnation density behind the normal shock; and (6) free-stream density. Limitations of the nine procedures and uncertainties in calculated flow quantities corresponding to uncertainties in measured input data are discussed. A listing of the computer program is presented, along with a description of the inputs required and a sample of the data printout.

  16. iGeoT v1.0: Automatic Parameter Estimation for Multicomponent Geothermometry, User's Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spycher, Nicolas; Finsterle, Stefan

    GeoT implements the multicomponent geothermometry method developed by Reed and Spycher [1984] in a stand-alone computer program to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions and the geothermometry computations are all implemented in the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss. This manual contains installation instructions for iGeoT and briefly describes the input formats needed to run iGeoT in Automatic or Expert Mode. An example is also provided to demonstrate the use of iGeoT.

  17. INDES User's guide multistep input design with nonlinear rotorcraft modeling

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The INDES computer program, a multistep input design program used as part of a data processing technique for rotorcraft systems identification, is described. Flight test inputs based on INDES improve the accuracy of parameter estimates. The input design algorithm, program input, and program output are presented.

  18. A mathematical method for quantifying in vivo mechanical behaviour of heel pad under dynamic load.

    PubMed

    Naemi, Roozbeh; Chatzistergos, Panagiotis E; Chockalingam, Nachiappan

    2016-03-01

    The mechanical behaviour of the heel pad, as a shock-attenuating interface during foot strike, determines the loading on the musculoskeletal system during walking. Mathematical models that describe the force-deformation relationship of the heel pad structure can characterize its mechanical behaviour under load. Hence, the purpose of this study was to propose a method for quantifying the heel pad stress-strain relationship using force-deformation data from an indentation test. The energy input and energy returned densities were calculated by numerically integrating the area below the stress-strain curve during loading and unloading, respectively. The elastic energy and energy absorbed densities were calculated as the sum of, and the difference between, the energy input and energy returned densities, respectively. By fitting the energy function derived from a nonlinear viscoelastic model to the energy density-strain data, the elastic and viscous model parameters were quantified. The viscous and elastic exponent model parameters were significantly correlated with maximum strain, indicating the need to perform indentation tests at realistic maximum strains relevant to walking. The proposed method was shown to differentiate between the elastic and viscous components of the heel pad response to loading and to allow quantification of the corresponding stress-strain model parameters.
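
    The energy-density bookkeeping described above reduces to two numerical integrals over the indentation curve. The sketch below, with toy loading and unloading curves standing in for measured data, follows the definitions in the abstract (energy input and returned by integration; elastic and absorbed densities as their sum and difference).

```python
import numpy as np

def trap(y, x):
    # Trapezoidal integration (kept explicit for NumPy-version portability).
    return float(np.sum(np.diff(x) * (y[:-1] + y[1:]) / 2.0))

# Toy stress-strain branches standing in for indentation-test data.
strain = np.linspace(0.0, 0.4, 50)
stress_load = 80e3 * strain**2          # loading branch (Pa)
stress_unload = 60e3 * strain**2        # unloading branch lies below loading

energy_input = trap(stress_load, strain)            # J/m^3, loading
energy_returned = trap(stress_unload, strain)       # J/m^3, unloading
elastic_energy = energy_input + energy_returned     # sum, per the abstract
energy_absorbed = energy_input - energy_returned    # hysteresis (viscous) loss
print(energy_input, energy_returned, elastic_energy, energy_absorbed)
```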

  19. Incorporating uncertainty in RADTRAN 6.0 input files.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John

    Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the parameter shape and the minimum and maximum of the distribution, to sample the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provide installation instructions as well as a description and user guide for the uncertainty engine.
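
    The workflow the abstract describes (sample a distributed input parameter, then emit one batch input file per realization) can be sketched generically. Note that the keyword and file naming below are invented for illustration; they are not actual RADTRAN or MELCOR Uncertainty Engine syntax.

```python
import random

# Generic sketch: draw samples from a triangular distribution over one input
# parameter and write a batch of input files, one per realization. The
# template line is hypothetical, not real RADTRAN input syntax.
template = "DOSE_RATE {value:.4f}\n"

random.seed(42)
for i in range(100):
    value = random.triangular(0.05, 0.15, 0.10)   # min, max, mode (assumed)
    with open(f"case_{i:03d}.inp", "w") as f:
        f.write(template.format(value=value))
```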

  20. Automated crystallographic system for high-throughput protein structure determination.

    PubMed

    Brunzelle, Joseph S; Shafaee, Padram; Yang, Xiaojing; Weigand, Steve; Ren, Zhong; Anderson, Wayne F

    2003-07-01

    High-throughput structural genomic efforts require software that is highly automated, distributive and requires minimal user intervention to determine protein structures. Preliminary experiments were set up to test whether automated scripts could utilize a minimum set of input parameters and produce a set of initial protein coordinates. From this starting point, a highly distributive system was developed that could determine macromolecular structures at a high throughput rate, warehouse and harvest the associated data. The system uses a web interface to obtain input data and display results. It utilizes a relational database to store the initial data needed to start the structure-determination process as well as generated data. A distributive program interface administers the crystallographic programs which determine protein structures. Using a test set of 19 protein targets, 79% were determined automatically.

  1. Development of V/STOL methodology based on a higher order panel method

    NASA Technical Reports Server (NTRS)

    Bhateley, I. C.; Howell, G. A.; Mann, H. W.

    1983-01-01

    The development of a computational technique to predict the complex flowfields of V/STOL aircraft was initiated, in which a number of modules and a potential flow aerodynamic code were combined in a comprehensive computer program. The modules were developed in a building-block approach to assist the user in preparing the geometric input and to compute parameters needed to simulate certain flow phenomena that cannot be handled directly within a potential flow code. The PAN AIR aerodynamic code, a higher-order panel method, forms the nucleus of this program. PAN AIR's extensive capability for generalized boundary conditions allows the modules to interact with the aerodynamic code through the input and output files, thereby requiring no changes to the basic code and permitting easy replacement of updated modules.

  2. Hubert: Software for efficient analysis of in-situ nuclear forward scattering experiments

    NASA Astrophysics Data System (ADS)

    Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel

    2016-10-01

    The combination of short data acquisition times and local investigation of a solid state through hyperfine parameters makes nuclear forward scattering (NFS) a unique experimental technique for the investigation of fast processes. However, the total number of acquired NFS time spectra may be very high, so an efficient way of evaluating the data is needed. In this paper we report the development of the Hubert software package as a response to the rapidly developing field of in-situ NFS experiments. Hubert offers several useful features for processing data files and can significantly shorten the evaluation time by using a simple connection between neighboring time spectra through their input and output parameter values.

  3. Electronic Structure and Transport in Magnetic Multilayers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2008-02-18

    ORNL assisted Seagate Recording Heads Operations in the development of CIP spin valves for application as read sensors in hard disk drives. Personnel at ORNL were W. H. Butler and Xiaoguang Zhang. Dr. Olle Heinonen from Seagate RHO also participated. ORNL provided codes and materials parameters that were used by Seagate to model CIP GMR in their heads. The objectives were to: (1) develop a linearized Boltzmann transport code for describing CIP GMR based on realistic models of the band structure and interfaces in materials in CIP spin valves in disk drive heads; (2) calculate the materials parameters needed as inputs to the Boltzmann code; and (3) transfer the technology to Seagate Recording Heads.

  4. The Use of the Nelder-Mead Method in Determining Projection Parameters for Globe Photographs

    NASA Astrophysics Data System (ADS)

    Gede, M.

    2009-04-01

    A photo of a terrestrial or celestial globe can be handled as a map. The only hard issue is its projection: the so-called Tilted Perspective Projection, which, if the optical axis of the photo intersects the globe's centre, simplifies to the Vertical Near-Side Perspective Projection. When georeferencing such a photo, the exact parameters of the projection are also needed. These parameters depend on the position of the viewpoint of the camera. Several hundred globe photos had to be georeferenced during the Virtual Globes Museum project, which made it necessary to automate the calculation of the projection parameters. The author developed a program for this task which uses the Nelder-Mead method to find the optimum parameters when a set of control points is given as input. The Nelder-Mead method is a numerical algorithm for minimizing a function in a many-dimensional space. The function in the present application is the average error of the control points calculated from the actual parameter values. The parameters are the geographical coordinates of the projection centre, the image coordinates of the same point, the rotation of the projection, the height of the perspective point and the scale of the photo (calculated in pixels/km). The program reads Global Mapper's Ground Control Point (.GCP) file format as input and creates projection description files (.PRJ) for the same software. The initial values of the geographical coordinates of the projection centre are calculated as the average of the control points, while the other parameters are set to experimental values which represent the most common circumstances of taking a globe photograph. The algorithm runs until the change in the parameters sinks below a pre-defined limit. The minimum search can be refined by using the previous result parameter set as new initial values. This paper introduces the calculation mechanism and examples of its usage. Other possible usages of the method are also discussed.
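
    The fitting loop lends itself to a compact sketch with SciPy's Nelder-Mead implementation. The projection below follows the standard Vertical (Near-Side) Perspective formulas; the control points and initial values are invented, and reading actual .GCP files is omitted.

```python
import numpy as np
from scipy.optimize import minimize

# Vertical Near-Side Perspective projection (standard formulas), followed by
# photo rotation, scale, and offset. Parameter vector p mirrors the abstract:
# projection-centre lon/lat, its image coordinates, rotation, height, scale.
def project(lon, lat, p):
    lon0, lat0, x0, y0, rot, P, scale = p    # P = 1 + height/R (assumed form)
    lon, lat, lon0, lat0 = map(np.radians, (lon, lat, lon0, lat0))
    cosc = np.sin(lat0)*np.sin(lat) + np.cos(lat0)*np.cos(lat)*np.cos(lon - lon0)
    kp = (P - 1.0) / (P - cosc)              # perspective scaling factor
    x = kp * np.cos(lat) * np.sin(lon - lon0)
    y = kp * (np.cos(lat0)*np.sin(lat) - np.sin(lat0)*np.cos(lat)*np.cos(lon - lon0))
    c, s = np.cos(rot), np.sin(rot)
    return x0 + scale*(c*x - s*y), y0 + scale*(s*x + c*y)

def mean_error(p, lons, lats, px, py):
    X, Y = project(lons, lats, p)
    return np.mean(np.hypot(X - px, Y - py))  # average control-point error

# Invented control points (lon, lat, pixel_x, pixel_y), as if read from .GCP.
gcp = np.array([[ 10.0, 45.0, 612.0, 430.0],
                [-20.0, 30.0, 380.0, 520.0],
                [ 25.0, 60.0, 700.0, 300.0],
                [  0.0, 50.0, 540.0, 400.0]])
lons, lats, px, py = gcp.T
p0 = [lons.mean(), lats.mean(), px.mean(), py.mean(), 0.0, 6.0, 400.0]
res = minimize(mean_error, p0, args=(lons, lats, px, py), method='Nelder-Mead')
print(res.x, res.fun)   # refine by restarting from res.x if needed
```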

  5. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

    The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters and inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on (1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and (2) how well the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
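
    For readers who want to reproduce this kind of comparison, the SALib package exposes both methods. The sketch below assumes SALib's saltelli/sobol and fast_sampler/fast modules; the three named parameters and the toy response function are stand-ins for SAC-SMA and its forcings.

```python
import numpy as np
from SALib.sample import saltelli, fast_sampler
from SALib.analyze import sobol, fast

# Sketch of a Sobol' vs. FAST comparison with SALib. The parameter names and
# the toy response function are stand-ins for SAC-SMA and its forcings.
problem = {
    'num_vars': 3,
    'names': ['UZTWM', 'LZTWM', 'PCTIM'],
    'bounds': [[10.0, 300.0], [10.0, 500.0], [0.0, 0.1]],
}

def model(X):
    # Toy response with an interaction term, so total-order indices differ.
    return X[:, 0] + 2.0 * X[:, 1] + 50.0 * X[:, 0] * X[:, 2]

Si_sobol = sobol.analyze(problem, model(saltelli.sample(problem, 1024)))
Si_fast = fast.analyze(problem, model(fast_sampler.sample(problem, 1025)))

# Agreement check used in the study: do the methods rank parameters alike?
print(np.argsort(Si_sobol['ST'])[::-1])
print(np.argsort(Si_fast['ST'])[::-1])
```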

  6. Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.

    2015-12-01

    For computationally expensive climate models, Monte Carlo approaches to exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters, non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction, with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
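
    The core idea (a sparse polynomial chaos expansion recovered from relatively few model runs) can be sketched with an off-the-shelf sparse regression standing in for the paper's Weighted Iterative Bayesian Compressive Sensing. Everything below (dimension, degree, toy model) is illustrative.

```python
import itertools
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lasso

# Sparse PC surrogate sketch: Lasso regression substitutes here for the
# paper's weighted iterative Bayesian compressive sensing. Toy dimensions.
rng = np.random.default_rng(0)
dim, degree, n_runs = 5, 3, 60
alphas = [a for a in itertools.product(range(degree + 1), repeat=dim)
          if sum(a) <= degree]               # total-degree multi-index set

def pc_design(X):
    # One column per multivariate probabilists'-Hermite basis term He_alpha.
    cols = []
    for alpha in alphas:
        term = np.ones(len(X))
        for j, d in enumerate(alpha):
            term *= hermeval(X[:, j], [0.0] * d + [1.0])   # He_d(x_j)
        cols.append(term)
    return np.column_stack(cols)

X = rng.standard_normal((n_runs, dim))                # ensemble of "model runs"
y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2]     # toy output QoI

fit = Lasso(alpha=1e-3, max_iter=100000).fit(pc_design(X), y)
active = np.sum(np.abs(fit.coef_) > 1e-6)
print(f"{active} active terms out of {len(alphas)} basis terms")
```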

  7. Stability, Consistency and Performance of Distribution Entropy in Analysing Short Length Heart Rate Variability (HRV) Signal.

    PubMed

    Karmakar, Chandan; Udhayakumar, Radhagayathri K; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu

    2017-01-01

    Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length heart rate variability (HRV) signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing the input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn, and its performance in distinguishing physiological (elderly from young) and pathological (healthy from arrhythmia) conditions, with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and best performance among the reported complexity measures in differentiating physiological and pathological conditions across input parameter choices. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series.
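
    For context, a compact implementation of DistEn under its usual definition (Chebyshev distances between m-dimensional embedded vectors, histogrammed into M bins, normalized Shannon entropy) is sketched below; the parameter defaults and test signals are illustrative.

```python
import numpy as np

# DistEn under its usual definition: embed with dimension m, histogram all
# pairwise Chebyshev distances into M bins, take normalized Shannon entropy.
def dist_en(x, m=2, M=512):
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1
    emb = np.column_stack([x[i:i + n] for i in range(m)])    # embedded vectors
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    d = d[np.triu_indices(n, k=1)]                           # distinct pairs
    counts, _ = np.histogram(d, bins=M)
    p = counts[counts > 0] / counts.sum()                    # empirical pdf
    return -np.sum(p * np.log2(p)) / np.log2(M)              # in [0, 1]

rng = np.random.default_rng(1)
print(dist_en(rng.standard_normal(300)))      # irregular signal: higher value
print(dist_en(np.sin(np.arange(300) / 5.0)))  # regular signal: lower value
```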

  8. Development and Characterization of a Rate-Dependent Three-Dimensional Macroscopic Plasticity Model Suitable for Use in Composite Impact Problems

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther

    2015-01-01

    Several key capabilities have been identified by the aerospace community as lacking in the material models for composite materials currently available within commercial transient dynamic finite element codes such as LS-DYNA. Some of the specific desired features that have been identified include the incorporation of both plasticity and damage within the material model, the capability of using the material model to analyze the response of both three-dimensional solid elements and two-dimensional shell elements, and the ability to simulate the response of composites with a variety of architectures, including laminates, weaves and braids. In addition, a need has been expressed for a material model that utilizes tabulated, experimentally based input to define the evolution of plasticity and damage, as opposed to utilizing discrete input parameters (such as modulus and strength) and analytical functions based on curve fitting. To begin to address these needs, an orthotropic macroscopic plasticity-based model suitable for implementation within LS-DYNA has been developed. Specifically, the Tsai-Wu composite failure model has been generalized and extended to a strain-hardening-based orthotropic plasticity model with a non-associative flow rule. The coefficients in the yield function are determined based on tabulated stress-strain curves in the various normal and shear directions, along with selected off-axis curves. Rate dependence is incorporated into the yield function by using a series of tabulated input curves, each at a different constant strain rate. The non-associative flow rule is used to compute the evolution of the effective plastic strain. Systematic procedures have been developed to determine the values of the various coefficients in the yield function and the flow rule based on the tabulated input data. An algorithm based on the radial return method has been developed to facilitate the numerical implementation of the material model. This paper presents in detail the development of the orthotropic plasticity model and the procedures used to obtain the required material parameters. Methods in which a combination of actual testing and selective numerical testing can yield the appropriate input data for the model are described. A specific laminated polymer matrix composite is examined to demonstrate the application of the model.

  9. Rapid Debris Analysis Project Task 3 Final Report - Sensitivity of Fallout to Source Parameters, Near-Detonation Environment Material Properties, Topography, and Meteorology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldstein, Peter

    2014-01-24

    This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.

  10. Towards systematic evaluation of crop model outputs for global land-use models

    NASA Astrophysics Data System (ADS)

    Leclere, David; Azevedo, Ligia B.; Skalský, Rastislav; Balkovič, Juraj; Havlík, Petr

    2016-04-01

    Land provides vital socioeconomic resources to society, however at the cost of large environmental degradation. Global integrated models combining high-resolution global gridded crop models (GGCMs) and global economic models (GEMs) are increasingly being used to inform sustainable solutions for agricultural land use. However, little effort has yet been made to evaluate and compare the accuracy of GGCM outputs. In addition, GGCM datasets require a large number of parameters whose values and variability across space are weakly constrained, and increasing the accuracy of such datasets has a very high computing cost. Innovative evaluation methods are required both to lend credibility to the global integrated models and to allow efficient parameter specification of GGCMs. We propose an evaluation strategy for GGCM datasets from the perspective of use in GEMs, illustrated with preliminary results from a novel dataset (the Hypercube) generated by the EPIC GGCM and used in the GLOBIOM land-use GEM to inform on present-day crop yield and water and nutrient input needs for 16 crops x 15 management intensities, at a spatial resolution of 5 arc-minutes. We adopt the following principle: evaluation should provide a transparent diagnosis of model adequacy for its intended use. We briefly describe how the Hypercube data are generated and how they articulate with GLOBIOM in order to transparently identify the performances to be evaluated, as well as the main assumptions and data processing involved. Expected performances include adequately representing the sub-national heterogeneity in crop yield and input needs: (i) in space, (ii) across crop species, and (iii) across management intensities. We will present and discuss measures of these expected performances and weigh the relative contributions of the crop model, input data and data processing steps to performance. We will also compare the obtained yield gaps and main yield-limiting factors against the M3 dataset. Next steps include iterative improvement of parameter assumptions and evaluation of the implications of GGCM performance for the intended use in the IIASA EPIC-GLOBIOM model cluster. Our approach helps target future efforts at improving GGCM accuracy and would achieve the highest efficiency if combined with traditional field-scale evaluation and sensitivity analysis.

  11. The dynamics of integrate-and-fire: mean versus variance modulations and dependence on baseline parameters.

    PubMed

    Pressley, Joanna; Troyer, Todd W

    2011-05-01

    The leaky integrate-and-fire (LIF) is the simplest neuron model that captures the essential properties of neuronal signaling. Yet common intuitions are inadequate to explain basic properties of LIF responses to sinusoidal modulations of the input. Here we examine responses to low and moderate frequency modulations of both the mean and variance of the input current and quantify how these responses depend on baseline parameters. Across parameters, responses to modulations in the mean current are low pass, approaching zero in the limit of high frequencies. For very low baseline firing rates, the response cutoff frequency matches that expected from membrane integration. However, the cutoff shows a rapid, supralinear increase with firing rate, with a steeper increase in the case of lower noise. For modulations of the input variance, the gain at high frequency remains finite. Here, we show that the low-frequency responses depend strongly on baseline parameters and derive an analytic condition specifying the parameters at which responses switch from being dominated by low versus high frequencies. Additionally, we show that the resonant responses for variance modulations have properties not expected for common oscillatory resonances: they peak at frequencies higher than the baseline firing rate and persist when oscillatory spiking is disrupted by high noise. Finally, the responses to mean and variance modulations are shown to have a complementary dependence on baseline parameters at higher frequencies, resulting in responses to modulations of Poisson input rates that are independent of baseline input statistics.
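
    A minimal simulation makes the setup concrete: an LIF neuron driven by a sinusoidally modulated mean current plus white noise, integrated with Euler-Maruyama. All parameter values below are illustrative assumptions, not those of the paper.

```python
import numpy as np

# Euler-Maruyama simulation of a leaky integrate-and-fire neuron with a
# sinusoidally modulated mean input. All values are illustrative assumptions.
rng = np.random.default_rng(0)
tau, v_thresh, v_reset = 0.020, 1.0, 0.0        # s; dimensionless voltage
dt, T = 1e-4, 5.0                               # time step and duration (s)
mu0, mu1, f_mod, sigma = 1.1, 0.1, 10.0, 0.5    # baseline, depth, Hz, noise

t = np.arange(0.0, T, dt)
mu = mu0 + mu1 * np.sin(2 * np.pi * f_mod * t)  # modulated mean drive
v, spikes = 0.0, []
for i in range(t.size):
    v += (dt / tau) * (mu[i] - v) + sigma * np.sqrt(dt / tau) * rng.standard_normal()
    if v >= v_thresh:                           # threshold crossing: spike
        spikes.append(t[i])
        v = v_reset                             # reset after the spike

print(f"{len(spikes) / T:.1f} spikes/s under modulation")
```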

  12. Generalized compliant motion primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G. (Inventor)

    1994-01-01

    This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, which produce telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return-spring motion input is provided by a restoration spring subsystem. The novel features of this invention include the use of a single general motion primitive at a remote site to permit shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.

  13. Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters

    NASA Astrophysics Data System (ADS)

    Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei

    2018-05-01

    In this paper, we address the problem of adaptive bounded control for the trajectory tracking of a Quadrotor Aerial Vehicle (QAV) while the input saturations and uncertain parameters with the known bounds are simultaneously taken into account. First, to deal with the underactuated property of the QAV model, we decouple and construct the QAV model as a cascaded structure which consists of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of the smooth saturation function and smooth projection operator in the control design. Third, to ensure the stability of the overall system of the QAV, we develop the technique for the cascaded system in the presence of both the input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures the asymptotic convergence of the tracking errors to the origin. The simulation results are provided to illustrate the effectiveness of the proposed method.

  14. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradability waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to the single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weighted averaging of methane generation parameters from waste composition data in single-phase models was effective, predicting cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction, because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC), which indicates that decreasing the uncertainty of the input parameters will make the model more accurate, rather than adding multiple phases or input parameters.
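
    The translation exercise is easy to mirror in a few lines: compute mass-weighted average L0 and k from a multiphase composition, then compare single-phase and multiphase cumulative FOD methane. The component values below are invented for illustration.

```python
import numpy as np

# Translate a three-component multiphase composition into weighted-average
# single-phase FOD inputs and compare cumulative methane. Values invented.
mass = np.array([600.0, 300.0, 100.0])   # Mg: e.g., food, paper, wood
L0 = np.array([100.0, 180.0, 60.0])      # m^3 CH4 / Mg, per component
k = np.array([0.30, 0.06, 0.02])         # 1/yr, per component

w = mass / mass.sum()                    # mass fractions
L0_avg, k_avg = w @ L0, w @ k            # weighted-average single-phase inputs

t = np.arange(0, 51)                     # years after a single deposit
single = mass.sum() * L0_avg * (1 - np.exp(-k_avg * t))
multi = (mass * L0 * (1 - np.exp(-np.outer(t, k)))).sum(axis=1)
print(f"relative difference at year 50: {(single[-1] - multi[-1]) / multi[-1]:+.1%}")
```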

  15. Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.

    2014-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) provides significant information in that it provides an estimation of the spread or uncertainty in CME arrival-time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival-time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival-time predictions for 14 runs (half). The average arrival-time prediction was computed for each of the 28 ensembles predicting hits, and using the actual arrival times an average absolute error of 10.0 hours (RMSE = 11.4 hours) was found across all 28 ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival-time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling system was used to complete a parametric case study of the sensitivity of the CME arrival-time prediction to free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
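
    The verification arithmetic reported here (hit-range checks, and the mean absolute error and RMSE of the ensemble-average prediction) is straightforward to reproduce; the sketch below uses two invented events for illustration.

```python
import numpy as np

# Hit-range check and error scores for ensemble CME arrival predictions.
# Two invented events; each array holds one ensemble's predicted arrivals (h).
predicted = [np.array([33.0, 36.5, 41.0]),
             np.array([50.0, 55.0, 63.0])]
observed = np.array([38.0, 70.0])        # actual arrival times (h)

within = [p.min() <= o <= p.max() for p, o in zip(predicted, observed)]
errors = np.array([p.mean() - o for p, o in zip(predicted, observed)])
print(sum(within), "of", len(observed), "arrivals inside the ensemble range")
print(f"MAE = {np.abs(errors).mean():.1f} h, RMSE = {np.sqrt((errors**2).mean()):.1f} h")
```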

  16. Measurand transient signal suppressor

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1994-01-01

    A transient signal suppressor is presented for use in a control system which is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values and is sustained for a selected discrete time interval. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times, producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal which is sustained beyond the selected time interval.
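
    The suppressor's core behavior is a debounce: assert the output only when the input stays beyond the threshold, in the selected direction, longer than the selected suppression time. A sampled-signal sketch (all names and values invented) follows.

```python
# Sketch of the suppressor's debounce logic: drive the output only when the
# input stays beyond the threshold, in the selected direction, for longer
# than the selected suppression time. Sample-based rendering for illustration.
def suppressor(samples, dt, threshold, increasing=True, suppression_time=0.5):
    held = 0.0
    out = []
    for v in samples:
        crossed = v > threshold if increasing else v < threshold
        held = held + dt if crossed else 0.0     # time spent past threshold
        out.append(held > suppression_time)      # drive only if sustained
    return out

# A 0.3 s spike is suppressed; a sustained excursion eventually drives output.
sig = [0.0]*10 + [2.0]*3 + [0.0]*10 + [2.0]*10
print(suppressor(sig, dt=0.1, threshold=1.0, suppression_time=0.5))
```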

  17. Net anthropogenic nitrogen inputs and nitrogen fluxes from Indian watersheds: An initial assessment

    NASA Astrophysics Data System (ADS)

    Swaney, D. P.; Hong, B.; Paneer Selvam, A.; Howarth, R. W.; Ramesh, R.; Purvaja, R.

    2015-01-01

    In this paper, we apply an established methodology for estimating Net Anthropogenic Nitrogen Inputs (NANI) to India and its major watersheds. Our primary goal here is to provide initial estimates of major nitrogen inputs of NANI for India, at the country level and for major Indian watersheds, including data sources and parameter estimates, making some assumptions as needed in areas of limited data availability. Despite data limitations, we believe that it is clear that the main anthropogenic N source is agricultural fertilizer, which is being produced and applied at a growing rate, followed by N fixation associated with rice, leguminous crops, and sugar cane. While India appears to be a net exporter of N in food/feed as reported elsewhere (Lassaletta et al., 2013b), the balance of N associated with exports and imports of protein in food and feedstuffs is sensitive to protein content and somewhat uncertain. While correlating watershed N inputs with riverine N fluxes is problematic due in part to limited available riverine data, we have assembled some data for comparative purposes. We also suggest possible improvements in methods for future studies, and the potential for estimating riverine N fluxes to coastal waters.

  18. ACME Priority Metrics (A-PRIME)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Katherine J; Zender, Charlie; Van Roekel, Luke

    A-PRIME is a collection of scripts designed to provide Accelerated Climate Model for Energy (ACME) model developers and analysts with a variety of analyses of the model needed to determine whether the model is producing the desired results, depending on the goals of the simulation. The software is based on csh scripts at the top level that enable scientists to provide the input parameters. Within the scripts, the csh code calls routines that perform the postprocessing of the raw data and create plots for visual assessment.

  19. Experimental and numerical study on optimization of the single point incremental forming of AINSI 304L stainless steel sheet

    NASA Astrophysics Data System (ADS)

    Saidi, B.; Giraud-Moreau, L.; Cherouat, A.; Nasri, R.

    2017-09-01

    AISI 304L stainless steel sheets are commonly formed into a variety of shapes for applications in the industrial, architectural, transportation and automobile fields; they are also used in the manufacture of denture bases. In the field of dentistry, there is a need for personalized devices that are custom made for the patient. The single point incremental forming process is highly promising in this area for the manufacture of denture bases. Single point incremental forming is an emerging process based on the use of a spherical tool, which is moved along a CNC-controlled tool path. One of the major advantages of this process is the ability to program several punch trajectories on the same machine in order to obtain different shapes. Several applications of this process exist in the medical field for the manufacture of personalized titanium prostheses (cranial plates, knee prostheses...) due to the need to customize the product to each patient. The objective of this paper is to study the incremental forming of AISI 304L stainless steel sheets for future applications in the dentistry field. During the incremental forming process, considerable forces can occur. Control of the forming force is particularly important to ensure the safe use of the CNC milling machine and to preserve the tooling and machinery. In this paper, the effect of four different process parameters on the maximum force is studied. The proposed approach consists of using an experimental design based on experimental results. An analysis of variance (ANOVA) was conducted to find the input parameters that minimize the maximum forming force. A numerical simulation of the incremental forming process is performed with the optimal input process parameters. Numerical results are compared with the experimental ones.
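    The screening step described above can be reproduced with a standard factorial ANOVA. A hedged sketch using statsmodels, assuming a hypothetical results table spif_doe_results.csv with four factor columns and the measured maximum force:

      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical design-of-experiments results: four process factors and
      # the measured maximum forming force for each run.
      df = pd.read_csv("spif_doe_results.csv")

      model = smf.ols("max_force ~ C(tool_diameter) + C(step_depth)"
                      " + C(feed_rate) + C(thickness)", data=df).fit()
      print(sm.stats.anova_lm(model, typ=2))  # factor-by-factor variance table

    The factor level combination with the smallest fitted maximum force then serves as the optimal input set for the subsequent simulation.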

  20. Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.

    PubMed

    Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew

    2014-12-26

    Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.
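    For orientation, the two mean-field models can be sketched in standard notation for a symmetric 1:1 electrolyte of bulk concentration c_b and ion charge q; the size-modified ion distribution shown is the common lattice-gas (uniform ion size a) variant, which may differ in detail from the SMPBE formulation evaluated in this paper:

      \nabla \cdot \left[ \epsilon(\mathbf{r}) \, \nabla \phi \right]
          = -\rho_f(\mathbf{r}) + 2 q c_b \sinh\!\left( \frac{q \phi}{k_B T} \right)

      c_{\pm}(\mathbf{r}) = \frac{c_b \, e^{\mp q \phi / k_B T}}
          {1 - \phi_0 + \phi_0 \cosh\!\left( q \phi / k_B T \right)},
      \qquad \phi_0 = 2 a^3 c_b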

  1. CosmoSIS: Modular cosmological parameter estimation

    DOE PAGES

    Zuntz, J.; Paterno, M.; Jennings, E.; ...

    2015-06-09

    Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. Here we present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. Lastly, we illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis
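    The modularity argument is concrete: each module declares its inputs and outputs against a shared data block and can be swapped freely. A minimal sketch of such an interface (hypothetical module names and toy numbers; not the actual CosmoSIS API):

      from typing import Callable, Dict, List

      DataBlock = Dict[str, float]   # shared state handed from module to module

      def run_pipeline(modules: List[Callable[[DataBlock], DataBlock]],
                       params: DataBlock) -> DataBlock:
          """Run interchangeable modules in order; each reads the inputs it
          declares from the block and writes its outputs back into it."""
          block = dict(params)
          for module in modules:
              block.update(module(block))
          return block

      def background(block):
          # stands in for a CAMB-like module computing a distance from omega_m
          return {"comoving_distance": 2997.9 / block["omega_m"] ** 0.5}

      def likelihood(block):
          # stands in for a survey likelihood consuming that distance
          return {"loglike": -0.5 * (block["comoving_distance"] - 5300.0) ** 2}

      result = run_pipeline([background, likelihood], {"omega_m": 0.31})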

  2. A parallel calibration utility for WRF-Hydro on high performance computers

    NASA Astrophysics Data System (ADS)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    Successful modeling of complex hydrological processes comprises establishing an integrated hydrological model which simulates the hydrological processes in each water regime, calibrating and validating the model performance against observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files — GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL — and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. A parameter calibration tool built specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool based on the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPC systems under the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivities and uncertainties are analyzed using the customized PEST tool we developed.

  3. Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX

    DTIC Science & Technology

    2015-07-01

    exercise was to evaluate the importance of chemical-specific model input parameters, the impacts of their uncertainty, and the potential benefits of... chemical-specific inputs for RDX that were determined to be sensitive with relatively high uncertainty: these included the soil-water linear... Koc for organic chemicals. The EFS values provided for log Koc of RDX were 1.72 and 1.95. OBJECTIVE: TREECS™ (http://el.erdc.usace.army.mil/treecs

  4. A Design of Experiments Approach Defining the Relationships Between Processing and Microstructure for Ti-6Al-4V

    NASA Technical Reports Server (NTRS)

    Wallace, Terryl A.; Bey, Kim S.; Taminger, Karen M. B.; Hafley, Robert A.

    2004-01-01

    A study was conducted to evaluate the relative significance of input parameters on Ti-6Al-4V deposits produced by an electron beam free form fabrication process under development at the NASA Langley Research Center. Five input parameters were chosen (beam voltage, beam current, translation speed, wire feed rate, and beam focus), and a design of experiments (DOE) approach was used to develop a set of 16 experiments to evaluate the relative importance of these parameters on the resulting deposits. Both single-bead and multi-bead stacks were fabricated using the 16 combinations, and the resulting heights and widths of the stack deposits were measured. The resulting microstructures were also characterized to determine the impact of these parameters on the size of the melt pool and heat affected zone. The relative importance of each input parameter on the height and width of the multi-bead stacks is discussed.
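    A 16-run screening design for five two-level factors corresponds to a 2^(5-1) fractional factorial. A sketch of generating such a design, assuming the defining relation E = ABCD and the factor names from the study:

      from itertools import product

      factors = ["voltage", "current", "translation_speed",
                 "wire_feed_rate", "beam_focus"]

      # Full 2^4 factorial in the first four factors; the fifth column is the
      # product of the others (defining relation E = ABCD), giving 16 runs
      # that can screen the main effects of all five parameters.
      runs = []
      for a, b, c, d in product([-1, 1], repeat=4):
          runs.append(dict(zip(factors, [a, b, c, d, a * b * c * d])))

      for i, run in enumerate(runs, 1):
          print(i, run)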

  5. Next-Generation Lightweight Mirror Modeling Software

    NASA Technical Reports Server (NTRS)

    Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, Phil

    2013-01-01

    The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process, and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper introduces model generation software developed under NASA sponsorship for the design of both terrestrial and space based mirrors. The software deals with any current mirror manufacturing technique, single substrates, multiple arrays of substrates, as well as the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files that can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS, and in NASTRAN GRIDPOINT SETS. This makes integration of these models into large telescope or satellite models possible.

  6. Next Generation Lightweight Mirror Modeling Software

    NASA Technical Reports Server (NTRS)

    Arnold, William; Fitzgerald, Matthew; Stahl, Philip

    2013-01-01

    The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process, and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper introduces model generation software developed under NASA sponsorship for the design of both terrestrial and space based mirrors. The software deals with any current mirror manufacturing technique, single substrates, multiple arrays of substrates, as well as the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files that can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS, and in NASTRAN GRIDPOINT SETS. This makes integration of these models into large telescope or satellite models possible.

  7. Next Generation Lightweight Mirror Modeling Software

    NASA Technical Reports Server (NTRS)

    Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, H. Philip

    2013-01-01

    The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process, and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper introduces model generation software developed under NASA sponsorship for the design of both terrestrial and space based mirrors. The software deals with any current mirror manufacturing technique, single substrates, multiple arrays of substrates, as well as the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files that can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS, and in NASTRAN GRIDPOINT SETS. This makes integration of these models into large telescope or satellite models easier.

  8. On the physical properties of volcanic rock masses

    NASA Astrophysics Data System (ADS)

    Heap, M. J.; Villeneuve, M.; Ball, J. L.; Got, J. L.

    2017-12-01

    The physical properties (e.g., elastic properties, porosity, permeability, cohesion, strength, amongst others) of volcanic rocks are crucial input parameters for modelling volcanic processes. These parameters, however, are often poorly constrained, and there is an apparent disconnect between modellers and those who measure/determine rock and rock mass properties. Although it is well known that laboratory measurements are scale dependent, experimentalists, field volcanologists, and modellers should work together to provide the most appropriate model input parameters. Our multidisciplinary approach consists of (1) discussion with modellers to better understand their needs, (2) using experimental know-how to build an extensive database of volcanic rock properties, and (3) using geotechnical and field-based volcanological know-how to address scaling issues. For instance, increasing the lengthscale of interest from the laboratory-scale to the volcano-scale will reduce the elastic modulus and strength and increase permeability, but to what extent? How variable are the physical properties of volcanic rocks, and is it appropriate to assume constant, isotropic, and/or homogeneous values for volcanoes? How do alteration, depth, and temperature influence rock physical and mechanical properties? Is rock type important, or do rock properties such as porosity exert a greater control on such parameters? How do we upscale these laboratory-measured properties to rock mass properties using the "fracturedness" of a volcano or volcanic outcrop, and how do we quantify fracturedness? We hope to discuss and, where possible, address some of these issues through active discussion between two (or more) scientific communities.

  9. F-18 High Alpha Research Vehicle (HARV) parameter identification flight test maneuvers for optimal input design validation and lateral control effectiveness

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1995-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.

  10. State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications

    NASA Astrophysics Data System (ADS)

    Phanomchoeng, Gridsada

    A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely tire-road friction coefficient, slip angle, roll angle, and rollover index, can be known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, due to unknown and changing plant parameters, and due to the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz assumption based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs. An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is presented. The developed theory is used to estimate vertical tire forces and predict tripped rollovers in situations involving road bumps, potholes, and lateral unknown force inputs. To estimate the tire-road friction coefficients at each individual tire of the vehicle, algorithms to estimate longitudinal forces and slip ratios at each tire are proposed. Subsequently, tire-road friction coefficients are obtained using recursive least squares parameter estimators that exploit the relationship between longitudinal force and slip ratio at each tire. The developed approaches are evaluated through simulations with industry standard software, CARSIM, with experimental tests on a Volvo XC90 sport utility vehicle and with experimental tests on a 1/8th-scale vehicle. The simulation and experimental results show that the developed approaches can reliably estimate the vehicle parameters and state variables needed for effective ESC and rollover prevention applications.
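    The friction estimation step cited above rests on recursive least squares over the force-slip relationship. A minimal scalar RLS sketch, assuming the illustrative linear-in-parameter model F = theta * s (signal names hypothetical):

      class ScalarRLS:
          """Recursive least squares for a single parameter in y = theta * x."""
          def __init__(self, theta0=0.0, p0=1000.0, forgetting=0.98):
              self.theta, self.p, self.lam = theta0, p0, forgetting

          def update(self, x, y):
              k = self.p * x / (self.lam + x * self.p * x)   # gain
              self.theta += k * (y - self.theta * x)          # correct estimate
              self.p = (self.p - k * x * self.p) / self.lam   # covariance update
              return self.theta

      # One estimator per tire: feeding normalized longitudinal force against
      # slip ratio yields a slope proportional to the tire-road friction.
      rls = ScalarRLS()
      for slip, force in [(0.01, 0.8), (0.02, 1.7), (0.015, 1.2)]:
          mu_slope = rls.update(slip, force)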

  11. Quantifying uncertainty and sensitivity in sea ice models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
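    Variance-based sensitivity of this kind is commonly computed via Sobol indices. A hedged sketch using the SALib package, with a toy two-parameter function standing in for the sea ice model and illustrative parameter names:

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 2,
          "names": ["snow_conductivity", "albedo"],   # illustrative parameters
          "bounds": [[0.2, 0.5], [0.6, 0.9]],
      }

      X = saltelli.sample(problem, 1024)              # sampled parameter matrix
      Y = X[:, 0] * 2.0 + np.sin(X[:, 1] * np.pi)     # toy model output
      Si = sobol.analyze(problem, Y)
      print(Si["S1"], Si["ST"])                       # first-order and total indices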

  12. Stability, Consistency and Performance of Distribution Entropy in Analysing Short Length Heart Rate Variability (HRV) Signal

    PubMed Central

    Karmakar, Chandan; Udhayakumar, Radhagayathri K.; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu

    2017-01-01

    Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters—the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length HRV signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn, and its performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions, with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and best performance among the reported complexity measures in differentiating physiological and pathological conditions across input parameter choices. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series. PMID:28979215
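    For reference, DistEn reduces to a short computation: embed the series in dimension m, histogram the pairwise Chebyshev distances into M bins, and take the normalized Shannon entropy. A minimal sketch of that definition:

      import numpy as np

      def dist_en(x, m=2, M=512):
          """Distribution entropy: Shannon entropy of the histogram of
          pairwise Chebyshev distances between m-dimensional embeddings."""
          x = np.asarray(x, dtype=float)
          n = len(x) - m + 1
          emb = np.array([x[i:i + m] for i in range(n)])       # embedding vectors
          # Chebyshev (max-norm) distance between all distinct vector pairs
          d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
          d = d[np.triu_indices(n, k=1)]
          p, _ = np.histogram(d, bins=M)                       # M-bin histogram
          p = p / p.sum()
          p = p[p > 0]
          return -(p * np.log2(p)).sum() / np.log2(M)          # normalized entropy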

  13. IDEA: Interactive Display for Evolutionary Analyses.

    PubMed

    Egan, Amy; Mahurkar, Anup; Crabtree, Jonathan; Badger, Jonathan H; Carlton, Jane M; Silva, Joana C

    2008-12-08

    The availability of complete genomic sequences for hundreds of organisms promises to make obtaining genome-wide estimates of substitution rates, selective constraints and other molecular evolution variables of interest an increasingly important approach to addressing broad evolutionary questions. Two of the programs most widely used for this purpose are codeml and baseml, parts of the PAML (Phylogenetic Analysis by Maximum Likelihood) suite. A significant drawback of these programs is their lack of a graphical user interface, which can limit their user base and considerably reduce their efficiency. We have developed IDEA (Interactive Display for Evolutionary Analyses), an intuitive graphical input and output interface which interacts with PHYLIP for phylogeny reconstruction and with codeml and baseml for molecular evolution analyses. IDEA's graphical input and visualization interfaces eliminate the need to edit and parse text input and output files, reducing the likelihood of errors and improving processing time. Further, its interactive output display gives the user immediate access to results. Finally, IDEA can process data in parallel on a local machine or computing grid, allowing genome-wide analyses to be completed quickly. IDEA provides a graphical user interface that allows the user to follow a codeml or baseml analysis from parameter input through to the exploration of results. Novel options streamline the analysis process, and post-analysis visualization of phylogenies, evolutionary rates and selective constraint along protein sequences simplifies the interpretation of results. The integration of these functions into a single tool eliminates the need for lengthy data handling and parsing, significantly expediting access to global patterns in the data.

  14. IDEA: Interactive Display for Evolutionary Analyses

    PubMed Central

    Egan, Amy; Mahurkar, Anup; Crabtree, Jonathan; Badger, Jonathan H; Carlton, Jane M; Silva, Joana C

    2008-01-01

    Background The availability of complete genomic sequences for hundreds of organisms promises to make obtaining genome-wide estimates of substitution rates, selective constraints and other molecular evolution variables of interest an increasingly important approach to addressing broad evolutionary questions. Two of the programs most widely used for this purpose are codeml and baseml, parts of the PAML (Phylogenetic Analysis by Maximum Likelihood) suite. A significant drawback of these programs is their lack of a graphical user interface, which can limit their user base and considerably reduce their efficiency. Results We have developed IDEA (Interactive Display for Evolutionary Analyses), an intuitive graphical input and output interface which interacts with PHYLIP for phylogeny reconstruction and with codeml and baseml for molecular evolution analyses. IDEA's graphical input and visualization interfaces eliminate the need to edit and parse text input and output files, reducing the likelihood of errors and improving processing time. Further, its interactive output display gives the user immediate access to results. Finally, IDEA can process data in parallel on a local machine or computing grid, allowing genome-wide analyses to be completed quickly. Conclusion IDEA provides a graphical user interface that allows the user to follow a codeml or baseml analysis from parameter input through to the exploration of results. Novel options streamline the analysis process, and post-analysis visualization of phylogenies, evolutionary rates and selective constraint along protein sequences simplifies the interpretation of results. The integration of these functions into a single tool eliminates the need for lengthy data handling and parsing, significantly expediting access to global patterns in the data. PMID:19061522

  15. Application of artificial neural networks to assess pesticide contamination in shallow groundwater

    USGS Publications Warehouse

    Sahoo, G.B.; Ray, C.; Mehnert, E.; Keefer, D.A.

    2006-01-01

    In this study, a feed-forward back-propagation neural network (BPNN) was developed and applied to predict pesticide concentrations in groundwater monitoring wells. Pesticide concentration data are challenging to analyze because they tend to be highly censored. Input data to the neural network included the categorical indices of depth to aquifer material, pesticide leaching class, aquifer sensitivity to pesticide contamination, time (month) of sample collection, well depth, depth to water from land surface, and additional travel distance in the saturated zone (i.e., distance from land surface to midpoint of well screen). The output of the neural network was the total pesticide concentration detected in the well. The model predictions agreed well with observed data in terms of correlation coefficient (R = 0.87) and pesticide detection efficiency (E = 89%), and matched the observed "class" groups well. The relative importance of input parameters to pesticide occurrence in groundwater was examined in terms of R, E, mean error (ME), root mean square error (RMSE), and pesticide occurrence "class" groups by eliminating some key input parameters from the model. Well depth and time of sample collection were the most sensitive input parameters for predicting the pesticide contamination potential of a well. This implies that wells tapping shallow aquifers are more vulnerable to pesticide contamination than wells tapping deeper aquifers. Pesticide occurrences during post-application months (June through October) were found to be 2.5 to 3 times higher than pesticide occurrences during other months (November through April). The BPNN was used to rank the input parameters with the highest potential to contaminate groundwater, including two original and five ancillary parameters. The two original parameters are depth to aquifer material and pesticide leaching class. When these two parameters were the only input parameters for the BPNN, they were not able to predict contamination potential. However, when they were used with other parameters, the predictive performance efficiency of the BPNN in terms of R, E, ME, RMSE, and pesticide occurrence "class" groups increased. Ancillary data include data collected during the study, such as well depth and time of sample collection. The BPNN indicated that the ancillary data had more predictive power than the original data. The BPNN results will help researchers identify parameters to improve maps of aquifer sensitivity to pesticide contamination. © 2006 Elsevier B.V. All rights reserved.
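    A feed-forward back-propagation network of the kind described is compact enough to sketch directly. A minimal one-hidden-layer version with stand-in random data (the study's seven categorical/continuous inputs would replace X):

      import numpy as np

      rng = np.random.default_rng(0)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      # X: rows of input indices (e.g., well depth, month of collection);
      # y: observed pesticide concentration scaled to [0, 1].
      X = rng.random((100, 7))
      y = rng.random((100, 1))
      W1 = rng.normal(0, 0.5, (7, 10))
      W2 = rng.normal(0, 0.5, (10, 1))

      for epoch in range(2000):
          h = sigmoid(X @ W1)                  # hidden activations
          out = sigmoid(h @ W2)                # network prediction
          err = out - y
          # Backpropagate: output delta, hidden delta, then gradient steps
          d_out = err * out * (1 - out)
          d_h = (d_out @ W2.T) * h * (1 - h)
          W2 -= 0.1 * h.T @ d_out / len(X)
          W1 -= 0.1 * X.T @ d_h / len(X)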

  16. Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests

    NASA Technical Reports Server (NTRS)

    Douglas, Freddie; Bourgeois, Edit Kaminsky

    2005-01-01

    The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).

  17. Comparisons of Solar Wind Coupling Parameters with Auroral Energy Deposition Rates

    NASA Technical Reports Server (NTRS)

    Elsen, R.; Brittnacher, M. J.; Fillingim, M. O.; Parks, G. K.; Germany, G. A.; Spann, J. F., Jr.

    1997-01-01

    Measurement of the global rate of energy deposition in the ionosphere via auroral particle precipitation is one of the primary goals of the Polar UVI program and is an important component of the ISTP program. The instantaneous rate of energy deposition for the entire month of January 1997 has been calculated by applying models to the UVI images and is presented by Fillingim et al. in this session. A number of parameters that predict the rate of coupling of solar wind energy into the magnetosphere have been proposed in the last few decades. Some of these parameters, such as the epsilon parameter of Perreault and Akasofu, depend on the instantaneous values in the solar wind. Other parameters depend on the integrated values of solar wind parameters, especially IMF Bz, e.g. applied flux, which predicts the net transfer of magnetic flux to the tail. While these parameters have often been used successfully in substorm studies, their validity in terms of global energy input has not yet been ascertained, largely because data such as those supplied by the ISTP program were lacking. We have calculated these and other energy coupling parameters for January 1997 using solar wind data provided by WIND and other solar wind monitors. The rates of energy input predicted by these parameters are compared to those measured through UVI data, and correlations are sought. Whether these parameters are better at providing an instantaneous rate of energy input or an average input over some time period is addressed. We also study whether either type of parameter may provide better correlations if a time delay is introduced; if so, this time delay may provide a characteristic time for energy transport in the coupled solar wind-magnetosphere-ionosphere system.
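    As a concrete example, the epsilon parameter combines solar wind speed, IMF magnitude, and IMF clock angle. A hedged sketch of the conventional SI form, with the customary scale length l0 of about 7 Earth radii:

      import numpy as np

      MU0 = 4e-7 * np.pi          # vacuum permeability [H/m]
      L0 = 7 * 6.371e6            # empirical scale length, ~7 Earth radii [m]

      def epsilon(v, b_y, b_z):
          """Perreault-Akasofu coupling parameter [W].
          v: solar wind speed [m/s]; b_y, b_z: IMF components [T]."""
          b = np.hypot(b_y, b_z)
          theta = np.arctan2(b_y, b_z)            # IMF clock angle
          return (4 * np.pi / MU0) * v * b**2 * np.sin(theta / 2) ** 4 * L0**2

      # e.g., a 450 km/s wind with Bz = -5 nT gives roughly 2e11 W
      print(epsilon(450e3, 0.0, -5e-9))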

  18. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.

  19. Temperature effects on the universal equation of state of solids

    NASA Technical Reports Server (NTRS)

    Vinet, P.; Ferrante, J.; Smith, J. R.; Rose, J. H.

    1986-01-01

    Recently it has been argued based on theoretical calculations and experimental data that there is a universal form for the equation of state of solids. This observation was restricted to the range of temperatures and pressures such that there are no phase transitions. The use of this universal relation to estimate pressure-volume relations (i.e., isotherms) required three input parameters at each fixed temperature. It is shown that for many solids the input data needed to predict high temperature thermodynamical properties can be dramatically reduced. In particular, only four numbers are needed: (1) the zero pressure (P=0) isothermal bulk modulus; (2) its P=0 pressure derivative; (3) the P=0 volume; and (4) the P=0 thermal expansion; all evaluated at a single (reference) temperature. Explicit predictions are made for the high temperature isotherms, the thermal expansion as a function of temperature, and the temperature variation of the isothermal bulk modulus and its pressure derivative. These predictions are tested using experimental data for three representative solids: gold, sodium chloride, and xenon. Good agreement between theory and experiment is found.

  20. Temperature effects on the universal equation of state of solids

    NASA Technical Reports Server (NTRS)

    Vinet, Pascal; Ferrante, John; Smith, John R.; Rose, James H.

    1987-01-01

    Recently it has been argued based on theoretical calculations and experimental data that there is a universal form for the equation of state of solids. This observation was restricted to the range of temperatures and pressures such that there are no phase transitions. The use of this universal relation to estimate pressure-volume relations (i.e., isotherms) required three input parameters at each fixed temperature. It is shown that for many solids the input data needed to predict high temperature thermodynamical properties can be dramatically reduced. In particular, only four numbers are needed: (1) the zero pressure (P = 0) isothermal bulk modulus; (2) its P = 0 pressure derivative; (3) the P = 0 volume; and (4) the P = 0 thermal expansion; all evaluated at a single (reference) temperature. Explicit predictions are made for the high temperature isotherms, the thermal expansion as a function of temperature, and the temperature variation of the isothermal bulk modulus and its pressure derivative. These predictions are tested using experimental data for three representative solids: gold, sodium chloride, and xenon. Good agreement between theory and experiment is found.
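    For reference, the universal equation of state introduced in these two records is usually quoted in the Vinet form, with B_0 and B_0' the zero-pressure isothermal bulk modulus and its pressure derivative at the reference temperature:

      P(V) = 3 B_0 \, \frac{1 - x}{x^{2}} \,
             \exp\!\left[ \tfrac{3}{2} \left( B_0' - 1 \right) \left( 1 - x \right) \right],
      \qquad x = \left( \frac{V}{V_0} \right)^{1/3}

    Temperature enters only through the four P = 0 quantities listed in the abstract.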

  1. Clustering analysis of moving target signatures

    NASA Astrophysics Data System (ADS)

    Martone, Anthony; Ranney, Kenneth; Innocenti, Roberto

    2010-04-01

    Previously, we developed a moving target indication (MTI) processing approach to detect and track slow-moving targets inside buildings, which successfully detected moving targets (MTs) from data collected by a low-frequency, ultra-wideband radar. Our MTI algorithms include change detection, automatic target detection (ATD), clustering, and tracking. The MTI algorithms can be implemented in a real-time or near-real-time system; however, a person-in-the-loop is needed to select input parameters for the clustering algorithm. Specifically, the number of clusters to input into the cluster algorithm is unknown and requires manual selection. A critical need exists to automate all aspects of the MTI processing formulation. In this paper, we investigate two techniques that automatically determine the number of clusters: the adaptive knee-point (KP) algorithm and the recursive pixel finding (RPF) algorithm. The KP algorithm is based on a well-known heuristic approach for determining the number of clusters. The RPF algorithm is analogous to the image processing, pixel labeling procedure. Both algorithms are used to analyze the false alarm and detection rates of three operational scenarios of personnel walking inside wood and cinderblock buildings.
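    The knee-point heuristic picks the cluster count where the within-cluster error curve stops improving rapidly. A minimal sketch of one common formulation (farthest point from the chord of the normalized inertia curve; scikit-learn's KMeans assumed available, and this is not necessarily the paper's exact KP algorithm):

      import numpy as np
      from sklearn.cluster import KMeans

      def knee_point(points, k_max=10):
          """Pick k at the 'knee' of the inertia curve: the k whose point
          lies farthest from the chord joining the curve's endpoints."""
          ks = np.arange(1, k_max + 1)
          inertia = np.array([KMeans(n_clusters=k, n_init=10).fit(points).inertia_
                              for k in ks])
          # Normalize both axes so distances are comparable
          x = (ks - ks[0]) / (ks[-1] - ks[0])
          y = (inertia - inertia.min()) / (inertia.max() - inertia.min())
          # Perpendicular distance from each point to the endpoint chord
          num = np.abs((x[-1] - x[0]) * (y[0] - y) - (x[0] - x) * (y[-1] - y[0]))
          d = num / np.hypot(x[-1] - x[0], y[-1] - y[0])
          return ks[np.argmax(d)]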

  2. Sculpt test problem analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sweetser, John David

    2013-10-01

    This report details Sculpt's implementation from a user's perspective. Sculpt is an automatic hexahedral mesh generation tool developed at Sandia National Labs by Steve Owen. 54 predetermined test cases are studied while varying the input parameters (Laplace iterations, optimization iterations, optimization threshold, number of processors) and measuring the quality of the resultant mesh. This information is used to determine the optimal input parameters to use for an unknown input geometry. The overall characteristics are covered in Chapter 1. The specific details of every case are then given in Appendix A. Finally, example Sculpt inputs are given in Appendices B.1 and B.2.

  3. A novel cost-effectiveness model of prescription eicosapentaenoic acid extrapolated to secondary prevention of cardiovascular diseases in the United States.

    PubMed

    Philip, Sephy; Chowdhury, Sumita; Nelson, John R; Benjamin Everett, P; Hulme-Lowe, Carolyn K; Schmier, Jordana K

    2016-10-01

    Given the substantial economic and health burden of cardiovascular disease and the residual cardiovascular risk that remains despite statin therapy, adjunctive therapies are needed. The purpose of this model was to estimate the cost-effectiveness of high-purity prescription eicosapentaenoic acid (EPA) omega-3 fatty acid intervention in secondary prevention of cardiovascular diseases in statin-treated patient populations extrapolated to the US. The deterministic model utilized inputs for cardiovascular events, costs, and utilities from published sources. Expert opinion was used when assumptions were required. The model takes the perspective of a US commercial, third-party payer with costs presented in 2014 US dollars. The model extends to 5 years and applies a 3% discount rate to costs and benefits. Sensitivity analyses were conducted to explore the influence of various input parameters on costs and outcomes. Using base case parameters, EPA-plus-statin therapy compared with statin monotherapy resulted in cost savings (total 5-year costs $29,393 vs $30,587 per person, respectively) and improved utilities (average 3.627 vs 3.575, respectively). The results were not sensitive to multiple variations in model inputs and consistently identified EPA-plus-statin therapy to be the economically dominant strategy, with both lower costs and better patient utilities over the modeled 5-year period. The model is only an approximation of reality and does not capture all complexities of a real-world scenario without further inputs from ongoing trials. The model may under-estimate the cost-effectiveness of EPA-plus-statin therapy because it allows only a single event per patient. This novel model suggests that combining EPA with statin therapy for secondary prevention of cardiovascular disease in the US may be a cost-saving and more compelling intervention than statin monotherapy.

  4. Knowledge system and method for simulating chemical controlled release device performance

    DOEpatents

    Cowan, Christina E.; Van Voris, Peter; Streile, Gary P.; Cataldo, Dominic A.; Burton, Frederick G.

    1991-01-01

    A knowledge system for simulating the performance of a controlled release device is provided. The system includes an input device through which the user selectively inputs one or more data parameters. The data parameters comprise first parameters including device parameters, media parameters, active chemical parameters and device release rate; and second parameters including the minimum effective inhibition zone of the device and the effective lifetime of the device. The system also includes a judgemental knowledge base which includes logic for 1) determining at least one of the second parameters from the release rate and the first parameters and 2) determining at least one of the first parameters from the other of the first parameters and the second parameters. The system further includes a device for displaying the results of the determinations to the user.

  5. Method of validating measurement data of a process parameter from a plurality of individual sensor inputs

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1998-01-01

    A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input, including a preset tolerance, against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input, including a preset tolerance, against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation, the inputs from all the sensors are compared against the last validated measurement, and the value from the sensor input that deviates the least from the last validated measurement is displayed.
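    The two-pass scheme is fully specified by the claim above and is easy to restate in code; a minimal sketch with hypothetical names:

      def validate_scan(inputs, tol):
          """Two-pass validation of redundant sensor readings: average all,
          flag deviators as suspect, then re-average and re-check the good
          ones. Returns the validated value, or None on a validation fault
          (caller falls back to the reading nearest the last valid value)."""
          first_avg = sum(inputs) / len(inputs)
          good = [x for x in inputs if abs(x - first_avg) <= tol]   # pass 1
          if len(good) >= 2:
              second_avg = sum(good) / len(good)
              if all(abs(x - second_avg) <= tol for x in good):     # pass 2
                  return second_avg
          return None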

  6. Sizing the science data processing requirements for EOS

    NASA Technical Reports Server (NTRS)

    Wharton, Stephen W.; Chang, Hyo D.; Krupp, Brian; Lu, Yun-Chi

    1991-01-01

    The methodology used in the compilation and synthesis of baseline science requirements associated with the 30+ EOS (Earth Observing System) instruments and over 2,400 EOS data products (both output and required input) proposed by EOS investigators is discussed. A brief background on EOS and the EOS Data and Information System (EOSDIS) is presented, and the approach is outlined in terms of a multilayer model. The methodology used to compile, synthesize, and tabulate requirements within the model is described. The principal benefit of this approach is the reduction of effort needed to update the analysis and maintain the accuracy of the science data processing requirements in response to changes in EOS platforms, instruments, data products, processing center allocations, or other model input parameters. The spreadsheets used in the model provide a compact representation, thereby facilitating review and presentation of the information content.

  7. Supporting the operational use of process based hydrological models and NASA Earth Observations for use in land management and post-fire remediation through a Rapid Response Erosion Database (RRED).

    NASA Astrophysics Data System (ADS)

    Miller, M. E.; Elliot, W.; Billmire, M.; Robichaud, P. R.; Banach, D. M.

    2017-12-01

    We have built a Rapid Response Erosion Database (RRED, http://rred.mtri.org/rred/) for the continental United States to allow land managers to access properly formatted spatial model inputs for the Water Erosion Prediction Project (WEPP). Spatially-explicit process-based models like WEPP require spatial inputs that include digital elevation models (DEMs), soil, climate and land cover. The online database delivers either a 10m or 30m USGS DEM, land cover derived from the Landfire project, and soil data derived from SSURGO and STATSGO datasets. The spatial layers are projected into UTM coordinates and pre-registered for modeling. WEPP soil parameter files are also created along with linkage files to match both spatial land cover and soils data with the appropriate WEPP parameter files. Our goal is to make process-based models more accessible by preparing spatial inputs ahead of time allowing modelers to focus on addressing scenarios of concern. The database provides comprehensive support for post-fire hydrological modeling by allowing users to upload spatial soil burn severity maps, and within moments returns spatial model inputs. Rapid response is critical following natural disasters. After moderate and high severity wildfires, flooding, erosion, and debris flows are a major threat to life, property and municipal water supplies. Mitigation measures must be rapidly implemented if they are to be effective, but they are expensive and cannot be applied everywhere. Fire, runoff, and erosion risks also are highly heterogeneous in space, creating an urgent need for rapid, spatially-explicit assessment. The database has been used to help assess and plan remediation on over a dozen wildfires in the Western US. Future plans include expanding spatial coverage, improving model input data and supporting additional models. Our goal is to facilitate the use of the best possible datasets and models to support the conservation of soil and water.

  8. Statistics of optimal information flow in ensembles of regulatory motifs

    NASA Astrophysics Data System (ADS)

    Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan

    2018-02-01

    Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.

  9. Parameter Identification Flight Test Maneuvers for Closed Loop Modeling of the F-18 High Alpha Research Vehicle (HARV)

    NASA Technical Reports Server (NTRS)

    Batterson, James G. (Technical Monitor); Morelli, E. A.

    1996-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Thrust Vectoring (TV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.

  10. Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blankenship, Doug; Sonnenthal, Eric

    Archive contains thermal-mechanical simulation input/output files. Included are files in the following categories: (1) spreadsheets with various input parameter calculations; (2) final simulation inputs; (3) native-state thermal-hydrological model input file folders; (4) native-state thermal-hydrological-mechanical model input files; (5) THM model stimulation cases. See the 'File Descriptions.xlsx' resource below for additional information on individual files.

  11. Wake Vortex Inverse Model User's Guide

    NASA Technical Reports Server (NTRS)

    Lai, David; Delisi, Donald

    2008-01-01

    NorthWest Research Associates (NWRA) has developed an inverse model for inverting landing aircraft vortex data. The data used for the inversion are the time evolution of the lateral transport position and vertical position of both the port and starboard vortices. The inverse model performs iterative forward model runs using various estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Forward model predictions of lateral transport and altitude are then compared with the observed data. Differences between the data and model predictions guide the choice of vortex parameter values, crosswind profile and circulation evolution in the next iteration. Iterations are performed until a user-defined criterion is satisfied. Currently, the inverse model is set to stop when the improvement in the rms deviation between the data and model predictions is less than 1 percent for two consecutive iterations. The forward model used in this inverse model is a modified version of the Shear-APA model. A detailed description of this forward model, the inverse model, and its validation are presented in a different report (Lai, Mellman, Robins, and Delisi, 2007). This document is a User's Guide for the Wake Vortex Inverse Model. Section 2 presents an overview of the inverse model program. Execution of the inverse model is described in Section 3. When executing the inverse model, a user is requested to provide the name of an input file which contains the inverse model parameters, the various datasets, and directories needed for the inversion. A detailed description of the list of parameters in the inversion input file is presented in Section 4. A user has an option to save the inversion results of each lidar track in a mat-file (a condensed data file in Matlab format). These saved mat-files can be used for post-inversion analysis. A description of the contents of the saved files is given in Section 5. An example of an inversion input file, with preferred parameters values, is given in Appendix A. An example of the plot generated at a normal completion of the inversion is shown in Appendix B.
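    The iteration logic amounts to a generic forward-model fitting loop with the stopping rule quoted above (stop once the rms improvement has been under 1 percent for two consecutive iterations). A minimal sketch with hypothetical function names; the real inverse model's parameter-update strategy is more involved:

      import numpy as np

      def invert(observed, forward_model, initial_guess, propose, max_iter=200):
          """Iteratively adjust parameters until the rms deviation between the
          data and forward-model predictions stops improving (<1% twice)."""
          params, best_rms, stalled = initial_guess, np.inf, 0
          for _ in range(max_iter):
              candidate = propose(params, observed)       # next parameter guess
              rms = np.sqrt(np.mean((observed - forward_model(candidate)) ** 2))
              improvement = (best_rms - rms) / best_rms if np.isfinite(best_rms) else 1.0
              if rms < best_rms:
                  params, best_rms = candidate, rms
              stalled = stalled + 1 if improvement < 0.01 else 0
              if stalled >= 2:                            # stopping criterion
                  break
          return params, best_rms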

  12. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
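    In the simple linear case the bias has a classical closed form. Writing the true rainfall-runoff relation as y = alpha + beta x* + e and the erroneous rainfall measurement as x = x* + u, ordinary least squares converges to an attenuated slope (a standard errors-in-variables result, stated here for intuition):

      \hat{\beta}_{\mathrm{OLS}} \;\xrightarrow{\;p\;}\;
      \beta \, \frac{\sigma_{x^{*}}^{2}}{\sigma_{x^{*}}^{2} + \sigma_{u}^{2}} \;<\; \beta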

  13. Abnormal externally guided movement preparation in recent-onset schizophrenia is associated with impaired selective attention to external input.

    PubMed

    Smid, Henderikus G O M; Westenbroek, Joanna M; Bruggeman, Richard; Knegtering, Henderikus; Van den Bosch, Robert J

    2009-11-30

    Several theories propose that the primary cognitive impairment in schizophrenia concerns a deficit in the processing of external input information. There is also evidence, however, for impaired motor preparation in schizophrenia. This provokes the question whether the impaired motor preparation in schizophrenia is a secondary consequence of disturbed (selective) processing of the input needed for that preparation, or an independent primary deficit. The aim of the present study was to discriminate between these hypotheses, by investigating externally guided movement preparation in relation to selective stimulus processing. The sample comprised 16 recent-onset schizophrenia patients and 16 controls who performed a movement-precuing task. In this task, a precue delivered information about one, two or no parameters of a movement summoned by a subsequent stimulus. Performance measures and measures derived from the electroencephalogram showed that patients yielded smaller benefits from the precues and showed less cue-based preparatory activity in advance of the imperative stimulus than the controls, suggesting a response preparation deficit. However, patients also showed less activity reflecting selective attention to the precue. We therefore conclude that the existing evidence for an impairment of externally guided motor preparation in schizophrenia is most likely due to a deficit in selective attention to the external input, which lends support to theories proposing that the primary cognitive deficit in schizophrenia concerns the processing of input information.

  14. Reliability of system for precise cold forging

    NASA Astrophysics Data System (ADS)

    Krušič, Vid; Rodič, Tomaž

    2017-07-01

    The influence of the scatter of the principal input parameters of the forging system on the dimensional accuracy of the product and on tool life for a closed-die forging process is presented in this paper. Scatter of the essential input parameters for the closed-die upsetting process was adjusted to the maximal values that still enabled reliable production of a dimensionally accurate product at optimal tool life. An operating window was created within which the maximal scatter of the principal input parameters for the closed-die upsetting process still ensures the desired dimensional accuracy of the product and optimal tool life. Adjustment of the process input parameters is illustrated by the example of manufacturing the inner race of a homokinetic joint in mass production. High productivity in the manufacture of elements by cold massive extrusion is often achieved by multiple forming operations performed simultaneously on the same press. By redesigning the time sequence of the forming operations in the multistage forming of a starter barrel, the course of the resultant force during the working stroke is optimized.

  15. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
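
    A toy version of the convolve-then-deconvolve procedure can be sketched as below, assuming the transfer functions are supplied as Fourier-domain arrays; a Wiener-style regularized division stands in for the paper's reconstruction method.

        import numpy as np

        def photometric_error(truth, atm_tf, model_stf, eps=1e-8):
            """Convolve a 'true' image with a simulated atmospheric transfer
            function, deconvolve with the model speckle transfer function via
            a regularized (Wiener-style) division, and return the relative
            rms intensity error. atm_tf and model_stf are Fourier-domain
            arrays on the same grid as truth."""
            observed_ft = np.fft.fft2(truth) * atm_tf
            recon_ft = observed_ft * np.conj(model_stf) / (np.abs(model_stf) ** 2 + eps)
            reconstructed = np.real(np.fft.ifft2(recon_ft))
            return np.sqrt(np.mean((reconstructed - truth) ** 2)) / truth.mean()

        # Tiny demo: when the model transfer function matches the atmospheric
        # one exactly, the image is recovered almost perfectly.
        truth = np.random.default_rng(0).random((64, 64))
        fx = np.fft.fftfreq(64)
        tf = np.exp(-5.0 * (fx[:, None] ** 2 + fx[None, :] ** 2))
        print(photometric_error(truth, tf, tf))  # ~0: perfect model recovers the image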

  16. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    PubMed

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
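
    For the episode-identification step, a minimal sketch using the Hodrick-Prescott filter from statsmodels on a synthetic TAC trace might look as follows; the smoothing parameter and the episode threshold are illustrative choices, not the paper's values.

        import numpy as np
        from statsmodels.tsa.filters.hp_filter import hpfilter

        # Synthetic TAC trace: two drinking episodes on a noisy baseline.
        t = np.arange(0.0, 48.0, 0.1)                          # hours
        rng = np.random.default_rng(1)
        tac = (np.exp(-0.5 * ((t - 10.0) / 2.0) ** 2)
               + 0.8 * np.exp(-0.5 * ((t - 30.0) / 3.0) ** 2)
               + 0.02 * rng.normal(size=t.size))

        # Smooth the trace; the trend isolates slow episode-scale variation.
        cycle, trend = hpfilter(tac, lamb=1e4)
        episodes = trend > 0.1 * trend.max()
        print(f"samples flagged as within a drinking episode: {episodes.sum()}")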

  17. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
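
    The pixel-classification and line-extraction stages can be sketched with OpenCV as below; the thresholds are illustrative, and the local texture constraints described in the paper are omitted.

        import cv2
        import numpy as np

        def court_line_candidates(frame_bgr):
            """Classify bright (court-line colored) pixels and extract line
            segments with a Hough transform -- the first two stages of the
            calibration pipeline sketched here under simplified assumptions."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            _, white = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
            lines = cv2.HoughLinesP(white, rho=1, theta=np.pi / 180.0,
                                    threshold=80, minLineLength=50, maxLineGap=10)
            return [] if lines is None else [tuple(l[0]) for l in lines]

        # Usage (hypothetical file name): court_line_candidates(cv2.imread("frame.png"))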

  18. Reinforcement learning design-based adaptive tracking control with less learning parameters for nonlinear discrete-time MIMO systems.

    PubMed

    Liu, Yan-Jun; Tang, Li; Tong, Shaocheng; Chen, C L Philip; Li, Dong-Juan

    2015-01-01

    Based on the neural network (NN) approximator, an online reinforcement learning algorithm is proposed for a class of affine multiple input and multiple output (MIMO) nonlinear discrete-time systems with unknown functions and disturbances. In the design procedure, two networks are provided where one is an action network to generate an optimal control signal and the other is a critic network to approximate the cost function. An optimal control signal and adaptation laws can be generated based on two NNs. In the previous approaches, the weights of critic and action networks are updated based on the gradient descent rule and the estimations of optimal weight vectors are directly adjusted in the design. Consequently, compared with the existing results, the main contributions of this paper are: 1) only two parameters are needed to be adjusted, and thus the number of the adaptation laws is smaller than the previous results and 2) the updating parameters do not depend on the number of the subsystems for MIMO systems and the tuning rules are replaced by adjusting the norms on optimal weight vectors in both action and critic networks. It is proven that the tracking errors, the adaptation laws, and the control inputs are uniformly bounded using Lyapunov analysis method. The simulation examples are employed to illustrate the effectiveness of the proposed algorithm.

  19. Interfield dysbalances in research input and output benchmarking: Visualisation by density equalizing procedures

    PubMed Central

    Groneberg-Kloft, Beatrix; Kreiter, Carolin; Welte, Tobias; Fischer, Axel; Quarcoo, David; Scutaru, Cristian

    2008-01-01

    Background Historical, social and economic reasons can lead to major differences in the allocation of health system resources and research funding. These differences might endanger the progress in diagnostic and therapeutic approaches of socio-economic important diseases. The present study aimed to assess different benchmarking approaches that might be used to analyse these disproportions. Research in two categories was analysed for various output parameters and compared to input parameters. Germany was used as a high income model country. For the areas of cardiovascular and respiratory medicine density equalizing mapping procedures visualized major geographical differences in both input and output markers. Results An imbalance in the state financial input was present with 36 cardiovascular versus 8 respiratory medicine state-financed full clinical university departments at the C4/W3 salary level. The imbalance in financial input is paralleled by an imbalance in overall quantitative output figures: The 36 cardiology chairs published 2708 articles in comparison to 453 articles published by the 8 respiratory medicine chairs in the period between 2002 and 2006. This is a ratio of 75.2 articles per cardiology chair and 56.63 articles per respiratory medicine chair. A similar trend is also present in the qualitative measures. Here, the 2708 cardiology publications were cited 48337 times (7290 times for respiratory medicine) which is an average citation of 17.85 per publication vs. 16.09 for respiratory medicine. The average number of citations per cardiology chair was 1342.69 in contrast to 911.25 citations per respiratory medicine chair. Further comparison of the contribution of the 16 different German states revealed major geographical differences concerning numbers of chairs, published items, total number of citations and average citations. Conclusion Despite similar significances of cardiovascular and respiratory diseases for the global burden of disease, large input and output imbalances have been revealed in the present study which point to a need for changes in funding policies. The present study supplies data that could be used for decision making in the field of health systems funding. PMID:18724868

  20. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models are deterministic approaches to representing the relevant physical processes in the nearshore. The complexity of the physics of the model and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and off-shore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as many times as required until the results converge. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data, which means that the variation of the uncertain parameters is decreased and the probability of the observed data improves as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
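
    A generic random-walk Metropolis sampler of the kind underlying such Bayesian parameter estimation is sketched below; the actual study couples MCMC to Delft3D runs, whereas here the model and prior are stand-ins supplied through log_post.

        import numpy as np

        def metropolis(log_post, theta0, n_steps=20_000, step=0.1, seed=0):
            """Minimal random-walk Metropolis sampler; the model and prior are
            supplied by the caller through log_post."""
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            lp = log_post(theta)
            chain = np.empty((n_steps, theta.size))
            for i in range(n_steps):
                prop = theta + step * rng.normal(size=theta.size)
                lp_prop = log_post(prop)
                if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
                    theta, lp = prop, lp_prop
                chain[i] = theta
            return chain

        # Toy use: posterior mean of a single parameter given noisy synthetic
        # 'observations' (wide normal prior).
        obs = 2.0 + 0.5 * np.random.default_rng(1).normal(size=50)
        log_post = lambda th: (-0.5 * np.sum((obs - th[0]) ** 2) / 0.5**2
                               - 0.5 * (th[0] / 10.0) ** 2)
        print(metropolis(log_post, [0.0])[10_000:, 0].mean())  # ~2.0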

  1. NetMOD Version 2.0 Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion J.

    2015-08-01

    NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulation of seismic, hydroacoustic and infrasonic networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probability of detection can be computed given a detection threshold. This document describes the parameters that are used to configure the NetMOD tool and the input and output parameters that make up the simulation definitions.
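
    A hedged sketch of the final step, computing a detection probability from an SNR and a threshold, is given below; the Gaussian noise model in dB and its sigma are illustrative simplifications, not NetMOD's documented internals.

        import math

        def detection_probability(snr_db, threshold_db, sigma_db=1.0):
            """Probability that a signal exceeds a detection threshold when
            the measured SNR (in dB) is treated as Gaussian-distributed -- a
            common simplification in network-detection simulations."""
            z = (snr_db - threshold_db) / sigma_db
            return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

        print(detection_probability(snr_db=12.0, threshold_db=10.0))  # ~0.98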

  2. INTEGRATING DATA ANALYTICS AND SIMULATION METHODS TO SUPPORT MANUFACTURING DECISION MAKING

    PubMed Central

    Kibira, Deogratias; Hatim, Qais; Kumara, Soundar; Shao, Guodong

    2017-01-01

    Modern manufacturing systems are installed with smart devices such as sensors that monitor system performance and collect data to manage uncertainties in their operations. However, multiple parameters and variables affect system performance, making it impossible for a human to make informed decisions without systematic methodologies and tools. Further, the large volume and variety of streaming data collected is beyond simulation analysis alone. Simulation models are run with well-prepared data. Novel approaches, combining different methods, are needed to use this data for making guided decisions. This paper proposes a methodology whereby parameters that most affect system performance are extracted from the data using data analytics methods. These parameters are used to develop scenarios for simulation inputs; system optimizations are performed on simulation data outputs. A case study of a machine shop demonstrates the proposed methodology. This paper also reviews candidate standards for data collection, simulation, and systems interfaces. PMID:28690363

  3. Orbit control of a stratospheric satellite with parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Xu, Ming; Huo, Wei

    2016-12-01

    When a stratospheric satellite is carried by the prevailing winds in the stratosphere, its cross-track displacement needs to be controlled to maintain constant-latitude orbital flight. To design the orbit control system, a 6 degree-of-freedom (DOF) model of the satellite is established based on the second Lagrangian formulation. It is proven that input/output feedback linearization theory cannot be directly implemented for orbit control with this model; thus, three subsystem models are deduced from the 6-DOF model to develop a sequential nonlinear control strategy. The control strategy includes an adaptive controller for the balloon-tether subsystem with uncertain balloon parameters, a PD controller based on feedback linearization for the tether-sail subsystem, and a sliding mode controller for the sail-rudder subsystem with uncertain sail parameters. Simulation studies demonstrate that the proposed control strategy is robust to uncertainties and satisfies high-precision requirements for the orbital flight of the satellite.

  4. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show significant computational advantage over those obtained by DD for some cases.

  5. V2S: Voice to Sign Language Translation System for Malaysian Deaf People

    NASA Astrophysics Data System (ADS)

    Mean Foong, Oi; Low, Tang Jung; La, Wai Wan

    The process of learning and understanding sign language may be cumbersome to some, and therefore this paper proposes a solution to this problem by providing a voice (English language) to sign language translation system using speech and image processing techniques. Speech processing, which includes speech recognition, is the study of recognizing the words being spoken, regardless of who the speaker is. This project uses template-based recognition as the main approach, in which the V2S system first needs to be trained with speech patterns based on some generic spectral parameter set. These spectral parameter sets are then stored as templates in a database. The system performs the recognition process by matching the parameter set of the input speech with the stored templates to finally display the sign language in video format. Empirical results show that the system has an 80.3% recognition rate.

  6. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.

    2017-05-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
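
    A minimal sketch of grouped variance-based sensitivity analysis is given below, using the SALib package with an illustrative four-parameter toy model rather than the authors' implementation; the group names, bounds, and model are assumptions for demonstration only.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Toy stand-in for the flow/transport model; the 'groups' key ties
        # related inputs together, echoing the paper's hierarchical grouping.
        problem = {
            "num_vars": 4,
            "names": ["perm_1", "perm_2", "bc_head", "bc_flux"],
            "bounds": [[0.0, 1.0]] * 4,
            "groups": ["permeability", "permeability", "boundary", "boundary"],
        }

        def model(x):
            # Permeability-like inputs dominate this toy response.
            return 2.0 * x[0] + 2.0 * x[1] + 0.5 * x[2] + 0.1 * x[3]

        X = saltelli.sample(problem, 1024, calc_second_order=False)
        Y = np.apply_along_axis(model, 1, X)
        Si = sobol.analyze(problem, Y, calc_second_order=False)
        print(dict(zip(["permeability", "boundary"], Si["S1"])))  # per-group indices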

  7. A Geostatistics-Informed Hierarchical Sensitivity Analysis Method for Complex Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2017-12-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.

  8. Task-parallel message passing interface implementation of Autodock4 for docking of very large databases of compounds using high-performance super-computers.

    PubMed

    Collignon, Barbara; Schulz, Roland; Smith, Jeremy C; Baudry, Jerome

    2011-04-30

    A message passing interface (MPI)-based implementation (Autodock4.lga.MPI) of the grid-based docking program Autodock4 has been developed to allow simultaneous and independent docking of multiple compounds on up to thousands of central processing units (CPUs) using the Lamarckian genetic algorithm. The MPI version reads a single binary file containing precalculated grids that represent the protein-ligand interactions, i.e., van der Waals, electrostatic, and desolvation potentials, and needs only two input parameter files for the entire docking run. In comparison, the serial version of Autodock4 reads ASCII grid files and requires one parameter file per compound. The modifications performed result in significantly reduced input/output activity compared with the serial version. Autodock4.lga.MPI scales up to 8192 CPUs with a maximal overhead of 16.3%, of which two thirds is due to input/output operations and one third originates from MPI operations. The optimal docking strategy, which minimizes docking CPU time without lowering the quality of the database enrichments, comprises the docking of ligands preordered from the most to the least flexible and the assignment of the number of energy evaluations as a function of the number of rotatable bonds. In 24 h, on 8192 high-performance computing CPUs, the present MPI version would allow docking to a rigid protein of about 300K small flexible compounds or 11 million rigid compounds.

  9. Application of PBPK modelling in drug discovery and development at Pfizer.

    PubMed

    Jones, Hannah M; Dickins, Maurice; Youdim, Kuresh; Gosset, James R; Attkins, Neil J; Hay, Tanya L; Gurrell, Ian K; Logan, Y Raj; Bungay, Peter J; Jones, Barry C; Gardner, Iain B

    2012-01-01

    Early prediction of human pharmacokinetics (PK) and drug-drug interactions (DDI) in drug discovery and development allows for more informed decision making. Physiologically based pharmacokinetic (PBPK) modelling can be used to answer a number of questions throughout the process of drug discovery and development and is thus becoming a very popular tool. PBPK models provide the opportunity to integrate key input parameters from different sources to not only estimate PK parameters and plasma concentration-time profiles, but also to gain mechanistic insight into compound properties. Using examples from the literature and our own company, we have shown how PBPK techniques can be utilized through the stages of drug discovery and development to increase efficiency, reduce the need for animal studies, replace clinical trials and to increase PK understanding. Given the mechanistic nature of these models, the future use of PBPK modelling in drug discovery and development is promising, however, some limitations need to be addressed to realize its application and utility more broadly.

  10. Electronic structure, dielectric response, and surface charge distribution of RGD (1FUV) peptide.

    PubMed

    Adhikari, Puja; Wen, Amy M; French, Roger H; Parsegian, V Adrian; Steinmetz, Nicole F; Podgornik, Rudolf; Ching, Wai-Yim

    2014-07-08

    Long and short range molecular interactions govern molecular recognition and self-assembly of biological macromolecules. Microscopic parameters in the theories of these molecular interactions are either phenomenological or need to be calculated within a microscopic theory. We report a unified methodology for the ab initio quantum mechanical (QM) calculation that yields all the microscopic parameters, namely the partial charges as well as the frequency-dependent dielectric response function, that can then be taken as input for macroscopic theories of electrostatic, polar, and van der Waals-London dispersion intermolecular forces. We apply this methodology to obtain the electronic structure of the cyclic tripeptide RGD-4C (1FUV). This ab initio unified methodology yields the relevant parameters entering the long range interactions of biological macromolecules, providing accurate data for the partial charge distribution and the frequency-dependent dielectric response function of this peptide. These microscopic parameters determine the range and strength of the intricate intermolecular interactions between potential docking sites of the RGD-4C ligand and its integrin receptor.

  11. Direct statistical modeling and its implications for predictive mapping in mining exploration

    NASA Astrophysics Data System (ADS)

    Sterligov, Boris; Gumiaux, Charles; Barbanson, Luc; Chen, Yan; Cassard, Daniel; Cherkasov, Sergey; Zolotaya, Ludmila

    2010-05-01

    Recent advances in geosciences make more and more multidisciplinary data available for mining exploration. This has allowed the development of methodologies for computing forecast ore maps from the statistical combination of such different input parameters, all based on inverse problem theory. Numerous statistical methods (e.g. the algebraic method, weights of evidence, the Siris method, etc.), with varying degrees of complexity in their development and implementation, have been proposed and/or adapted for ore geology purposes. In the literature, such approaches are often presented through applications to natural examples, and the results obtained can present specificities due to local characteristics. Moreover, though crucial for the statistical computations, the "minimum requirements" on the input parameters (minimum number of data points, spatial distribution of objects, etc.) are often only poorly expressed. From this, problems often arise when one has to choose between one method and another for her/his specific question. In this study, a direct statistical modeling approach is developed in order to i) evaluate the constraints on the input parameters and ii) test the validity of different existing inversion methods. The approach focuses on the analysis of spatial relationships between the locations of points and various objects (e.g. polygons and/or polylines), which is particularly well adapted to constraining the influence of intrusive bodies - such as a granite - and faults or ductile shear-zones on the spatial location of ore deposits (point objects). The method is designed to ensure a-dimensionality with respect to scale. In this approach, both the spatial distribution and the topology of objects (polygons and polylines) can be parametrized by the user (e.g. density of objects, length, surface, orientation, clustering). Then, the distance of points with respect to a given type of object (polygons or polylines) is described using a probability distribution. The location of points is computed assuming either independence or different grades of dependency between the two probability distributions. The results show that i) the polygon surface mean value, the polyline length mean value, the number of objects and their clustering are critical, and ii) the validity of the different tested inversion methods strongly depends on the relative importance of, and on the dependency between, the parameters used. In addition, this combined approach of direct and inverse modeling offers an opportunity to test the robustness of the inferred point distribution laws with respect to the quality of the input data set.

  12. Assessing the importance of rainfall uncertainty on hydrological models with different spatial and temporal scale

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2015-04-01

    Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty on rainfall records, it becomes more important to understand the impact of this input - output dynamic. Yet, modellers often still have the intention to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? Thus, how important is the rainfall uncertainty for the model output with respect to the model parameter importance? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters on the output of the hydrological model. In order to be able to treat the regular model parameters and input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To tackle the latter issue, we apply so called rainfall multipliers on hydrological independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly,…), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also be different for hydrological models with different spatial and temporal scale. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM). The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.

  13. Heat Transfer Model for Hot Air Balloons

    NASA Astrophysics Data System (ADS)

    Llado-Gambin, Adriana

    A heat transfer model and analysis for hot air balloons is presented in this work, backed with a flow simulation using SolidWorks. The objective is to understand the major heat losses in the balloon and to identify the parameters that most affect its flight performance. Results show that more than 70% of the heat losses are due to the emitted radiation from the balloon envelope and that convection losses represent around 20% of the total. A simulated heating source is also included in the modeling, based on typical thermal input from a balloon propane burner. The burner duty cycle to keep a constant altitude can vary from 10% to 28% depending on the atmospheric conditions, and the ambient temperature is the parameter that most affects the total thermal input needed. The simulation and analysis also predict that the gas temperature inside the balloon decreases at a rate of 0.25 K/s when there is no burner activity, and increases at a rate of 1 K/s when the balloon pilot operates the burner. The results were compared to actual flight data and show very good agreement, indicating that the major physical processes responsible for balloon performance aloft are accurately captured in the simulation.

  14. Reconstruction of Twist Torque in Main Parachute Risers

    NASA Technical Reports Server (NTRS)

    Day, Joshua D.

    2015-01-01

    The reconstruction of twist torque in the Main Parachute Risers of the Capsule Parachute Assembly System (CPAS) has been successfully used to validate CPAS Model Memo conservative twist torque equations. Reconstruction of basic, one degree of freedom drop tests was used to create a functional process for the evaluation of more complex, rigid body simulation. The roll, pitch, and yaw of the body, the fly-out angles of the parachutes, and the relative location of the parachutes to the body are inputs to the torque simulation. The data collected by the Inertial Measurement Unit (IMU) was used to calculate the true torque. The simulation then used photogrammetric and IMU data as inputs into the Model Memo equations. The results were then compared to the true torque results to validate the Model Memo equations. The Model Memo parameters were based off of steel risers and the parameters will need to be re-evaluated for different materials. Photogrammetric data was found to be more accurate than the inertial data in accounting for the relative rotation between payload and cluster. The Model Memo equations were generally a good match and when not matching were generally conservative.

  15. Optimize Short Term load Forcasting Anomalous Based Feed Forward Backpropagation

    NASA Astrophysics Data System (ADS)

    Mulyadi, Y.; Abdullah, A. G.; Rohmah, K. A.

    2017-03-01

    This paper presents Short-Term Load Forecasting (STLF) using an artificial neural network, specifically a feed-forward backpropagation algorithm that is optimized to reduce the resulting error. The forecasting target is electrical load on holidays, which has no identical pattern and differs from the weekday pattern; in other words, the holiday load pattern is anomalous. Under these conditions, forecasting accuracy decreases, so a method capable of reducing the error in anomalous load forecasting is needed. The learning process of the algorithm is supervised, so several parameters are set before the computation is performed. The momentum constant is set at 0.8, which serves as a reference value because it has the greatest tendency to converge. The learning rate is selected to two decimal digits. In addition, several variations of the number of hidden layers and input components are tested. The test results lead to the conclusion that the number of hidden layers affects the forecasting accuracy, while the test duration is determined by the number of iterations needed to process the input data until the maximum parameter value is reached.
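
    A self-contained numpy sketch of feed-forward backpropagation with a momentum term of 0.8, as in the abstract, is shown below; the network size, learning rate, and training data are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy training set: 24 hourly loads of the previous day -> next-day class.
        X = rng.random((200, 24))
        y = (X.mean(axis=1, keepdims=True) > 0.5).astype(float)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # One hidden layer; momentum constant of 0.8 follows the abstract,
        # while the learning rate (2 decimal digits) and layer size are guesses.
        W1 = rng.normal(0.0, 0.1, (24, 10))
        W2 = rng.normal(0.0, 0.1, (10, 1))
        V1, V2 = np.zeros_like(W1), np.zeros_like(W2)
        lr, momentum = 0.05, 0.8

        for epoch in range(500):
            h = sigmoid(X @ W1)                      # hidden activations
            out = sigmoid(h @ W2)                    # network output
            err = out - y
            d2 = err * out * (1.0 - out)             # output-layer delta
            d1 = (d2 @ W2.T) * h * (1.0 - h)         # hidden-layer delta
            V2 = momentum * V2 - lr * (h.T @ d2) / len(X)   # momentum-smoothed updates
            V1 = momentum * V1 - lr * (X.T @ d1) / len(X)
            W2 += V2
            W1 += V1

        print(f"final mse: {np.mean(err ** 2):.4f}")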

  16. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    NASA Astrophysics Data System (ADS)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.

  17. Update on ɛK with lattice QCD inputs

    NASA Astrophysics Data System (ADS)

    Jang, Yong-Chull; Lee, Weonjong; Lee, Sunkyu; Leem, Jaehoon

    2018-03-01

    We report updated results for ɛK, the indirect CP violation parameter in neutral kaons, which is evaluated directly from the standard model with lattice QCD inputs. We use lattice QCD inputs to fix B̂K, |Vcb|, ξ0, ξ2, |Vus|, and mc(mc). Since Lattice 2016, the UTfit group has updated the Wolfenstein parameters in the angle-only-fit method, and the HFLAV group has also updated |Vcb|. Our results show that the evaluation of ɛK with exclusive |Vcb| (lattice QCD inputs) has 4.0σ tension with the experimental value, while that with inclusive |Vcb| (heavy quark expansion based on OPE and QCD sum rules) shows no tension.

  18. Coincidence detection in the medial superior olive: mechanistic implications of an analysis of input spiking patterns

    PubMed Central

    Franken, Tom P.; Bremen, Peter; Joris, Philip X.

    2014-01-01

    Coincidence detection by binaural neurons in the medial superior olive underlies sensitivity to interaural time difference (ITD) and interaural correlation (ρ). It is unclear whether this process is akin to a counting of individual coinciding spikes, or rather to a correlation of membrane potential waveforms resulting from converging inputs from each side. We analyzed spike trains of axons of the cat trapezoid body (TB) and auditory nerve (AN) in a binaural coincidence scheme. ITD was studied by delaying “ipsi-” vs. “contralateral” inputs; ρ was studied by using responses to different noises. We varied the number of inputs, the monaural and binaural thresholds, and the coincidence window duration. We examined physiological plausibility of output “spike trains” by comparing their rate and tuning to ITD and ρ to those of binaural cells. We found that multiple inputs are required to obtain a plausible output spike rate. In contrast to previous suggestions, monaural threshold almost invariably needed to exceed binaural threshold. Elevation of the binaural threshold to values larger than 2 spikes caused a drastic decrease in rate for a short coincidence window. Longer coincidence windows allowed a lower number of inputs and higher binaural thresholds, but decreased the depth of modulation. Compared to AN fibers, TB fibers allowed higher output spike rates for a low number of inputs, but also generated more monaural coincidences. We conclude that, within the parameter space explored, the temporal patterns of monaural fibers require convergence of multiple inputs to achieve physiological binaural spike rates; that monaural coincidences have to be suppressed relative to binaural ones; and that the neuron has to be sensitive to single binaural coincidences of spikes, for a number of excitatory inputs per side of 10 or less. These findings suggest that the fundamental operation in the mammalian binaural circuit is coincidence counting of single binaural input spikes. PMID:24822037
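
    A toy version of the binaural coincidence-counting scheme analyzed above is sketched below; the window, threshold, and input statistics are illustrative, not those of the study.

        import numpy as np

        def coincidence_output(ipsi_trains, contra_trains, window=50e-6,
                               binaural_threshold=1):
            """Count binaural coincidences between pooled ipsi- and
            contralateral input spike trains (spike times in seconds)."""
            ipsi = np.sort(np.concatenate(ipsi_trains))
            contra = np.sort(np.concatenate(contra_trains))
            out = []
            for spike in ipsi:
                # contralateral spikes inside the coincidence window
                n = (np.searchsorted(contra, spike + window)
                     - np.searchsorted(contra, spike - window))
                if n >= binaural_threshold:
                    out.append(spike)
            return np.array(out)

        # Demo: 10 Poisson-like inputs per side, ~100 spikes/s, 1 s duration.
        rng = np.random.default_rng(2)
        make_side = lambda: [np.sort(rng.uniform(0.0, 1.0, 100)) for _ in range(10)]
        rate = len(coincidence_output(make_side(), make_side()))
        print(f"output spikes per second: {rate}")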

  19. Modern control concepts in hydrology. [parameter identification in adaptive stochastic control approach

    NASA Technical Reports Server (NTRS)

    Duong, N.; Winn, C. B.; Johnson, G. R.

    1975-01-01

    Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.

  20. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to choice of parameters thus translates to variation of performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.

  1. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. Little guidance is available for these two steps in environmental modelling, however. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
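
    A minimal sketch of the bootstrap-based ranking-convergence idea follows; it assumes per-sample sensitivity contributions are available as an array, whereas the study's actual criteria are more elaborate.

        import numpy as np

        def ranking_convergence(effects, n_boot=1000, seed=0):
            """Fraction of bootstrap resamples that reproduce the reference
            parameter ranking; effects is an (n_samples, n_params) array of
            per-sample sensitivity contributions (e.g. elementary effects)."""
            rng = np.random.default_rng(seed)
            n = len(effects)
            reference = tuple(np.argsort(np.abs(effects).mean(axis=0)))
            hits = 0
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)
                hits += tuple(np.argsort(np.abs(effects[idx]).mean(axis=0))) == reference
            return hits / n_boot

        # Demo: three parameters with clearly separated mean effects.
        demo = np.random.default_rng(1).normal(size=(200, 3)) * np.array([3.0, 1.0, 0.1])
        print(ranking_convergence(demo))  # close to 1.0 for well-separated effects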

  2. Assessing the Internal Consistency of the Marine Carbon Dioxide System at High Latitudes: The Labrador Sea AR7W Line Study Case

    NASA Astrophysics Data System (ADS)

    Raimondi, L.; Azetsu-Scott, K.; Wallace, D.

    2016-02-01

    This work assesses the internal consistency of the marine carbon dioxide system through the comparison of discrete measurements and calculated values of four analytical parameters of the inorganic carbon system: Total Alkalinity (TA), Dissolved Inorganic Carbon (DIC), pH and Partial Pressure of CO2 (pCO2). The study is based on 486 seawater samples analyzed for TA, DIC and pH and 86 samples for pCO2, collected during the 2014 cruise along the AR7W line in the Labrador Sea. The internal consistency was assessed using all combinations of input parameters and eight sets of thermodynamic constants (K1, K2) in calculating each parameter with the CO2SYS software. Residuals of each parameter were calculated as the differences between measured and calculated values (reported as ΔTA, ΔDIC, ΔpH and ΔpCO2). Although differences between the selected sets of constants were observed, the largest residuals were obtained using different pairs of input parameters. As expected, the pH-pCO2 pair produced the poorest results, suggesting that measurements of either TA or DIC are needed to define the carbonate system accurately and precisely. To identify a signature of organic alkalinity, we isolated the residuals in the bloom area; only ΔTA values from surface waters (0-30 m) along the Greenland side of the basin were selected. The residuals showed that no measured value exceeded the calculated one, and therefore we could not detect the presence of organic bases in the shallow water column. The internal consistency in characteristic water masses of the Labrador Sea (Denmark Strait Overflow Water, North East Atlantic Deep Water, Newly-ventilated Labrador Sea Water, Greenland and Labrador Shelf waters) will also be discussed.

  3. Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment

    DOE PAGES

    Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...

    2016-03-30

    Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on model output parameters: the total potential power and the number of potential locations (stream-reach). These parameters are quantified through Monte Carlo Simulation (MCS) linking with a geospatial merit matrix based hydropower resource assessment (GMM-HRA) Model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. Hydraulic head is more sensitive to output parameters in steep terrain than in flat and mild terrains. Furthermore, mean annual streamflow is more sensitive to output parameters in flat terrain.
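
    A generic Monte Carlo propagation sketch in the spirit of the MCS linkage is given below; the simple hydropower equation and the distributions used here are illustrative and will not reproduce the study's specific sensitivities.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000

        # Nominal site values with the abstract's approximate input
        # uncertainties: ~20% on hydraulic head, ~16% on mean annual
        # streamflow (nominal magnitudes are assumptions).
        head = rng.normal(10.0, 0.20 * 10.0, n)   # m
        flow = rng.normal(5.0, 0.16 * 5.0, n)     # m^3/s
        eff, rho, g = 0.85, 1000.0, 9.81

        power_mw = eff * rho * g * flow * head / 1e6
        print(f"mean {power_mw.mean():.2f} MW, "
              f"cv {power_mw.std() / power_mw.mean():.1%}")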

  4. Dynamic modal estimation using instrumental variables

    NASA Technical Reports Server (NTRS)

    Salzwedel, H.

    1980-01-01

    A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.

  5. Econometric analysis of fire suppression production functions for large wildland fires

    Treesearch

    Thomas P. Holmes; David E. Calkin

    2013-01-01

    In this paper, we use operational data collected for large wildland fires to estimate the parameters of economic production functions that relate the rate of fireline construction with the level of fire suppression inputs (handcrews, dozers, engines and helicopters). These parameter estimates are then used to evaluate whether the productivity of fire suppression inputs...

  6. A mathematical model for predicting fire spread in wildland fuels

    Treesearch

    Richard C. Rothermel

    1972-01-01

    A mathematical fire model for predicting rate of spread and intensity that is applicable to a wide range of wildland fuels and environment is presented. Methods of incorporating mixtures of fuel sizes are introduced by weighting input parameters by surface area. The input parameters do not require a prior knowledge of the burning characteristics of the fuel.

  7. The application of remote sensing to the development and formulation of hydrologic planning models

    NASA Technical Reports Server (NTRS)

    Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.

    1976-01-01

    A hydrologic planning model is developed based on remotely sensed inputs. Data from LANDSAT 1 are used to supply the model's quantitative parameters and coefficients. The use of LANDSAT data as information input to all categories of hydrologic models requiring quantitative surface parameters for their effects functioning is also investigated.

  8. A data-input program (MFI2005) for the U.S. Geological Survey modular groundwater model (MODFLOW-2005) and parameter estimation program (UCODE_2005)

    USGS Publications Warehouse

    Harbaugh, Arien W.

    2011-01-01

    The MFI2005 data-input (entry) program was developed for use with the U.S. Geological Survey modular three-dimensional finite-difference groundwater model, MODFLOW-2005. MFI2005 runs on personal computers and is designed to be easy to use; data are entered interactively through a series of display screens. MFI2005 supports parameter estimation using the UCODE_2005 program for parameter estimation. Data for MODPATH, a particle-tracking program for use with MODFLOW-2005, also can be entered using MFI2005. MFI2005 can be used in conjunction with other data-input programs so that the different parts of a model dataset can be entered by using the most suitable program.

  9. Adaptive control of Parkinson's state based on a nonlinear computational model with unknown parameters.

    PubMed

    Su, Fei; Wang, Jiang; Deng, Bin; Wei, Xi-Le; Chen, Ying-Yuan; Liu, Chen; Li, Hui-Yan

    2015-02-01

    The objective here is to explore the use of adaptive input-output feedback linearization method to achieve an improved deep brain stimulation (DBS) algorithm for closed-loop control of Parkinson's state. The control law is based on a highly nonlinear computational model of Parkinson's disease (PD) with unknown parameters. The restoration of thalamic relay reliability is formulated as the desired outcome of the adaptive control methodology, and the DBS waveform is the control input. The control input is adjusted in real time according to estimates of unknown parameters as well as the feedback signal. Simulation results show that the proposed adaptive control algorithm succeeds in restoring the relay reliability of the thalamus, and at the same time achieves accurate estimation of unknown parameters. Our findings point to the potential value of adaptive control approach that could be used to regulate DBS waveform in more effective treatment of PD.

  10. Theoretic aspects of the identification of the parameters in the optimal control model

    NASA Technical Reports Server (NTRS)

    Vanwijk, R. A.; Kok, J. J.

    1977-01-01

    The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.

  11. Estimating unknown input parameters when implementing the NGA ground-motion prediction equations in engineering practice

    USGS Publications Warehouse

    Kaklamanos, James; Baise, Laurie G.; Boore, David M.

    2011-01-01

    The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
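
    One simple instance of the geometric relations between distance measures, for the special case of a vertical fault, can be written directly; dipping faults require the fuller set of equations derived in the paper.

        import math

        def rupture_distance_vertical_fault(r_jb, z_tor):
            """Closest distance to the rupture plane (R_rup) from the
            Joyner-Boore distance (R_jb) and the depth to the top of
            rupture (Z_tor), valid for a vertical fault."""
            return math.hypot(r_jb, z_tor)

        print(rupture_distance_vertical_fault(10.0, 3.0))  # ~10.44 km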

  12. Dual side control for inductive power transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hunter; Sealy, Kylee; Gilchrist, Aaron

    An apparatus for dual side control includes a measurement module that measures a voltage and a current of an IPT system. The voltage includes an output voltage and/or an input voltage and the current includes an output current and/or an input current. The output voltage and the output current are measured at an output of the IPT system and the input voltage and the input current measured at an input of the IPT system. The apparatus includes a max efficiency module that determines a maximum efficiency for the IPT system. The max efficiency module uses parameters of the IPT system to iterate to a maximum efficiency. The apparatus includes an adjustment module that adjusts one or more parameters in the IPT system consistent with the maximum efficiency calculated by the max efficiency module.

  13. Users manual for AUTOMESH-2D: A program of automatic mesh generation for two-dimensional scattering analysis by the finite element method

    NASA Technical Reports Server (NTRS)

    Hua, Chongyu; Volakis, John L.

    1990-01-01

    AUTOMESH-2D is a computer program specifically designed as a preprocessor for the scattering analysis of two-dimensional bodies by the finite element method. This program was developed due to a need for reducing the effort required to define and check the geometry data, element topology, and material properties. There are six modules in the program: (1) Parameter Specification; (2) Data Input; (3) Node Generation; (4) Element Generation; (5) Mesh Smoothing; and (6) Data File Generation.

  14. An Initial Study of the Sensitivity of Aircraft Vortex Spacing System (AVOSS) Spacing Sensitivity to Weather and Configuration Input Parameters

    NASA Technical Reports Server (NTRS)

    Riddick, Stephen E.; Hinton, David A.

    2000-01-01

    A study has been performed on a computer code modeling an aircraft wake vortex spacing system during final approach. This code represents an initial engineering model of a system to calculate reduced approach separation criteria needed to increase airport productivity. This report evaluates model sensitivity toward various weather conditions (crosswind, crosswind variance, turbulent kinetic energy, and thermal gradient), code configurations (approach corridor option, and wake demise definition), and post-processing techniques (rounding of provided spacing values, and controller time variance).

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Katherine H.; Cutler, Dylan S.; Olis, Daniel R.

    REopt is a techno-economic decision support model used to optimize energy systems for buildings, campuses, communities, and microgrids. The primary application of the model is for optimizing the integration and operation of behind-the-meter energy assets. This report provides an overview of the model, including its capabilities and typical applications; inputs and outputs; economic calculations; technology descriptions; and model parameters, variables, and equations. The model is highly flexible, and is continually evolving to meet the needs of each analysis. Therefore, this report is not an exhaustive description of all capabilities, but rather a summary of the core components of the model.

  16. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

    Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example chi-square and Kolmogorov-Smirnov tests) applied to the residual values, and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, the residual value distribution being also normal, but for the tests on the real data the residual value distribution is often mixed or unknown. The residual values are found to depend mainly on two input parameters (the standard deviation and the maximum point-plane distance, both defining distance thresholds for assigning points to a segment), and the curvature of the surface affects the distributions most strongly. The results of the analysis helped to decide which parameter set is best for further modelling and provides the highest accuracy. With these results in mind, quasi-automatic modelling of planar (for example plateau-like) features became more reliable and often more accurate. These studies were carried out partly in the framework of the TMIS.ascrea project (Nr. 2001978) financed by the Austrian Research Promotion Agency (FFG); the contribution of ZsK was partly funded by Campus Hungary Internship TÁMOP-424B1.
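
    A sketch of the residual diagnostics described above is given below (Python/SciPy). The residuals are synthetic stand-ins for point-to-plane distances from a segmentation run; note that a KS test using moments estimated from the same sample is only approximate, and a Lilliefors-type correction would be stricter.

```python
import numpy as np
from scipy import stats

# Hypothetical plane-fit residuals (point-to-plane distances, metres).
rng = np.random.default_rng(0)
residuals = rng.normal(loc=0.0, scale=0.1, size=5000)  # e.g., 0.1 m LiDAR noise

# Kolmogorov-Smirnov test against a normal with the sample moments.
mu, sigma = residuals.mean(), residuals.std(ddof=1)
ks_stat, ks_p = stats.kstest(residuals, "norm", args=(mu, sigma))

# Chi-square goodness-of-fit on binned residuals vs. the normal model
# (illustrative; sparse tail bins should be merged in a careful analysis).
counts, edges = np.histogram(residuals, bins=30)
expected = len(residuals) * np.diff(stats.norm.cdf(edges, mu, sigma))
chi2_stat, chi2_p = stats.chisquare(counts, expected * counts.sum() / expected.sum())

print(f"KS: D={ks_stat:.4f}, p={ks_p:.3f};  chi-square: p={chi2_p:.3f}")
```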

  17. Uncertainty in BMP evaluation and optimization for watershed management

    NASA Astrophysics Data System (ADS)

    Chaubey, I.; Cibin, R.; Sudheer, K.; Her, Y.

    2012-12-01

    Use of computer simulation models has increased substantially to make watershed management decisions and to develop strategies for water quality improvements. These models are often used to evaluate potential benefits of various best management practices (BMPs) for reducing losses of pollutants from source areas into receiving waterbodies. Similarly, use of simulation models in optimizing selection and placement of best management practices under single (maximization of crop production or minimization of pollutant transport) and multiple objective functions has increased recently. One of the limitations of the currently available assessment and optimization approaches is that the BMP strategies are considered deterministic. Uncertainties in input data (e.g. measured precipitation, streamflow, sediment, nutrient and pesticide losses, land use) and model parameters may result in considerable uncertainty in watershed response under various BMP options. We have developed and evaluated options to include uncertainty in BMP evaluation and optimization for watershed management. We have also applied these methods to evaluate uncertainty in ecosystem services from mixed land use watersheds. In this presentation, we will discuss methods to quantify uncertainties in BMP assessment and optimization solutions due to uncertainties in model inputs and parameters. We have used a watershed model (the Soil and Water Assessment Tool, or SWAT) to simulate the hydrology and water quality in a mixed land use watershed located in the Midwest USA. The SWAT model was also used to represent various BMPs in the watershed needed to improve water quality. SWAT model parameters, land use change parameters, and climate change parameters were considered uncertain. It was observed that model parameters, land use and climate changes resulted in considerable uncertainties in BMP performance in reducing P, N, and sediment loads. In addition, climate change scenarios also affected uncertainties in SWAT-simulated crop yields. Considerable uncertainties in the net cost and the water quality improvements resulted from uncertainties in land use, climate change, and model parameter values.
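
    A minimal sketch of how such parameter uncertainty can be propagated to BMP performance follows. The response function, parameter names, and distributions are hypothetical stand-ins for actual SWAT runs, which would replace the one-line function below.

```python
import numpy as np

rng = np.random.default_rng(42)

def bmp_load_reduction(params):
    """Stand-in for a SWAT run: simulated pollutant-load reduction (%) for
    one BMP under one parameter set (a made-up response surface)."""
    cn, k_sed, p_scale = params
    return 40.0 - 0.2 * (cn - 75.0) + 5.0 * k_sed - 3.0 * p_scale

# Sample uncertain model, land use, and climate parameters instead of fixing them.
n = 10_000
samples = np.column_stack([
    rng.normal(75.0, 3.0, n),   # curve number (model parameter)
    rng.uniform(0.5, 1.5, n),   # sediment routing coefficient (model parameter)
    rng.normal(1.0, 0.25, n),   # precipitation scaling (climate scenario)
])
reductions = np.apply_along_axis(bmp_load_reduction, 1, samples)
lo, hi = np.percentile(reductions, [5, 95])
print(f"load reduction: median {np.median(reductions):.1f}%, "
      f"90% interval [{lo:.1f}%, {hi:.1f}%]")
```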

  18. Input design for identification of aircraft stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Hall, W. E., Jr.

    1975-01-01

    An approach for designing inputs to identify stability and control derivatives from flight test data is presented. This approach is based on finding inputs which provide the maximum possible accuracy of derivative estimates. Two techniques of input specification are implemented for this objective - a time domain technique and a frequency domain technique. The time domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency domain technique specifies the input frequency spectrum, and is best applied for tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These techniques are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. The constraints of aircraft response limits, such as on structural loads, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals which can provide more accurate parameter estimates than conventional inputs of the same total energy. Results obtained indicate that the approach has been brought to the point where it should be used on flight tests for further evaluation.

  19. Heterogeneity and scaling land-atmospheric water and energy fluxes in climate systems

    NASA Technical Reports Server (NTRS)

    Wood, Eric F.

    1993-01-01

    The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many climatology research experiments. The acquisition of high resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere processes and macroscale models. One essential research question is how to account for the small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, three modeling experiments were performed and are reviewed in the paper. The first is concerned with the aggregation of parameters and inputs for a terrestrial water and energy balance model. The second experiment analyzed the scaling behavior of hydrologic responses during rain events and between rain events. The third experiment compared the hydrologic responses from distributed models with a lumped model that uses spatially constant inputs and parameters. The results show that the patterns of small-scale variations can be represented statistically if the scale is larger than a representative elementary area scale, which appears to be about 2-3 times the correlation length of the process. For natural catchments this appears to be about 1-2 sq km. The results concerning distributed versus lumped representations are more complicated. When the processes are nonlinear, lumping results in biases; otherwise a one-dimensional model based on 'equivalent' parameters provides quite good results. Further research is needed to fully understand these conditions.

  20. Measurement of myocardial blood flow by cardiovascular magnetic resonance perfusion: comparison of distributed parameter and Fermi models with single and dual bolus.

    PubMed

    Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik

    2015-02-17

    Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of left ventricle signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels versus gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also performed analysis using both models of single bolus data obtained from five patients with coronary artery disease and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference was observed in these volunteers when applying distributed parameter-myocardial blood flow between single and dual bolus analysis. In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
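
    For readers unfamiliar with Fermi-constrained deconvolution, a minimal sketch follows. The parameterization, sampling grid, and every number are illustrative assumptions rather than the paper's protocol; flow is read off the fitted impulse response near t = 0.

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 1.0                      # s per dynamic frame (assumed)
t = np.arange(60) * dt

def fermi_response(t, f0, k, tau):
    """One common Fermi form for the tissue impulse response; R(0) ~ f0
    (i.e., the flow estimate) when k * tau is large."""
    return f0 / (1.0 + np.exp(k * (t - tau)))

def tissue_model(t, f0, k, tau, aif):
    # Tissue curve = arterial input function convolved with R(t).
    return np.convolve(aif, fermi_response(t, f0, k, tau))[: len(t)] * dt

# Toy AIF and noisy synthetic tissue curve, for illustration only.
aif = 5.0 * (t / 8.0) * np.exp(-t / 8.0)
tissue = tissue_model(t, 0.02, 0.4, 6.0, aif)
tissue += np.random.default_rng(1).normal(0.0, 0.002, t.size)

popt, _ = curve_fit(lambda t, f0, k, tau: tissue_model(t, f0, k, tau, aif),
                    t, tissue, p0=(0.01, 0.3, 5.0))
print(f"estimated flow ~ {popt[0] * 60:.2f} mL/min/mL")   # ~1.2 here
```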

  1. Partial and total actuator faults accommodation for input-affine nonlinear process plants.

    PubMed

    Mihankhah, Amin; Salmasi, Farzad R; Salahshoor, Karim

    2013-05-01

    In this paper, a new fault-tolerant control system is proposed for input-affine nonlinear plants based on the Model Reference Adaptive System (MRAS) structure. The proposed method has the capability to accommodate both partial and total actuator failures along with bounded external disturbances. In this methodology, the conventional MRAS control law is modified by augmenting two compensating terms. One of these terms is added to eliminate the nonlinear dynamics, while the other compensates for the disruptive effects of total actuator faults and external disturbances. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed method. Moreover, the control structure has good robustness against parameter variations. The performance of this scheme was evaluated using a CSTR system and the results were satisfactory. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Distributed Adaptive Containment Control for a Class of Nonlinear Multiagent Systems With Input Quantization.

    PubMed

    Wang, Chenliang; Wen, Changyun; Hu, Qinglei; Wang, Wei; Zhang, Xiuyu

    2018-06-01

    This paper is devoted to distributed adaptive containment control for a class of nonlinear multiagent systems with input quantization. By employing a matrix factorization and a novel matrix normalization technique, some assumptions involving control gain matrices in existing results are relaxed. By fusing the techniques of sliding mode control and backstepping control, a two-step design method is proposed to construct controllers and, with the aid of neural networks, all system nonlinearities are allowed to be unknown. Moreover, a linear time-varying model and a similarity transformation are introduced to circumvent the obstacle brought by quantization, and the controllers need no information about the quantizer parameters. The proposed scheme is able to ensure the boundedness of all closed-loop signals and steer the containment errors into an arbitrarily small residual set. The simulation results illustrate the effectiveness of the scheme.

  3. Improved ground hydrology calculations for global climate models (GCMs) - Soil water movement and evapotranspiration

    NASA Technical Reports Server (NTRS)

    Abramopoulos, F.; Rosenzweig, C.; Choudhury, B.

    1988-01-01

    A physically based ground hydrology model is presented that includes the processes of transpiration, evaporation from intercepted precipitation and dew, evaporation from bare soil, infiltration, soil water flow, and runoff. Data from the Goddard Institute for Space Studies GCM were used as inputs for off-line tests of the model in four 8 x 10 deg regions, including Brazil, Sahel, Sahara, and India. Soil and vegetation input parameters were calculated as area-weighted means over the 8 x 10 deg gridbox; the resulting hydrological quantities were compared to ground hydrology model calculations performed on the 1 x 1 deg cells which comprise the 8 x 10 deg gridbox. Results show that the compositing procedure worked well except in the Sahel, where low soil water levels and a heterogeneous land surface produce high variability in hydrological quantities; for that region, a resolution better than 8 x 10 deg is needed.
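
    As a concrete illustration of the compositing step, the snippet below takes an area-weighted mean of a parameter over hypothetical 1 x 1 deg cells; the values and area fractions are made up, and real gridbox weights would also account for the variation of cell area with latitude.

```python
import numpy as np

# Hypothetical 1 x 1 deg cell values inside one 8 x 10 deg gridbox.
albedo = np.array([0.18, 0.22, 0.25, 0.20])      # per-cell parameter
frac_area = np.array([0.30, 0.30, 0.25, 0.15])   # cell area fractions (sum to 1)

# Area-weighted mean used as the gridbox-scale "composite" parameter.
composite = float(np.sum(albedo * frac_area))
print(f"composite albedo = {composite:.3f}")
```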

  4. Financial analysis of biogas utilization : input cattle, pig feces and coffee waste in Karo, Indonesia

    NASA Astrophysics Data System (ADS)

    Ginting, N.; Zuhri, F.; Hasnudi; Mirwandhono, E.; Sembiring, I.; Daulay, A. H.

    2018-02-01

    The community's need for renewable energy is very urgent. In addition, efforts to protect the environment from waste make biogas technology attractive to apply. This study aims to deliver biogas technology at minimal cost while utilizing agricultural waste, namely coffee waste and livestock waste. The study was conducted from July to October 2016. The theoretical and empirical methods used in this study included data from official resources, a field survey of 16 biogas locations, focus group discussions, and interviews with stakeholders. Data were tabulated in Excel and then analysed with SAS. Parameters included production cost, production result, profit-loss analysis, revenue-cost ratio (R/C ratio), return on investment (ROI), net B/C, and IRR. The results of this research showed that biogas production with cow dung and coffee waste as input gave the best results.
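
    The financial indicators named above are standard; a sketch of how they might be computed is below, assuming the numpy-financial package, with all cash-flow figures invented for illustration (the paper's actual data are not reproduced).

```python
import numpy_financial as npf

# Hypothetical digester cash flows (million IDR): year-0 investment,
# then five years of net revenue.
cashflows = [-12.0, 3.5, 3.5, 3.5, 3.5, 3.5]

revenue, cost = 5.0, 1.5                     # annual revenue and operating cost
rc_ratio = revenue / cost                    # R/C ratio (> 1 means profitable)
roi = (revenue - cost) / abs(cashflows[0])   # simple annual ROI on investment
irr = npf.irr(cashflows)                     # internal rate of return
npv = npf.npv(0.10, cashflows)               # net present value at 10% discount

print(f"R/C={rc_ratio:.2f}  ROI={roi:.1%}  IRR={irr:.1%}  NPV={npv:.2f}")
```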

  5. 6 DOF synchronized control for spacecraft formation flying with input constraint and parameter uncertainties.

    PubMed

    Lv, Yueyong; Hu, Qinglei; Ma, Guangfu; Zhou, Jiakang

    2011-10-01

    This paper treats the problem of synchronized control of spacecraft formation flying (SFF) in the presence of input constraint and parameter uncertainties. More specifically, backstepping-based robust control is first developed for the full 6 DOF dynamic model of SFF with parameter uncertainties, in which the model consists of relative translation and attitude rotation. This controller is then redesigned to deal with the input constraint problem by incorporating a command filter, such that the generated control is implementable even under physical or operating constraints on the control input. The convergence of the proposed control algorithms is proved by the Lyapunov stability theorem. Compared with conventional methods, illustrative simulations of spacecraft formation flying are conducted to verify the effectiveness of the proposed approach in making the spacecraft track the desired attitude and position trajectories in a synchronized fashion, even in the presence of uncertainties, external disturbances and the control saturation constraint. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Performances estimation of a rotary traveling wave ultrasonic motor based on two-dimension analytical model.

    PubMed

    Ming, Y; Peiwen, Q

    2001-03-01

    The understanding of ultrasonic motor performance as a function of input parameters, such as the voltage amplitude, driving frequency, and the preload on the rotor, is key to many applications and to the control of ultrasonic motors. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With a contact model of a distributed spring-rigid body between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. Then the steady rotation speed and stall torque are deduced. With the MATLAB computational language and an iteration algorithm, we estimate the rotation speed and stall torque versus the input parameters. The same experiments are completed with an optoelectronic tachometer and stand weight. Both estimation and experiment results reveal the pattern of performance variation as a function of the input parameters.

  7. Factor analysis for delineation of organ structures, creation of in- and output functions, and standardization of multicenter kinetic modeling

    NASA Astrophysics Data System (ADS)

    Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.

    1999-05-01

    PET imaging can quantify metabolic processes in vivo; this requires the measurement of an input function, which is invasive and labor intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient, patient friendly, and allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for the definition of temporally changing structures in the field of view. FA has been proposed earlier, but the perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling. Output (tissue) functions can be simultaneously generated. The method is simple, requires no sophisticated operator interaction, and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
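
    A minimal sketch of image-based factor extraction is shown below, using non-negative matrix factorization as a stand-in for the paper's factor analysis; the dynamic data are synthetic, with two factors playing the roles of blood (input) and tissue (output) functions.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic dynamic study: frames x voxels matrix of non-negative activity.
rng = np.random.default_rng(0)
n_frames, n_voxels = 20, 4000
blood_tac = np.exp(-np.arange(n_frames) / 4.0)          # fast-clearing factor
tissue_tac = 1.0 - np.exp(-np.arange(n_frames) / 6.0)   # accumulating factor
mix = rng.random((2, n_voxels))                         # spatial abundances
data = np.outer(blood_tac, mix[0]) + np.outer(tissue_tac, mix[1])
data += rng.normal(0.0, 0.01, data.shape).clip(min=0.0)

# Two factors: temporal profiles give the input/output TACs, spatial
# weights delineate the corresponding structures.
model = NMF(n_components=2, init="nndsvd", max_iter=500)
tacs = model.fit_transform(data)      # (frames x 2) time-activity curves
images = model.components_            # (2 x voxels) factor images
print(tacs.shape, images.shape)
```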

  8. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, B.; Wood, R.T.

    1997-04-22

    A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust the internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.

  9. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, Brian; Wood, Richard T.

    1997-01-01

    A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust the internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.
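
    The two records above describe the same method. A sketch of its core idea — building a neural-network training set by sweeping a mathematical model's input parameters, then using the trained network to invert measured spectral features back to model input parameters — follows; the toy model, parameter names, and ranges are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def model_spectrum(params):
    """Stand-in mathematical model: maps physical-condition parameters to
    measurable spectral features (here, a resonance position and width)."""
    stiffness, damping = params
    return np.array([10.0 * np.sqrt(stiffness), 0.5 * damping * np.sqrt(stiffness)])

# Training set generated from the model over the feasible parameter ranges.
X_params = rng.uniform([0.5, 0.05], [2.0, 0.5], size=(2000, 2))
Y_features = np.array([model_spectrum(p) for p in X_params])

# Train the network to invert the map: spectral features -> input parameters.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(Y_features, X_params)

# Monitoring: features extracted from a "measured" spectrum go in,
# the corresponding physical-condition parameters come out.
measured = model_spectrum(np.array([1.3, 0.2]))
print(net.predict(measured.reshape(1, -1)))   # ~ [1.3, 0.2]
```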

  10. Meter circuit for tuning RF amplifiers

    NASA Technical Reports Server (NTRS)

    Longthorne, J. E.

    1973-01-01

    Circuit computes and indicates efficiency of RF amplifier as inputs and other parameters are varied. Voltage drop across internal resistance of ammeter is amplified by operational amplifier and applied to one multiplier input. Other input is obtained through two resistors from positive terminal of power supply.

  11. Ring rolling process simulation for microstructure optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the ring formed. One of the principal microstructural properties that most influences the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR has been developed in SFTC DEFORM V11, taking into account also the microstructural development of the material used (the nickel superalloy Waspaloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed in order to find the combination of process parameters which minimizes the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, by using the exact values of the output parameters at the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. Then, an optimization procedure based on Genetic Algorithms has been applied. Finally, the combination of input parameters minimizing the average grain size has been found.
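
    A compact sketch of the RSM-plus-global-optimization loop is given below; the design points, responses, and bounds are hypothetical, and SciPy's differential evolution stands in for the paper's genetic algorithm.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical FE results at the design points: inputs are mandrel feed rate
# (mm/s) and driver-roll speed (rad/s); output is mean grain size (um).
X = np.array([[0.5, 2.0], [0.5, 4.0], [1.0, 2.0], [1.0, 4.0], [1.5, 2.0],
              [1.5, 4.0], [0.5, 3.0], [1.0, 3.0], [1.5, 3.0]])
y = np.array([42.0, 38.5, 39.0, 35.2, 41.0, 36.8, 39.8, 36.0, 38.2])

# Quadratic response surface fitted to the simulated control points.
poly = PolynomialFeatures(degree=2)
rsm = LinearRegression().fit(poly.fit_transform(X), y)

# Global search over the design space for the minimum predicted grain size.
result = differential_evolution(
    lambda p: rsm.predict(poly.transform(p.reshape(1, -1)))[0],
    bounds=[(0.5, 1.5), (2.0, 4.0)], seed=0)
print(f"optimum (feed, speed) = {result.x}, grain size ~ {result.fun:.1f} um")
```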

  12. VizieR Online Data Catalog: Planetary atmosphere radiative transport code (Garcia Munoz+ 2015)

    NASA Astrophysics Data System (ADS)

    Garcia Munoz, A.; Mills, F. P.

    2014-08-01

    Files are: * readme.txt * Input files: INPUT_hazeL.txt, INPUT_L13.txt, INPUT_L60.txt; they contain explanations of the input parameters. Copy INPUT_XXXX.txt into INPUT.dat to execute some of the examples described in the reference. * Files with scattering matrix properties: phF_hazeL.txt, phF_L13.txt, phF_L60.txt * Script for compilation in GFortran (myscript) (10 data files).

  13. Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.

    PubMed

    Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay

    2015-12-01

    In this paper, we propose a new blind learning algorithm, namely, the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence capability can be achieved: the output symbol error rate (SER) is always less than the input SER, provided the input SER is below a threshold. Then, the BG soft-switching technique is employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter is used to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as stochastic quadratic distance and the dual mode constant modulus algorithm, in terms of both convergence performance and SER performance for nonlinear equalization.

  14. The efficiency of ultrasonic oscillations transfer into the load

    NASA Astrophysics Data System (ADS)

    Abramov, O. V.; Abramov, V. O.; Mullakaev, M. S.; Artem'ev, V. V.

    2009-11-01

    The results of ultrasonic action on substances are presented. The correlation between the electrical parameters of the ultrasonic equipment and the acoustic characteristics of the ultrasonic field in the treated medium, the efficiency of the ultrasonic technological facility, and the peculiarities of the oscillations introduced into the load under cavitation development are examined. The correlation between the acoustic power of the oscillations securing the needed level of cavitation and the desired technological effect, and the electrical parameters of the ultrasonic facility, first of all the power, is established. The peculiarities of cavitation development in liquids with different physical-chemical properties (including molten low-melting metals) have been studied, and the acoustic power of the oscillations introduced into the load has also been estimated as the electric power input to the generator is varied.

  15. COSP for Windows: Strategies for Rapid Analyses of Cyclic Oxidation Behavior

    NASA Technical Reports Server (NTRS)

    Smialek, James L.; Auping, Judith V.

    2002-01-01

    COSP is a publicly available computer program that models the cyclic oxidation weight gain and spallation process. Inputs to the model include the selection of an oxidation growth law and a spalling geometry, plus oxide phase, growth rate, spall constant, and cycle duration parameters. Output includes weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. The present version is Windows based and can accordingly be operated conveniently while other applications remain open for importing experimental weight change data, storing model output data, or plotting model curves. Point-and-click operating features include multiple drop-down menus for input parameters, data importing, and quick, on-screen plots showing one selection of the six output parameters for up to 10 models. A run summary text lists various characteristic parameters that are helpful in describing cyclic behavior, such as the maximum weight change, the number of cycles to reach the maximum weight gain or zero weight change, the ratio of these, and the final rate of weight loss. The program includes save and print options as well as a help file. Families of model curves readily show the sensitivity to various input parameters. The cyclic behaviors of nickel aluminide (NiAl) and a complex superalloy are shown to be properly fitted by model curves. However, caution is always advised regarding the uniqueness claimed for any specific set of input parameters.
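
    The cycle-by-cycle bookkeeping such a model performs can be sketched as below; this toy update is not COSP's actual equation set, and the rate constant and spall fraction are invented.

```python
import numpy as np

def cyclic_oxidation(kp, q0, n_cycles, dt=1.0):
    """Schematic cyclic-oxidation loop: parabolic scale growth during each
    hot cycle, then a constant fraction q0 of the oxide spalls on cooldown.
    Returns the net specimen weight-change history (arbitrary units)."""
    w_oxide, w_lost, history = 0.0, 0.0, []
    for _ in range(n_cycles):
        w_oxide = np.sqrt(w_oxide**2 + kp * dt)   # parabolic growth law
        spalled = q0 * w_oxide                    # spallation on cooldown
        w_oxide -= spalled
        w_lost += spalled                         # material lost with the oxide
        history.append(w_oxide - w_lost)          # net weight change
    return np.array(history)

# Reproduces the characteristic rise to a maximum weight gain followed by
# a steady terminal rate of weight loss.
curve = cyclic_oxidation(kp=0.01, q0=0.05, n_cycles=300)
print(f"max weight change {curve.max():.3f} at cycle {curve.argmax() + 1}; "
      f"final loss rate {curve[-2] - curve[-1]:.4f} per cycle")
```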

  16. Development of advanced techniques for rotorcraft state estimation and parameter identification

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.

    1980-01-01

    An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error corrupted data. Gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and estimates for the variance of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples to both flight and simulated data cases.

  17. Design of virtual three-dimensional instruments for sound control

    NASA Astrophysics Data System (ADS)

    Mulder, Axel Gezienus Elith

    An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real-time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors, an SGI Onyx and by extending a real- time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co- articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel. More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object parameters. While the virtual instruments can be adapted to exploit many manipulation gestures, further work is required to reduce the need for technical expertise to realize adaptations. Better virtual object simulation techniques and faster sensor data acquisition will improve the performance of virtual instruments. The design environment which has been developed should prove useful as a (musical) instrument prototyping tool and as a tool for researching the optimal adaptation of machines to humans.

  18. Emissions-critical charge cooling using an organic rankine cycle

    DOEpatents

    Ernst, Timothy C.; Nelson, Christopher R.

    2014-07-15

    The disclosure provides a system including a Rankine power cycle cooling subsystem providing emissions-critical charge cooling of an input charge flow. The system includes a boiler fluidly coupled to the input charge flow, an energy conversion device fluidly coupled to the boiler, a condenser fluidly coupled to the energy conversion device, a pump fluidly coupled to the condenser and the boiler, an adjuster that adjusts at least one parameter of the Rankine power cycle subsystem to change a temperature of the input charge exiting the boiler, and a sensor adapted to sense a temperature characteristic of the vaporized input charge. The system includes a controller that can determine a target temperature of the input charge sufficient to meet or exceed predetermined target emissions and cause the adjuster to adjust at least one parameter of the Rankine power cycle to achieve the predetermined target emissions.

  19. Master control data handling program uses automatic data input

    NASA Technical Reports Server (NTRS)

    Alliston, W.; Daniel, J.

    1967-01-01

    General purpose digital computer program is applicable for use with analysis programs that require basic data and calculated parameters as input. It is designed to automate input data preparation for flight control computer programs, but it is general enough to permit application in other areas.

  20. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    NASA Technical Reports Server (NTRS)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight components, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.

  1. Rapid Risk Evaluation (ER2) Using MS Excel Spreadsheet: a Case Study of Fredericton (new Brunswick, Canada)

    NASA Astrophysics Data System (ADS)

    McGrath, H.; Stefanakis, E.; Nastev, M.

    2016-06-01

    Conventional knowledge of the flood hazard alone (extent and frequency) is not sufficient for informed decision-making. The public safety community needs tools and guidance to adequately undertake flood hazard risk assessment in order to estimate respective damages and social and economic losses. While many complex computer models have been developed for flood risk assessment, they require highly trained personnel to prepare the necessary input (hazard, inventory of the built environment, and vulnerabilities) and analyze model outputs. As such, tools which utilize open-source software or are built within popular desktop software programs are appealing alternatives. The recently developed Rapid Risk Evaluation (ER2) application runs scenario based loss assessment analyses in a Microsoft Excel spreadsheet. User input is limited to a handful of intuitive drop-down menus utilized to describe the building type, age, occupancy and the expected water level. In anticipation of local depth damage curves and other needed vulnerability parameters, those from the U.S. FEMA's Hazus-Flood software have been imported and temporarily accessed in conjunction with user input to display exposure and estimated economic losses related to the structure and the content of the building. Building types and occupancies representative of those most exposed to flooding in Fredericton (New Brunswick) were introduced and test flood scenarios were run. The algorithm was successfully validated against results from the Hazus-Flood model for the same building types and flood depths.
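
    The core spreadsheet lookup reduces to interpolating a depth-damage curve at the user-supplied water level; a sketch follows, with curve values invented rather than taken from Hazus-Flood.

```python
import numpy as np

# Hypothetical depth-damage curve for one building type and occupancy:
# depth in metres above the first floor vs. damage as % of replacement value.
depths = np.array([0.0, 0.3, 0.6, 1.0, 1.5, 2.0])
damage_pct = np.array([5.0, 12.0, 20.0, 28.0, 35.0, 40.0])

def structure_loss(water_depth_m, replacement_value):
    """Linear interpolation of the curve at the expected water level."""
    pct = np.interp(water_depth_m, depths, damage_pct)
    return replacement_value * pct / 100.0

# Example: estimated loss for 0.8 m of water in a $250,000 building.
print(f"loss = ${structure_loss(0.8, 250_000):,.0f}")
```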

  2. Input Uncertainty and its Implications on Parameter Assessment in Hydrologic and Hydroclimatic Modelling Studies

    NASA Astrophysics Data System (ADS)

    Chowdhury, S.; Sharma, A.

    2005-12-01

    Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that simulate the observed input variable instead of the true measurement, resulting in a biased representation of the underlying system dynamics as well as an increase in both the bias and the predictive uncertainty of simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different from that in the past. Possible examples include situations where the accuracy of catchment-averaged rainfall has increased substantially due to an increase in rain-gauge density, or the accuracy of climatic observations (such as sea surface temperatures) has increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX, [Cook, 1994]), operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts by generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs. Once several such realisations have been drawn, one is able to formulate an empirical relationship between the parameter values and the level of additive noise present. SIMEX is based on the theory that the trend in the alternate parameters can be extrapolated back to the notional error-free zone. We illustrate the utility of SIMEX in a synthetic rainfall-runoff modelling scenario, and in an application studying the dependence of uncertain distributed sea surface temperature anomalies on an indicator of the El Nino Southern Oscillation, the Southern Oscillation Index (SOI). The errors in rainfall data and their effect are explored using the Sacramento rainfall-runoff model. The rainfall uncertainty is assumed to be multiplicative and temporally invariant. The model used to relate the sea surface temperature anomalies (SSTA) to the SOI is assumed to be of a linear form. The nature of uncertainty in the SSTA is additive and varies with time. The SIMEX framework allows assessment of the relationship between the error-free inputs and the response. Cook, J.R., Stefanski, L.A., Simulation-Extrapolation Estimation in Parametric Measurement Error Models, Journal of the American Statistical Association, 89 (428), 1314-1328, 1994.
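
    A self-contained SIMEX illustration on a linear errors-in-variables problem is given below (all data synthetic): noise is added in increasing multiples of the known error variance, the trend in the fitted slope is modeled, and extrapolation back to lambda = -1 recovers the attenuated slope.

```python
import numpy as np

rng = np.random.default_rng(7)

# True relation y = 2x + 1, but x is observed with known error variance.
n, sigma_u = 500, 0.5
x_true = rng.normal(0.0, 1.0, n)
y = 2.0 * x_true + 1.0 + rng.normal(0.0, 0.2, n)
x_obs = x_true + rng.normal(0.0, sigma_u, n)    # attenuates the naive slope

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # multiples of added variance
slopes = []
for lam in lambdas:
    fits = [np.polyfit(x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y, 1)[0]
            for _ in range(50)]                 # average over noise realizations
    slopes.append(np.mean(fits))

# Model slope vs. lambda, then extrapolate to lambda = -1 (error-free point).
trend = np.polyfit(lambdas, slopes, 2)
print(f"naive slope {slopes[0]:.3f}, SIMEX estimate {np.polyval(trend, -1.0):.3f}")
```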

  3. Program for User-Friendly Management of Input and Output Data Sets

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard

    2003-01-01

    A computer program manages large, hierarchical sets of input and output (I/O) parameters (typically, sequences of alphanumeric data) involved in computational simulations in a variety of technological disciplines. This program represents sets of parameters as structures coded in object-oriented but otherwise standard American National Standards Institute C language. Each structure contains a group of I/O parameters that make sense as a unit in the simulation program with which this program is used. The addition of options and/or elements to sets of parameters amounts to the addition of new elements to data structures. By association of child data generated in response to a particular user input, a hierarchical ordering of input parameters can be achieved. Associated with child data structures are the creation and description mechanisms within the parent data structures. Child data structures can spawn further child data structures. In this program, the creation and representation of a sequence of data structures is effected by one line of code that looks for children of a sequence of structures until there are no more children to be found. A linked list of structures is created dynamically and is completely represented in the data structures themselves. Such hierarchical data presentation can guide users through otherwise complex setup procedures and it can be integrated within a variety of graphical representations.
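
    A minimal sketch of the hierarchical parameter tree is shown below, in Python rather than the program's ANSI C purely for brevity; all names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Param:
    """One I/O parameter; particular values can spawn child groups, which is
    how user input drives the hierarchical ordering described above."""
    name: str
    value: str
    children: List["ParamGroup"] = field(default_factory=list)

@dataclass
class ParamGroup:
    """A group of parameters that make sense as a unit in the simulation."""
    title: str
    params: List[Param] = field(default_factory=list)

def walk(group: ParamGroup, depth: int = 0):
    """Traverse the tree until no more children are found."""
    print("  " * depth + group.title)
    for p in group.params:
        print("  " * (depth + 1) + f"{p.name} = {p.value}")
        for child in p.children:
            walk(child, depth + 2)

root = ParamGroup("device", [Param("material", "GaAs",
                 [ParamGroup("GaAs options", [Param("doping", "1e18")])])])
walk(root)
```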

  4. Computing the structural influence matrix for biological systems.

    PubMed

    Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco

    2016-06-01

    We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.

  5. The Lesioned Spinal Cord Is a “New” Spinal Cord: Evidence from Functional Changes after Spinal Injury in Lamprey

    PubMed Central

    Parker, David

    2017-01-01

    Finding a treatment for spinal cord injury (SCI) focuses on reconnecting the spinal cord by promoting regeneration across the lesion site. However, while regeneration is necessary for recovery, on its own it may not be sufficient. This presumably reflects the requirement for regenerated inputs to interact appropriately with the spinal cord, making sub-lesion network properties an additional influence on recovery. This review summarizes work we have done in the lamprey, a model system for SCI research. We have compared locomotor behavior (swimming) and the properties of descending inputs, locomotor networks, and sensory inputs in unlesioned animals and animals that have received complete spinal cord lesions. In the majority (∼90%) of animals swimming parameters after lesioning recovered to match those in unlesioned animals. Synaptic inputs from individual regenerated axons also matched the properties in unlesioned animals, although this was associated with changes in release parameters. This suggests against any compensation at these synapses for the reduced descending drive that will occur given that regeneration is always incomplete. Compensation instead seems to occur through diverse changes in cellular and synaptic properties in locomotor networks and proprioceptive systems below, but also above, the lesion site. Recovery of locomotor performance is thus not simply the reconnection of the two sides of the spinal cord, but reflects a distributed and varied range of spinal cord changes. While locomotor network changes are insufficient on their own for recovery, they may facilitate locomotor outputs by compensating for the reduction in descending drive. Potentiated sensory feedback may in turn be a necessary adaptation that monitors and adjusts the output from the “new” locomotor network. Rather than a single aspect, changes in different components of the motor system and their interactions may be needed after SCI. If these are general features, and where comparisons with mammalian systems can be made effects seem to be conserved, improving functional recovery in higher vertebrates will require interventions that generate the optimal spinal cord conditions conducive to recovery. The analyses needed to identify these conditions are difficult in the mammalian spinal cord, but lower vertebrate systems should help to identify the principles of the optimal spinal cord response to injury. PMID:29163065

  6. On-chip skin color detection using a triple-well CMOS process

    NASA Astrophysics Data System (ADS)

    Boussaid, Farid; Chai, Douglas; Bouzerdoum, Abdesselam

    2004-03-01

    In this paper, a current-mode VLSI architecture enabling on-read-out skin detection without the need for any on-chip memory elements is proposed. An important feature of the proposed architecture is that it removes the need for demosaicing. Color separation is achieved using the strong wavelength dependence of the absorption coefficient in silicon. This wavelength dependence causes very shallow absorption of blue light and enables red light to penetrate deeply into silicon. A triple-well process, allowing a P-well to be placed inside an N-well, is chosen to fabricate three vertically integrated photodiodes acting as the RGB color detector for each pixel. Pixels of an input RGB image are classified as skin or non-skin pixels using a statistical skin color model, chosen to offer an acceptable trade-off between skin detection performance and implementation complexity. A single processing unit is used to classify all pixels of the input RGB image. This results in reduced mismatch and an increased pixel fill-factor. Furthermore, the proposed current-mode architecture is programmable, allowing external control of all classifier parameters to compensate for mismatch and changing lighting conditions.

  7. Prediction of Welded Joint Strength in Plasma Arc Welding: A Comparative Study Using Back-Propagation and Radial Basis Neural Networks

    NASA Astrophysics Data System (ADS)

    Srinivas, Kadivendi; Vundavilli, Pandu R.; Manzoor Hussain, M.; Saiteja, M.

    2016-09-01

    Welding input parameters such as current, gas flow rate and torch angle play a significant role in determining the mechanical properties of the weld joint. Traditionally, it is necessary to determine the weld input parameters for every new welded product to obtain a quality weld joint, which is time consuming. In the present work, the effect of plasma arc welding parameters on mild steel was studied using a neural network approach. To obtain a response equation that governs the input-output relationships, conventional regression analysis was also performed. The experimental data set was constructed based on a Taguchi design, and the training data required for the neural networks were randomly generated by varying the input variables within their respective ranges. The responses were calculated for each combination of input variables by using the response equations obtained through the conventional regression analysis. The performances of a Levenberg-Marquardt back-propagation neural network and a radial basis neural network (RBNN) were compared on various randomly generated test cases, which are different from the training cases. Interestingly, for these test cases the RBNN analysis gave improved results compared to the feed-forward back-propagation neural network analysis. Also, the RBNN analysis showed a pattern of increasing performance as the data points moved away from the initial input values.

  8. Multi-response optimization of process parameters for GTAW process in dissimilar welding of Incoloy 800HT and P91 steel by using grey relational analysis

    NASA Astrophysics Data System (ADS)

    vellaichamy, Lakshmanan; Paulraj, Sathiya

    2018-02-01

    Dissimilar welding of Incoloy 800HT and P91 steel was performed using the gas tungsten arc welding (GTAW) process. These materials are used in nuclear power plant and aerospace applications because Incoloy 800HT possesses good corrosion and oxidation resistance while P91 possesses high-temperature strength and creep resistance. This work discusses multi-objective optimization using grey relational analysis (GRA) with 9CrMoV-N filler material. The experiments were conducted using an L9 orthogonal array. The input parameters are current, voltage, and speed; the output responses are tensile strength, hardness, and toughness. GRA was used to optimize the input parameters against the multiple output responses. The optimal parameter combination was determined as A2B1C1, i.e., welding current of 120 A, voltage of 16 V, and welding speed of 0.94 mm/s. The mechanical properties for the best and least grey relational grades were validated by metallurgical characterization.
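
    For reference, the grey relational grade computation applied here can be sketched as follows; the nine response rows are hypothetical stand-ins for the measured L9 results, and zeta = 0.5 is the customary distinguishing coefficient.

```python
import numpy as np

# Hypothetical L9 responses (rows = runs): tensile strength (MPa),
# hardness (HV), toughness (J) - all larger-the-better.
Y = np.array([[480., 210., 55.], [495., 215., 60.], [470., 205., 52.],
              [505., 220., 64.], [488., 212., 58.], [476., 208., 54.],
              [500., 218., 62.], [492., 214., 57.], [484., 211., 56.]])

# Larger-the-better normalization to [0, 1] per response.
norm = (Y - Y.min(axis=0)) / (Y.max(axis=0) - Y.min(axis=0))

# Grey relational coefficients against the ideal sequence (all ones).
delta = np.abs(1.0 - norm)
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

grade = grc.mean(axis=1)          # grey relational grade per run
print(f"best run: #{grade.argmax() + 1}, grade {grade.max():.3f}")
```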

  9. Calibration of discrete element model parameters: soybeans

    NASA Astrophysics Data System (ADS)

    Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal

    2018-05-01

    Discrete element method (DEM) simulations are broadly used to gain insight into the flow characteristics of granular materials in complex particulate systems. The DEM input parameters for a model are a critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine DEM input parameters for the Hertz-Mindlin model using soybeans as the granular material. To achieve this aim, a widely accepted calibration approach was used with a standard box-type apparatus. Further, qualitative and quantitative findings such as the particle profile, the height of kernels retained at the acrylic wall, and the angle of repose from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes (a) material properties: particle geometric mean diameter (6.24 mm), spherical shape, and particle density (1220 kg m^{-3}); and (b) interaction parameters, particle-particle: coefficient of restitution (0.17), coefficient of static friction (0.26), coefficient of rolling friction (0.08); particle-wall: coefficient of restitution (0.35), coefficient of static friction (0.30), coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.

  10. AIRCRAFT REACTOR CONTROL SYSTEM APPLICABLE TO TURBOJET AND TURBOPROP POWER PLANTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorker, G.E.

    1955-07-19

    Control systems proposed for direct cycle nuclear powered aircraft commonly involve control of engine speed, nuclear energy input, and chemical energy input. A system is proposed in which these parameters are controlled by controlling the total energy input, the ratio of nuclear to chemical energy input, and the engine speed. The system is equally applicable to turbojet or turboprop applications. (auth)

  11. Automated Knowledge Discovery From Simulators

    NASA Technical Reports Server (NTRS)

    Burl, Michael; DeCoste, Dennis; Mazzoni, Dominic; Scharenbroich, Lucas; Enke, Brian; Merline, William

    2007-01-01

    A computational method, SimLearn, has been devised to facilitate efficient knowledge discovery from simulators. Simulators are complex computer programs used in science and engineering to model diverse phenomena such as fluid flow, gravitational interactions, coupled mechanical systems, and nuclear, chemical, and biological processes. SimLearn uses active-learning techniques to efficiently address the "landscape characterization problem." In particular, SimLearn tries to determine which regions in "input space" lead to a given output from the simulator, where "input space" refers to an abstraction of all the variables going into the simulator, e.g., initial conditions, parameters, and interaction equations. Landscape characterization can be viewed as an attempt to invert the forward mapping of the simulator and recover the inputs that produce a particular output. Given that a single simulation run can take days or weeks to complete even on a large computing cluster, SimLearn attempts to reduce costs by reducing the number of simulations needed to effect discoveries. Unlike conventional data-mining methods that are applied to static predefined datasets, SimLearn involves an iterative process in which a most informative dataset is constructed dynamically by using the simulator as an oracle. On each iteration, the algorithm models the knowledge it has gained through previous simulation trials and then chooses which simulation trials to run next. Running these trials through the simulator produces new data in the form of input-output pairs. The overall process is embodied in an algorithm that combines support vector machines (SVMs) with active learning. SVMs use learning from examples (the examples are the input-output pairs generated by running the simulator) and a principle called maximum margin to derive predictors that generalize well to new inputs. In SimLearn, the SVM plays the role of modeling the knowledge that has been gained through previous simulation trials. Active learning is used to determine which new input points would be most informative if their output were known. The selected input points are run through the simulator to generate new information that can be used to refine the SVM. The process is then repeated. SimLearn carefully balances exploration (semi-randomly searching around the input space) versus exploitation (using the current state of knowledge to conduct a tightly focused search). During each iteration, SimLearn uses not one, but an ensemble of SVMs. Each SVM in the ensemble is characterized by different hyper-parameters that control various aspects of the learned predictor - for example, whether the predictor is constrained to be very smooth (nearby points in input space lead to similar output predictions) or whether the predictor is allowed to be "bumpy." The various SVMs will have different preferences about which input points they would like to run through the simulator next. SimLearn includes a formal mechanism for balancing the ensemble SVM preferences so that a single choice can be made for the next set of trials.
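
    A minimal sketch of this loop, with an ensemble of scikit-learn SVMs standing in for SimLearn's internals and a cheap analytic function standing in for the expensive simulator (everything below is an illustrative assumption, not the actual SimLearn code):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def simulator(X):
    """Cheap stand-in for an expensive simulator: output 1 inside a ring."""
    r = np.linalg.norm(X, axis=1)
    return ((r > 0.5) & (r < 1.0)).astype(int)

# Seed trials, then iterate: fit the ensemble, query where members disagree.
X = rng.uniform(-1.5, 1.5, size=(30, 2))
y = simulator(X)
for _ in range(10):
    # Ensemble members span smooth-to-bumpy predictors via the kernel width.
    ensemble = [SVC(gamma=g).fit(X, y) for g in (0.5, 2.0, 8.0)]
    cand = rng.uniform(-1.5, 1.5, size=(500, 2))
    votes = np.array([m.predict(cand) for m in ensemble])
    disagreement = votes.std(axis=0)            # high where members conflict
    pick = cand[np.argsort(disagreement)[-5:]]  # next batch of trials
    X = np.vstack([X, pick])
    y = np.append(y, simulator(pick))           # "oracle" call per trial
print("trials run:", len(y), "| target-region hits:", int(y.sum()))
```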

  12. How Much Input Do You Need to Learn the Most Frequent 9,000 Words?

    ERIC Educational Resources Information Center

    Nation, Paul

    2014-01-01

    This study looks at how much input is needed to gain enough repetition of the 1st 9,000 words of English for learning to occur. It uses corpora of various sizes and composition to see how many tokens of input would be needed to gain at least twelve repetitions and to meet most of the words at eight of the nine 1000 word family levels. Corpus sizes…

  13. The significance of spatial variability of rainfall on streamflow: A synthetic analysis at the Upper Lee catchment, UK

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, Ilias; McIntyre, Neil; Wheater, Howard

    2017-04-01

    Rainfall, one of the main inputs in hydrological modeling, is a highly heterogeneous process over a wide range of spatial scales, and hence ignoring spatial rainfall information can affect the simulated streamflow. Calibration of hydrological model parameters is rarely a straightforward task due to parameter equifinality and the tendency of parameters to compensate for other uncertainties, i.e. structural and forcing input. Here, we analyse the significance of the spatial variability of rainfall on streamflow as a function of catchment scale and type, and antecedent conditions, using the continuous-time, semi-distributed PDM hydrological model at the Upper Lee catchment, UK. The impact of catchment scale and type is assessed using 11 nested catchments ranging in scale from 25 to 1040 km2, and further assessed by artificially changing the catchment characteristics and translating these to model parameters, with uncertainty, using model regionalisation. Synthetic rainfall events are introduced to directly relate the change in simulated streamflow to the spatial variability of rainfall. Overall, we conclude that the antecedent catchment wetness and the catchment type play an important role in controlling the significance of the spatial distribution of rainfall on streamflow. Results show a relationship between hydrograph characteristics (streamflow peak and volume) and the degree of spatial variability of rainfall for the impermeable catchments under dry antecedent conditions, although this decreases at larger scales; this sensitivity is significantly reduced under wet antecedent conditions. Although there is an indication that the impact of spatial rainfall on streamflow varies as a function of catchment scale, the variability of antecedent conditions between the synthetic catchments seems to mask this significance. Finally, hydrograph responses to different spatial patterns in rainfall depend on the assumptions used for model parameter estimation and also on the spatial variation in parameters, indicating the need for an uncertainty framework in such investigations.

  14. Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana

    2017-12-01

    Fabric reinforced polymeric composites are high performance materials with a rather complex fabric geometry. Modelling this type of material is therefore a cumbersome task, especially when efficient use is targeted. One of the most important issues in the design process is the optimisation of the individual laminae and of the laminated structure as a whole. To this end, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex optimisation process. The input parameters involved in this work include the widths and heights of the tows and the laminate stacking sequence, which are discrete variables, and the gaps between adjacent tows and the height of the neat matrix, which are continuous variables. This work is one of the first attempts at using a Genetic Algorithm (GA) to optimise the geometrical parameters of satin reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software package called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material that is able to withstand a given set of external, in-plane loads. The optimisation process has been performed using a fitness function that can analyse and compare the mechanical behaviour of different fabric reinforced composites, the results being correlated with the ultimate strains, which demonstrates the efficiency of the composite structure.
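
    A compact sketch of a genetic algorithm over such a mixed discrete/continuous design vector (the encoding, surrogate fitness, and operator rates below are invented for illustration; they are not SOMGA's actual operators):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical mixed design vector: tow width (discrete set), stacking-sequence
# index (discrete), gap between tows and neat-matrix height (continuous).
TOW_WIDTHS = [1.0, 1.5, 2.0]   # mm, assumed
STACKINGS = [0, 1, 2, 3]       # indices into assumed stacking sequences

def fitness(ind):
    w, s, gap, h = ind
    # Invented smooth surrogate standing in for the laminate analysis.
    return -(0.3 * w + 0.1 * s + 2.0 * gap**2 + (h - 0.4) ** 2)

def random_ind():
    return [float(rng.choice(TOW_WIDTHS)), int(rng.choice(STACKINGS)),
            rng.uniform(0.05, 0.5), rng.uniform(0.1, 1.0)]

def mutate(ind):
    out = list(ind)
    if rng.random() < 0.3:                     # resample discrete genes
        out[0] = float(rng.choice(TOW_WIDTHS))
    if rng.random() < 0.3:
        out[1] = int(rng.choice(STACKINGS))
    out[2] = float(np.clip(out[2] + rng.normal(0, 0.05), 0.05, 0.5))
    out[3] = float(np.clip(out[3] + rng.normal(0, 0.05), 0.1, 1.0))
    return out

pop = [random_ind() for _ in range(40)]
for _ in range(50):                            # elitist GA, mutation only
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(elite[rng.integers(len(elite))]) for _ in range(30)]
best = max(pop, key=fitness)
print("best design:", [round(g, 3) for g in best], "fitness:", round(fitness(best), 4))
```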

  15. Study of eigenfrequencies with the help of Prony's method

    NASA Astrophysics Data System (ADS)

    Drobakhin, O. O.; Olevskyi, O. V.; Olevskyi, V. I.

    2017-10-01

    Eigenfrequencies can be crucial in the design of a structure. They define many parameters that determine the limit parameters of the structure, and exceeding these values can lead to structural failure. This is especially important in the design of structures which support heavy equipment or are subjected to airflow forces. One of the most effective ways to acquire the frequency values is computer-based numerical simulation, but the existing methods do not allow the whole range of needed parameters to be acquired. Prony's method is known to be highly effective for the investigation of dynamic processes, so it is rational to adapt it for such investigations. Prony's method has an advantage over other numerical schemes in that it can process not only the results of numerical simulation but also real experimental data. The research was carried out on a computer model of a steel plate. The input data were obtained using the Dassault Systèmes SolidWorks package with the Simulation add-on, and were then investigated with Prony's method. The results of the numerical experiment show that Prony's method can be used to investigate mechanical eigenfrequencies with good accuracy. Its output contains not only the frequency values themselves but also the amplitudes, initial phases and decay factors of any given mode of oscillation, which can also be used in engineering.
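
    A compact polynomial form of Prony's method can be sketched in a few lines: estimate linear-prediction coefficients by least squares, take the roots of the prediction polynomial to get frequencies and decay factors, then solve one more least-squares problem for amplitudes. The signal below is synthetic (invented modes), not the SolidWorks plate data:

```python
import numpy as np

dt = 0.001
t = np.arange(2000) * dt
# Synthetic two-mode free decay (all values invented for the demonstration).
x = (1.0 * np.exp(-2.0 * t) * np.cos(2 * np.pi * 50 * t)
     + 0.5 * np.exp(-5.0 * t) * np.cos(2 * np.pi * 120 * t + 0.3))

p = 4  # model order: two complex-conjugate pole pairs
# Linear-prediction step: x[n] = sum_k c_k x[n-k], solved by least squares.
A = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
c = np.linalg.lstsq(A, x[p:], rcond=None)[0]
roots = np.roots(np.concatenate(([1.0], -c)))   # prediction-polynomial roots

s = np.log(roots) / dt                  # continuous-time poles
keep = s.imag > 0                       # one member of each conjugate pair
freq = s.imag[keep] / (2 * np.pi)       # eigenfrequencies, Hz
decay = -s.real[keep]                   # decay factors, 1/s

# Amplitude step: Vandermonde least squares on the identified poles.
V = np.power.outer(roots, np.arange(len(x))).T
amp = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
for f, d, a in zip(freq, decay, 2 * np.abs(amp[keep])):
    print(f"f = {f:7.2f} Hz   decay = {d:5.2f} 1/s   amplitude = {a:.3f}")
```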

  16. Effect of Heat Input on Geometry of Austenitic Stainless Steel Weld Bead on Low Carbon Steel

    NASA Astrophysics Data System (ADS)

    Saha, Manas Kumar; Hazra, Ritesh; Mondal, Ajit; Das, Santanu

    2018-05-01

    Among different weld cladding processes, gas metal arc welding (GMAW) cladding has become a cost-effective, user-friendly, versatile method for protecting the surface of relatively lower-grade structural steels from corrosion and/or erosion wear by depositing high-grade stainless steels onto them. The quality of cladding largely depends upon the bead geometry of the deposited weldment. Weld bead geometry parameters, like bead width, reinforcement height, depth of penetration, and ratios like the reinforcement form factor (RFF) and the penetration shape factor (PSF), determine the quality of the weld bead geometry. Various gas metal arc welding process parameters, like heat input, current, voltage, arc travel speed, and mode of metal transfer, influence the formation of the bead geometry. In the current experimental investigation, austenitic stainless steel (316) weld beads are formed on low-alloy structural steel (E350) by GMAW using 100% CO2 as the shielding gas. Different combinations of current, voltage and arc travel speed are chosen so that the heat input increases from 0.35 to 0.75 kJ/mm. Nine weld beads are deposited, with each condition replicated twice. The observations show that weld bead width increases linearly with increasing heat input, whereas reinforcement height and depth of penetration do not. Regression analysis is done to establish the relationship between heat input and the different geometrical parameters of the weld bead, and the regression models developed agree well with the experimental data. Within the domain of the present experiment, it is observed that at higher heat input the weld bead gets wider with little change in penetration and reinforcement; therefore, higher heat input may be recommended for austenitic stainless steel cladding on low-alloy steel.
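
    A linear fit of the kind the abstract describes takes only a few lines; the numbers below are invented illustration data, not the experimental measurements:

```python
import numpy as np

# Invented illustrative data: heat input (kJ/mm) vs measured bead width (mm).
q = np.array([0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75])
width = np.array([6.1, 6.6, 7.0, 7.6, 8.1, 8.5, 9.2, 9.6, 10.1])

# Linear regression of bead width on heat input.
slope, intercept = np.polyfit(q, width, 1)
pred = slope * q + intercept
r2 = 1 - np.sum((width - pred) ** 2) / np.sum((width - width.mean()) ** 2)
print(f"width ~ {slope:.2f}*Q + {intercept:.2f}   (R^2 = {r2:.3f})")
```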

  17. Parameter and input data uncertainty estimation for the assessment of water resources in two sub-basins of the Limpopo River Basin

    NASA Astrophysics Data System (ADS)

    Oosthuizen, Nadia; Hughes, Denis A.; Kapangaziwiri, Evison; Mwenge Kahinda, Jean-Marc; Mvandaba, Vuyelwa

    2018-05-01

    The demand for water resources is rapidly growing, placing more strain on access to water and its management. In order to appropriately manage water resources, there is a need to accurately quantify available water resources. Unfortunately, the data required for such assessment are frequently far from sufficient in terms of availability and quality, especially in southern Africa. In this study, the uncertainty related to the estimation of water resources of two sub-basins of the Limpopo River Basin - the Mogalakwena in South Africa and the Shashe shared between Botswana and Zimbabwe - is assessed. Input data (and model parameters) are significant sources of uncertainty that should be quantified. In southern Africa water use data are among the most unreliable sources of model input data because available databases generally consist of only licensed information and actual use is generally unknown. The study assesses how these uncertainties impact the estimation of surface water resources of the sub-basins. Data on farm reservoirs and irrigated areas from various sources were collected and used to run the model. Many farm dams and large irrigation areas are located in the upper parts of the Mogalakwena sub-basin. Results indicate that water use uncertainty is small. Nevertheless, the medium to low flows are clearly impacted. The simulated mean monthly flows at the outlet of the Mogalakwena sub-basin were between 22.62 and 24.68 Mm3 per month when incorporating only the uncertainty related to the main physical runoff generating parameters. The range of total predictive uncertainty of the model increased to between 22.15 and 24.99 Mm3 when water use data such as small farm and large reservoirs and irrigation were included. For the Shashe sub-basin incorporating only uncertainty related to the main runoff parameters resulted in mean monthly flows between 11.66 and 14.54 Mm3. The range of predictive uncertainty changed to between 11.66 and 17.72 Mm3 after the uncertainty in water use information was added.

  18. Effects of uncertainties in hydrological modelling. A case study of a mountainous catchment in Southern Norway

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur

    2016-05-01

    In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as case study and an elevation distributed HBV-model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for the interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer quality of precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about the streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and for the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improves. Less information in precipitation input resulted in a shift in the water balance parameter Pcorr, a model producing smoother streamflow predictions, giving poorer NS and CRPS, but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr, and increased the spread in evaluation scores by giving both better and worse scores.

  19. Ignoring correlation in uncertainty and sensitivity analysis in life cycle assessment: what is the risk?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groen, E.A.; Heijungs, R.

    Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict whether including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
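
    For a linear output model the effect is easy to see analytically, since Var(a'x) = a'Σa: off-diagonal (correlation) terms in Σ raise or lower the output variance depending on their signs. The sketch below compares sampled and analytic variances with and without an assumed correlation (invented coefficients, not the paper's electricity case study):

```python
import numpy as np

rng = np.random.default_rng(3)

a = np.array([1.0, -2.0, 0.5])   # invented linear output model y = a . x
sd = np.array([0.2, 0.1, 0.3])   # input standard deviations (assumed)
rho = 0.8                        # assumed correlation between x0 and x1

cov_ind = np.diag(sd**2)
cov_corr = cov_ind.copy()
cov_corr[0, 1] = cov_corr[1, 0] = rho * sd[0] * sd[1]

for name, C in [("independent", cov_ind), ("correlated", cov_corr)]:
    x = rng.multivariate_normal(np.zeros(3), C, size=100_000)  # sampling approach
    y = x @ a
    # Analytical approach: Var(a'x) = a' C a, no sampling needed.
    print(f"{name:11s}: sampled Var = {y.var():.4f}, analytic Var = {a @ C @ a:.4f}")
```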

  20. Combining in silico evolution and nonlinear dimensionality reduction to redesign responses of signaling networks

    NASA Astrophysics Data System (ADS)

    Prescott, Aaron M.; Abel, Steven M.

    2016-12-01

    The rational design of network behavior is a central goal of synthetic biology. Here, we combine in silico evolution with nonlinear dimensionality reduction to redesign the responses of fixed-topology signaling networks and to characterize sets of kinetic parameters that underlie various input-output relations. We first consider the earliest part of the T cell receptor (TCR) signaling network and demonstrate that it can produce a variety of input-output relations (quantified as the level of TCR phosphorylation as a function of the characteristic TCR binding time). We utilize an evolutionary algorithm (EA) to identify sets of kinetic parameters that give rise to: (i) sigmoidal responses with the activation threshold varied over 6 orders of magnitude, (ii) a graded response, and (iii) an inverted response in which short TCR binding times lead to activation. We also consider a network with both positive and negative feedback and use the EA to evolve oscillatory responses with different periods in response to a change in input. For each targeted input-output relation, we conduct many independent runs of the EA and use nonlinear dimensionality reduction to embed the resulting data for each network in two dimensions. We then partition the results into groups and characterize constraints placed on the parameters by the different targeted response curves. Our approach provides a way (i) to guide the design of kinetic parameters of fixed-topology networks to generate novel input-output relations and (ii) to constrain ranges of biological parameters using experimental data. In the cases considered, the network topologies exhibit significant flexibility in generating alternative responses, with distinct patterns of kinetic rates emerging for different targeted responses.

  1. Building a USGS National Crustal Model: Theoretical foundation, inputs, and calibration for the Western United States

    NASA Astrophysics Data System (ADS)

    Shah, A. K.; Boyd, O. S.; Sowers, T.; Thompson, E.

    2017-12-01

    Seismic hazard assessments depend on an accurate prediction of ground motion, which in turn depends on a base knowledge of three-dimensional variations in density, seismic velocity, and attenuation. We are building a National Crustal Model (NCM) using a physical theoretical foundation, 3-D geologic model, and measured data for calibration. An initial version of the NCM for the western U.S. is planned to be available in mid-2018 and for the remainder of the U.S. in 2019. The theoretical foundation of the NCM couples Biot-Gassmann theory for the porous composite with mineral physics calculations for the solid mineral matrix. The 3-D geologic model is defined through integration of results from a range of previous studies including maps of surficial porosity, surface and subsurface lithology, and the depths to bedrock and crystalline basement or seismic equivalent. The depths to bedrock and basement are estimated using well, seismic, and gravity data; in many cases these data are compiled by combining previous studies. Two parameters controlling how porosity changes with depth are assumed to be a function of lithology and calibrated using measured shear- and compressional-wave velocity and density profiles. Uncertainties in parameters derived from the model increase with depth and are dependent on the quantity and quality of input data sets. An interface to the model provides parameters needed for ground motion prediction equations in the Western U.S., including, for example, the time-averaged shear-wave velocity in the upper 30 meters (VS30) and the depths to 1.0 and 2.5 km/s shear-wave speeds (Z1.0 and Z2.5), which have a very rough correlation to the depths to bedrock and basement, as well as interpolated 3D models for use with various Urban Hazard Mapping strategies. We compare parameters needed for ground motion prediction equations including VS30, Z1.0, and Z2.5 between those derived from existing models, for example, 3-D velocity models for southern California available from the Southern California Earthquake Center, and those derived from the NCM and assess their ability to reduce the variance of observed ground motions.

  2. Identification of modal parameters including unmeasured forces and transient effects

    NASA Astrophysics Data System (ADS)

    Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.

    2003-08-01

    In this paper, a frequency-domain method to estimate modal parameters from short data records with known (measured) input forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data) and the combination of both. Traditional experimental and operational modal analyses in the frequency domain start, respectively, from frequency response functions and spectral density functions. To estimate these functions accurately, sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known inputs. Instead of applying Hanning windows to these short data records, the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested and validated by Monte Carlo simulations and experiments. The presented method for processing short data sequences leads to unbiased estimates with a small variance in comparison to the more traditional approaches.

  3. Modern control concepts in hydrology

    NASA Technical Reports Server (NTRS)

    Duong, N.; Johnson, G. R.; Winn, C. B.

    1974-01-01

    Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part of the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and conform with results from two previous studies: the first used numerical integration of the model equation along with a trial-and-error procedure, and the second used a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.

  4. Broadband distortion modeling in Lyman-α forest BAO fitting

    DOE PAGES

    Blomqvist, Michael; Kirkby, David; Bautista, Julian E.; ...

    2015-11-23

    Recently, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≃ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. Here, we describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. In implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter b_F and the redshift-space distortion parameter β_F for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on β_F and the combination b_F(1+β_F) by more than a factor of seven. The measured values at redshift z=2.3 are β_F = 1.39^{+0.11 +0.24 +0.38}_{-0.10 -0.19 -0.28} and b_F(1+β_F) = -0.374^{+0.007 +0.013 +0.020}_{-0.007 -0.014 -0.022} (1σ, 2σ and 3σ statistical errors). Our fitting software and the input files needed to reproduce our main results are publicly available.

  5. Broadband distortion modeling in Lyman-α forest BAO fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blomqvist, Michael; Kirkby, David; Margala, Daniel

    2015-11-01

    In recent years, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≅ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. We describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. Implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter b_F and the redshift-space distortion parameter β_F for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on β_F and the combination b_F(1+β_F) by more than a factor of seven. The measured values at redshift z=2.3 are β_F = 1.39^{+0.11 +0.24 +0.38}_{-0.10 -0.19 -0.28} and b_F(1+β_F) = -0.374^{+0.007 +0.013 +0.020}_{-0.007 -0.014 -0.022} (1σ, 2σ and 3σ statistical errors). Our fitting software and the input files needed to reproduce our main results are publicly available.

  6. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate these uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing the chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analyses for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model; the energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
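
    As a pocket-sized illustration of the adaptive-Metropolis machinery underlying DRAM (this is a plain Haario-style adaptive Metropolis on an invented exponential-decay model, not the dissertation's HIV or heat code, and it omits the delayed-rejection stage):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy calibration target: y = theta1 * exp(-theta2 * t) + noise.
t = np.linspace(0, 5, 30)
theta_true = np.array([2.0, 0.7])
y_obs = theta_true[0] * np.exp(-theta_true[1] * t) + rng.normal(0, 0.05, t.size)

def log_post(theta):
    if np.any(theta <= 0):
        return -np.inf                  # flat prior on positive parameters
    r = y_obs - theta[0] * np.exp(-theta[1] * t)
    return -0.5 * np.sum(r**2) / 0.05**2

# Adaptive Metropolis: proposal covariance adapted from the chain history.
n, d = 20_000, 2
chain = np.empty((n, d))
chain[0] = [1.0, 1.0]
lp = log_post(chain[0])
C = 0.01 * np.eye(d)
for i in range(1, n):
    if i > 1000 and i % 500 == 0:       # periodic covariance adaptation
        C = 2.38**2 / d * (np.cov(chain[:i].T) + 1e-8 * np.eye(d))
    prop = rng.multivariate_normal(chain[i - 1], C)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        chain[i], lp = prop, lp_prop    # accept
    else:
        chain[i] = chain[i - 1]         # reject, repeat previous state
post = chain[n // 2:]                   # discard burn-in
print("posterior mean:", post.mean(axis=0).round(3), "(true:", theta_true, ")")
```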

  7. Experimental Validation of Strategy for the Inverse Estimation of Mechanical Properties and Coefficient of Friction in Flat Rolling

    NASA Astrophysics Data System (ADS)

    Yadav, Vinod; Singh, Arbind Kumar; Dixit, Uday Shanker

    2017-08-01

    Flat rolling is one of the most widely used metal forming processes. Modelling of the process is essential for its proper control and optimization, and such modelling requires input data about material properties and friction. In batch-production rolling with newer materials, it may be difficult to determine the input parameters offline. In view of this, in the present work a methodology to determine these parameters online, from measurements of exit temperature and slip, is verified experimentally. It is observed that the inverse prediction of input parameters can be done with reasonable accuracy. It was also assessed experimentally that there is a correlation between micro-hardness and the flow stress of the material; however, the correlation between surface roughness and reduction is less obvious.

  8. Using global sensitivity analysis of demographic models for ecological impact assessment.

    PubMed

    Aiello-Lammens, Matthew E; Akçakaya, H Resit

    2017-02-01

    Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
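
    A minimal sketch of the regression-style global sensitivity analysis the abstract describes, assuming an invented toy projection model and made-up parameter ranges (Latin hypercube sampling via SciPy, with standardized regression coefficients as importance measures):

```python
import numpy as np
from scipy.stats import qmc

# Vary several inputs simultaneously with Latin hypercube sampling.
names = ["survival", "fecundity", "sea_level_rise"]  # invented toy inputs
lo = np.array([0.6, 1.0, 0.0])
hi = np.array([0.9, 3.0, 1.0])
X = qmc.scale(qmc.LatinHypercube(d=3, seed=5).random(500), lo, hi)

def final_pop(x):
    """Toy 20-year projection; placeholder dynamics, not a real PVA model."""
    s, f, slr = x
    n = 100.0
    for _ in range(20):
        n = n * s + n * 0.1 * f * (1 - 0.5 * slr)
    return n

y = np.array([final_pop(x) for x in X])

# Standardized regression coefficients (SRCs) as variable-importance measures.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
for nm, b in zip(names, beta):
    print(f"{nm:15s} SRC = {b:+.3f}")
```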

  9. Next generation lightweight mirror modeling software

    NASA Astrophysics Data System (ADS)

    Arnold, William R.; Fitzgerald, Matthew; Rosa, Rubin Jaca; Stahl, H. Philip

    2013-09-01

    The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper introduces model generation software developed under NASA sponsorship for the design of both terrestrial and space based mirrors. The software deals with any current mirror manufacturing technique, single substrates, multiple arrays of substrates, as well as the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements, and suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 3-5 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any text editor; all the shell thickness parameters and suspension spring rates are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS and in NASTRAN GRIDPOINT SETS. This makes integration of these models into large telescope or satellite models easier.

  10. Influence of tool geometry and processing parameters on welding defects and mechanical properties for friction stir welding of 6061 Aluminium alloy

    NASA Astrophysics Data System (ADS)

    Daneji, A.; Ali, M.; Pervaiz, S.

    2018-04-01

    Friction stir welding (FSW) is a solid-state welding process for joining metals, alloys, and selective composites. Over the years, FSW development has provided an improved way of producing welded joints, and consequently it has been accepted in numerous industries such as aerospace, automotive, rail and marine. In FSW, the base metal properties control the material’s plastic flow under the influence of a rotating tool, whereas the process and tool parameters play a vital role in the quality of the weld. In the current investigation, an array of square butt joints of 6061 aluminium alloy was welded under varying FSW process and tool-geometry parameters, and the resulting welds were evaluated for the corresponding mechanical properties and welding defects. The study incorporates FSW process and tool parameters such as welding speed, pin height and pin thread pitch as input parameters, while the weld-quality defects and mechanical properties are treated as output parameters. The experimentation paves the way to investigate the correlation between the inputs and the outputs, which is used as a tool to predict the optimized FSW process and tool parameters for a desired weld output of the base metals under investigation. The study also provides reflection on the effect of the said parameters on welding defects such as wormholes.

  11. The effect of welding parameters on high-strength SMAW all-weld-metal. Part 1: AWS E11018-M

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vercesi, J.; Surian, E.

    Three AWS A5.5-81 all-weld-metal test assemblies were welded with an E11018-M electrode from a standard production batch, varying the welding parameters in such a way as to obtain three energy inputs: high heat input and high interpass temperature (hot), medium heat input and medium interpass temperature (medium) and low heat input and low interpass temperature (cold). Mechanical properties and metallographic studies were performed in the as-welded condition, and it was found that only the tensile properties obtained with the test specimen made with the intermediate energy input satisfied the AWS E11018-M requirements. With the cold specimen, the maximum yield strength was exceeded, and with the hot one, neither the minimum yield nor the minimum tensile strength was achieved. The elongation and the impact properties were high enough to fulfill the minimum requirements, but the best Charpy V-notch values were obtained with the intermediate energy input. Metallographic studies showed that as the energy input increased, the percentage of the columnar zones decreased, the grain size became larger, and in the as-welded zone there was a small increment of both acicular ferrite and ferrite with second phase, with a consequent decrease of primary ferrite. These results show that this type of alloy is very sensitive to the welding parameters and that very precise instructions must be given to secure the desired tensile properties in the all-weld-metal test specimens and under actual working conditions.

  12. BIREFRINGENT FILTER MODEL

    NASA Technical Reports Server (NTRS)

    Cross, P. L.

    1994-01-01

    Birefringent filters are often used as line-narrowing components in solid state lasers. The Birefringent Filter Model program generates a stand-alone model of a birefringent filter for use in designing and analyzing a birefringent filter. It was originally developed to aid in the design of solid state lasers to be used on aircraft or spacecraft to perform remote sensing of the atmosphere. The model is general enough to allow the user to address problems such as temperature stability requirements, manufacturing tolerances, and alignment tolerances. The input parameters for the program are divided into 7 groups: 1) general parameters which refer to all elements of the filter; 2) wavelength related parameters; 3) filter, coating and orientation parameters; 4) input ray parameters; 5) output device specifications; 6) component related parameters; and 7) transmission profile parameters. The program can analyze a birefringent filter with up to 12 different components, and can calculate the transmission and summary parameters for multiple passes as well as a single pass through the filter. The Jones matrix, which is calculated from the input parameters of Groups 1 through 4, is used to calculate the transmission. Output files containing the calculated transmission or the calculated Jones matrix as a function of wavelength can be created. These output files can then be used as inputs for user-written programs, for example, to plot the transmission or to calculate the eigen-transmittances and the corresponding eigen-polarizations of the Jones matrix. The Birefringent Filter Model is written in Microsoft FORTRAN 2.0. The program format is interactive. It was developed on an IBM PC XT equipped with an 8087 math coprocessor, and has a central memory requirement of approximately 154K. Since Microsoft FORTRAN 2.0 does not support complex arithmetic, matrix routines for addition, subtraction, and multiplication of complex, double precision variables are included. The Birefringent Filter Model was written in 1987.
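
    As a sketch of the underlying calculation (a single birefringent plate between parallel polarizers, with invented thickness and birefringence values; the actual program handles up to 12 components, coatings, and ray angles), the transmission profile can be computed from Jones matrices as follows:

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def retarder(delta):
    """Jones matrix of a birefringent plate, fast axis horizontal."""
    return np.array([[np.exp(-1j * delta / 2), 0], [0, np.exp(1j * delta / 2)]])

POL_X = np.array([[1, 0], [0, 0]])    # horizontal linear polarizer

# Single birefringent stage between parallel polarizers (classic Lyot stage).
d = 2e-3            # plate thickness, m (assumed)
dn = 0.0091         # birefringence, quartz-like value (assumed)
theta = np.pi / 4   # plate axis at 45 degrees to the polarizers

wl = np.linspace(1.00e-6, 1.10e-6, 2000)   # wavelength scan
T = []
for w in wl:
    delta = 2 * np.pi * dn * d / w         # retardance at this wavelength
    J = POL_X @ rot(theta) @ retarder(delta) @ rot(-theta) @ POL_X
    e_out = J @ np.array([1.0, 0.0])       # unit-amplitude x-polarized input
    T.append(abs(e_out[0]) ** 2)
T = np.array(T)
print(f"peak transmission = {T.max():.3f}, minimum = {T.min():.3e}")
```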

  13. Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed

    NASA Astrophysics Data System (ADS)

    Arif, N.; Danoedoro, P.; Hartono

    2017-12-01

    Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents the actual reality. Erosion models are complex because of the uncertainty of data with different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the values of the network parameters, i.e. the number of hidden layers, the learning rate, the momentum, and the RMS. This study tested the capability of an artificial neural network application in the prediction of erosion risk with several input parameters, through multiple simulations, to obtain good classification results. The model was implemented in the Serang watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on the accuracy compared to the other parameters. A small number of iterations can produce good accuracy if the combination of the other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, which occurred in the ANN 14 simulation with the network parameter combination 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15000 iterations. The ANN training accuracy was not influenced by the number of channels, namely the input datasets (erosion factors), or by the data dimensions; rather, it was determined by changes in the network parameters.

  14. Hearing aids and music.

    PubMed

    Chasin, Marshall; Russo, Frank A

    2004-01-01

    Historically, the primary concern in hearing aid design and fitting has been optimization for speech inputs. However, other types of inputs are increasingly being investigated, and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression issues (both compression ratio and knee-points) and the number of channels can all deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set other parameters such as noise reduction and feedback control mechanisms. Regardless of the existence of a "music program," unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions.

  15. GRODY - GAMMA RAY OBSERVATORY DYNAMICS SIMULATOR IN ADA

    NASA Technical Reports Server (NTRS)

    Stark, M.

    1994-01-01

    Analysts use a dynamics simulator to test the attitude control system algorithms used by a satellite. The simulator must simulate the hardware, dynamics, and environment of the particular spacecraft and provide user services which enable the analyst to conduct experiments. Researchers at Goddard's Flight Dynamics Division developed GRODY alongside GROSS (GSC-13147), a FORTRAN simulator which performs the same functions, in a case study to assess the feasibility and effectiveness of the Ada programming language for flight dynamics software development. They used popular object-oriented design techniques to link the simulator's design with its function. GRODY is designed for analysts familiar with spacecraft attitude analysis. The program supports maneuver planning as well as analytical testing and evaluation of the attitude determination and control system used on board the Gamma Ray Observatory (GRO) satellite. GRODY simulates the GRO on-board computer and Control Processor Electronics. The analyst/user sets up and controls the simulation. GRODY allows the analyst to check and update parameter values and ground commands, obtain simulation status displays, interrupt the simulation, analyze previous runs, and obtain printed output of simulation runs. The video terminal screen display allows visibility of command sequences, full-screen display and modification of parameters using input fields, and verification of all input data. Data input available for modification includes alignment and performance parameters for all attitude hardware, simulation control parameters which determine simulation scheduling and simulator output, initial conditions, and on-board computer commands. GRODY generates eight types of output: simulation results data set, analysis report, parameter report, simulation report, status display, plots, diagnostic output (which helps the user trace any problems that have occurred during a simulation), and a permanent log of all runs and errors. The analyst can send results output in graphical or tabular form to a terminal, disk, or hardcopy device, and can choose to have any or all items plotted against time or against each other. Goddard researchers developed GRODY on a VAX 8600 running VMS version 4.0. For near real time performance, GRODY requires a VAX at least as powerful as a model 8600 running VMS 4.0 or a later version. To use GRODY, the VAX needs an Ada Compilation System (ACS), Code Management System (CMS), and 1200K memory. GRODY is written in Ada and FORTRAN.

  16. A Spreadsheet Simulation Tool for Terrestrial and Planetary Balloon Design

    NASA Technical Reports Server (NTRS)

    Raquea, Steven M.

    1999-01-01

    During the early stages of new balloon design and development, it is necessary to conduct many trade studies. These trade studies are required to determine the design space, and they aid significantly in determining overall feasibility. Numerous point designs then need to be generated as details of payloads, materials, mission, and manufacturing are determined. To accomplish these numerous designs, transient models are both unnecessary and time intensive. A steady state model that uses appropriate design inputs to generate system-level descriptive parameters can be very flexible and fast. Just such a steady state model has been developed and has been used during both the MABS 2001 Mars balloon study and the Ultra Long Duration Balloon Project. The model was built using Microsoft Excel's built-in iteration routine. Separate sheets were used for performance, structural design, materials, and thermal analysis, as well as input and output sheets. As can be seen from figure 1, the model takes basic performance requirements, weight estimates, design parameters, and environmental conditions and generates a system-level balloon design. Figure 2 shows a sample output of the model. By changing the inputs and a few of the equations in the model, balloons on Earth or other planets can be modeled. There are currently several variations of the model for terrestrial and Mars balloons, and there are versions of the model that perform crude material design based on strength and weight requirements. To perform trade studies, the Visual Basic language built into Excel was used to create an automated matrix of designs. This trade study module allows a three-dimensional trade surface to be generated by using a series of values for any two design variables. Once the fixed and variable inputs are defined, the model automatically steps through the input matrix and fills a spreadsheet with the resulting point designs. The proposed paper will describe the model in detail, including current variations. The assumptions, governing equations, and capabilities will be addressed. Detailed examples of the model in practice will also be used.

  17. Piezoelectric transformers for low-voltage generation of gas discharges and ionic winds in atmospheric air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Michael J.; Go, David B.

    To generate a gas discharge (plasma) in atmospheric air requires an electric field that exceeds the breakdown threshold of ∼30 kV/cm. Because of safety, size, or cost constraints, the large applied voltages required to generate such fields are often prohibitive for portable applications. In this work, piezoelectric transformers are used to amplify a low applied input voltage (<30 V) to generate breakdown in air without the need for conventional high-voltage electrical equipment. Piezoelectric transformers (PTs) use their inherent electromechanical resonance to produce a voltage amplification, such that the surface of the piezoelectric exhibits a large surface voltage that can generate corona-like discharges on its corners or on adjacent electrodes. In the proper configuration, these discharges can be used to generate a bulk air flow called an ionic wind. In this work, PT-driven discharges are characterized by measuring the discharge current and the velocity of the induced ionic wind, with ionic winds generated using input voltages as low as 7 V. The characteristics of the discharge change as the input voltage increases; this modifies the resonance of the system and the subsequently required operating parameters.

  18. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach

    PubMed Central

    Zeng, Xiaozheng; McGough, Robert J.

    2009-01-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
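
    A minimal sketch of the angular spectrum approach itself, assuming an invented square-piston input pressure plane and water-like medium parameters (the paper's phased arrays and the fast nearfield method are not reproduced here):

```python
import numpy as np

# Minimal angular spectrum propagation of a sampled input pressure plane
# (uniform square piston, invented parameters) to a parallel output plane.
f = 1.0e6                 # frequency, Hz (assumed)
c = 1500.0                # sound speed in water, m/s
k = 2 * np.pi * f / c
dx = 1.5e-4               # grid spacing, ~lambda/10
N = 512
x = (np.arange(N) - N // 2) * dx

# Input plane: 5 mm square piston with unit pressure.
X, Y = np.meshgrid(x, x)
p0 = ((np.abs(X) < 2.5e-3) & (np.abs(Y) < 2.5e-3)).astype(complex)

# Spatial frequencies and the propagator exp(i*kz*z); evanescent components
# get an imaginary kz and decay automatically.
kx = 2 * np.pi * np.fft.fftfreq(N, dx)
KX, KY = np.meshgrid(kx, kx)
kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))

z = 0.05                  # propagation distance, m
p_z = np.fft.ifft2(np.fft.fft2(p0) * np.exp(1j * kz * z))
print("peak |p| at z =", z, "m:", float(np.abs(p_z).max()))
```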

  19. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically, and good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
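    For intuition about the benchmark, the sketch below runs the kind of Monte Carlo simulation the PDF method is compared against: a toy one-state generator model driven by exponentially time-correlated (Ornstein-Uhlenbeck) power input, integrated with Euler-Maruyama, with the PDF estimated by a histogram. The model form and all constants are illustrative assumptions, not the paper's equations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dt, steps, N = 0.01, 1000, 5000     # time step, horizon, MC samples
    tau, sigma, p0 = 1.0, 0.2, 1.0      # OU correlation time, strength, mean power

    # Euler-Maruyama for a toy one-state generator with time-correlated input:
    #   dx/dt = -x + p(t),   dp = -(p - p0)/tau dt + sigma dW
    x = np.zeros(N)
    p = np.full(N, p0)
    for _ in range(steps):
        x += dt * (-x + p)
        p += dt * (-(p - p0) / tau) + sigma * np.sqrt(dt) * rng.standard_normal(N)

    pdf, edges = np.histogram(x, bins=60, density=True)  # MC estimate of the PDF
    ```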

  20. Explicit least squares system parameter identification for exact differential input/output models

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
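    To make the integrated equation error concrete, consider the first-order model dy/dt = -a y + b u. Integrating both sides over [0, t] gives y(t) - y(0) = -a ∫y dt + b ∫u dt, which is linear in (a, b) and solvable by ordinary least squares without differentiating noisy data. The sketch below, with an invented system and a crude Euler simulation supplying the data, illustrates only this special case; the paper treats a broader model class and handles boundary terms more generally.

    ```python
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    # synthetic data for a first-order system dy/dt = -a*y + b*u
    t = np.linspace(0, 10, 501)
    u = np.sin(t)
    a_true, b_true = 1.5, 0.8
    y = np.zeros_like(t)
    for i in range(1, len(t)):           # crude Euler simulation for the demo
        dt = t[i] - t[i - 1]
        y[i] = y[i - 1] + dt * (-a_true * y[i - 1] + b_true * u[i - 1])

    # integrated equation error: y(t) - y(0) = -a*int(y) + b*int(u)
    Iy = cumulative_trapezoid(y, t, initial=0.0)
    Iu = cumulative_trapezoid(u, t, initial=0.0)
    A = np.column_stack([-Iy, Iu])
    (a_hat, b_hat), *_ = np.linalg.lstsq(A, y - y[0], rcond=None)
    print(a_hat, b_hat)                  # close to (1.5, 0.8)
    ```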

  1. Femtosecond soliton source with fast and broad spectral tunability.

    PubMed

    Masip, Martin E; Rieznik, A A; König, Pablo G; Grosz, Diego F; Bragas, Andrea V; Martinez, Oscar E

    2009-03-15

    We present a complete set of measurements and numerical simulations of a femtosecond soliton source with fast and broad spectral tunability and nearly constant pulse width and average power. Solitons generated in a photonic crystal fiber, in the low-power coupling regime, can be tuned over a broad range of wavelengths, from 850 to 1200 nm, using the input power as the control parameter. These solitons maintain almost constant duration (approximately 40 fs) and spectral width (approximately 20 nm) over the entire measured spectrum regardless of input power. Our numerical simulations agree well with measurements and predict a wide working wavelength range and robustness to input parameters.

  2. Performance limitations of a white light extrinsic Fabry-Perot interferometric displacement sensor

    NASA Astrophysics Data System (ADS)

    Moro, Erik A.; Todd, Michael D.; Puckett, Anthony D.

    2012-06-01

    Non-contacting interferometric fiber optic sensors offer a minimally invasive, high-accuracy means of measuring a structure's kinematic response to loading. The performance of interferometric sensors is often dictated by the technique employed for demodulating the kinematic measurand of interest from phase in the observed optical signal. In this paper a white-light extrinsic Fabry-Perot interferometer is implemented, offering robust displacement sensing performance. Displacement data are extracted from an estimate of the power spectral density, calculated from the interferometer's received optical power measured as a function of optical transmission frequency, and the sensor's performance is dictated by the details of this power spectral density estimation. One advantage of this particular type of interferometric sensor is that many of its control parameters (e.g., frequency range, frequency sampling density, sampling rate, etc.) may be chosen so that the sensor satisfies application-specific performance needs in metrics such as bandwidth, axial displacement range, displacement resolution, and accuracy. A suite of user-controlled input values is investigated for estimating the spectrum of power versus wavelength data, and the relationships between performance metrics and input parameters are described in an effort to characterize the sensor's operational performance limitations. This work has been approved by Los Alamos National Laboratory for unlimited public release (LA-UR 12-01512).
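    As a toy version of the spectral-domain demodulation described above, the sketch below synthesizes a two-beam fringe pattern over a swept optical frequency and recovers the cavity gap from the peak of its power spectrum. All numbers (gap, sweep range, sampling) are invented, and the paper's estimator is more refined than this raw periodogram peak.

    ```python
    import numpy as np

    c = 3.0e8                                   # speed of light (m/s)
    nu = np.linspace(191e12, 196e12, 4096)      # swept optical frequency (Hz)
    gap = 300e-6                                # assumed cavity length (m)
    power = 1 + 0.8 * np.cos(4 * np.pi * gap * nu / c)   # two-beam fringes

    # the fringe "delay" frequency along the nu axis is 2*gap/c seconds
    spec = np.abs(np.fft.rfft(power - power.mean()))**2
    delay = np.fft.rfftfreq(nu.size, d=nu[1] - nu[0])
    gap_est = delay[np.argmax(spec)] * c / 2
    print(f"estimated gap: {gap_est * 1e6:.0f} um")   # ~300 um
    ```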

  3. Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Zhangshuan; Bacon, Diana H.; Engel, David W.

    2014-08-01

    In this study, we apply an uncertainty quantification (UQ) framework to CO2 sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO2 into a shallow unconfined aquifer in an urban area; in another scenario, we study the effects of reservoir heterogeneity on CO2 migration. We combine various sampling approaches (quasi-Monte Carlo, probabilistic collocation, and adaptive sampling) in order to reduce the number of forward calculations while trying to fully explore the input parameter space and quantify the input uncertainty. The CO2 migration is simulated using the PNNL-developed simulator STOMP-CO2e (the water-salt-CO2 module). For computationally demanding simulations with 3D heterogeneity fields, we combined the framework with a scalable version of the simulator, eSTOMP, for forward modeling. We built response curves and response surfaces of model outputs with respect to input parameters to look at the individual and combined effects, and to identify and rank the significance of the input parameters.

  4. Vastly accelerated linear least-squares fitting with numerical optimization for dual-input delay-compensated quantitative liver perfusion mapping.

    PubMed

    Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal

    2018-04-01

    To propose an efficient algorithm to perform dual input compartment modeling for generating perfusion maps in the liver. We implemented whole field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
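    The speedup in such LLS formulations comes from writing the compartment ODE in integral form, which is linear in the kinetic parameters, and solving every voxel's small normal-equation system in one vectorized pass. The sketch below illustrates this for a delay-free dual-input single-compartment model dC/dt = k1a*Ca + k1p*Cp - k2*C; the array names and the omission of delay compensation are simplifications, not the authors' exact algorithm.

    ```python
    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    def fit_dual_input_llsq(C, Ca, Cp, t):
        """Vectorized LLS fit of dC/dt = k1a*Ca + k1p*Cp - k2*C per voxel.

        C: (V, T) tissue curves; Ca, Cp: (T,) arterial and portal inputs.
        Returns a (V, 3) array of [k1a, k1p, k2]."""
        Ia = cumulative_trapezoid(Ca, t, initial=0.0)           # (T,)
        Ip = cumulative_trapezoid(Cp, t, initial=0.0)           # (T,)
        Ic = cumulative_trapezoid(C, t, initial=0.0, axis=1)    # (V, T)
        V, T = C.shape
        A = np.empty((V, T, 3))
        A[:, :, 0] = Ia
        A[:, :, 1] = Ip
        A[:, :, 2] = -Ic
        AtA = np.einsum('vti,vtj->vij', A, A)                   # (V, 3, 3)
        Atb = np.einsum('vti,vt->vi', A, C)                     # (V, 3)
        return np.linalg.solve(AtA, Atb[..., None])[..., 0]     # all voxels at once
    ```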

  5. FAST: Fitting and Assessment of Synthetic Templates

    NASA Astrophysics Data System (ADS)

    Kriek, Mariska; van Dokkum, Pieter G.; Labbé, Ivo; Franx, Marijn; Illingworth, Garth D.; Marchesini, Danilo; Quadri, Ryan F.; Aird, James; Coil, Alison L.; Georgakakis, Antonis

    2018-03-01

    FAST (Fitting and Assessment of Synthetic Templates) fits stellar population synthesis templates to broadband photometry and/or spectra. FAST is compatible with the photometric redshift code EAzY (ascl:1010.052) when fitting broadband photometry; it uses the photometric redshifts derived by EAzY, and the input files (for example, photometric catalog and master filter file) are the same. FAST fits spectra in combination with broadband photometric data points or simultaneously fits two components, allowing for an AGN contribution in addition to the host galaxy light. Depending on the input parameters, FAST outputs the best-fit redshift, age, dust content, star formation timescale, metallicity, stellar mass, star formation rate (SFR), and their confidence intervals. Though some of FAST's functions overlap with those of HYPERZ (ascl:1108.010), it differs by fitting fluxes instead of magnitudes, allows the user to completely define the grid of input stellar population parameters and easily input photometric redshifts and their confidence intervals, and calculates calibrated confidence intervals for all parameters. Note that FAST is not a photometric redshift code, though it can be used as one.

  6. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of the deconvolution-based CTP imaging system and how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during an emergent clinical situation (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
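    Deconvolution-based CTP is commonly posed as inverting a lower-triangular convolution matrix built from the arterial input function, with regularization supplying the noise suppression whose strength the paper analyzes. The sketch below shows a generic Tikhonov-filtered SVD inversion in Python; the regularization rule and scaling are arbitrary illustrations, not the paper's cascaded-systems model.

    ```python
    import numpy as np

    def ctp_deconvolve(tac, aif, dt, lam=0.1):
        """Estimate the flow-scaled residue function R(t) from a tissue
        time-attenuation curve via regularized deconvolution of the AIF."""
        n = len(aif)
        # lower-triangular Toeplitz convolution matrix: tac = dt * A @ r
        A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                           for i in range(n)])
        U, s, Vt = np.linalg.svd(A)
        # Tikhonov filter factors suppress noise-dominated singular values
        f = s / (s**2 + (lam * s.max())**2)
        r = Vt.T @ (f * (U.T @ tac))
        return r   # CBF is conventionally taken from max(r)
    ```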

  7. Simulations of Brady's-Type Fault Undergoing CO2 Push-Pull: Pressure-Transient and Sensitivity Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Yoojin; Doughty, Christine

    Input and output files used for fault characterization through numerical simulation using iTOUGH2. The synthetic data for the push period are generated by running a forward simulation (input parameters are provided in iTOUGH2 Brady GF6 Input Parameters.txt [InvExt6i.txt]). In general, the permeability of the fault gouge, damage zone, and matrix are assumed to be unknown. The input and output files are for the inversion scenario where only pressure transients are available at the monitoring well located 200 m above the injection well and only the fault gouge permeability is estimated. The input files are named InvExt6i, INPUT.tpl, FOFT.ins, and CO2TAB, and the output files are InvExt6i.out, pest.fof, and pest.sav (names below are display names). The table graphic in the data files below summarizes the inversion results, and indicates the fault gouge permeability can be estimated even if imperfect guesses are used for matrix and damage zone permeabilities, and permeability anisotropy is not taken into account.

  8. Modeling uncertainty: quicksand for water temperature modeling

    USGS Publications Warehouse

    Bartholow, John M.

    2003-01-01

    Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the reference, are meant to supplement the presentation given at this conference.

  9. The space shuttle payload planning working groups. Volume 7: Earth observations

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The findings of the Earth Observations working group of the space shuttle payload planning activity are presented. The objectives of the Earth Observation experiments are: (1) establishment of quantitative relationships between observable parameters and geophysical variables, (2) development, test, calibration, and evaluation of eventual flight instruments in experimental space flight missions, (3) demonstration of the operational utility of specific observation concepts or techniques as information inputs needed for taking actions, and (4) deployment of prototype and follow-on operational Earth Observation systems. The basic payload capability, mission duration, launch sites, inclinations, and payload limitations are defined.

  10. Active subspace uncertainty quantification for a polydomain ferroelectric phase-field model

    NASA Astrophysics Data System (ADS)

    Leon, Lider S.; Smith, Ralph C.; Miles, Paul; Oates, William S.

    2018-03-01

    Quantum-informed ferroelectric phase field models capable of predicting material behavior are necessary for facilitating the development and production of many adaptive structures and intelligent systems. Uncertainty is present in these models, given the quantum scale at which calculations take place. A necessary analysis is to determine how the uncertainty in the response can be attributed to the uncertainty in the model inputs or parameters. A second analysis is to identify active subspaces within the original parameter space, which quantify directions in which the model response varies most dominantly, thus reducing sampling effort and computational cost. In this investigation, we identify an active subspace for a poly-domain ferroelectric phase-field model. Using the active variables as our independent variables, we then construct a surrogate model and perform Bayesian inference. Once we quantify the uncertainties in the active variables, we obtain uncertainties for the original parameters via an inverse mapping. The analysis provides insight into how active subspace methodologies can be used to reduce the computational power needed to perform Bayesian inference on model parameters informed by experimental or simulated data.
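    The usual recipe for identifying an active subspace is to eigendecompose the average outer product of model gradients and keep the leading eigenvectors. The sketch below shows that recipe generically; it assumes gradient samples are already in hand and is not tied to the phase-field model of the paper.

    ```python
    import numpy as np

    def active_subspace(grads, k=2):
        """Estimate a k-dimensional active subspace from gradient samples.

        grads: (N, m) array of model gradients at N parameter samples."""
        C = grads.T @ grads / grads.shape[0]     # uncentered covariance E[g g^T]
        eigval, eigvec = np.linalg.eigh(C)       # ascending eigenvalues
        order = np.argsort(eigval)[::-1]
        # leading eigenvectors span the directions of strongest response variation
        return eigval[order], eigvec[:, order[:k]]
    ```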

  11. Comment on "Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods" [J. Hydrol., 546, 437-449, 10.1016/j.jhydrol.2017.01.025]

    NASA Astrophysics Data System (ADS)

    Barati, Reza

    2017-07-01

    Perumal et al. (2017) compared the performances of the variable parameter McCarthy-Muskingum (VPMM) model of Perumal and Price (2013) and the nonlinear Muskingum (NLM) model of Gill (1978) using hypothetical inflow hydrographs in an artificial channel. As input, the first model needs the initial condition, upstream boundary condition, Manning's roughness coefficient, length of the routing reach, cross-sections of the river reach, and the bed slope, while the latter requires the initial condition, upstream boundary condition, and the hydrologic parameters (three parameters which can be calibrated using flood hydrographs of the upstream and downstream sections). The VPMM model was examined using available Manning's roughness values, whereas the NLM model was tested in both calibration and validation steps. As a final conclusion, Perumal et al. (2017) claimed that the NLM model should be retired from the literature of the Muskingum model. While the authors' intention is laudable, this comment examines some important issues in the subject matter of the original study.

  12. Electronic Structure, Dielectric Response, and Surface Charge Distribution of RGD (1FUV) Peptide

    PubMed Central

    Adhikari, Puja; Wen, Amy M.; French, Roger H.; Parsegian, V. Adrian; Steinmetz, Nicole F.; Podgornik, Rudolf; Ching, Wai-Yim

    2014-01-01

    Long and short range molecular interactions govern molecular recognition and self-assembly of biological macromolecules. Microscopic parameters in the theories of these molecular interactions are either phenomenological or need to be calculated within a microscopic theory. We report a unified methodology for the ab initio quantum mechanical (QM) calculation that yields all the microscopic parameters, namely the partial charges as well as the frequency-dependent dielectric response function, that can then be taken as input for macroscopic theories of electrostatic, polar, and van der Waals-London dispersion intermolecular forces. We apply this methodology to obtain the electronic structure of the cyclic tripeptide RGD-4C (1FUV). This ab initio unified methodology yields the relevant parameters entering the long range interactions of biological macromolecules, providing accurate data for the partial charge distribution and the frequency-dependent dielectric response function of this peptide. These microscopic parameters determine the range and strength of the intricate intermolecular interactions between potential docking sites of the RGD-4C ligand and its integrin receptor. PMID:25001596

  13. Evaluation of trade influence on economic growth rate by computational intelligence approach

    NASA Astrophysics Data System (ADS)

    Sokolov-Mladenović, Svetlana; Milovančević, Milos; Mladenović, Igor

    2017-01-01

    This study analyzed the influence of trade parameters on economic growth forecasting accuracy. A computational intelligence method was used for the analysis since such methods can handle highly nonlinear data. It is known that economic growth can be modeled using different trade parameters. Five input parameters were considered: trade in services, exports of goods and services, imports of goods and services, trade, and merchandise trade, all expressed as percentages of gross domestic product (GDP). The main goal was to determine which parameters have the greatest impact on economic growth forecasting accuracy, with GDP used as the economic growth indicator. Results show that imports of goods and services have the highest influence on the economic growth forecasting accuracy.

  14. 'spup' - an R package for uncertainty propagation in spatial environmental modelling

    NASA Astrophysics Data System (ADS)

    Sawicka, Kasia; Heuvelink, Gerard

    2016-04-01

    Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.

  15. 'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling

    NASA Astrophysics Data System (ADS)

    Sawicka, Kasia; Heuvelink, Gerard

    2017-04-01

    Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability and the ability to deal with case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
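    Since 'spup' is an R package, the sketch below gives only a Python analogue of its core workflow: draw a Latin hypercube sample of uncertain inputs, push it through a model, and summarize the output distribution. The toy model `f` and the input distributions are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import qmc, norm

    # Latin hypercube sample of two uncertain model inputs, then Monte Carlo
    # propagation through a placeholder environmental model f.
    sampler = qmc.LatinHypercube(d=2, seed=42)
    u = sampler.random(n=1000)                        # uniform [0, 1)^2
    x1 = norm(loc=10.0, scale=1.5).ppf(u[:, 0])       # input 1 ~ N(10, 1.5)
    x2 = norm(loc=0.3, scale=0.05).ppf(u[:, 1])       # input 2 ~ N(0.3, 0.05)

    def f(a, b):                                      # placeholder model
        return a * np.exp(-b)

    y = f(x1, x2)
    print(y.mean(), y.std(), np.percentile(y, [2.5, 97.5]))
    ```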

  16. Development of an Object-Oriented Turbomachinery Analysis Code within the NPSS Framework

    NASA Technical Reports Server (NTRS)

    Jones, Scott M.

    2014-01-01

    During the preliminary or conceptual design phase of an aircraft engine, the turbomachinery designer needs to estimate the effects of a large number of design parameters, such as flow size, stage count, blade count, radial position, etc., on the weight and efficiency of a turbomachine. Computer codes are invariably used to perform this task; however, such codes are often very old, written in outdated languages with arcane input files, and rarely adaptable to new architectures or unconventional layouts. Given the need to perform these kinds of preliminary design trades, a modern 2-D turbomachinery design and analysis code, the Object-oriented Turbomachinery Analysis Code (OTAC), has been written using the Numerical Propulsion System Simulation (NPSS) framework. This paper discusses the development of the governing equations and the structure of the primary objects used in OTAC.

  17. A PC-based single-ADC multi-parameter data acquisition system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodring, M.; Kegel, G.H.R.; Egan, J.J.

    1995-10-01

    A personal computer (PC) based multi-parameter data acquisition system using the Microsoft Windows operating environment has been designed and constructed. An IBM AT compatible personal computer with an Intel 486DX5 microprocessor was combined with a National Instruments AT-DIO-32 digital I/O card, a single Canberra 8713 ADC with 13-bit resolution, and a modified Canberra 8223 8-input analog multiplexer to acquire data from experiments carried out at the UML Van de Graaff accelerator. The accelerator data acquisition (ADAC) computer environment was programmed in Microsoft Visual BASIC for use in Windows. ADAC allows event-mode data acquisition with up to eight parameters (modifiable to 64) and the simultaneous display of parameters during acquisition. Additional features of ADAC include replay of event-mode data and graphical analysis/display of data. The ADAC environment is easy to upgrade or expand, inexpensive to implement, and is specifically designed to meet the needs of nuclear spectroscopy.

  18. Density functional theory of freezing of a system of highly elongated ellipsoidal oligomer solutions

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shikha; Mishra, Pankaj

    2017-05-01

    We have used the density functional theory of freezing to study the liquid crystalline phase behavior of a system of highly elongated ellipsoidal conjugated oligomers dispersed in three different solvents, namely chloroform, toluene, and their equimolar mixture. The molecules are assumed to interact via a solvent-implicit coarse-grained Gay-Berne potential. Pair correlation functions needed as input in the density functional theory have been calculated using the Percus-Yevick (PY) integral equation theory. Considering the isotropic and nematic phases, we have calculated the isotropic-nematic phase transition parameters and presented the temperature-density and pressure-temperature phase diagrams. Different solvent conditions are found not only to affect the transition parameters but also to determine the capability of the oligomers to form a nematic phase under various thermodynamic conditions. In principle, our results are verifiable through computer simulations.

  19. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection

    PubMed Central

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-01-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for ABM to obtain an appropriate estimation of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation by integrating the ABM and a regression method under the framework of history matching is developed. A novel parameter estimation method incorporating the experimental data for the simulator ABM during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model using the input and output data of the ABM, and plays the role of an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausibility measure to discard implausible input values. At last, the estimation of model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy, but also favorable computational efficiency. PMID:29194393

  20. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection.

    PubMed

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-12-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for ABM to obtain an appropriate estimation of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation by integrating the ABM and a regression method under the framework of history matching is developed. A novel parameter estimation method incorporating the experimental data for the simulator ABM during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model using the input and output data of the ABM, and plays the role of an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausibility measure to discard implausible input values. At last, the estimation of model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy, but also favorable computational efficiency.
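    A central step in history matching is the implausibility measure used to discard input values. A common form, sketched below, scales the emulator-observation mismatch by the combined emulator, observation, and discrepancy variances; the threshold of 3 is a conventional choice, and the exact form used by the authors may differ.

    ```python
    import numpy as np

    def implausibility(z, mu_em, var_em, var_obs, var_disc):
        """History-matching implausibility of an input x given observation z.

        mu_em, var_em: emulator mean and variance at x; var_obs: observation
        error variance; var_disc: model discrepancy variance."""
        return np.abs(z - mu_em) / np.sqrt(var_em + var_obs + var_disc)

    # inputs with I(x) > 3 are typically discarded as implausible
    I = implausibility(z=1.3, mu_em=1.1, var_em=0.01, var_obs=0.04, var_disc=0.02)
    keep = I < 3.0
    ```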

  1. Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces

    NASA Astrophysics Data System (ADS)

    Rinker, Jennifer M.

    2016-09-01

    This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of error between the model and the training data and in terms of convergence. The Sobol SIs are calculated using the calibrated response surface, and their convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variance contributions of the Kaimal length scale and the nonstationarity parameter are negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity.
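    Once a response surface is calibrated, Sobol indices can be estimated by brute-force sampling of the cheap surrogate. The sketch below uses the standard pick-freeze (Saltelli-style) first-order estimator on an invented two-input polynomial standing in for the calibrated surface; it is illustrative, not the paper's surface or estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):  # placeholder surrogate; in practice, the calibrated response surface
        return x[:, 0] + 2 * x[:, 1] + x[:, 0] * x[:, 1]

    d, N = 2, 100_000
    A = rng.random((N, d))
    B = rng.random((N, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))     # total output variance
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                    # "pick-freeze": swap column i
        Si = np.mean(fB * (f(ABi) - fA)) / var # Saltelli (2010) first-order estimator
        print(f"S{i + 1} ~ {Si:.3f}")
    ```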

  2. Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.; Ratnayake, Nalin A.

    2011-01-01

    As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.
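    One generic way to construct mutually orthogonal square wave inputs, sketched below, is to hold the entries of Hadamard-matrix rows for a fixed number of samples: the rows are orthogonal ±1 sequences, so the resulting square waves remain de-correlated while exciting several control surfaces at once. This construction is for illustration and is not necessarily the exact input design flown in the test.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    # Rows of a Hadamard matrix are mutually orthogonal +/-1 sequences; holding
    # each entry for `hold` samples yields orthogonal square-wave inputs.
    H = hadamard(8)
    hold = 25                                   # samples per step
    inputs = np.repeat(H[1:5], hold, axis=1)    # 4 orthogonal square waves
    assert np.allclose(inputs @ inputs.T, np.eye(4) * inputs.shape[1])
    ```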

  3. HEAT INPUT AND POST WELD HEAT TREATMENT EFFECTS ON REDUCED-ACTIVATION FERRITIC/MARTENSITIC STEEL FRICTION STIR WELDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Wei; Chen, Gaoqiang; Chen, Jian

    Reduced-activation ferritic/martensitic (RAFM) steels are an important class of structural materials for fusion reactor internals developed in recent years because of their improved irradiation resistance. However, they can suffer from welding-induced property degradation. In this paper, a solid phase joining technology, friction stir welding (FSW), was adopted to join the RAFM steel Eurofer 97, and different FSW parameters/heat inputs were chosen to produce welds. FSW response parameters, joint microstructures, and microhardness were investigated to reveal relationships among welding heat input, weld structure characterization, and mechanical properties. In general, FSW heat input results in high hardness inside the stir zone, mostly due to a martensitic transformation. It is possible to produce friction stir welds similar to, but not with exactly the same, base metal hardness when using low power input because of other hardening mechanisms. Further, post weld heat treatment (PWHT) is a very effective way to reduce FSW stir zone hardness values.

  4. On the reliability of voltage and power as input parameters for the characterization of high power ultrasound applications

    NASA Astrophysics Data System (ADS)

    Haller, Julian; Wilkens, Volker

    2012-11-01

    For power levels up to 200 W and sonication times up to 60 s, the electrical power, the voltage and the electrical impedance (more exactly: the ratio of RMS voltage and RMS current) have been measured for a piezocomposite high intensity therapeutic ultrasound (HITU) transducer with integrated matching network, two piezoceramic HITU transducers with external matching networks and for a passive dummy 50 Ω load. The electrical power and the voltage were measured during high power application with an inline power meter and an RMS voltage meter, respectively, and the complex electrical impedance was indirectly measured with a current probe, a 100:1 voltage probe and a digital scope. The results clearly show that the input RMS voltage and the input RMS power change unequally during the application. Hence, the indication of only the electrical input power or only the voltage as the input parameter may not be sufficient for reliable characterizations of ultrasound transducers for high power applications in some cases.

  5. Constraining the symmetry energy with heavy-ion collisions and Bayesian analysis

    NASA Astrophysics Data System (ADS)

    Tsang, C. Y.; Jhang, G.; Morfouace, P.; Lynch, W. G.; Tsang, M. B.; HiRA Collaboration

    2017-09-01

    To extract constraints on symmetry energy terms in the nuclear equation of state (EoS), data from heavy ion reactions are often compared to calculations from transport models. As multiple model input parameters are needed in the transport model, multi-parameter analysis is necessary to understand the relationships, especially if strong correlations exist among the parameters. In this talk, I will discuss how four symmetry energy parameters, S0 (symmetry energy) and L (slope) at saturation density as well as the nucleon scalar effective mass (ms*) and the nucleon effective mass splitting (fI), are obtained by comparing transport model results with experimental data such as isospin diffusion and n/p spectral ratios using the MADAI Bayesian analysis software. The probability of each parameter having a certain value given the experimental data can be calculated with Bayes' theorem by Markov chain Monte Carlo integration. Results using single and double ratios of neutron and proton spectra from 124Sn+124Sn, 112Sn+112Sn collisions at 120 MeV/u as well as isospin diffusion from Sn+Sn isotopes at 50 and 35 MeV/u will be presented. This research is supported by the National Science Foundation under Grant No. PHY-1565546.

  6. A seasonal Bartlett-Lewis Rectangular Pulse model

    NASA Astrophysics Data System (ADS)

    Ritschel, Christoph; Agbéko Kpogo-Nuwoklo, Komlan; Rust, Henning; Ulbrich, Uwe; Névir, Peter

    2016-04-01

    Precipitation time series with a high temporal resolution are needed as input for several hydrological applications, e.g. river runoff or sewer system models. As adequate observational data sets are often not available, simulated precipitation series are used instead. Poisson-cluster models are commonly applied to generate these series. It has been shown that this class of stochastic precipitation models reproduces important characteristics of observed rainfall well. For the gauge-based case study presented here, the Bartlett-Lewis rectangular pulse model (BLRPM) has been chosen. Since certain model parameters vary with season in a midlatitude moderate climate, due to different rainfall mechanisms dominating in winter and summer, model parameters are typically estimated separately for individual seasons or individual months. Here, we suggest a simultaneous parameter estimation for the whole year under the assumption that the seasonal variation of parameters can be described with harmonic functions. We use an observational precipitation series from Berlin with a high temporal resolution to exemplify the approach. We estimate BLRPM parameters with and without this seasonal extension and compare the results in terms of model performance and robustness of the estimation.
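    The harmonic parameterization amounts to replacing separate per-month parameter estimates with a smooth annual cycle of the form shown below; the parameter choice, mean, amplitude, and phase values are placeholders, not fitted Berlin values.

    ```python
    import numpy as np

    def seasonal_param(day_of_year, mean, amplitude, phase):
        """Harmonic seasonal variation of a BLRPM parameter, e.g. the storm
        arrival rate lambda(t), replacing month-by-month estimation."""
        return mean + amplitude * np.cos(2 * np.pi * day_of_year / 365.25 - phase)

    lam = seasonal_param(np.arange(1, 366), mean=0.02, amplitude=0.008, phase=0.5)
    ```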

  7. Handling Input and Output for COAMPS

    NASA Technical Reports Server (NTRS)

    Fitzpatrick, Patrick; Tran, Nam; Li, Yongzuo; Anantharaj, Valentine

    2007-01-01

    Two suites of software have been developed to handle the input and output of the Coupled Ocean Atmosphere Prediction System (COAMPS), a regional atmospheric model developed by the Navy for simulating and predicting weather. Typically, the initial and boundary conditions for COAMPS are provided by a flat-file representation of the Navy's global model. Additional algorithms are needed for running the COAMPS software using other global models. One of the present suites satisfies this need for running COAMPS using the Global Forecast System (GFS) model of the National Oceanic and Atmospheric Administration. The first step in running COAMPS, downloading GFS data from an Internet file-transfer-protocol (FTP) server of the National Centers for Environmental Prediction (NCEP), is performed by one of the programs (SSC-00273) in this suite. The GFS data, which are in gridded binary (GRIB) format, are then converted to a COAMPS-compatible format by another program in the suite (SSC-00278). Once a forecast is complete, still another program in the suite (SSC-00274) sends the output data to a different server computer. The second suite of software (SSC-00275) addresses the need to ingest up-to-date land-use-and-land-cover (LULC) data into COAMPS for use in specifying typical climatological values of such surface parameters as albedo, aerodynamic roughness, and ground wetness. This suite includes (1) a program to process LULC data derived from observations by the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA's Terra and Aqua satellites, (2) programs to derive new climatological parameters for the 17-land-use-category MODIS data, and (3) a modified version of a FORTRAN subroutine to be used by COAMPS. The MODIS data files are processed to reformat them into a compressed American Standard Code for Information Interchange (ASCII) format used by COAMPS for efficient processing.

  8. Macroscopic singlet oxygen model incorporating photobleaching as an input parameter

    NASA Astrophysics Data System (ADS)

    Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.

    2015-03-01

    A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and the threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable, photobleaching, was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the radiation-induced fibrosarcoma (RIF) murine tumor model, mice were treated with a linear source at fluence rates from 12 to 150 mW/cm and total fluences from 24 to 135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for a particular mouse experiment by comparing the experimental and model-calculated photobleaching ratios.

  9. Gaussian beam profile shaping apparatus, method therefor and evaluation thereof

    DOEpatents

    Dickey, Fred M.; Holswade, Scott C.; Romero, Louis A.

    1999-01-01

    A method and apparatus maps a Gaussian beam into a beam with a uniform irradiance profile by exploiting the Fourier transform properties of lenses. A phase element imparts a design phase onto an input beam and the output optical field from a lens is then the Fourier transform of the input beam and the phase function from the phase element. The phase element is selected in accordance with a dimensionless parameter which is dependent upon the radius of the incoming beam, the desired spot shape, the focal length of the lens and the wavelength of the input beam. This dimensionless parameter can also be used to evaluate the quality of a system. In order to control the radius of the incoming beam, optics such as a telescope can be employed. The size of the target spot and the focal length can be altered by exchanging the transform lens, but the dimensionless parameter will remain the same. The quality of the system, and hence the value of the dimensionless parameter, can be altered by exchanging the phase element. The dimensionless parameter provides design guidance, system evaluation, and indication as to how to improve a given system.

  10. Gaussian beam profile shaping apparatus, method therefor and evaluation thereof

    DOEpatents

    Dickey, F.M.; Holswade, S.C.; Romero, L.A.

    1999-01-26

    A method and apparatus maps a Gaussian beam into a beam with a uniform irradiance profile by exploiting the Fourier transform properties of lenses. A phase element imparts a design phase onto an input beam and the output optical field from a lens is then the Fourier transform of the input beam and the phase function from the phase element. The phase element is selected in accordance with a dimensionless parameter which is dependent upon the radius of the incoming beam, the desired spot shape, the focal length of the lens and the wavelength of the input beam. This dimensionless parameter can also be used to evaluate the quality of a system. In order to control the radius of the incoming beam, optics such as a telescope can be employed. The size of the target spot and the focal length can be altered by exchanging the transform lens, but the dimensionless parameter will remain the same. The quality of the system, and hence the value of the dimensionless parameter, can be altered by exchanging the phase element. The dimensionless parameter provides design guidance, system evaluation, and indication as to how to improve a given system. 27 figs.
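    For reference, the dimensionless parameter described in these patents combines the input beam radius, target spot half-width, focal length, and wavelength; a common statement of it (as popularized by Dickey and Holswade) is beta = 2*sqrt(2)*pi*r0*y0 / (f*lambda). The sketch below computes it under that assumed definition with arbitrary example values.

    ```python
    import math

    def beta(r0, y0, f, wavelength):
        """Assumed beam-shaping quality parameter: r0 is the input 1/e^2 beam
        radius, y0 the target half-width, f the transform lens focal length,
        all in metres; larger beta permits a more uniform flat-top spot."""
        return 2 * math.sqrt(2) * math.pi * r0 * y0 / (f * wavelength)

    print(beta(r0=2e-3, y0=0.5e-3, f=0.2, wavelength=633e-9))  # ~70
    ```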

  11. Optimization of Dimensional accuracy in plasma arc cutting process employing parametric modelling approach

    NASA Astrophysics Data System (ADS)

    Naik, Deepak kumar; Maity, K. P.

    2018-03-01

    Plasma arc cutting (PAC) is a high-temperature thermal cutting process employed for cutting extensively high-strength materials that are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with good dimensional accuracy in less time. This work presents the effect of process parameters on the dimensional accuracy of the PAC process. The input process parameters were arc voltage, standoff distance, and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness, a material used extensively in manufacturing industries, was taken as the workpiece. Linear dimensions were measured following Taguchi's L16 orthogonal array design approach, with three levels selected for each process parameter; a clockwise cut direction was followed in all experiments. Analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter. The ANOVA reveals the effect of each input process parameter on the linear dimension along the X axis, and the results give the optimal setting of process parameter values for that dimension. The investigation clearly shows that the identified range of input process parameters achieved improved machinability.

  12. Parameters Selection for Bivariate Multiscale Entropy Analysis of Postural Fluctuations in Fallers and Non-Fallers Older Adults.

    PubMed

    Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert

    2016-08-01

    Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments have provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," in Bulletin of Polish Academy of Science and Technology, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods are dependent on several input parameters such as embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physics Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. The two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of selecting appropriate input parameters.
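    The multiscale part of such estimators begins with coarse-graining: averaging consecutive non-overlapping windows of each channel at every scale before computing the (multivariate) sample entropy. The sketch below shows that first step only, on a synthetic bivariate sway signal; window lengths and data are illustrative.

    ```python
    import numpy as np

    def coarse_grain(x, scale):
        """Coarse-grain a (channels, samples) signal by averaging consecutive
        non-overlapping windows of length `scale` (the first step of any
        multiscale entropy computation)."""
        x = np.asarray(x)
        n = x.shape[-1] // scale
        return x[..., :n * scale].reshape(*x.shape[:-1], n, scale).mean(axis=-1)

    sway = np.random.randn(2, 3000)      # e.g. bivariate COP (AP, ML) series
    scales = [coarse_grain(sway, s) for s in range(1, 11)]
    ```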

  13. Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations

    NASA Astrophysics Data System (ADS)

    Bang, Youngsuk

    Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded by a user-defined error tolerance, which addresses the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defendable accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction at the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model with the accuracy quantified in a similar manner to the single physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously therefore precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic nature of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.
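    As one example of the state-space reduction mentioned above, the randomized range finder constructs an orthonormal basis Q such that A is approximated by Q Q^T A. The sketch below is the generic Halko-style algorithm, shown for illustration; the dissertation's hybrid algorithms wrap such steps with error bounds and are not reproduced here.

    ```python
    import numpy as np

    def randomized_range_finder(A, rank, oversample=10, rng=None):
        """Randomized range-finding: an orthonormal Q with A ~ Q @ Q.T @ A."""
        rng = np.random.default_rng(rng)
        # sketch the range of A with a Gaussian test matrix
        Y = A @ rng.standard_normal((A.shape[1], rank + oversample))
        Q, _ = np.linalg.qr(Y)
        return Q
    ```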

  14. Development and weighting of a life cycle assessment screening model

    NASA Astrophysics Data System (ADS)

    Bates, Wayne E.; O'Shaughnessy, James; Johnson, Sharon A.; Sisson, Richard

    2004-02-01

    Nearly all life cycle assessment tools available today are high priced, comprehensive and quantitative models requiring a significant amount of data collection and data input. In addition, most of the available software packages require a great deal of training time to learn how to operate the model software. Even after this time investment, results are not guaranteed because of the number of estimations and assumptions often necessary to run the model. As a result, product development, design teams and environmental specialists need a simplified tool that will allow for the qualitative evaluation and "screening" of various design options. This paper presents the development and design of a generic, qualitative life cycle screening model and demonstrates its applicability and ease of use. The model uses qualitative environmental, health and safety factors, based on site or product-specific issues, to sensitize the overall results for a given set of conditions. The paper also evaluates the impact of different population input ranking values on model output. The final analysis is based on site or product-specific variables. The user can then evaluate various design changes and the apparent impact or improvement on the environment, health and safety, compliance cost and overall corporate liability. Major input parameters can be varied, and factors such as materials use, pollution prevention, waste minimization, worker safety, product life, environmental impacts, return on investment, and recycling are evaluated. The flexibility of the model format will be discussed in order to demonstrate the applicability and usefulness within nearly any industry sector. Finally, an example using audience input value scores will be compared to other population input results.

  15. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    NASA Astrophysics Data System (ADS)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
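    As a toy illustration of treating input-error statistics as uncertain parameters, the sketch below runs a bare-bones random-walk Metropolis sampler that infers an additive input bias and a noise scale from residuals. The stand-in model, priors, and step sizes are invented; the study's actual likelihood couples these parameters to SRH-1D simulations of the Tachia River reach.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def log_post(theta, q_obs, q_pred):
        """Unnormalized log-posterior: Gaussian likelihood with an uncertain
        additive input bias (mu) and noise scale exp(log_sig); N(0,1) prior
        on mu, flat prior on log_sig."""
        mu, log_sig = theta
        sig = np.exp(log_sig)
        resid = q_obs - (q_pred + mu)
        return -0.5 * np.sum((resid / sig) ** 2 + 2 * log_sig) - 0.5 * mu ** 2

    # toy 'observed' vs 'modeled' flows standing in for SRH-1D output
    q_pred = np.linspace(50.0, 60.0, 40)
    q_obs = q_pred + 1.0 + rng.normal(0.0, 0.5, 40)

    theta = np.array([0.0, 0.0])
    lp = log_post(theta, q_obs, q_pred)
    chain = []
    for _ in range(5000):                        # random-walk Metropolis
        prop = theta + rng.normal(0.0, 0.05, 2)
        lp_prop = log_post(prop, q_obs, q_pred)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    print(np.mean(chain[1000:], axis=0))         # posterior mean of (mu, log_sig)
    ```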

  16. The effect of word prediction settings (frequency of use) on text input speed in persons with cervical spinal cord injury: a prospective study.

    PubMed

    Pouplin, Samuel; Roche, Nicolas; Antoine, Jean-Yves; Vaugier, Isabelle; Pottier, Sandra; Figere, Marjorie; Bensmail, Djamel

    2017-06-01

    To determine whether activation of the frequency-of-use and automatic-learning parameters of word prediction software has an impact on text input speed. Forty-five participants with cervical spinal cord injury between C4 and C8 (ASIA A or B) agreed to participate in this study. Participants were separated into two groups: a high-lesion group (lesion level at or above C5, ASIA AIS A or B) and a low-lesion group (lesion level between C6 and C8, ASIA AIS A or B). A single evaluation session was carried out for each participant. Text input speed was evaluated during three copying tasks: • without word prediction software (WITHOUT condition) • with automatic learning of words and frequency of use deactivated (NOT_ACTIV condition) • with automatic learning of words and frequency of use activated (ACTIV condition) Results: Text input speed was significantly higher in the WITHOUT than in the NOT_ACTIV (p < 0.001) or ACTIV conditions (p = 0.02) for participants with low lesions. Text input speed was significantly higher in the ACTIV than in the NOT_ACTIV (p = 0.002) or WITHOUT (p < 0.001) conditions for participants with high lesions. Use of word prediction software with frequency of use and automatic learning activated increased text input speed in participants with high-level tetraplegia. For participants with low-level tetraplegia, the use of word prediction software with frequency of use and automatic learning activated only decreased the number of errors. Implications for rehabilitation: Access to technology can be difficult for persons with disabilities such as cervical spinal cord injury (SCI). Several methods have been developed to increase text input speed, such as word prediction software. This study shows that a parameter of word prediction software (frequency of use) affected text input speed in persons with cervical SCI, and that the effect differed according to the level of the lesion. • For persons with high-level lesions, our results suggest that this parameter must be activated so that text input speed is increased. • For persons with low-level lesions, this parameter must be activated so that the number of errors is decreased. • In all cases, activation of the frequency-of-use parameter is essential in order to improve the efficiency of the word prediction software. • Health-related professionals should use these results in their clinical practice for better results and therefore better patient satisfaction.

  17. Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.

    2013-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) provides an estimate of the spread, or uncertainty, in CME arrival time predictions caused by uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+Cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real-time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real-time. The observed CME arrival was within the range of ensemble arrival time predictions for 5 out of the 12 ensemble runs containing hits. The average arrival time prediction was computed for each of the twelve ensembles predicting hits, and using the actual arrival times an average absolute error of 8.20 hours was found across all twelve ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the initial distribution of CME input parameters, particularly its mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling setup was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME.
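
    The ensemble recipe itself, perturbing the CME input parameters, running the propagation model n times, and reading the spread of arrival times off the resulting distribution, is model-independent. The sketch below illustrates it with a simple drag-based propagation formula standing in for the full 3-D WSA-ENLIL+Cone MHD simulation; the drag coefficient, background wind speed, and initial speed distribution are illustrative assumptions, not values from the study.

      import numpy as np

      rng = np.random.default_rng(1)
      AU, R0 = 1.496e8, 21.5 * 6.957e5          # km; inner boundary at 21.5 solar radii

      def arrival_time_hours(v0, w=400.0, gamma=0.2e-7):
          # Drag-based CME propagation (an illustrative stand-in for ENLIL).
          r, v, t, dt = R0, v0, 0.0, 600.0      # dt in seconds
          while r < AU:
              a = -gamma * (v - w) * abs(v - w) # km/s^2, drag toward the wind speed w
              v += a * dt
              r += v * dt
              t += dt
          return t / 3600.0

      # n = 48 ensemble members drawn from the uncertainty in the initial speed.
      speeds = rng.normal(900.0, 100.0, size=48)     # km/s, mean/spread from a StereoCAT-like fit
      arrivals = np.array([arrival_time_hours(v) for v in speeds])
      print(f"arrival: {arrivals.mean():.1f} h +/- {arrivals.std():.1f} h "
            f"(range {arrivals.min():.1f}-{arrivals.max():.1f} h)")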

  18. Reservoir computing with a single time-delay autonomous Boolean node

    NASA Astrophysics Data System (ADS)

    Haynes, Nicholas D.; Soriano, Miguel C.; Rosin, David P.; Fischer, Ingo; Gauthier, Daniel J.

    2015-02-01

    We demonstrate reservoir computing with a physical system using a single autonomous Boolean logic element with time-delay feedback. The system generates a chaotic transient with a window of consistency lasting between 30 and 300 ns, which we show is sufficient for reservoir computing. We then characterize the dependence of computational performance on system parameters to find the best operating point of the reservoir. When the best parameters are chosen, the reservoir is able to classify short input patterns with performance that decreases over time. In particular, we show that four distinct input patterns can be classified for 70 ns, even though the inputs are only provided to the reservoir for 7.5 ns.
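
    Whatever the physical substrate, the reservoir-computing readout is trained the same way: drive the reservoir with an input pattern, sample its state, and fit a linear (ridge-regression) classifier on the sampled states. The sketch below uses a small software echo-state network as a stand-in for the autonomous Boolean node; the sizes, noise levels, and regularization are arbitrary illustrative choices.

      import numpy as np

      rng = np.random.default_rng(2)
      N, T = 100, 20                                  # reservoir size, pattern length

      # Random reservoir (a software echo-state stand-in for the Boolean node).
      W = rng.normal(0, 1, (N, N))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for consistency
      W_in = rng.normal(0, 1, N)

      def run(pattern):
          x = np.zeros(N)
          for u in pattern:
              x = np.tanh(W @ x + W_in * u)
          return x                                    # reservoir state after the input

      # Four distinct binary input patterns; noisy repetitions form the training set.
      patterns = [rng.integers(0, 2, T) for _ in range(4)]
      X, y = [], []
      for label, p in enumerate(patterns):
          for _ in range(50):
              noisy = np.where(rng.uniform(size=T) < 0.05, 1 - p, p)  # 5% bit flips
              X.append(run(noisy)); y.append(label)
      X, y = np.array(X), np.array(y)

      # Ridge-regression readout on one-hot targets: the standard reservoir training.
      Y = np.eye(4)[y]
      beta = np.linalg.solve(X.T @ X + 1e-3 * np.eye(N), X.T @ Y)
      pred = np.argmax(X @ beta, axis=1)
      print("training accuracy:", (pred == y).mean())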

  19. Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Manajit; Gotseff, Peter

    2013-12-01

    This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve the general clear sky model performance.

  20. Computer program for single input-output, single-loop feedback systems

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Additional work is reported on a completely automatic computer program for the design of single input/output, single-loop feedback systems with parameter uncertainty, to satisfy time-domain bounds on the system response to step commands and disturbances. The inputs to the program are basically the specified time-domain response bounds, the form of the constrained plant transfer function, and the ranges of the uncertain parameters of the plant. The program output consists of the transfer functions of the two free compensation networks, in the form of the coefficients of the numerator and denominator polynomials, and the data on the prescribed bounds and the extremes actually obtained for the system response to commands and disturbances.

  1. A Predictor Analysis Framework for Surface Radiation Budget Reprocessing Using Design of Experiments

    NASA Astrophysics Data System (ADS)

    Quigley, Patricia Allison

    Earth's Radiation Budget (ERB) is an accounting of all incoming energy from the sun and outgoing energy reflected and radiated to space by earth's surface and atmosphere. The National Aeronautics and Space Administration (NASA)/Global Energy and Water Cycle Experiment (GEWEX) Surface Radiation Budget (SRB) project produces and archives long-term datasets representative of this energy exchange system on a global scale. The data comprise the longwave and shortwave radiative components of the system and are algorithmically derived from satellite and atmospheric assimilation products and acquired atmospheric data. They are stored as 3-hourly, daily, monthly/3-hourly, and monthly averages of 1° x 1° grid cells. Input parameters used by the algorithms are a key source of variability in the resulting output data sets. Sensitivity studies have been conducted to estimate the effects this variability has on the output data sets using linear techniques. This entails varying one input parameter at a time while keeping all others constant, or increasing all input parameters by equal random percentages, in effect changing input values for every cell for every three-hour period and for every day in each month. This equates to almost 11 million independent changes without ever taking into consideration the interactions or dependencies among the input parameters. A more comprehensive method is proposed here for evaluating the shortwave algorithm, identifying both the input parameters and the parameter interactions that most significantly affect the output data. This research utilized designed experiments that systematically and simultaneously varied all of the input parameters of the shortwave algorithm. A D-optimal design of experiments (DOE) was chosen to accommodate the 14 types of atmospheric properties computed by the algorithm and to reduce the number of trials required by a full factorial study from millions to 128. A modified version of the algorithm was made available for testing, such that the global calculations of the algorithm were tuned to accept information for a single temporal and spatial point and for one month of averaged data. The points were drawn from four atmospherically distinct regions: the Amazon Rainforest, the Sahara Desert, the Indian Ocean, and Mt. Everest. The same design was used for all of the regions. Least-squares multiple regression analysis of the results of the modified algorithm identified those parameters and parameter interactions that most significantly affected the output products. It was found that the cosine solar zenith angle had the strongest influence on the output data in all four regions. The interaction of cosine solar zenith angle and cloud fraction had the strongest influence on the output data in the Amazon, Sahara Desert and Mt. Everest regions, while the interaction of cloud fraction and cloudy shortwave radiance most significantly affected output data in the Indian Ocean region. Second-order response models were built using the resulting regression coefficients. A Monte Carlo simulation of each model extended the probability distribution beyond the initial design trials to quantify variability in the modeled output data.
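
    The analysis step, least-squares fitting of a second-order response model to the DOE trials followed by Monte Carlo over the fitted surface, is compact enough to sketch. The example below uses three hypothetical coded factors rather than the study's 14 atmospheric properties, and a made-up response with one strong interaction.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(3)
      k = 3                                            # factors (the study used 14)

      def design(X):
          # Intercept, linear, two-way interaction, and quadratic columns.
          n = len(X)
          cols = [np.ones(n)] + [X[:, i] for i in range(k)]
          cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
          cols += [X[:, i] ** 2 for i in range(k)]
          return np.column_stack(cols)

      # 128 coded trials and a hypothetical response with an x0*x1 interaction.
      X = rng.uniform(-1, 1, (128, k))
      y = 3.0 * X[:, 0] + 1.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 128)

      coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)

      # Monte Carlo over the fitted response surface extends the 128 trials to a
      # full output distribution, as in the study's final step.
      Xmc = rng.uniform(-1, 1, (100000, k))
      ymc = design(Xmc) @ coef
      print(f"modeled output: mean {ymc.mean():.3f}, std {ymc.std():.3f}")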

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Tianyu; Xu, Hongyi; Chen, Wei

    Fiber-reinforced polymer composites are strong candidates for structural materials to replace steel and light alloys in lightweight vehicle design because of their low density and relatively high strength. In the integrated computational materials engineering (ICME) development of carbon fiber composites, microstructure reconstruction algorithms are needed to generate a material microstructure representative volume element (RVE) based on the material processing information. The microstructure RVE reconstruction enables material property prediction by finite element analysis (FEA). This paper presents an algorithm to reconstruct the microstructure of a chopped carbon fiber/epoxy laminate material system produced by compression molding, normally known as sheet molding compound (SMC). The algorithm takes results from the material's manufacturing process as inputs, such as the orientation tensor of fibers, the chopped fiber sheet geometry, and the fiber volume fraction. The chopped fiber sheets are treated as deformable rectangular chips, and a random packing algorithm is developed to pack these chips into a square plate. The RVE is built in a layer-by-layer fashion until the desired number of laminae is reached; a fine-tuning process is then applied to finalize the reconstruction. Compared to previous methods, this new approach has the ability to model bent fibers by allowing a limited amount of overlap between rectangular chips. Furthermore, the method does not need SMC microstructure images as inputs, for which image-based characterization techniques are not yet mature enough. Case studies are performed, and the results show that the statistics of the reconstructed microstructures generated by the algorithm match well with the target input parameters from processing.

  3. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null-cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed-computing, cross-platform environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null-cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.

  4. Utilizing High-Performance Computing to Investigate Parameter Sensitivity of an Inversion Model for Vadose Zone Flow and Transport

    NASA Astrophysics Data System (ADS)

    Fang, Z.; Ward, A. L.; Fang, Y.; Yabusaki, S.

    2011-12-01

    High-resolution geologic models have proven effective in improving the accuracy of subsurface flow and transport predictions. However, many of the parameters in subsurface flow and transport models cannot be determined directly at the scale of interest and must be estimated through inverse modeling. A major challenge, particularly in vadose zone flow and transport, is the inversion of the highly nonlinear, high-dimensional problem, as current methods are not readily scalable for large-scale, multi-process models. In this paper we describe the implementation of a fully automated approach for addressing complex parameter optimization and sensitivity issues on massively parallel multi- and many-core systems. The approach is based on the integration of PNNL's extreme-scale Subsurface Transport Over Multiple Phases (eSTOMP) simulator, which uses the Global Array toolkit, with the Beowulf-cluster-inspired parallel nonlinear parameter estimation software BeoPEST in MPI mode. In the eSTOMP/BeoPEST implementation, a pre-processor generates all of the PEST input files based on the eSTOMP input file. Simulation results for comparison with observations are extracted automatically at each time step, eliminating the need for post-processing data extraction. The inversion framework was tested with three different experimental data sets: one-dimensional water flow at the Hanford Grass Site; an irrigation and infiltration experiment at the Andelfingen Site; and a three-dimensional injection experiment at Hanford's Sisson and Lu Site. Good agreement between observations and simulations is achieved in all three applications, in both parameter estimates and the reproduction of water dynamics. Results show that the eSTOMP/BeoPEST approach is highly scalable and can be run efficiently with hundreds or thousands of processors. BeoPEST is fault-tolerant, and new nodes can be dynamically added and removed. A major advantage of this approach is the ability to use high-resolution geologic models to preserve the spatial structure in the inverse model, which leads to better parameter estimates and improved predictions when using the inverse-conditioned realizations of parameter fields.

  5. Support vector machines-based modelling of seismic liquefaction potential

    NASA Astrophysics Data System (ADS)

    Pal, Mahesh

    2006-08-01

    This paper investigates the potential of a support vector machine (SVM)-based classification approach to assess liquefaction potential from actual standard penetration test (SPT) and cone penetration test (CPT) field data. SVMs are based on statistical learning theory and have been found to work well in comparison to neural networks in several other applications. Both CPT and SPT field data sets are used with SVMs for predicting the occurrence and non-occurrence of liquefaction based on different input parameter combinations. With the SPT and CPT data sets, highest accuracies of 96% and 97%, respectively, were achieved. This suggests that SVMs can effectively be used to model the complex relationship between different soil parameters and liquefaction potential. Several other combinations of input variables were used to assess the influence of different input parameters on liquefaction potential. The proposed approach suggests that neither the normalized cone resistance value (with CPT data) nor the calculation of the standardized SPT value (with SPT data) is required. Further, SVMs require few user-defined parameters and provide better performance in comparison to the neural network approach.
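
    A minimal version of such an SVM screening model is easy to assemble with standard tools. The sketch below trains an RBF-kernel classifier on synthetic stand-ins for SPT records (blow count and cyclic stress ratio with an artificial liquefaction boundary); the features, threshold, and hyperparameters are all illustrative assumptions, not the paper's data.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(4)

      # Synthetic stand-ins for SPT field records: blow count N, cyclic stress ratio CSR.
      n = 400
      N_val = rng.uniform(2, 40, n)
      csr = rng.uniform(0.05, 0.5, n)
      # Artificial boundary: liquefaction when CSR exceeds a resistance that grows with N.
      liquefied = (csr > 0.02 * N_val + rng.normal(0, 0.04, n)).astype(int)

      X = np.column_stack([N_val, csr])
      Xtr, Xte, ytr, yte = train_test_split(X, liquefied, test_size=0.25, random_state=0)

      # RBF-kernel SVM; C and gamma are the few user-set parameters the paper mentions.
      clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(Xtr, ytr)
      print("test accuracy:", clf.score(Xte, yte))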

  6. Explicitly integrating parameter, input, and structure uncertainties into Bayesian Neural Networks for probabilistic hydrologic forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong; Liang, Faming; Yu, Beibei

    2011-11-09

    Estimating the uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision-making processes. Recently, Bayesian Neural Networks (BNNs) have proved to be powerful tools for quantifying the uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons, and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structure. Critical evaluation of the posterior distributions of neural network weights, the number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of different uncertainty sources, and inclusion of output error in the MCMC framework, are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.

  7. Parameter Balancing in Kinetic Models of Cell Metabolism†

    PubMed Central

    2010-01-01

    Kinetic modeling of metabolic pathways has become a major field of systems biology. It combines structural information about metabolic pathways with quantitative enzymatic rate laws. Some of the kinetic constants needed for a model could be collected from the ever-growing literature and public web resources, but they are often incomplete, incompatible, or simply not available. We address this lack of information by parameter balancing, a method to complete given sets of kinetic constants. Based on Bayesian parameter estimation, it exploits the thermodynamic dependencies among different biochemical quantities to guess realistic model parameters from available kinetic data. Our algorithm accounts for varying measurement conditions in the input data (pH value and temperature). It can process kinetic constants and state-dependent quantities such as metabolite concentrations or chemical potentials, and uses prior distributions and data augmentation to keep the estimated quantities within plausible ranges. An online service and free software for parameter balancing, with models provided in SBML format (Systems Biology Markup Language), are accessible at www.semanticsbml.org. We demonstrate its practical use with a small model of the phosphofructokinase reaction and discuss its possible applications and limitations. In the future, parameter balancing could become an important routine step in the kinetic modeling of large metabolic networks. PMID:21038890

  8. An automatic scaling method for obtaining the trace and parameters from oblique ionogram based on hybrid genetic algorithm

    NASA Astrophysics Data System (ADS)

    Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian

    2016-12-01

    Scaling the oblique ionogram plays an important role in obtaining the ionospheric structure at the midpoint of the oblique sounding path. This paper proposes an automatic scaling method to extract the trace and parameters of the oblique ionogram based on a hybrid genetic algorithm (HGA). The ten extracted parameters come from the F2 layer and the Es layer, such as the maximum observation frequency, critical frequency, and virtual height. The method adopts the quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing technology, and the echoes' characteristics to determine the best-fit values of seven parameters, and the initial values of the three QP-model parameters used to set up their search spaces, which are the input data needed by the HGA. The HGA then searches for the three parameters' best-fit values within their search spaces based on the fitness between the synthesized trace and the real trace. In order to verify the performance of the method, 240 oblique ionograms were scaled and their results compared with manual scaling results and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate, or at least adequate, 60-90% of the time.

  9. Propagation of a channelized debris-flow: experimental investigation and parameters identification for numerical modelling

    NASA Astrophysics Data System (ADS)

    Termini, Donatella

    2013-04-01

    Recent catastrophic events due to intense rainfall have mobilized large amounts of sediment, causing extensive damage over vast areas. These events have highlighted how debris-flow runout estimates are of crucial importance for delineating potentially hazardous areas and making reliable assessments of the level of risk of the territory. Especially in recent years, several studies have been conducted in order to define predictive models, but existing runout estimation methods need input parameters that can be difficult to estimate. Recent experimental research has also advanced the assessment of the physics of debris flows, yet most experimental studies analyze the basic kinematic conditions which determine the evolution of the phenomenon. An experimental program was recently conducted at the hydraulics laboratory of the Department of Civil, Environmental, Aerospace and Materials Engineering (DICAM) of the University of Palermo (Italy). The experiments, carried out in a purpose-built laboratory flume, were planned in order to evaluate the influence of different geometrical parameters (such as the slope and the geometrical characteristics of the confluences to the main channel) on the propagation of the debris flow and its deposition. Thus, the aim of the present work is to contribute to defining the input parameters for runout estimation by numerical modeling. The propagation phenomenon is analyzed for different concentrations of solid material. Particular attention is devoted to identifying the stopping distance of the debris flow and the parameters involved (volume, angle of deposition, type of material) in the empirical predictive equations available in the literature (Rickenmann, 1999; Bathurst et al., 1997). Bathurst J.C., Burton A., Ward T.J. 1997. Debris flow run-out and landslide sediment delivery model tests. Journal of Hydraulic Engineering, ASCE, 123(5), 419-429. Rickenmann D. 1999. Empirical relationships for debris flows. Natural Hazards, 19, 47-77.
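
    For orientation, the empirical relation of Rickenmann (1999) cited above expresses, to the best of our recollection, the total travel distance L of a debris flow as a power law of the event volume V and the elevation difference H,

        L = 1.9 V^0.16 H^0.83    (L, H in m; V in m^3),

    so that a volume estimate and the relief alone give a first-order runout length. This is the kind of input-parameter-lean predictive equation whose coefficients the experiments described here help to constrain.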

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pei, Zongrui; Stocks, George Malcolm

    The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a limited small set of parameters, all of which can be determined by convergence tests. Under special conditions the new formulation reduces to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.

  11. System and Method for Providing Model-Based Alerting of Spatial Disorientation to a Pilot

    NASA Technical Reports Server (NTRS)

    Johnson, Steve (Inventor); Conner, Kevin J (Inventor); Mathan, Santosh (Inventor)

    2015-01-01

    A system and method monitor aircraft state parameters, for example, aircraft movement and flight parameters, apply those inputs to a spatial disorientation model, and make a prediction of when the pilot may become spatially disoriented. Once the system predicts a potentially disoriented pilot, the sensitivity for alerting the pilot to conditions exceeding a threshold can be increased, allowing an earlier alert to mitigate the possibility of an incorrect control input.

  12. Particle parameter analyzing system. [x-y plotter circuits and display

    NASA Technical Reports Server (NTRS)

    Hansen, D. O.; Roy, N. L. (Inventor)

    1969-01-01

    An X-Y plotter circuit apparatus is described which displays an input pulse representing particle parameter information, which would ordinarily appear on the screen of an oscilloscope as a rectangular pulse, as a single dot positioned on the screen where the upper right-hand corner of the input pulse would have appeared. If another event occurs and it is desired to display it, circuitry is provided to replace the dot with a short horizontal line.

  13. Hearing Aids and Music

    PubMed Central

    Chasin, Marshall; Russo, Frank A.

    2004-01-01

    Historically, the primary concern for hearing aid design and fitting is optimization for speech inputs. However, increasingly other types of inputs are being investigated and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression issues—both compression ratio and knee-points—and number of channels all can deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set other parameters such as noise reduction and feedback control mechanisms. Regardless of the existence of a “music program,” unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions. PMID:15497032

  14. Optimization of input parameters of acoustic-transfection for the intracellular delivery of macromolecules using FRET-based biosensors

    NASA Astrophysics Data System (ADS)

    Yoon, Sangpil; Wang, Yingxiao; Shung, K. K.

    2016-03-01

    An acoustic-transfection technique has been developed for the first time by integrating a high-frequency ultrasonic transducer and a fluorescence microscope. High-frequency ultrasound with a center frequency above 150 MHz can focus an acoustic sound field into a confined area with a diameter of 10 μm or less. This focusing capability was used to perturb the lipid bilayer of the cell membrane to induce intracellular delivery of macromolecules. Single-cell-level imaging was performed to investigate the behavior of a targeted single cell after acoustic-transfection. A FRET-based Ca2+ biosensor was used to monitor the intracellular concentration of Ca2+ after acoustic-transfection, and the fluorescence intensity of propidium iodide (PI) was used to observe the influx of PI molecules. We changed peak-to-peak voltages and pulse duration to optimize the input parameters of an acoustic pulse. Input parameters that can induce strong perturbations of the cell membrane were found, and size-dependent intracellular delivery of macromolecules was explored. To increase the amount of delivered molecules by acoustic-transfection, we applied several acoustic pulses, and the intensity of PI fluorescence increased stepwise. Finally, the optimized input parameters of the acoustic-transfection system were used to deliver the pMax-E2F1 plasmid into HeLa cells, and GFP expression was confirmed 24 hours after the intracellular delivery.

  15. Estimating the Expected Value of Sample Information Using the Probabilistic Sensitivity Analysis Sample

    PubMed Central

    Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny

    2015-01-01

    Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
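
    The essence of the regression-based estimator can be sketched from the description above: generate one plausible data set per PSA draw, regress each option's sampled net benefit on a summary statistic of that data set, and compare the mean of the per-draw maxima of the fitted values with the maximum of the mean net benefits. The two-option decision model, normal outcome model, and moving-average smoother below are all made-up simplifications (the authors use a more refined nonparametric smoother).

      import numpy as np

      rng = np.random.default_rng(5)
      K = 10000                                          # PSA sample size

      # PSA sample: uncertain effect theta and the net benefits of two options.
      theta = rng.normal(0.1, 0.2, K)
      nb = np.column_stack([np.zeros(K), 1000.0 * theta - 50.0])   # NB0, NB1

      # One plausible data set per PSA draw: n patients, summarized by their mean.
      n = 50
      xbar = theta + rng.normal(0, 1.0 / np.sqrt(n), K)  # sufficient statistic

      # Nonparametric regression of each NB column on xbar: a moving average over
      # the xbar-sorted samples (edge effects ignored in this sketch).
      order = np.argsort(xbar)
      fitted = np.empty_like(nb)
      w = 201
      kern = np.ones(w) / w
      for j in range(nb.shape[1]):
          fitted[order, j] = np.convolve(nb[order, j], kern, mode="same")

      evsi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
      print(f"per-patient EVSI estimate: {evsi:.1f}")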

  16. SRB Data and Information

    Atmospheric Science Data Center

    2017-01-13

    ... grid. Model inputs of cloud amounts and other atmospheric state parameters are also available in some of the data sets. Primary inputs to ... Analysis (SMOBA), an assimilation product from NOAA's Climate Prediction Center. SRB products are reformatted for the use of ...

  17. Numerical simulation of multi-rifled tube drawing - finding proper feedstock dimensions and tool geometry

    NASA Astrophysics Data System (ADS)

    Bella, P.; Buček, P.; Ridzoň, M.; Mojžiš, M.; Parilák, L.'

    2017-02-01

    Production of multi-rifled seamless steel tubes is quite a new technology at Železiarne Podbrezová; therefore, many technological questions emerge (process technology, input feedstock dimensions, material flow during drawing, etc.). Pilot experiments to fine-tune the process cost a great deal of time and energy. Numerical simulation is therefore an alternative route to achieving optimal parameters in the production technology: it reduces the number of experiments needed, lowering the overall cost of development. However, for the numerical results to be considered relevant, it is necessary to verify them against actual plant trials. The main topic of this paper is the search for the optimal input feedstock dimensions for drawing a multi-rifled tube with dimensions Ø28.6 mm × 6.3 mm. As a secondary task, the effective position of the plug-die couple has been determined via numerical simulation. Comparing the calculated results with actual numbers from plant trials, good agreement was observed.

  18. A High Efficiency Boost Converter with MPPT Scheme for Low Voltage Thermoelectric Energy Harvesting

    NASA Astrophysics Data System (ADS)

    Guan, Mingjie; Wang, Kunpeng; Zhu, Qingyuan; Liao, Wei-Hsin

    2016-11-01

    Using thermoelectric elements to harvest energy from heat has been of great interest during the last decade. This paper presents a direct current-direct current (DC-DC) boost converter with a maximum power point tracking (MPPT) scheme for low-input-voltage thermoelectric energy harvesting applications. A zero-current switching technique is applied in the proposed MPPT scheme. A theoretical analysis of the converter circuits is carried out to derive the equations for the parameters needed in the design of the boost converter. Simulations and experiments are carried out to verify the theoretical analysis and equations. A prototype of the designed converter is built using discrete components and a low-power microcontroller. The results show that the designed converter can achieve a high efficiency at low input voltage. The experimental efficiency of the designed converter is compared with a commercial converter solution; the designed converter has a higher efficiency than the commercial solution in the considered voltage range.
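
    The MPPT target for a thermoelectric source follows from its Thevenin representation: with open-circuit voltage V_OC and internal resistance R_INT, input power is maximized when the converter regulates its input to V_OC/2. The sketch below just verifies this numerically; the component values are illustrative, and the paper's actual tracking scheme (built around zero-current switching) is not reproduced here.

      import numpy as np

      # Thermoelectric source: Thevenin model (illustrative values).
      V_OC, R_INT = 0.5, 2.0                    # volts, ohms

      def teg_power(v_in):
          # Power drawn when the converter regulates its input voltage to v_in.
          i = (V_OC - v_in) / R_INT
          return v_in * i

      # Maximum power transfer occurs at v_in = V_OC / 2:
      vs = np.linspace(0.0, V_OC, 501)
      p = teg_power(vs)
      print(f"MPP at {vs[np.argmax(p)]:.3f} V (V_OC/2 = {V_OC/2:.3f} V), "
            f"P_max = {p.max()*1e3:.1f} mW")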

  19. Evaluation Applied to Reliability Analysis of Reconfigurable, Highly Reliable, Fault-Tolerant, Computing Systems for Avionics

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose output precision is quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.

  20. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    NASA Technical Reports Server (NTRS)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.

  1. Non-intrusive parameter identification procedure user's guide

    NASA Technical Reports Server (NTRS)

    Hanson, G. D.; Jewell, W. F.

    1983-01-01

    Written in standard FORTRAN, NAS is capable of identifying linear as well as nonlinear relations between input and output parameters; the only restriction is that the input/output relation be linear with respect to the unknown coefficients of the estimation equations. The output of the identification algorithm can be specified to be in either the time domain (i.e., the estimation equation coefficients) or in the frequency domain (i.e., a frequency response of the estimation equation). The frame length ("window") over which the identification procedure is to take place can be specified to be any portion of the input time history, thereby allowing the freedom to start and stop the identification procedure within a time history. There also is an option which allows a sliding window, which gives a moving average over the time history. The NAS software also includes the ability to identify several assumed solutions simultaneously for the same or different input data.
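
    The identification constraint described, namely that the input/output relation may be nonlinear but must be linear in the unknown coefficients, means each window reduces to an ordinary least-squares problem on a regressor matrix built from the time histories. A minimal sketch follows (the difference-equation form and window sizes are hypothetical, not NAS's actual estimation equations).

      import numpy as np

      rng = np.random.default_rng(6)
      T = 500
      u = rng.normal(size=T)                          # input time history
      y = np.zeros(T)
      for k in range(2, T):                           # unknown-truth difference equation
          y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + rng.normal(0, 0.01)

      def identify(k0, k1):
          # Estimation equation y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]: the
          # regressors may be nonlinear signals as long as the unknowns enter linearly.
          rows = range(max(k0, 2), k1)
          A = np.array([[y[k-1], y[k-2], u[k-1]] for k in rows])
          b = np.array([y[k] for k in rows])
          coef, *_ = np.linalg.lstsq(A, b, rcond=None)
          return coef

      print(identify(0, T))                           # whole record
      # Sliding window: re-identify over successive frames for a moving average.
      for k0 in range(0, T - 100, 100):
          print(k0, identify(k0, k0 + 100))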

  2. Sensitivity analysis of respiratory parameter uncertainties: impact of criterion function form and constraints.

    PubMed

    Lutchen, K R

    1990-08-01

    A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz. This reduces data acquisition requirements from a 16- to a 5.33- to 8-s breath holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
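
    The linearized machinery behind such predictions is the weighted least-squares covariance approximation Cov(p) ~ (J^T W J)^-1, with J the Jacobian of the model outputs with respect to the parameters and W the inverse noise covariance. The sketch below applies it to a toy two-parameter resistance-compliance magnitude model to show how predicted uncertainties grow as the minimum fitted frequency rises; the model form, units, and noise level are illustrative, not the paper's six-element model.

      import numpy as np

      def model(p, f):
          # Toy impedance magnitude |R + 1/(j*2*pi*f*C)| for resistance R, compliance C.
          R, C = p
          return np.sqrt(R**2 + (1.0 / (2 * np.pi * f * C))**2)

      def param_sd(p, f, sigma):
          # Linearized uncertainty: Cov(p) ~ (J^T W J)^-1 with W = diag(1/sigma^2).
          eps = 1e-6
          J = np.empty((len(f), len(p)))
          for j in range(len(p)):
              dp = np.zeros(len(p)); dp[j] = eps * max(abs(p[j]), 1.0)
              J[:, j] = (model(p + dp, f) - model(p - dp, f)) / (2 * dp[j])
          W = np.diag(1.0 / sigma**2)
          cov = np.linalg.inv(J.T @ W @ J)
          return np.sqrt(np.diag(cov))

      p = np.array([2.0, 0.05])                       # R (cmH2O.s/L), C (L/cmH2O); illustrative
      for fmin in (0.125, 0.275, 0.375):
          f = np.linspace(fmin, 4.0, 32)              # frequency grid of the fit
          sigma = 0.05 * model(p, f)                  # assumed 5% measurement noise
          print(fmin, param_sd(p, f, sigma))          # how uncertainty grows with fmin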

  3. Improved Ground Hydrology Calculations for Global Climate Models (GCMs): Soil Water Movement and Evapotranspiration.

    NASA Astrophysics Data System (ADS)

    Abramopoulos, F.; Rosenzweig, C.; Choudhury, B.

    1988-09-01

    A physically based ground hydrology model is developed to improve the land-surface sensible and latent heat calculations in global climate models (GCMs). The processes of transpiration, evaporation from intercepted precipitation and dew, evaporation from bare soil, infiltration, soil water flow, and runoff are explicitly included in the model. The amount of detail in the hydrologic calculations is restricted to a level appropriate for use in a GCM, but each of the aforementioned processes is modeled on the basis of the underlying physical principles. Data from the Goddard Institute for Space Studies (GISS) GCM are used as inputs for off-line tests of the ground hydrology model in four 8° × 10° regions (Brazil, Sahel, Sahara, and India). Soil and vegetation input parameters are calculated as area-weighted means over the 8° × 10° gridbox. This compositing procedure is tested by comparing the resulting hydrological quantities to ground hydrology model calculations performed on the 1° × 1° cells which comprise the 8° × 10° gridbox. Results show that the compositing procedure works well except in the Sahel, where lower soil water levels and a heterogeneous land surface produce more variability in hydrological quantities, indicating that a resolution better than 8° × 10° is needed for that region. Modeled annual and diurnal hydrological cycles compare well with observations for Brazil, where real-world data are available. The sensitivity of the ground hydrology model to several of its input parameters was tested; it was found to be most sensitive to the fraction of land covered by vegetation and least sensitive to the soil hydraulic conductivity and matric potential.

  4. A Study on the Effects of Spatial Scale on Snow Process in Hyper-Resolution Hydrological Modelling over Mountainous Areas

    NASA Astrophysics Data System (ADS)

    Garousi Nejad, I.; He, S.; Tang, Q.; Ogden, F. L.; Steinke, R. C.; Frazier, N.; Tarboton, D. G.; Ohara, N.; Lin, H.

    2017-12-01

    Spatial scale is one of the main considerations in hydrological modeling of snowmelt in mountainous areas. The size of model elements controls the degree to which variability can be explicitly represented versus what needs to be parameterized using effective properties such as averages or other subgrid variability parameterizations that may degrade the quality of model simulations. For snowmelt modeling terrain parameters such as slope, aspect, vegetation and elevation play an important role in the timing and quantity of snowmelt that serves as an input to hydrologic runoff generation processes. In general, higher resolution enhances the accuracy of the simulation since fine meshes represent and preserve the spatial variability of atmospheric and surface characteristics better than coarse resolution. However, this increases computational cost and there may be a scale beyond which the model response does not improve due to diminishing sensitivity to variability and irreducible uncertainty associated with the spatial interpolation of inputs. This paper examines the influence of spatial resolution on the snowmelt process using simulations of and data from the Animas River watershed, an alpine mountainous area in Colorado, USA, using an unstructured distributed physically based hydrological model developed for a parallel computing environment, ADHydro. Five spatial resolutions (30 m, 100 m, 250 m, 500 m, and 1 km) were used to investigate the variations in hydrologic response. This study demonstrated the importance of choosing the appropriate spatial scale in the implementation of ADHydro to obtain a balance between representing spatial variability and the computational cost. According to the results, variation in the input variables and parameters due to using different spatial resolution resulted in changes in the obtained hydrological variables, especially snowmelt, both at the basin-scale and distributed across the model mesh.

  5. Artificial neural network model for ozone concentration estimation and Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Gao, Meng; Yin, Liting; Ning, Jicai

    2018-07-01

    Air pollution in urban atmospheres directly affects public health; therefore, it is essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict the ozone concentration in the urban area of Jinan, a metropolis in northern China. We first found that the architecture of the network of neurons had little effect on the predicting capability of the ANN model. A parsimonious ANN model with six routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday or regular weekend) as input variables was identified, where the seven input variables were selected following a forward selection procedure. Compared with the benchmarking ANN model with nine meteorological and photochemical parameters as input variables, the predicting capability of the parsimonious ANN model was acceptable. Its predicting capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
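
    The parsimonious model described is essentially a small multilayer perceptron on seven inputs. The sketch below wires one up with scikit-learn on synthetic stand-ins for the six meteorological inputs plus the day-category covariate (the data-generating rule is invented for illustration); a forward-selection loop would simply repeat this cross-validated scoring while growing the input set.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)

      # Synthetic stand-ins for the 6 meteorological inputs + day-category covariate.
      n = 1500
      X = np.column_stack([
          rng.uniform(10, 38, n),      # max temperature
          rng.uniform(990, 1030, n),   # atmospheric pressure
          rng.uniform(0, 14, n),       # sunshine duration
          rng.uniform(0, 12, n),       # max wind speed
          rng.uniform(20, 95, n),      # humidity
          rng.uniform(0, 30, n),       # precipitation
          rng.integers(0, 3, n),       # day category (workday/holiday/weekend)
      ])
      # Invented target rule standing in for measured ozone concentrations.
      ozone = 2.5 * X[:, 0] + 3.0 * X[:, 2] - 1.0 * X[:, 3] + rng.normal(0, 8, n)

      ann = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                                       random_state=0))
      print("CV R^2:", cross_val_score(ann, X, ozone, cv=5).mean())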

  6. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    NASA Astrophysics Data System (ADS)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.

  7. Anthropogenic activities impact on atmospheric environmental quality in a gas-flaring community: application of fuzzy logic modelling concept.

    PubMed

    Akintola, Olayiwola Akin; Sangodoyin, Abimbola Yisau; Agunbiade, Foluso Oyedotun

    2018-05-24

    We present a modelling concept for evaluating the impacts of anthropogenic activities, suspected to be from gas flaring, on the quality of the atmosphere, using domestic roof-harvested rainwater (DRHRW) as an indicator. We analysed seven metals (Cu, Cd, Pb, Zn, Fe, Ca, and Mg) and six water quality parameters (acidity, PO4^3-, SO4^2-, NO3^-, Cl^-, and pH). These were used as input parameters at 12 sampling points in a gas-flaring environment (Port Harcourt, Nigeria), using Ibadan as a reference. We formulated the results of these input parameters into membership-function fuzzy matrices based on four degrees of impact: extremely high, high, medium, and low, using regulatory limits as criteria. From the product of the membership-function matrices and the weight matrices, we generated indices that classified the degree of anthropogenic impact, placing the investigated (gas-flaring) environment between medium and high impact, compared with the reference (residential) environment, which was classified between low and medium impact. The major contaminants of concern found in the harvested rainwater were Pb and Cd. There is also an urgent need to stop gas-flaring activities in the Port Harcourt area in particular, and in the Niger Delta region of Nigeria in general, so as to minimise the health hazards that people living in the area currently face. The fuzzy methodology has also indicated that the water cannot safely support potable uses and should not be consumed without purification, owing to the impact of anthropogenic activities in the area, but it may be useful for other domestic purposes.
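
    The fuzzy mechanics can be sketched compactly: each measured parameter is mapped to membership grades over the four impact classes using its regulatory limit, and a weighted product of the membership matrix then yields a site index. The triangular membership breakpoints, parameter weights, and sample values below are illustrative assumptions, not the paper's calibration.

      import numpy as np

      # Degrees of impact: low, medium, high, extremely high (as in the paper).
      def membership(value, limit):
          # Triangular memberships on the ratio of a measurement to its regulatory
          # limit; the breakpoints are illustrative, not the paper's calibration.
          r = value / limit
          grades = np.array([
              max(0.0, 1 - r),                  # low
              max(0.0, 1 - abs(r - 1)),         # medium (peaks at the limit)
              max(0.0, 1 - abs(r - 2)),         # high
              min(1.0, max(0.0, r - 2)),        # extremely high
          ])
          return grades / grades.sum()

      # Hypothetical site sample: (measured value, regulatory limit) per parameter.
      site = {"Pb": (0.09, 0.01), "Cd": (0.006, 0.003), "acidity": (0.8, 1.0)}
      weights = np.array([0.5, 0.3, 0.2])       # relative importance (illustrative)

      M = np.array([membership(v, lim) for v, lim in site.values()])
      index = weights @ M                        # fuzzy impact index over the 4 grades
      labels = ["low", "medium", "high", "extremely high"]
      print(dict(zip(labels, index.round(2))), "->", labels[int(np.argmax(index))])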

  8. MIA-Clustering: a novel method for segmentation of paleontological material.

    PubMed

    Dunmore, Christopher J; Wollny, Gert; Skinner, Matthew M

    2018-01-01

    Paleontological research increasingly uses high-resolution micro-computed tomography (μCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation, to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free open-source segmentation algorithm application capable of segmenting modern and fossil bone, which also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a known dimension reference object, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than those of equivalent precision. Its free availability, flexibility to deal with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.

  9. StimDuino: an Arduino-based electrophysiological stimulus isolator.

    PubMed

    Sheinin, Anton; Lavi, Ayal; Michaelevski, Izhak

    2015-03-30

    Electrical stimulus isolators are widely used devices in electrophysiology. The timing of stimulus application is usually automated and controlled by an external device or acquisition software; however, the intensity of the stimulus is adjusted manually. Inaccuracy, lack of reproducibility, and lack of automation of the experimental protocol are disadvantages of manual adjustment. To overcome these shortcomings, we developed StimDuino, an inexpensive Arduino-controlled stimulus isolator allowing highly accurate, reproducible, automated setting of the stimulation current. The intensity of the stimulation current delivered by StimDuino is controlled by Arduino, an open-source microcontroller development platform. The automatic stimulation patterns are software-controlled, and the parameters are set from a Matlab-coded, simple, intuitive and user-friendly graphical user interface. The software also allows remote control of the device over the network. Electrical current measurements showed that StimDuino produces the requested current output with high accuracy. In both hippocampal slice and in vivo recordings, fEPSP measurements obtained with StimDuino and commercial stimulus isolators showed high correlation. Commercial stimulus isolators are managed manually, while StimDuino generates automatic stimulation patterns with increasing current intensity. The pattern is utilized for input-output relationship analysis, necessary for assessment of excitability. In contrast to StimDuino, not all commercial devices are capable of remote control of the parameters and stimulation process. StimDuino-generated automation of the input-output relationship assessment eliminates the need to adjust the current intensity manually, improves stimulation reproducibility and accuracy, and allows on-site and remote control of the stimulation parameters. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Water Residence Time estimation by 1D deconvolution in the form of an l2-regularized inverse problem with smoothness, positivity and causality constraints

    NASA Astrophysics Data System (ADS)

    Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François

    2018-06-01

    The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach on how to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
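
    A compact version of the proposed estimator writes the convolution as a matrix equation and minimizes a least-squares term plus a smoothness penalty, with positivity enforced by projection and causality by construction. The sketch below is a minimal sketch of this idea on synthetic data (an exponential residence-time curve and gamma-distributed rain), with projected gradient descent standing in for whatever solver the authors actually use.

      import numpy as np

      rng = np.random.default_rng(8)

      # Toy Water Residence Time (the unknown impulse response) and rain input.
      t = np.arange(120)
      wrt_true = np.exp(-t / 15.0); wrt_true /= wrt_true.sum()
      rain = rng.gamma(2.0, 1.0, 300)
      aquifer = np.convolve(rain, wrt_true)[:300] + rng.normal(0, 0.05, 300)

      # Convolution as a matrix: aquifer = A @ wrt, with A[i, j] = rain[i - j].
      A = np.array([[rain[i - j] if 0 <= i - j < 300 else 0.0
                     for j in range(120)] for i in range(300)])

      # l2-regularized inverse problem with a smoothness penalty ||D wrt||^2; the
      # positivity constraint is enforced by projected gradient descent, and
      # causality is built in by parameterizing wrt only on t >= 0.
      D = np.diff(np.eye(120), axis=0)            # first-difference operator
      lam = 10.0                                  # the single regularization parameter
      H = A.T @ A + lam * D.T @ D
      g = A.T @ aquifer
      wrt = np.full(120, 1.0 / 120)
      step = 1.0 / np.linalg.norm(H, 2)
      for _ in range(5000):
          wrt = np.maximum(0.0, wrt - step * (H @ wrt - g))   # gradient step + projection
      print("recovered mean residence time:", (t * wrt).sum() / wrt.sum())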

  11. Evaluation of FEM engineering parameters from insitu tests

    DOT National Transportation Integrated Search

    2001-12-01

    The study looked critically at insitu test methods (SPT, CPT, DMT, and PMT) as a means for developing finite element constitutive model input parameters. The first phase of the study examined insitu test derived parameters with laboratory triaxial te...

  12. A robust momentum management and attitude control system for the space station

    NASA Technical Reports Server (NTRS)

    Speyer, J. L.; Rhee, Ihnseok

    1991-01-01

    A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.

  13. Enhancement of CFD validation exercise along the roof profile of a low-rise building

    NASA Astrophysics Data System (ADS)

    Deraman, S. N. C.; Majid, T. A.; Zaini, S. S.; Yahya, W. N. W.; Abdullah, J.; Ismail, M. A.

    2018-04-01

    The aim of this study is to enhance the validation of CFD exercises along the roof profile of a low-rise building. An isolated gabled-roof house with a 26.6° roof pitch was simulated to obtain the pressure coefficients around the house. Validating a CFD analysis against experimental data requires many input parameters. This study performed the CFD simulation based on data from a previous study; where input parameters were not clearly stated, new input parameters were established from the open literature. The numerical simulations were performed in FLUENT 14.0 by applying the Computational Fluid Dynamics (CFD) approach based on the steady RANS equations together with the RNG k-ɛ model. The CFD results were then analysed using quantitative statistical tests and compared with the CFD results of the previous study. The statistical analysis, comprising an ANOVA test and error measures, showed that the CFD results of the current study were in good agreement with, and exhibited the smallest error relative to, the previous study. All the input data used in this study can be extended to other types of CFD simulation involving wind flow over an isolated single-storey house.

  14. About influence of input rate random part of nonstationary queue system on statistical estimates of its macroscopic indicators

    NASA Astrophysics Data System (ADS)

    Korelin, Ivan A.; Porshnev, Sergey V.

    2018-05-01

    A model of a non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ = λdet(t) + λrnd(t), where λdet(t) is a deterministic function of time and λrnd(t) is a random function. The parameters of λdet(t) and λrnd(t) were identified on the basis of statistical information on visitor flows collected at various Russian football stadiums. Statistical modeling of the NQS is carried out and average statistical dependences are obtained for the length of the queue of requests waiting for service, the average waiting time for service, and the number of visitors who have entered the stadium as a function of time. It is shown that these dependences can be characterized by the following parameters: the number of visitors who have entered by the start of the match; the time required to serve all incoming visitors; the maximum value; and the argument at which the studied dependence reaches its maximum. The dependences of these parameters on the energy ratio of the deterministic and random components of the input rate are investigated.
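
    A toy discrete-event sketch (illustrative only, not the authors' model) of a single-server queue whose arrivals follow a time-varying rate λ(t) = λdet(t) + λrnd(t), simulated by thinning a homogeneous Poisson process; all rate functions and constants below are made up:

        import numpy as np

        rng = np.random.default_rng(0)

        def nonhomogeneous_arrivals(lam, t_end, lam_max):
            """Arrival times of a Poisson process with rate lam(t), by thinning."""
            t, times = 0.0, []
            while True:
                t += rng.exponential(1.0 / lam_max)   # candidate event at rate lam_max
                if t > t_end:
                    return np.array(times)
                if rng.random() < lam(t) / lam_max:   # keep with probability lam(t)/lam_max
                    times.append(t)

        # Illustrative rates (visitors/minute): bell-shaped inflow before the match
        # plus a stand-in for the random component.
        lam_det = lambda t: 200.0 * np.exp(-((t - 60.0) / 30.0) ** 2)
        lam_rnd = lambda t: 20.0 * np.sin(0.3 * t)
        arrivals = nonhomogeneous_arrivals(
            lambda t: max(lam_det(t) + lam_rnd(t), 0.0), t_end=120.0, lam_max=230.0)

        # Single server with a fixed service time: waiting-time statistics.
        service, finish, waits = 0.2, 0.0, []
        for a in arrivals:
            start = max(a, finish)      # server becomes free at `finish`
            waits.append(start - a)
            finish = start + service
        print(f"{len(arrivals)} visitors, mean wait {np.mean(waits):.2f} min")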

  15. On the fusion of tuning parameters of fuzzy rules and neural network

    NASA Astrophysics Data System (ADS)

    Mamuda, Mamman; Sathasivam, Saratha

    2017-08-01

    Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of several problems. Fuzzy logic offers a simple way to arrive at a definite conclusion based upon vague, ambiguous, imprecise, noisy or missing input information. Conventional learning algorithms for tuning the parameters of fuzzy rules using training input-output data usually end in a weak firing state, which weakens the fuzzy rule and makes it unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm for tuning the parameters of the fuzzy rules alongside a radial basis function neural network (RBFNN), training on input-output data with the gradient descent method. The new learning algorithm addresses the problem of weak firing encountered with the conventional method. We illustrate the efficiency of the new learning algorithm by means of numerical examples; MATLAB R2014(a) software was used for the simulations. The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.
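
    The gradient-descent flavor of such parameter tuning can be conveyed by a generic sketch (a plain RBF network fit by gradient descent on synthetic data; this is not the authors' algorithm, which couples the RBFNN with a fuzzy rule base):

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.uniform(-3.0, 3.0, size=(200, 1))       # training inputs
        y = np.sin(x[:, 0])                             # training outputs

        centers = np.linspace(-3.0, 3.0, 12)[None, :]   # fixed Gaussian centers
        width = 0.5
        phi = np.exp(-((x - centers) ** 2) / (2 * width ** 2))  # (200, 12) design matrix

        w = np.zeros(12)
        lr = 0.05
        for _ in range(2000):               # gradient descent on mean squared error
            err = phi @ w - y
            w -= lr * (phi.T @ err) / len(y)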

  16. Estimation of the longitudinal and lateral-directional aerodynamic parameters from flight data for the NASA F/A-18 HARV

    NASA Technical Reports Server (NTRS)

    Napolitano, Marcello R.

    1996-01-01

    This progress report presents the results of an investigation focused on parameter identification for the NASA F/A-18 HARV. This aircraft was used in the high alpha research program at the NASA Dryden Flight Research Center. In this study the longitudinal and lateral-directional stability derivatives are estimated from flight data using the Maximum Likelihood method coupled with a Newton-Raphson minimization technique. The objective is to estimate an aerodynamic model describing the aircraft dynamics over a range of angle of attack from 5 deg to 60 deg. The mathematical model is built using the traditional static and dynamic derivative buildup. Flight data used in this analysis were from a variety of maneuvers. The longitudinal maneuvers included large amplitude multiple doublets, optimal inputs, frequency sweeps, and pilot pitch stick inputs. The lateral-directional maneuvers consisted of large amplitude multiple doublets, optimal inputs and pilot stick and rudder inputs. The parameter estimation code pEst, developed at NASA Dryden, was used in this investigation. Results of the estimation process from alpha = 5 deg to alpha = 60 deg are presented and discussed.

  17. Effect of input signal and filter parameters on patterning effect in a semiconductor optical amplifier

    NASA Astrophysics Data System (ADS)

    Hussain, Kamal; Pratap Singh, Satya; Kumar Datta, Prasanta

    2013-11-01

    A numerical investigation is presented to show the dependence of the patterning effect (PE) of an amplified signal in a bulk semiconductor optical amplifier (SOA), and in an amplifier followed by an optical bandpass filter, on various input signal and filter parameters, considering the cases of both including and excluding intraband effects in the SOA model. The simulation shows that the variation of PE with input energy has a characteristic nature which is similar for both cases. However, the variation of PE with pulse width is quite different for the two cases, PE being independent of the pulse width when intraband effects are neglected in the model. We find a simple relationship between the PE and the signal pulse width. Using a simple treatment we study the effect of amplified spontaneous emission (ASE) on PE and find that the ASE has almost no effect on the PE in the range of energies considered here. The optimum filter parameters are determined to obtain an acceptable extinction ratio greater than 10 dB and a PE less than 1 dB for the amplified signal over a wide range of input signal energies and bit-rates.

  18. Robust momentum management and attitude control system for the Space Station

    NASA Technical Reports Server (NTRS)

    Rhee, Ihnseok; Speyer, Jason L.

    1992-01-01

    A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.

  19. Sediment residence times constrained by uranium-series isotopes: A critical appraisal of the comminution approach

    NASA Astrophysics Data System (ADS)

    Handley, Heather K.; Turner, Simon; Afonso, Juan C.; Dosseto, Anthony; Cohen, Tim

    2013-02-01

    Quantifying the rates of landscape evolution in response to climate change is inhibited by the difficulty of dating the formation of continental detrital sediments. We present uranium isotope data for Cooper Creek palaeochannel sediments from the Lake Eyre Basin in semi-arid South Australia in order to attempt to determine the formation ages, and hence residence times, of the sediments. To calculate the amount of recoil loss of 234U, a key input parameter used in the comminution approach, we use two suggested methods (weighted geometric, and surface area measurement with an incorporated fractal correction) and typical assumed input parameter values found in the literature. The calculated recoil loss factors and comminution ages are highly dependent on the method of recoil loss factor determination used and on the chosen assumptions. To appraise the ramifications of the assumptions inherent in the comminution age approach and to determine the individual and combined comminution age uncertainties associated with each variable, Monte Carlo simulations were conducted for a synthetic sediment sample. Using a reasonable associated uncertainty for each input factor and including variations in the source rock and measured (234U/238U) ratios, the total combined uncertainty on comminution age in our simulation (for both methods of recoil loss factor estimation) can amount to ±220-280 ka. The modelling shows that small changes in assumed input values translate into large effects on absolute comminution age. To improve the accuracy of the technique and provide meaningful absolute comminution ages, much tighter constraints are required on the assumptions for input factors such as the fraction of 234Th lost by α-recoil and the initial (234U/238U) ratio of the source material. In order to be able to directly compare calculated comminution ages produced by different research groups, standardisation of pre-treatment procedures, recoil loss factor estimation and assumed input parameter values is required. We suggest a set of input parameter values for such a purpose. Additional considerations for calculating comminution ages of sediments deposited within large, semi-arid drainage basins are discussed.
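
    The Monte Carlo exercise has the generic structure sketched below, assuming the commonly used comminution-age decay/recoil balance; all distributions and values are illustrative, not those of the study:

        import numpy as np

        rng = np.random.default_rng(2)
        LAM234 = 2.82e-6  # 234U decay constant (1/yr)

        def comminution_age(a_meas, a0, f_alpha):
            """Age t from a_meas = (1 - f) + (a0 - (1 - f)) * exp(-lam * t)."""
            ar = 1.0 - f_alpha
            return -np.log((a_meas - ar) / (a0 - ar)) / LAM234

        n = 100_000  # draws from assumed (illustrative) input uncertainties
        a_meas = rng.normal(0.94, 0.005, n)  # measured (234U/238U) activity ratio
        a0 = rng.normal(1.00, 0.01, n)       # initial ratio of the source rock
        f = rng.normal(0.10, 0.02, n)        # recoil loss factor
        ages = comminution_age(a_meas, a0, f)
        # Draws with non-physical combinations yield NaN and are ignored here.
        print(np.nanpercentile(ages, [16, 50, 84]) / 1e3, "ka")

    The spread of the resulting age distribution is the combined uncertainty; rerunning with one input fixed isolates that input's individual contribution.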

  20. Active Learning to Understand Infectious Disease Models and Improve Policy Making

    PubMed Central

    Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel

    2014-01-01

    Modeling plays a major role in policy making, especially for infectious disease interventions, but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach, based on machine learning techniques, that combines iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. The insight provided is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as an emulator to improve rapid policy making in various settings. PMID:24743387

  1. Active learning to understand infectious disease models and improve policy making.

    PubMed

    Willem, Lander; Stijven, Sean; Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel

    2014-04-01

    Modeling plays a major role in policy making, especially for infectious disease interventions, but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach, based on machine learning techniques, that combines iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. The insight provided is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as an emulator to improve rapid policy making in various settings.
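
    A minimal surrogate-modeling sketch in the same spirit, with ordinary least squares on quadratic features standing in for symbolic regression (the toy simulator and all names are illustrative): fit a cheap surrogate to sampled input-response pairs, then rank inputs by the size of their coefficients.

        import numpy as np

        rng = np.random.default_rng(3)

        def expensive_model(x):           # stand-in for a costly simulator
            return 3 * x[:, 0] - 2 * x[:, 1] ** 2 + 0.1 * x[:, 2]

        X = rng.uniform(-1, 1, size=(300, 3))   # sampled input space
        y = expensive_model(X)

        # Quadratic feature expansion as a crude surrogate basis.
        F = np.column_stack([X, X ** 2, np.ones(len(X))])
        coef, *_ = np.linalg.lstsq(F, y, rcond=None)

        influence = np.abs(coef[:3]) + np.abs(coef[3:6])  # per-input influence score
        print("most influential input:", int(np.argmax(influence)))

    Once fitted, the surrogate can be evaluated everywhere at negligible cost, which is what enables the iterative, model-guided selection of new simulation runs.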

  2. Testing the robustness of management decisions to uncertainty: Everglades restoration scenarios.

    PubMed

    Fuller, Michael M; Gross, Louis J; Duke-Sylvester, Scott M; Palmer, Mark

    2008-04-01

    To effectively manage large natural reserves, resource managers must prepare for future contingencies while balancing the often conflicting priorities of different stakeholders. To deal with these issues, managers routinely employ models to project the response of ecosystems to different scenarios that represent alternative management plans or environmental forecasts. Scenario analysis is often used to rank such alternatives to aid the decision making process. However, model projections are subject to uncertainty in assumptions about model structure, parameter values, environmental inputs, and subcomponent interactions. We introduce an approach for testing the robustness of model-based management decisions to the uncertainty inherent in complex ecological models and their inputs. We use relative assessment to quantify the relative impacts of uncertainty on scenario ranking. To illustrate our approach we consider uncertainty in parameter values and uncertainty in input data, with specific examples drawn from the Florida Everglades restoration project. Our examples focus on two alternative 30-year hydrologic management plans that were ranked according to their overall impacts on wildlife habitat potential. We tested the assumption that varying the parameter settings and inputs of habitat index models does not change the rank order of the hydrologic plans. We compared the average projected index of habitat potential for four endemic species and two wading-bird guilds to rank the plans, accounting for variations in parameter settings and water level inputs associated with hypothetical future climates. Indices of habitat potential were based on projections from spatially explicit models that are closely tied to hydrology. For the American alligator, the rank order of the hydrologic plans was unaffected by substantial variation in model parameters. By contrast, simulated major shifts in water levels led to reversals in the ranks of the hydrologic plans in 24.1-30.6% of the projections for the wading bird guilds and several individual species. By exposing the differential effects of uncertainty, relative assessment can help resource managers assess the robustness of scenario choice in model-based policy decisions.
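
    Computationally, relative assessment amounts to counting how often the scenario ranks flip across sampled uncertainty; a schematic sketch with a hypothetical two-plan habitat-index model (all functions and values invented for illustration):

        import numpy as np

        rng = np.random.default_rng(4)

        def habitat_index(plan_gain, water_gain, params, water):
            # Hypothetical stand-in for a spatially explicit habitat model.
            return plan_gain * params + water_gain * water

        n, reversals = 10_000, 0
        for _ in range(n):
            params = rng.normal(1.0, 0.2)   # model-parameter uncertainty
            water = rng.normal(0.0, 1.0)    # climate-driven water-level uncertainty
            a = habitat_index(1.00, 0.5, params, water)  # plan A
            b = habitat_index(0.95, 0.6, params, water)  # plan B
            reversals += b > a              # count rank reversals
        print(f"rank reversals in {100 * reversals / n:.1f}% of projections")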

  3. Guidance for Selecting Input Parameters in Modeling the Environmental Fate and Transport of Pesticides

    EPA Pesticide Factsheets

    Guidance to select and prepare input values for OPP's aquatic exposure models. Intended to improve the consistency in modeling the fate of pesticides in the environment and quality of OPP's aquatic risk assessments.

  4. Optimization of process parameters in drilling of fibre hybrid composite using Taguchi and grey relational analysis

    NASA Astrophysics Data System (ADS)

    Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.

    2017-03-01

    Nowadays quality plays a vital role in all products, and developments in manufacturing processes focus on fabricating composites with high dimensional accuracy at low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters: drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters, and Analysis of Variance (ANOVA) is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
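
    Grey relational analysis itself is a short computation; a generic sketch with illustrative response data (larger-the-better material removal rate, smaller-the-better roughness), using the standard grey relational coefficient with distinguishing coefficient zeta = 0.5:

        import numpy as np

        # Rows are experimental runs; values are illustrative, not the study's data.
        mrr = np.array([12.0, 15.5, 14.1, 16.2])  # material removal rate
        ra = np.array([1.8, 1.2, 1.5, 1.1])       # surface roughness Ra

        norm_mrr = (mrr - mrr.min()) / (mrr.max() - mrr.min())  # larger-the-better
        norm_ra = (ra.max() - ra) / (ra.max() - ra.min())       # smaller-the-better

        def grey_coeff(norm, zeta=0.5):
            delta = 1.0 - norm   # deviation from the ideal (normalized) sequence
            return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

        grade = (grey_coeff(norm_mrr) + grey_coeff(norm_ra)) / 2  # equal weights
        print("best run:", int(np.argmax(grade)) + 1)

    The grey relational grade turns the multi-response problem into a single ranking, which is what allows the simultaneous optimization mentioned above.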

  5. Engine control techniques to account for fuel effects

    DOEpatents

    Kumar, Shankar; Frazier, Timothy R.; Stanton, Donald W.; Xu, Yi; Bunting, Bruce G.; Wolf, Leslie R.

    2014-08-26

    A technique for engine control to account for fuel effects including providing an internal combustion engine and a controller to regulate operation thereof, the engine being operable to combust a fuel to produce an exhaust gas; establishing a plurality of fuel property inputs; establishing a plurality of engine performance inputs; generating engine control information as a function of the fuel property inputs and the engine performance inputs; and accessing the engine control information with the controller to regulate at least one engine operating parameter.

  6. Safety monitoring and reactor transient interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hench, J. E.; Fukushima, T. Y.

    1983-12-20

    An apparatus which monitors a subset of control panel inputs in a nuclear reactor power plant, the subset being those indicators of plant status which are of a critical nature during an unusual event. A display (10) is provided for displaying primary information (14) as to whether the core is covered and likely to remain covered, including information as to the status of subsystems needed to cool the core and maintain core integrity. Secondary display information (18,20) is provided which can be viewed selectively for more detailed information when an abnormal condition occurs. The primary display information has messages (24) for prompting an operator as to which one of a number of pushbuttons (16) to press to bring up the appropriate secondary display (18,20). The apparatus utilizes a thermal-hydraulic analysis to more accurately determine key parameters (such as water level) from other measured parameters, such as power, pressure, and flow rate.

  7. Compressed Sensing in On-Grid MIMO Radar.

    PubMed

    Minner, Michael F

    2015-01-01

    The accurate detection of targets is a significant problem in multiple-input multiple-output (MIMO) radar. Recent advances of Compressive Sensing offer a means of efficiently accomplishing this task. The sparsity constraints needed to apply the techniques of Compressive Sensing to problems in radar systems have led to discretizations of the target scene in various domains, such as azimuth, time delay, and Doppler. Building upon recent work, we investigate the feasibility of on-grid Compressive Sensing-based MIMO radar via a threefold azimuth-delay-Doppler discretization for target detection and parameter estimation. We utilize a colocated random sensor array and transmit distinct linear chirps to a small scene with few, slowly moving targets. Relying upon standard far-field and narrowband assumptions, we analyze the efficacy of various recovery algorithms in determining the parameters of the scene through numerical simulations, with particular focus on the ℓ1-squared Nonnegative Regularization method.
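
    Generic sparse recovery over such a discretized grid can be sketched with ISTA for the LASSO (shown as a stand-in; the paper's focus is the ℓ1-squared Nonnegative Regularization method, which differs in its penalty and constraint). A is the sensing matrix over the azimuth-delay-Doppler grid; here it is random for illustration:

        import numpy as np

        rng = np.random.default_rng(5)
        m, n, k = 64, 256, 3                      # measurements, grid cells, targets
        A = rng.normal(size=(m, n)) / np.sqrt(m)  # illustrative sensing matrix
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)
        y = A @ x_true + 0.01 * rng.normal(size=m)

        def ista(A, y, lam=0.02, n_iter=500):
            """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = x - (A.T @ (A @ x - y)) / L   # gradient step
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
            return x

        x_hat = ista(A, y)
        print("recovered support:", np.flatnonzero(x_hat > 0.1))  # small threshold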

  8. Six-hourly time series of horizontal troposphere gradients in VLBI analysis

    NASA Astrophysics Data System (ADS)

    Landskron, Daniel; Hofmeister, Armin; Mayer, David; Böhm, Johannes

    2016-04-01

    Consideration of horizontal gradients is indispensable for high-precision VLBI and GNSS analysis. As a rule of thumb, all observations below 15 degrees elevation need to be corrected for the influence of azimuthal asymmetry on the delay times, which is mainly a product of the non-spherical shape of the atmosphere and ever-changing weather conditions. Based on the well-known gradient estimation model by Chen and Herring (1997), we developed an augmented gradient model with additional parameters which are determined from ray-traced delays for the complete history of VLBI observations. As input to the ray-tracer, we used operational and re-analysis data from the European Centre for Medium-Range Weather Forecasts. Finally, we applied those a priori gradient parameters to VLBI analysis along with other empirical gradient models and assessed their impact on baseline length repeatabilities as well as on celestial and terrestrial reference frames.
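
    For reference, a commonly quoted form of the Chen and Herring (1997) gradient contribution to the slant delay, which the augmented model extends with additional ray-tracing-determined parameters, is (our transcription; C ≈ 0.0032 is the constant usually cited for the total delay):

        \Delta L_g(\alpha, \epsilon) = \frac{G_N \cos\alpha + G_E \sin\alpha}{\sin\epsilon \, \tan\epsilon + C}

    where \alpha is the azimuth, \epsilon the elevation angle, and G_N, G_E the north and east gradient parameters estimated or taken a priori.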

  9. Automated Structural Optimization System (ASTROS). Volume 1. Theoretical Manual

    DTIC Science & Technology

    1988-12-01

    corresponding frequency list are given by Equation C-9. The second set of parameters is the frequency list used in solving Equation C-3 to obtain the response... vector u(ω). This frequency list is: ω = 2πf₀, 2πf₁, 2πf₂, ..., 2πfₙ (C-20). The frequency lists ω̂ and ω are not necessarily equal. While setting... alternative methods are used to input the frequency list ω. For the first method, the frequency list ω is input via two parameters: Δf (C-21

  10. Air quality modelling in the Berlin-Brandenburg region using WRF-Chem v3.7.1: sensitivity to resolution of model grid and input data

    NASA Astrophysics Data System (ADS)

    Kuik, Friderike; Lauer, Axel; Churkina, Galina; Denier van der Gon, Hugo A. C.; Fenner, Daniel; Mar, Kathleen A.; Butler, Tim M.

    2016-12-01

    Air pollution is the number one environmental cause of premature deaths in Europe. Despite extensive regulations, air pollution remains a challenge, especially in urban areas. For studying summertime air quality in the Berlin-Brandenburg region of Germany, the Weather Research and Forecasting Model with Chemistry (WRF-Chem) is set up and evaluated against meteorological and air quality observations from monitoring stations as well as from a field campaign conducted in 2014. The objective is to assess which resolution and level of detail in the input data is needed for simulating urban background air pollutant concentrations and their spatial distribution in the Berlin-Brandenburg area. The model setup includes three nested domains with horizontal resolutions of 15, 3 and 1 km and anthropogenic emissions from the TNO-MACC III inventory. We use RADM2 chemistry and the MADE/SORGAM aerosol scheme. Three sensitivity simulations are conducted updating input parameters to the single-layer urban canopy model based on structural data for Berlin, specifying land use classes on a sub-grid scale (mosaic option) and downscaling the original emissions to a resolution of ca. 1 km × 1 km for Berlin based on proxy data including traffic density and population density. The results show that the model simulates meteorology well, though urban 2 m temperature and urban wind speeds are biased high and nighttime mixing layer height is biased low in the base run with the settings described above. We show that the simulation of urban meteorology can be improved when specifying the input parameters to the urban model, and to a lesser extent when using the mosaic option. On average, ozone is simulated reasonably well, but maximum daily 8 h mean concentrations are underestimated, which is consistent with the results from previous modelling studies using the RADM2 chemical mechanism. Particulate matter is underestimated, which is partly due to an underestimation of secondary organic aerosols. NOx (NO + NO2) concentrations are simulated reasonably well on average, but nighttime concentrations are overestimated due to the model's underestimation of the mixing layer height, and urban daytime concentrations are underestimated. The daytime underestimation is improved when using downscaled, and thus locally higher emissions, suggesting that part of this bias is due to deficiencies in the emission input data and their resolution. The results further demonstrate that a horizontal resolution of 3 km improves the results and spatial representativeness of the model compared to a horizontal resolution of 15 km. With the input data (land use classes, emissions) at the level of detail of the base run of this study, we find that a horizontal resolution of 1 km does not improve the results compared to a resolution of 3 km. However, our results suggest that a 1 km horizontal model resolution could enable a detailed simulation of local pollution patterns in the Berlin-Brandenburg region if the urban land use classes, together with the respective input parameters to the urban canopy model, are specified with a higher level of detail and if urban emissions of higher spatial resolution are used.

  11. Assessing risk based on uncertain avalanche activity patterns

    NASA Astrophysics Data System (ADS)

    Zeidler, Antonia; Fromm, Reinhard

    2015-04-01

    Avalanches may affect critical infrastructure and cause great economic losses. The planning horizon of infrastructure, e.g. hydropower generation facilities, reaches well into the future. Based on the results of previous studies on the effect of changing meteorological parameters (precipitation, temperature) on avalanche activity, we assume that the risk pattern will change in future. Decision makers need to understand what the future might bring in order to formulate their mitigation strategies. We therefore explore a commercial risk software package to calculate risk for the coming years that might help in decision processes. The software, @risk, is known to many larger companies, and we explore its capability to include avalanche risk simulations in order to guarantee comparability with other risks. In a first step, we develop a model for a hydropower generation facility that reflects the problem of changing avalanche activity patterns in future by selecting relevant input parameters and assigning likely probability distributions. The uncertain input variables include the probability of avalanches affecting an object, the vulnerability of an object, the expected cost of repairing the object and the expected cost due to interruption. The crux is to find the distribution that best represents the input variables under changing meteorological conditions. Our focus is on including the uncertain probability of avalanches based on the analysis of past avalanche data and expert knowledge. In order to explore different likely outcomes, we base the analysis on three different climate scenarios (likely, worst case, baseline). For some variables it is possible to fit a distribution to historical data, whereas in cases where the past dataset is insufficient or unavailable the software allows selection from over 30 different distribution types. The Monte Carlo simulation samples the probability distributions of the uncertain variables, using all valid combinations of input values to simulate the possible outcomes. In our case the output is the expected risk (Euro/year) for each object (e.g. water intake) considered and for the entire hydropower generation system. The output is again a distribution to be interpreted by the decision makers, as the final strategy depends on the needs and requirements of the end-user, which may be driven by personal preferences. In this presentation, we show how we used this uncertain information on future avalanche activity in a commercial risk software package, thereby bringing the knowledge of natural hazard experts to decision makers.
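
    Schematically, the per-object Monte Carlo risk computation has the following form (all distributions and values below are placeholders, not those of the study or of @risk):

        import numpy as np

        rng = np.random.default_rng(6)
        n = 100_000  # Monte Carlo draws

        p_hit = rng.beta(2, 50, n)                       # annual hit probability
        vulnerability = rng.triangular(0.1, 0.4, 1.0, n) # damage fraction if hit
        repair = rng.lognormal(np.log(2e5), 0.5, n)      # repair cost (EUR)
        interruption = rng.lognormal(np.log(5e4), 0.8, n)  # interruption cost (EUR)

        annual_risk = p_hit * vulnerability * (repair + interruption)  # EUR/year
        print("expected risk: %.0f EUR/yr, 95th percentile: %.0f EUR/yr"
              % (annual_risk.mean(), np.percentile(annual_risk, 95)))

    The output distribution, rather than a single expected value, is what the decision makers interpret; summing over objects gives the system-level risk.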

  12. SELENA - An open-source tool for seismic risk and loss assessment using a logic tree computation procedure

    NASA Astrophysics Data System (ADS)

    Molina, S.; Lang, D. H.; Lindholm, C. D.

    2010-03-01

    The era of earthquake risk and loss estimation basically began with the seminal paper on hazard by Allin Cornell in 1968. Following the 1971 San Fernando earthquake, the first studies placed strong emphasis on the prediction of human losses (the numbers of casualties and injured, used to estimate the needs in terms of health care and shelter in the immediate aftermath of a strong event). In contrast to these early risk modeling efforts, later studies have focused on the disruption of the serviceability of roads, telecommunications and other important lifeline systems. In the 1990s, the National Institute of Building Sciences (NIBS) developed a tool (HAZUS®99) for the Federal Emergency Management Agency (FEMA), with the goal of incorporating the best quantitative methodology into earthquake loss estimates. Herein, the current version of the open-source risk and loss estimation software SELENA v4.1 is presented. Using the spectral displacement-based approach (capacity spectrum method), this fully self-contained tool analytically computes the degree of damage for specific building typologies as well as the associated economic losses and number of casualties. The earthquake ground shaking estimates for SELENA v4.1 can be calculated or provided in three different ways: deterministic, probabilistic or based on near-real-time data. The main distinguishing feature of SELENA compared to other risk estimation software tools is that it is implemented in a 'logic tree' computation scheme which accounts for uncertainties in any input (e.g., scenario earthquake parameters, ground-motion prediction equations, soil models) or inventory data (e.g., building typology, capacity curves and fragility functions). Each input used in the analysis is assigned a decimal weighting factor defining the weight of the respective branch of the logic tree. The weighting of the input parameters accounts for the epistemic and aleatory uncertainties that inevitably accompany the parameterization of the different types of input data. Like previous versions, SELENA v4.1 is coded in MATLAB, which allows easy dissemination in the scientific-technical community; furthermore, any user has access to the source code in order to adapt, improve or refine the tool according to his or her particular needs. The handling of SELENA's current version and the provision of input data are customized for an academic environment, but the tool can also support decision-makers of local, state and regional governmental agencies in estimating possible losses from future earthquakes.
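
    The logic-tree weighting scheme reduces to a simple computation: each branch combination carries the product of its decimal weights, and weighted statistics of the resulting loss estimates follow directly. A generic sketch (branch values, weights and the loss table are illustrative, not SELENA's):

        from itertools import product

        # Two uncertain inputs, each with weighted alternatives (weights sum to 1).
        gmpes = [("GMPE_A", 0.6), ("GMPE_B", 0.4)]
        soils = [("soil_X", 0.7), ("soil_Y", 0.3)]

        def loss_model(gmpe, soil):
            # Stand-in for the capacity-spectrum damage/loss computation.
            table = {("GMPE_A", "soil_X"): 10.0, ("GMPE_A", "soil_Y"): 14.0,
                     ("GMPE_B", "soil_X"): 12.0, ("GMPE_B", "soil_Y"): 17.0}
            return table[(gmpe, soil)]  # loss in million EUR

        mean_loss = 0.0
        for (g, wg), (s, ws) in product(gmpes, soils):
            mean_loss += wg * ws * loss_model(g, s)   # branch weight = product of weights
        print(f"weighted mean loss: {mean_loss:.1f} MEUR")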

  13. A note on the regularity of solutions of infinite dimensional Riccati equations

    NASA Technical Reports Server (NTRS)

    Burns, John A.; King, Belinda B.

    1994-01-01

    This note is concerned with the regularity of solutions of algebraic Riccati equations arising from infinite dimensional LQR and LQG control problems. We show that distributed parameter systems described by certain parabolic partial differential equations often have a special structure that smoothes solutions of the corresponding Riccati equation. This analysis is motivated by the need to find specific representations for Riccati operators that can be used in the development of computational schemes for problems where the input and output operators are not Hilbert-Schmidt. This situation occurs in many boundary control problems and in certain distributed control problems associated with optimal sensor/actuator placement.

  14. Supercontinuum optimization for dual-soliton based light sources using genetic algorithms in a grid platform.

    PubMed

    Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A

    2014-09-22

    We present a numerical strategy to design fiber based dual pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented in a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
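
    The optimization loop has the familiar real-coded GA structure sketched below; the parameter bounds and the fitness function are placeholders (a real run would score each candidate by simulating the supercontinuum, e.g. with a GNLSE solver, and comparing it against the two target spectral peaks):

        import numpy as np

        rng = np.random.default_rng(7)
        lo = np.array([900e-9, 50e-15, 1e3])    # wavelength (m), width (s), power (W)
        hi = np.array([1300e-9, 500e-15, 20e3])

        def fitness(p):
            # Placeholder objective peaking mid-range; a real fitness would measure
            # spectral overlap of the simulated output with the two target peaks.
            return -np.sum(((p - (lo + hi) / 2) / (hi - lo)) ** 2)

        pop = rng.uniform(lo, hi, size=(40, 3))       # initial population
        for _ in range(100):                          # generations
            scores = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(scores)[-20:]]   # truncation selection
            kids = (parents[rng.integers(0, 20, 40)] +
                    parents[rng.integers(0, 20, 40)]) / 2       # arithmetic crossover
            kids += rng.normal(0, 0.02, kids.shape) * (hi - lo)  # Gaussian mutation
            pop = np.clip(kids, lo, hi)
        best = pop[np.argmax([fitness(p) for p in pop])]

    Because each fitness evaluation is an independent simulation, the population evaluates naturally in parallel, which is what the Grid platform exploits.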

  15. Flight test maneuvers for closed loop lateral-directional modeling of the F-18 High Alpha Research Vehicle (HARV) using forebody strakes

    NASA Technical Reports Server (NTRS)

    Morelli, E. A.

    1996-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed-loop parameter identification purposes, specifically for lateral linear model parameter estimation at 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Strake (S) mode and Strake/Thrust Vectoring (STV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specification of the time/amplitude points defining each input are included, along with plots of the input time histories.

  16. Information-geometric measures as robust estimators of connection strengths and external inputs.

    PubMed

    Tatsuno, Masami; Fellous, Jean-Marc; Amari, Shun-Ichi

    2009-08-01

    Information geometry has been suggested to provide a powerful tool for analyzing multineuronal spike trains. Among several advantages of this approach, a significant property is the close link between information-geometric measures and neural network architectures. Previous modeling studies established that the first- and second-order information-geometric measures corresponded to the number of external inputs and the connection strengths of the network, respectively. This relationship was, however, limited to a symmetrically connected network, and the number of neurons used in the parameter estimation of the log-linear model needed to be known. Recently, simulation studies of biophysical model neurons have suggested that information geometry can estimate the relative change of connection strengths and external inputs even with asymmetric connections. Inspired by these studies, we analytically investigated the link between the information-geometric measures and the neural network structure with asymmetrically connected networks of N neurons. We focused on the information-geometric measures of orders one and two, which can be derived from the two-neuron log-linear model, because unlike higher-order measures, they can be easily estimated experimentally. Considering the equilibrium state of a network of binary model neurons that obey stochastic dynamics, we analytically showed that the corrected first- and second-order information-geometric measures provided robust and consistent approximation of the external inputs and connection strengths, respectively. These results suggest that information-geometric measures provide useful insights into the neural network architecture and that they will contribute to the study of system-level neuroscience.
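
    For concreteness, the two-neuron log-linear model from which the order-one and order-two measures are derived is standardly written as (our transcription):

        \log p(x_1, x_2) = \theta_1 x_1 + \theta_2 x_2 + \theta_{12} x_1 x_2 - \psi(\theta),
        \qquad x_1, x_2 \in \{0, 1\}

    where the first-order parameters \theta_1, \theta_2 reflect external inputs, the second-order parameter \theta_{12} reflects connection strength, and \psi(\theta) is the log normalizer ensuring the probabilities sum to one.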

  17. Characterization of unbound materials (soils/aggregates) for mechanistic-empirical pavement design guide.

    DOT National Transportation Integrated Search

    2009-02-01

    The resilient modulus (MR) input parameters in the Mechanistic-Empirical Pavement Design Guide (MEPDG) program have a significant effect on the projected pavement performance. The MEPDG program uses three different levels of inputs depending on the d...

  18. Origin of the sensitivity in modeling the glide behaviour of dislocations

    DOE PAGES

    Pei, Zongrui; Stocks, George Malcolm

    2018-03-26

    The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a small set of parameters, all of which can be determined by convergence tests. Under special conditions the new formulation reduces to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, in which the sensitivity is greatly reduced or even removed.

  19. Effect of Burnishing Parameters on Surface Finish

    NASA Astrophysics Data System (ADS)

    Shirsat, Uddhav; Ahuja, Basant; Dhuttargaon, Mukund

    2017-08-01

    Burnishing is a cold working process in which hard balls are pressed against a surface; the surface is compressed and plasticized, resulting in an improved finish. This finishing process is becoming increasingly popular, as the surface quality of a product improves its aesthetic appearance. Products made of aluminum were subjected to the burnishing process, during which kerosene was used as a lubricant. In this study, factors affecting the burnishing process, such as burnishing force, speed, feed, workpiece diameter and ball diameter, are considered as input parameters, while surface finish is considered as the output parameter. Experiments are designed using a 2^5 factorial design in order to analyze the relationship between the input and output parameters. The ANOVA technique and F-test are used for further analysis.

  20. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yβ*, where Y is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Yβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Yβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  1. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    NASA Astrophysics Data System (ADS)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by minimising an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold, and on a real gas turbine engine system for model identification. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.

  2. Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data

    NASA Astrophysics Data System (ADS)

    Pathak, Jaideep; Lu, Zhixin; Hunt, Brian R.; Girvan, Michelle; Ott, Edward

    2017-12-01

    We use recent advances in the machine learning area known as "reservoir computing" to formulate a method for model-free estimation from data of the Lyapunov exponents of a chaotic process. The technique uses a limited time series of measurements as input to a high-dimensional dynamical system called a "reservoir." After the reservoir's response to the data is recorded, linear regression is used to learn a large set of parameters, called the "output weights." The learned output weights are then used to form a modified autonomous reservoir designed to be capable of producing an arbitrarily long time series whose ergodic properties approximate those of the input signal. When successful, we say that the autonomous reservoir reproduces the attractor's "climate." Since the reservoir equations and output weights are known, we can compute the derivatives needed to determine the Lyapunov exponents of the autonomous reservoir, which we then use as estimates of the Lyapunov exponents for the original input generating system. We illustrate the effectiveness of our technique with two examples, the Lorenz system and the Kuramoto-Sivashinsky (KS) equation. In the case of the KS equation, we note that the high dimensional nature of the system and the large number of Lyapunov exponents yield a challenging test of our method, which we find the method successfully passes.
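
    The training step described here is plain ridge regression on recorded reservoir states; a compact echo-state sketch (generic, not the authors' code; reservoir size, spectral radius and regularization are illustrative):

        import numpy as np

        rng = np.random.default_rng(8)

        def train_reservoir(u, n_res=300, rho=0.9, beta=1e-6):
            """Fit output weights of a random reservoir to predict u one step ahead."""
            u = np.asarray(u, float)
            W_in = rng.uniform(-0.5, 0.5, size=n_res)          # input weights
            W = rng.normal(size=(n_res, n_res))
            W *= rho / np.max(np.abs(np.linalg.eigvals(W)))    # set spectral radius
            r, states = np.zeros(n_res), []
            for u_t in u[:-1]:
                r = np.tanh(W @ r + W_in * u_t)                # drive the reservoir
                states.append(r.copy())
            R = np.asarray(states)
            # Ridge regression of the next input value on the reservoir state.
            W_out = np.linalg.solve(R.T @ R + beta * np.eye(n_res), R.T @ u[1:])
            return W, W_in, W_out

        # In autonomous mode the prediction W_out @ r is fed back in place of u_t;
        # Jacobians of that closed-loop map yield the Lyapunov-exponent estimates.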

  3. On Time Delay Margin Estimation for Adaptive Control and Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2011-01-01

    This paper presents methods for estimating the time delay margin for adaptive control of input delay systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent an adaptive law by a locally bounded linear approximation within a small time window. The time delay margin of this input delay system represents a local stability measure and is computed analytically by three methods: Padé approximation, the Lyapunov-Krasovskii method, and the matrix measure method. These methods are applied to the standard model-reference adaptive control, the σ-modification adaptive law, and the optimal control modification adaptive law. The windowing analysis results in non-unique estimates of the time delay margin, since they depend on the length of the time window and on parameters which vary from one time window to the next. The optimal control modification adaptive law overcomes this limitation in that, as the adaptive gain tends to infinity and if the matched uncertainty is linear, the closed-loop input delay system tends to an LTI system. A lower bound on the time delay margin of this system can then be estimated uniquely without the need for the windowing analysis. Simulation results demonstrate the feasibility of the bounded linear stability method for time delay margin estimation.

  4. Cortical Specializations Underlying Fast Computations

    PubMed Central

    Volgushev, Maxim

    2016-01-01

    The time course of behaviorally relevant environmental events sets temporal constraints on neuronal processing. How does the mammalian brain make use of the increasingly complex networks of the neocortex while making decisions and executing behavioral reactions within a reasonable time? The key parameter determining the speed of computations in neuronal networks is the time interval that neuronal ensembles need to process changes at their input and communicate the results of this processing to downstream neurons. Theoretical analysis identified basic requirements for fast processing: use of neuronal populations for encoding, background activity, and fast onset dynamics of action potentials in neurons. Experimental evidence shows that populations of neocortical neurons fulfil these requirements. Indeed, they can change their firing rate in response to input perturbations very quickly, within 1 to 3 ms, and encode high-frequency components of the input by phase-locking their spiking to frequencies up to 300 to 1000 Hz. This implies that the time unit of computations by cortical ensembles is only a few milliseconds, 1 to 3 ms, which is considerably faster than the membrane time constant of individual neurons. The ability of cortical neuronal ensembles to communicate on a millisecond time scale allows for complex, multiple-step processing and precise coordination of neuronal activity in parallel processing streams, while keeping the speed of behavioral reactions within environmentally set temporal constraints. PMID:25689988

  5. Gain control through divisive inhibition prevents abrupt transition to chaos in a neural mass model.

    PubMed

    Papasavvas, Christoforos A; Wang, Yujiang; Trevelyan, Andrew J; Kaiser, Marcus

    2015-09-01

    Experimental results suggest that there are two distinct mechanisms of inhibition in cortical neuronal networks: subtractive and divisive inhibition. They modulate the input-output function of their target neurons either by increasing the input that is needed to reach maximum output or by reducing the gain and the value of maximum output itself, respectively. However, the role of these mechanisms on the dynamics of the network is poorly understood. We introduce a novel population model and numerically investigate the influence of divisive inhibition on network dynamics. Specifically, we focus on the transitions from a state of regular oscillations to a state of chaotic dynamics via period-doubling bifurcations. The model with divisive inhibition exhibits a universal transition rate to chaos (Feigenbaum behavior). In contrast, in an equivalent model without divisive inhibition, transition rates to chaos are not bounded by the universal constant (non-Feigenbaum behavior). This non-Feigenbaum behavior, when only subtractive inhibition is present, is linked to the interaction of bifurcation curves in the parameter space. Indeed, searching the parameter space showed that such interactions are impossible when divisive inhibition is included. Therefore, divisive inhibition prevents non-Feigenbaum behavior and, consequently, any abrupt transition to chaos. The results suggest that the divisive inhibition in neuronal networks could play a crucial role in keeping the states of order and chaos well separated and in preventing the onset of pathological neural dynamics.

  6. Simple analysis of scattering data with the Ornstein-Zernike equation

    NASA Astrophysics Data System (ADS)

    Kats, E. I.; Muratov, A. R.

    2018-01-01

    In this paper we propose and explore a method of analysis of scattering experimental data for uniform liquidlike systems. In our pragmatic approach we do not try to introduce by hand an artificial small parameter in order to work out a perturbation theory with respect to known results, e.g., for hard spheres or sticky hard spheres (all the more so since, in agreement with the well-known Landau statement, there is no physical small parameter for liquids). Instead, guided by the experimental data, we solve the Ornstein-Zernike equation with a trial (variational) form of the interparticle interaction potential. To find all needed correlation functions, this variational input is iterated numerically to satisfy the Ornstein-Zernike equation supplemented by a closure relation. Our method is developed for spherically symmetric scattering objects, and our numeric code is written for such a case. However, it can be extended (at the expense of more involved computations and a larger amount of required experimental input information) to nonspherical particles. What is important for our approach is that it is sufficient to know experimental data in a relatively narrow range of scattering wave vectors (q) to compute the static structure factor in a much broader range of q. We illustrate with a few model and real experimental examples of x-ray and neutron scattering data how the approach works.
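
    In this notation, the iteration couples the Ornstein-Zernike relation with a closure; in standard form (our transcription):

        h(r) = c(r) + \rho \int c(|\mathbf{r} - \mathbf{r}'|)\, h(r')\, d\mathbf{r}',
        \qquad g(r) = \exp[-\beta u(r) + h(r) - c(r) + B(r)]

    with g(r) = h(r) + 1 the pair correlation function, u(r) the trial interparticle potential, and B(r) the bridge function (B = 0 gives the hypernetted-chain closure); the static structure factor follows as S(q) = 1 + \rho \tilde{h}(q).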

  7. Development of the Code RITRACKS

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2013-01-01

    A document discusses the code RITRACKS (Relativistic Ion Tracks), which was developed to simulate heavy ion track structure at the microscopic and nanoscopic scales. It is a Monte-Carlo code that simulates the production of radiolytic species in water, event-by-event, and which may be used to simulate tracks and also to calculate dose in targets and voxels of different sizes. The dose deposited by the radiation can be calculated in nanovolumes (voxels). RITRACKS allows simulation of radiation tracks without the need of extensive knowledge of computer programming or Monte-Carlo simulations. It is installed as a regular application on Windows systems. The main input parameters entered by the user are the type and energy of the ion, the length and size of the irradiated volume, the number of ions impacting the volume, and the number of histories. The simulation can be started after the input parameters are entered in the GUI. The number of each kind of interactions for each track is shown in the result details window. The tracks can be visualized in 3D after the simulation is complete. It is also possible to see the time evolution of the tracks and zoom on specific parts of the tracks. The software RITRACKS can be very useful for radiation scientists to investigate various problems in the fields of radiation physics, radiation chemistry, and radiation biology. For example, it can be used to simulate electron ejection experiments (radiation physics).

  8. A validated model to predict microalgae growth in outdoor pond cultures subjected to fluctuating light intensities and water temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huesemann, Michael H.; Crowe, Braden J.; Waller, Peter

    Here, a microalgae biomass growth model was developed for screening novel strains for their potential to exhibit high biomass productivities under nutrient-replete conditions in outdoor ponds subjected to fluctuating light intensities and water temperatures. Growth is modeled by first estimating the light attenuation by biomass according to a scatter-corrected Beer-Lambert Law, and then calculating the specific growth rate in discretized culture volume slices that receive declining light intensities due to attenuation. The model requires the following experimentally determined strain-specific input parameters: specific growth rate as a function of light intensity and temperature, biomass loss rate in the dark as a function of temperature and average light intensity during the preceding light period, and the scatter-corrected biomass light absorption coefficient. The model was successful in predicting the growth performance and biomass productivity of three different microalgae species (Chlorella sorokiniana, Nannochloropsis salina, and Picochlorum sp.) in raceway pond cultures (batch and semi-continuous) subjected to diurnal sunlight intensity and water temperature variations. Model predictions were moderately sensitive to minor deviations in input parameters. To increase the predictive power of this and other microalgae biomass growth models, a better understanding is needed of the effects of mixing-induced rapid light-dark cycles on photo-inhibition and of short-term biomass losses due to dark respiration in the aphotic zone of the pond.
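
    The slice-discretized growth calculation has the simple structure sketched below (all parameter values and the µ(I) curve are made up for illustration; the study's model additionally includes temperature dependence and dark biomass losses):

        import numpy as np

        def growth_step(X, I0, dt, depth=0.2, n_slices=50, ka=150.0):
            """Advance biomass density X (g/L) over one time step dt (days).
            Light decays with depth via Beer-Lambert: I(z) = I0 * exp(-ka * X * z)."""
            z = (np.arange(n_slices) + 0.5) * depth / n_slices  # slice mid-depths (m)
            I = I0 * np.exp(-ka * X * z)     # scatter-corrected attenuation per slice
            mu = 2.0 * I / (I + 100.0)       # illustrative growth rate mu(I), 1/day
            return X * (1.0 + dt * mu.mean())  # growth averaged over the column

        # Example: one day of sinusoidal daylight in 15-minute steps.
        X = 0.05
        for step in range(96):
            I0 = max(np.sin(2 * np.pi * step / 96), 0.0) * 2000  # surface irradiance
            X = growth_step(X, I0, dt=1 / 96)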

  9. Reply

    NASA Astrophysics Data System (ADS)

    Wang, Zhenming; Shi, Baoping; Kiefer, John D.; Woolery, Edward W.

    2004-06-01

    Musson's comments on our article, ``Communicating with uncertainty: A critical issue with probabilistic seismic hazard analysis'' are an example of myths and misunderstandings. We did not say that probabilistic seismic hazard analysis (PSHA) is a bad method, but we did say that it has some limitations that have significant implications. Our response to these comments follows. There is no consensus on exactly how to select seismological parameters and to assign weights in PSHA. This was one of the conclusions reached by a senior seismic hazard analysis committee [SSHAC, 1997] that included C. A. Cornell, founder of the PSHA methodology. The SSHAC report was reviewed by a panel of the National Research Council and was well accepted by seismologists and engineers. As an example of the lack of consensus, Toro and Silva [2001] produced seismic hazard maps for the central United States region that are quite different from those produced by Frankel et al. [2002] because they used different input seismological parameters and weights (see Table 1). We disagree with Musson's conclusion that ``because a method may be applied badly on one occasion does not mean the method itself is bad.'' We do not say that the method is poor, but rather that those who use PSHA need to document their inputs and communicate them fully to the users. It seems that Musson is trying to create myth by suggesting his own methods should be used.

  10. A validated model to predict microalgae growth in outdoor pond cultures subjected to fluctuating light intensities and water temperatures

    DOE PAGES

    Huesemann, Michael H.; Crowe, Braden J.; Waller, Peter; ...

    2015-12-11

    Here, a microalgae biomass growth model was developed for screening novel strains for their potential to exhibit high biomass productivities under nutrient-replete conditions in outdoor ponds subjected to fluctuating light intensities and water temperatures. Growth is modeled by first estimating the light attenuation by biomass according to a scatter-corrected Beer-Lambert Law, and then calculating the specific growth rate in discretized culture volume slices that receive declining light intensities due to attenuation. The model requires the following experimentally determined strain-specific input parameters: specific growth rate as a function of light intensity and temperature, biomass loss rate in the dark as a function of temperature and average light intensity during the preceding light period, and the scatter-corrected biomass light absorption coefficient. The model was successful in predicting the growth performance and biomass productivity of three different microalgae species (Chlorella sorokiniana, Nannochloropsis salina, and Picochlorum sp.) in raceway pond cultures (batch and semi-continuous) subjected to diurnal sunlight intensity and water temperature variations. Model predictions were moderately sensitive to minor deviations in input parameters. To increase the predictive power of this and other microalgae biomass growth models, a better understanding of the effects of mixing-induced rapid light dark cycles on photo-inhibition and short-term biomass losses due to dark respiration in the aphotic zone of the pond is needed.

  11. A five-layer users' need hierarchy of computer input device selection: a contextual observation survey of computer users with cervical spinal injuries (CSI).

    PubMed

    Tsai, Tsai-Hsuan; Nash, Robert J; Tseng, Kevin C

    2009-05-01

    This article presents how the researcher answered the research question, 'How does assistive technology impact computer use among individuals with cervical spinal cord injury?' through an in-depth investigation of real-life situations among computer operators with cervical spinal cord injuries (CSI). An in-depth survey was carried out to provide insight into the functional abilities and limitations, habitual practices and preferences, choices and utilisation of input devices, personal and/or technical assistance, environmental set-up and arrangements, and special requirements of 20 experienced computer users with cervical spinal cord injuries. Following the survey findings, a five-layer CSI users' needs hierarchy of input device selection and use was proposed. These needs were ranked in order, beginning with the most basic criterion at the bottom of the pyramid; lower-level criteria must be met before one moves on to the higher level. The users' needs hierarchy for CSI computer users had not been applied in previous research work and establishes a rationale for the development of alternative input devices. If an input device achieves the criteria set out in the needs hierarchy, then a good match of person and technology will be achieved.

  12. Calibration under uncertainty for finite element models of masonry monuments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atamturktur, Sezer,; Hemez, Francois,; Unal, Cetin

    2010-02-01

    Historical unreinforced masonry buildings often include features such as load bearing unreinforced masonry vaults and their supporting framework of piers, fill, buttresses, and walls. The masonry vaults of such buildings are among the most vulnerable structural components and certainly among the most challenging to analyze. The versatility of finite element (FE) analyses in incorporating various constitutive laws, as well as practically all geometric configurations, has resulted in the widespread use of the FE method for the analysis of complex unreinforced masonry structures over the last three decades. However, an FE model is only as accurate as its input parameters, and there are two fundamental challenges when defining FE model input parameters: (1) material properties and (2) support conditions. The difficulties in defining these two aspects of the FE model arise from gaps in the common engineering understanding of masonry behavior. As a result, engineers are unable to define these FE model input parameters with certainty, and, inevitably, uncertainties are introduced into the FE model.

  13. Prediction of La0.6Sr0.4Co0.2Fe0.8O3 cathode microstructures during sintering: Kinetic Monte Carlo (KMC) simulations calibrated by artificial neural networks

    NASA Astrophysics Data System (ADS)

    Yan, Zilin; Kim, Yongtae; Hara, Shotaro; Shikazono, Naoki

    2017-04-01

    The Potts Kinetic Monte Carlo (KMC) model, proven to be a robust tool for studying all stages of the sintering process, is ideal for analyzing the microstructure evolution of electrodes in solid oxide fuel cells (SOFCs). Due to the nature of this model, the input parameters of KMC simulations, such as simulation temperatures and attempt frequencies, are difficult to identify. We propose a rigorous and efficient approach that facilitates the input parameter calibration process using artificial neural networks (ANNs). The trained ANN drastically reduces the number of trial-and-error KMC simulations. The KMC simulation using the calibrated input parameters predicts the microstructures of a La0.6Sr0.4Co0.2Fe0.8O3 cathode material during sintering, showing both qualitative and quantitative congruence with real 3D microstructures obtained by focused ion beam scanning electron microscopy (FIB-SEM) reconstruction.
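
    A hedged sketch of the calibration idea, with scikit-learn standing in for the ANN and a toy function standing in for the Potts KMC simulator; the descriptor choices and parameter ranges below are assumptions, not the paper's:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def run_kmc(kT, log_nu):
        """Stand-in for a Potts KMC run returning (porosity, mean grain size).
        Replace with the real simulator; this toy response just makes the demo run."""
        return np.array([0.4 - 0.2 * kT + 0.02 * log_nu,
                         1.0 + 2.0 * kT * log_nu])

    # 1) sample the KMC input space with a modest batch of runs, train a cheap surrogate
    inputs = np.array([[kT, nu] for kT in np.linspace(0.2, 1.0, 9)
                                 for nu in np.linspace(1.0, 5.0, 9)])
    targets = np.array([run_kmc(*p) for p in inputs])
    ann = MLPRegressor((32, 32), max_iter=5000, random_state=0).fit(inputs, targets)

    # 2) invert the surrogate: pick the inputs whose predicted descriptors are closest
    #    to those measured on the FIB-SEM reconstruction (hypothetical targets here)
    measured = np.array([0.30, 2.6])
    grid = np.array([[kT, nu] for kT in np.linspace(0.2, 1.0, 200)
                               for nu in np.linspace(1.0, 5.0, 200)])
    best = grid[np.argmin(((ann.predict(grid) - measured) ** 2).sum(axis=1))]
    print("calibrated (kT, log10 attempt frequency):", best)
    ```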

  14. Real-Time Stability and Control Derivative Extraction From F-15 Flight Data

    NASA Technical Reports Server (NTRS)

    Smith, Mark S.; Moes, Timothy R.; Morelli, Eugene A.

    2003-01-01

    A real-time, frequency-domain, equation-error parameter identification (PID) technique was used to estimate stability and control derivatives from flight data. This technique is being studied to support adaptive control system concepts currently being developed by NASA (National Aeronautics and Space Administration), academia, and industry. This report describes the basic real-time algorithm used for this study and implementation issues for onboard usage as part of an indirect-adaptive control system. A system of confidence measures for automated evaluation of PID results is discussed. Results calculated using flight data from a modified F-15 aircraft are presented. Test maneuvers included pilot input doublets and automated inputs at several flight conditions. Estimated derivatives are compared to aerodynamic model predictions. The data indicate that the real-time PID used for this study performs well enough to be used for onboard parameter estimation. For suitable test inputs, the parameter estimates converged rapidly to sufficient levels of accuracy. The confidence measures devised were moderately successful.
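
    The general recipe behind such a technique, recursive discrete Fourier transforms of the measured signals followed by an equation-error least-squares fit at selected frequencies, can be sketched as below. The pitch-axis model structure and the random stand-in data stream are assumptions, not NASA's implementation:

    ```python
    import numpy as np

    dt = 0.02                                  # sample period (s), assumed 50 Hz telemetry
    freqs = np.arange(0.1, 2.0, 0.04)          # analysis band typical of rigid-body modes (Hz)
    jw = 2j * np.pi * freqs

    class RecursiveDFT:
        """Accumulates X(f) ~ sum_n x[n] exp(-j*2*pi*f*n*dt) * dt, one sample at a time."""
        def __init__(self):
            self.X, self.n = np.zeros(len(freqs), complex), 0
        def update(self, x):
            self.X += x * np.exp(-jw * self.n * dt) * dt
            self.n += 1
            return self.X

    def stream_of_samples(n=3000):
        """Stand-in for telemetry: yields (alpha, pitch rate, elevator) each frame."""
        rng = np.random.default_rng(0)
        for _ in range(n):
            yield rng.standard_normal(3)

    A, Q, D = RecursiveDFT(), RecursiveDFT(), RecursiveDFT()
    for a, q, d in stream_of_samples():
        Xa, Xq, Xd = A.update(a), Q.update(q), D.update(d)

    # equation error for  q_dot = M_a*a + M_q*q + M_d*d  posed in the frequency domain,
    # where differentiation becomes multiplication by jw:
    Y = jw * Xq
    H = np.column_stack([Xa, Xq, Xd])
    theta, *_ = np.linalg.lstsq(H, Y, rcond=None)
    print("estimated [M_a, M_q, M_d]:", theta.real)
    ```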

  15. Astrobiological complexity with probabilistic cellular automata.

    PubMed

    Vukotić, Branislav; Ćirković, Milan M

    2012-08-01

    The search for extraterrestrial life and intelligence constitutes one of the major endeavors in science, yet it has so far been quantitatively modeled only rarely, and then in a cursory and superficial fashion. We argue that probabilistic cellular automata (PCA) represent the best quantitative framework for modeling the astrobiological history of the Milky Way and its Galactic Habitable Zone. The relevant astrobiological parameters are modeled as the elements of the input probability matrix for the PCA kernel. With the underlying simplicity of cellular automata constructs, this approach enables a quick analysis of the large and ambiguous space of input parameters. We perform a simple clustering analysis of typical astrobiological histories under a "Copernican" choice of input parameters and discuss the relevant boundary conditions of practical importance for planning and guiding empirical astrobiological and SETI projects. In addition to showing how the present framework is adaptable to more complex situations and updated observational databases from current and near-future space missions, we demonstrate how numerical results could offer a cautious rationale for the continuation of practical SETI searches.
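
    A minimal PCA kernel in this spirit might look as follows; the states, transition probabilities, and neighbor coupling are placeholders for the astrobiological input probability matrix, not values from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # states: 0 = dead, 1 = simple life, 2 = complex life, 3 = technological
    P_up = np.array([0.01, 0.005, 0.001])   # hypothetical per-step promotion probabilities
    P_reset = 0.002                          # catastrophic reset (e.g., nearby supernova)

    grid = np.zeros((100, 100), dtype=int)   # toy 2-D stand-in for the Galactic Habitable Zone
    for step in range(1000):
        promote = rng.random(grid.shape)
        for s in (2, 1, 0):                  # promote highest states first: one step per cell
            mask = (grid == s) & (promote < P_up[s])
            grid[mask] = s + 1
        # neighbor coupling: a reset also wipes the 4-neighborhood (colonization or
        # propagation effects would be added to the kernel the same way)
        reset = rng.random(grid.shape) < P_reset
        for shift in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
            grid[np.roll(reset, shift, axis=(0, 1))] = 0
    ```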

  16. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions.

  17. A model of objective weighting for EIA.

    PubMed

    Ying, L G; Liu, Y C

    1995-06-01

    In spite of progress in environmental impact assessment (EIA) research, the problem of distributing weights over a set of parameters has not yet been properly solved. This paper presents an approach to objective weighting using a P_ij principal component-factor analysis (P_ij PCFA) procedure, which specifically suits parameters measured directly on physical scales. The P_ij PCFA weighting procedure reforms conventional weighting practice in two respects: first, expert subjective judgment is replaced by the standardized measure P_ij as the original input to weight processing and, secondly, principal component-factor analysis is introduced to evaluate the environmental parameters for their respective contributions to the regional ecosystem as a whole. Not only is P_ij PCFA weighting logical in its theoretical reasoning, it also suits practically all levels of professional routine in natural environmental assessment and impact analysis. Having demonstrated objectivity and accuracy in an EIA case study of Chuansha County in Shanghai, China, the P_ij PCFA weighting procedure has the potential to be applied in other geographical fields that need to assign weights to parameters measured on physical scales.
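
    The core idea, deriving objective parameter weights from principal-component loadings weighted by explained variance, can be sketched generically; the paper's specific P_ij standardization step is not reproduced here:

    ```python
    import numpy as np

    def pca_weights(Z):
        """Objective weights for the columns (parameters) of a data matrix Z:
        combine squared component loadings with explained-variance shares
        (a generic PCFA-style weighting, not the paper's exact procedure)."""
        Z = (Z - Z.mean(0)) / Z.std(0)                  # standardize the parameters
        evals, evecs = np.linalg.eigh(np.cov(Z, rowvar=False))
        evals, evecs = evals[::-1], evecs[:, ::-1]      # sort components by variance
        share = evals / evals.sum()                     # explained-variance shares
        contrib = (evecs ** 2 * share).sum(axis=1)      # each parameter's total contribution
        return contrib / contrib.sum()                  # normalized weights

    # ten sites x five environmental parameters (random stand-in data)
    w = pca_weights(np.random.default_rng(0).normal(size=(10, 5)))
    print(w, w.sum())
    ```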

  18. Human sense utilization method on real-time computer graphics

    NASA Astrophysics Data System (ADS)

    Maehara, Hideaki; Ohgashi, Hitoshi; Hirata, Takao

    1997-06-01

    We are developing an adjustment method for real-time computer graphics that exploits human sensibility, so that the resulting graphics convey to the audience the various senses intended by the producer. In general, producing real-time computer graphics requires adjusting many parameters, such as 3D object models, their motions, attributes, view angle, and parallax, so that the graphics give the audience convincing effects such as material reality and a sense of experience. It is also known that adjusting these parameters by trial and error is costly. A graphics producer often evaluates his graphics in order to improve them; for example, a scene may lack a 'sense of speed' or need to be given more 'sense of settling down.' On the other hand, by statistically analyzing several samples of computer graphics that convey different senses, we can learn how the parameters affect those senses. We paid attention to these two facts and designed a method for adjusting the parameters by inputting sense ratings into a computer. Using this method, real-time computer graphics can be adjusted more effectively than by the conventional trial-and-error approach.

  19. Systematic development of technical textiles

    NASA Astrophysics Data System (ADS)

    Beer, M.; Schrank, V.; Gloy, Y.-S.; Gries, T.

    2016-07-01

    Technical textiles are used in various fields of application, ranging from small-scale (e.g. medical applications) to large-scale products (e.g. aerospace applications). The development of new products is often complex and time consuming due to multiple interacting parameters. These interacting parameters are related to the production process and also result from the textile structure and the material used. A huge number of iteration steps is necessary to adjust the process parameters and finalize the new fabric structure. A design method was developed to support the systematic development of technical textiles and to reduce the number of iteration steps. The design method is subdivided into six steps, starting from the identification of the requirements. The fabric characteristics vary depending on the field of application; if possible, benchmarks are tested. A suitable fabric production technology then needs to be selected. The aim of the method is to support a development team in the technology selection without restricting the textile developer. After a suitable technology is selected, the transformation and correlation between input and output parameters follows; this generates the information needed for producing the structure. Afterwards, the first prototype can be produced and tested, and the resulting characteristics are compared with the initial product requirements.

  20. Software Computes Tape-Casting Parameters

    NASA Technical Reports Server (NTRS)

    deGroh, Henry C., III

    2003-01-01

    Tcast2 is a FORTRAN computer program that accelerates the setup of a process in which a slurry containing metal particles and a polymeric binder is cast, to a thickness regulated by a doctor blade, onto fibers wound on a rotating drum to make a green precursor of a metal-matrix/fiber composite tape. Before Tcast2, setup parameters were determined by trial and error in time-consuming multiple iterations of the process. In Tcast2, the fiber architecture in the final composite is expressed in terms of the lateral distance between fibers and the thickness-wise distance between fibers in adjacent plies. The lateral distance is controlled via the manner of winding. The interply spacing is controlled via the characteristics of the slurry and the doctor-blade height. When a new combination of fibers and slurry is first cast and dried to a green tape, the shrinkage from the wet to the green condition and a few other key parameters of the green tape are measured. These parameters are provided as input to Tcast2, which uses them to compute the doctor-blade height and fiber spacings needed to obtain the desired fiber architecture and fiber volume fraction in the final composite.

  1. Measurements for liquid rocket engine performance code verification

    NASA Technical Reports Server (NTRS)

    Praharaj, Sarat C.; Palko, Richard L.

    1986-01-01

    The goal of the rocket engine performance code verification tests is to obtain I_sp with an accuracy of 0.25% or less. This needs to be done during the sequence of four related tests (two reactive and two hot-gas simulation) to best utilize the loss-separation technique recommended in this study. In addition to I_sp, measurements of the input and output parameters for the codes are needed. This study has shown two things with regard to obtaining the I_sp uncertainty within the 0.25% target. First, this target is generally not being realized at the present time; second, the instrumentation and testing technology does exist to reach this 0.25% uncertainty goal. However, achieving this goal will require carefully planned, designed, and conducted testing. In addition, the test-stand (or system) dynamics must be evaluated in the pre-test and post-test phases of the design of the experiment and the data analysis, respectively, always keeping in mind that a 0.25% overall uncertainty in I_sp is targeted. A table gives the maximum allowable uncertainty required for obtaining I_sp with 0.25% uncertainty, the currently quoted instrument specifications, and the present test uncertainty for the parameters. In general, it appears that measuring the mass flow parameter within the required uncertainty may be the most difficult.

  2. Uncertainty in predictions of oil spill trajectories in a coastal zone

    NASA Astrophysics Data System (ADS)

    Sebastião, P.; Guedes Soares, C.

    2006-12-01

    A method is introduced to determine the uncertainties in predictions of oil spill trajectories using a classic oil spill model. The method considers the output of the oil spill model as a function of random variables, the input parameters, and calculates the standard deviation of the output results, which provides a measure of the uncertainty of the model as a result of the uncertainties of the input parameters. In addition to the single trajectory calculated by the oil spill model using the mean values of the parameters, a band of trajectories can be defined when various simulations are run taking into account the uncertainties of the input parameters. This band defines envelopes of the trajectories that are likely to be followed by the spill given the uncertainties of the input. The method was applied to an oil spill that occurred in 1989 near Sines on the southwestern coast of Portugal. The model represented well the distinction between a wind-driven part of the spill that remained offshore and a tide-driven part that went ashore. For both parts, the method defined two trajectory envelopes, one calculated exclusively with the wind fields and the other using wind and tidal currents. In both cases a reasonable approximation to the observed results was obtained. The envelope of likely trajectories obtained with the uncertainty modelling proved to give a better interpretation of the trajectories simulated by the oil spill model.
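
    A hedged sketch of the uncertainty propagation: sample the uncertain inputs, rerun a drift model, and take the per-step spread of the resulting tracks as the envelope. The toy wind-drift model and the parameter distributions below are assumptions, not the paper's:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def trajectory(wind_factor, deflection_deg, wind_u, wind_v, dt_h=1.0):
        """Toy surface-drift model: drift = factor * wind, rotated by a deflection angle.
        A real oil spill model (advection + spreading + weathering) plugs in here."""
        th = np.deg2rad(deflection_deg)
        rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
        steps = (rot @ np.vstack([wind_u, wind_v])) * wind_factor * dt_h * 3.6  # km/step
        return np.cumsum(steps, axis=1)

    wind_u, wind_v = rng.normal(5, 1, 48), rng.normal(2, 1, 48)   # hourly wind record (m/s)

    # sample the uncertain inputs around assumed means: 3% wind factor, 15 deg deflection
    runs = np.array([trajectory(rng.normal(0.03, 0.005), rng.normal(15, 5),
                                wind_u, wind_v) for _ in range(500)])
    mean_track = runs.mean(axis=0)
    spread = runs.std(axis=0)   # per-step standard deviation: envelope half-width
    ```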

  3. Development of Benchmark Examples for Quasi-Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  4. Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.

  5. Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite

    NASA Astrophysics Data System (ADS)

    Gupta, Anand; Soni, P. K.; Krishna, C. M.

    2018-04-01

    The machining of Al3030-based composites on high-speed Computer Numerical Control (CNC) milling machines has assumed importance because of their wide application in the aerospace, marine, and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR), and tool wear rate (TWR), which usually depend on the input process parameters, namely cutting speed, feed in mm/min, depth of cut, and step-over ratio. Many researchers have worked in this area, but very few have also taken the step-over ratio (radial depth of cut) as an input variable. In this research work, the machining characteristics of Al3030 are studied on a high-speed CNC milling machine over the speed range of 3000 to 5000 r.p.m. Step-over ratio, depth of cut, and feed rate are the other input variables taken into consideration. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high-speed CNC milling machine using a flat end mill of 10 mm diameter. Flatness, MRR, and TWR are taken as output parameters. Flatness is measured using a portable Coordinate Measuring Machine (CMM). Linear regression models are developed using Minitab 18 software, and the results are validated by conducting a selected additional set of experiments. The selection of input process parameters to obtain the best machining outputs is the key contribution of this research work.
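
    The analysis pipeline, an L9 design followed by linear regression of the responses on the four inputs, can be sketched as below; the factor levels and the flatness measurements are placeholders, not the study's data:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Taguchi L9 orthogonal array for four 3-level factors (levels coded 0/1/2)
    L9 = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
                   [1,0,1,2],[1,1,2,0],[1,2,0,1],
                   [2,0,2,1],[2,1,0,2],[2,2,1,0]])

    speeds   = np.array([3000, 4000, 5000])   # r.p.m. (range from the abstract)
    feeds    = np.array([100, 200, 300])      # mm/min (assumed levels)
    depths   = np.array([0.5, 1.0, 1.5])      # mm (assumed levels)
    stepover = np.array([0.3, 0.5, 0.7])      # ratio (assumed levels)

    X = np.column_stack([speeds[L9[:, 0]], feeds[L9[:, 1]],
                         depths[L9[:, 2]], stepover[L9[:, 3]]])
    flatness = np.random.default_rng(0).uniform(5, 30, 9)   # placeholder measurements (um)

    model = LinearRegression().fit(X, flatness)             # one model per output response
    print(dict(zip(["speed", "feed", "depth_of_cut", "stepover"], model.coef_)))
    ```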

  6. Development and Application of Benchmark Examples for Mode II Static Delamination Propagation and Fatigue Growth Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.

  7. RF safety assessment of a bilateral four-channel transmit/receive 7 Tesla breast coil: SAR versus tissue temperature limits.

    PubMed

    Fiedler, Thomas M; Ladd, Mark E; Bitz, Andreas K

    2017-01-01

    The purpose of this work was to perform an RF safety evaluation for a bilateral four-channel transmit/receive breast coil and to determine the maximum permissible input power for which RF exposure of the subject stays within recommended limits. The safety evaluation was based on SAR as well as on temperature simulations. Compared with SAR, temperature is more directly correlated with tissue damage, which allows a more precise safety assessment. The temperature simulations were performed using three different blood perfusion models as well as two different ambient temperatures. The goal was to evaluate whether the SAR and temperature distributions correlate inside the human body and whether SAR or temperature is more conservative with respect to the limits specified by the IEC. A simulation model was constructed including coil housing and MR environment. Lumped elements and feed networks were modeled by a network co-simulation. The model was validated by comparison of S-parameters and B1+ maps obtained in an anatomical phantom. Three numerical body models were generated based on 3 Tesla MRI images to conform to the coil housing. SAR calculations were performed and the maximum permissible input power was calculated based on IEC guidelines. Temperature simulations were performed based on the Pennes bioheat equation with the power absorption from the RF simulations as the heat source. The blood perfusion was modeled as constant, to reflect impaired patients, as well as with a linear and an exponential temperature-dependent increase, to reflect two possible models for healthy subjects. Two ambient temperatures were considered to account for cooling effects from the environment. The simulation model was validated with a mean deviation of 3% between measurement and simulation results. The highest 10 g-averaged SAR was found in lung and muscle tissue on the right side of the upper torso. The maximum permissible input power was calculated to be 17 W. The temperature simulations showed that the temperature maxima do not correlate well with the positions of the SAR maxima in all considered cases. The body models with an exponential blood perfusion increase did not exceed the temperature limit when RF power at the SAR limit was applied; in this case, an input power level higher by up to 73% would be allowed. The models with a constant or linear perfusion exceeded the local temperature limit when the local SAR limit was adhered to and would require a decrease in the input power level by up to 62%. The maximum permissible input power was determined based on SAR simulations with three newly generated body models and compared with results from temperature simulations. While SAR calculations are state-of-the-art and well defined, as they are based on more or less well-known material parameters, temperature simulations depend strongly on additional material, environmental, and physiological parameters. The simulations demonstrated that more consideration needs to be given by the MR community to defining the parameters for temperature simulations in order to apply temperature limits instead of SAR limits in MR RF safety evaluations.
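
    A one-dimensional explicit sketch of the Pennes bioheat computation with constant versus temperature-dependent perfusion; the tissue constants, perfusion law, and SAR profile are generic assumptions, not the study's values:

    ```python
    import numpy as np

    # typical soft-tissue constants (assumed, not the study's values)
    rho, c, k = 1050.0, 3600.0, 0.5        # kg/m3, J/(kg K), W/(m K)
    cb, Ta = 3600.0, 37.0                  # blood heat capacity, arterial temperature (C)
    dx, dt, n = 2e-3, 0.1, 100             # 1-D grid: 2 mm cells, 0.1 s steps
    SAR = np.zeros(n); SAR[40:60] = 10.0   # W/kg hot spot as the RF heat source

    def perfusion(T, model="exponential"):
        """Blood perfusion (kg/m3/s). Constant reflects impaired thermoregulation;
        the exponential form grows with local temperature (healthy response)."""
        w0 = 0.5
        if model == "constant":
            return np.full_like(T, w0)
        return w0 * np.exp((T - Ta) / 4.0)  # assumed 4 C e-folding scale

    T = np.full(n, Ta)
    for step in range(36000):              # one hour of heating
        lap = (np.roll(T, 1) + np.roll(T, -1) - 2 * T) / dx ** 2
        lap[0] = lap[-1] = 0.0             # insulated ends (crude boundary condition)
        w = perfusion(T)
        T += dt / (rho * c) * (k * lap + cb * w * (Ta - T) + rho * SAR)
    print("peak tissue temperature:", T.max())
    ```

    Running the same loop with model="constant" versus "exponential" reproduces the qualitative point of the abstract: the stronger the perfusion response, the lower the steady-state peak temperature for the same SAR.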

  8. Effects of various experimental parameters on errors in triangulation solution of elongated object in space. [barium ion cloud

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1975-01-01

    The effects of various experimental parameters on the displacement errors in the triangulation solution of an elongated object in space due to pointing uncertainties in the lines of sight have been determined. These parameters were the number and location of observation stations, the object's location in latitude and longitude, and the spacing of the input data points on the azimuth-elevation image traces. The displacement errors due to uncertainties in the coordinates of a moving station have been determined as functions of the number and location of the stations. The effects of incorporating the input data from additional cameras at one of the stations were also investigated.

  9. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
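
    A variance-based Sobol' analysis of this kind can be sketched with the SALib package (assuming SALib is installed; the toy colony model and the parameter ranges below stand in for VarroaPop):

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["queen_strength", "forager_lifespan", "pesticide_ld50"],
        "bounds": [[1, 5], [4, 16], [0.01, 1.0]],   # hypothetical ranges
    }

    X = saltelli.sample(problem, 1024)               # Saltelli design for Sobol' indices

    def colony_size(p):                              # stand-in for one VarroaPop run
        q, life, ld50 = p
        return 5000 * q * life / (1 + 0.05 / ld50)

    Y = np.array([colony_size(p) for p in X])
    Si = sobol.analyze(problem, Y)                   # first-, second- and total-order indices
    print(Si["S1"], Si["ST"])
    ```

    Repeating the analysis for model output at different simulated days is what reveals the time-dependent shifts in parameter importance described above.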

  10. Robust input design for nonlinear dynamic modeling of AUV.

    PubMed

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In an optimal input design problem, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design can satisfy both robustness of constraints and optimality.

  11. Extrapolation of sonic boom pressure signatures by the waveform parameter method

    NASA Technical Reports Server (NTRS)

    Thomas, C. L.

    1972-01-01

    The waveform parameter method of sonic boom extrapolation is derived and shown to be equivalent to the F-function method. A computer program based on the waveform parameter method is presented and discussed, with a sample case demonstrating program input and output.

  12. Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holland, K. Todd

    2011-01-01

    A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided from a shore-based observer or from remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R2 = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave height dependence consistent with results of previous studies but the uncertainty estimates of the tuning parameters also explain previously reported variations in the model parameters.
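
    A minimal discrete-bin illustration of the inverse estimation idea: posit a prior over offshore wave height, push each candidate through a forward surf-zone model, and apply Bayes' rule to the onshore observation. The forward model and noise law here are stand-ins, not the paper's Bayesian network:

    ```python
    import numpy as np

    # discretize offshore wave height into bins with a flat prior
    H_off = np.linspace(0.5, 4.0, 36)
    prior = np.ones_like(H_off) / H_off.size

    def forward(h_off):
        """Stand-in for the surf-zone wave model: onshore height mean and std,
        mimicking breaking-induced decay plus observation noise."""
        return 0.6 * h_off, 0.1 + 0.05 * h_off

    def posterior(h_on_obs):
        mu, sd = forward(H_off)
        like = np.exp(-0.5 * ((h_on_obs - mu) / sd) ** 2) / sd   # Gaussian likelihood
        post = like * prior
        return post / post.sum()

    post = posterior(1.2)                              # observed onshore height (m)
    est = (H_off * post).sum()                         # posterior-mean offshore height
    unc = np.sqrt(((H_off - est) ** 2 * post).sum())   # posterior std = uncertainty estimate
    print(est, unc)
    ```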

  13. Inferring Nonlinear Neuronal Computation Based on Physiologically Plausible Inputs

    PubMed Central

    McFarland, James M.; Cui, Yuwei; Butts, Daniel A.

    2013-01-01

    The computation represented by a sensory neuron's response to stimuli is constructed from an array of physiological processes both belonging to that neuron and inherited from its inputs. Although many of these physiological processes are known to be nonlinear, linear approximations are commonly used to describe the stimulus selectivity of sensory neurons (i.e., linear receptive fields). Here we present an approach for modeling sensory processing, termed the Nonlinear Input Model (NIM), which is based on the hypothesis that the dominant nonlinearities imposed by physiological mechanisms arise from rectification of a neuron's inputs. Incorporating such ‘upstream nonlinearities’ within the standard linear-nonlinear (LN) cascade modeling structure implicitly allows for the identification of multiple stimulus features driving a neuron's response, which become directly interpretable as either excitatory or inhibitory. Because its form is analogous to an integrate-and-fire neuron receiving excitatory and inhibitory inputs, model fitting can be guided by prior knowledge about the inputs to a given neuron, and elements of the resulting model can often result in specific physiological predictions. Furthermore, by providing an explicit probabilistic model with a relatively simple nonlinear structure, its parameters can be efficiently optimized and appropriately regularized. Parameter estimation is robust and efficient even with large numbers of model components and in the context of high-dimensional stimuli with complex statistical structure (e.g. natural stimuli). We describe detailed methods for estimating the model parameters, and illustrate the advantages of the NIM using a range of example sensory neurons in the visual and auditory systems. We thus present a modeling framework that can capture a broad range of nonlinear response functions while providing physiologically interpretable descriptions of neural computation. PMID:23874185
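
    The NIM's core computation, rectified upstream subunits feeding a spiking nonlinearity, can be written in a few lines; the filters, weights, and nonlinearities below are illustrative (in practice they are fitted to spike data by penalized maximum likelihood):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    stim = rng.standard_normal((10000, 40))     # stimulus: time bins x stimulus dimensions

    k_exc = rng.standard_normal(40) * 0.1       # excitatory subunit filter (illustrative)
    k_inh = rng.standard_normal(40) * 0.1       # inhibitory subunit filter (illustrative)

    relu = lambda x: np.maximum(x, 0.0)         # rectifying 'upstream' nonlinearity f

    # NIM form: r(t) = F( w_e f(k_e . s) + w_i f(k_i . s) ), with w_i < 0 for inhibition
    g = 1.0 * relu(stim @ k_exc) - 0.8 * relu(stim @ k_inh)
    rate = np.log1p(np.exp(g - 1.0))            # softplus spiking nonlinearity F
    spikes = rng.poisson(rate * 0.01)           # Poisson spike counts per time bin
    ```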

  14. JUPITER: Joint Universal Parameter IdenTification and Evaluation of Reliability - An Application Programming Interface (API) for Model Analysis

    USGS Publications Warehouse

    Banta, Edward R.; Poeter, Eileen P.; Doherty, John E.; Hill, Mary C.

    2006-01-01

    The Joint Universal Parameter IdenTification and Evaluation of Reliability Application Programming Interface (JUPITER API) improves the computer programming resources available to those developing applications (computer programs) for model analysis.

    The JUPITER API consists of eleven Fortran-90 modules that provide for encapsulation of data and operations on that data. Each module contains one or more entities: data, data types, subroutines, functions, and generic interfaces. The modules do not constitute computer programs themselves; instead, they are used to construct computer programs. Such computer programs are called applications of the API. The API provides common modeling operations for use by a variety of computer applications.

    The models being analyzed are referred to here as process models and may, for example, represent the physics, chemistry, and(or) biology of a field or laboratory system. Process models commonly are constructed using published models such as MODFLOW (Harbaugh et al., 2000; Harbaugh, 2005), MT3DMS (Zheng and Wang, 1996), HSPF (Bicknell et al., 1997), PRMS (Leavesley and Stannard, 1995), and many others. The process model may be accessed by a JUPITER API application as an external program, or it may be implemented as a subroutine within a JUPITER API application. In either case, execution of the model takes place in a framework designed by the application programmer. This framework can be designed to take advantage of any parallel-processing capabilities possessed by the process model, as well as the parallel-processing capabilities of the JUPITER API.

    Model analyses for which the JUPITER API could be useful include, for example: comparing model results to observed values to determine how well the model reproduces system processes and characteristics; using sensitivity analysis to determine the information provided by observations to parameters and predictions of interest; determining the additional data needed to improve selected model predictions; using calibration methods to modify parameter values and other aspects of the model; comparing predictions to regulatory limits; quantifying the uncertainty of predictions based on the results of one or many simulations using inferential or Monte Carlo methods; and determining how to manage the system to achieve stated objectives.

    The capabilities provided by the JUPITER API include, for example, communication with process models, parallel computations, compressed storage of matrices, and flexible input capabilities. The input capabilities use input blocks suitable for lists or arrays of data. The input blocks needed for one application can be included within one data file or distributed among many files. Data exchange between different JUPITER API applications, or between applications and other programs, is supported by data-exchange files.

    The JUPITER API has already been used to construct a number of applications. Three simple example applications are presented in this report. More complicated applications include the universal inverse code UCODE_2005 (Poeter et al., 2005), the multi-model analysis code MMA (Eileen P. Poeter, Mary C. Hill, E.R. Banta, S.W. Mehl, and Steen Christensen, written commun., 2006), and a code named OPR_PPR (Matthew J. Tonkin, Claire R. Tiedeman, Mary C. Hill, and D. Matthew Ely, written commun., 2006).

    This report describes a set of underlying organizational concepts and complete specifics of the JUPITER API. While understanding the organizational concepts presented is useful for understanding the modules, other organizational concepts can be used in applications constructed with the JUPITER API.

  15. MOVES sensitivity analysis update : Transportation Research Board Summer Meeting 2012 : ADC-20 Air Quality Committee

    DOT National Transportation Integrated Search

    2012-01-01

    Overview of presentation: evaluation parameters; EPA's sensitivity analysis; comparison to baseline case; MOVES sensitivity run specification; MOVES sensitivity input parameters; results; uses of study.

  16. Multiple Input Design for Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene

    2003-01-01

    A method for designing multiple inputs for real-time dynamic system identification in the frequency domain was developed and demonstrated. The designed inputs are mutually orthogonal in both the time and frequency domains, with reduced peak factors to provide good information content for relatively small amplitude excursions. The inputs are designed for selected frequency ranges, and therefore do not require a priori models. The experiment design approach was applied to identify linear dynamic models for the F-15 ACTIVE aircraft, which has multiple control effectors.
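
    The central construction, assigning disjoint harmonics of the record length to each input so the multisines are exactly orthogonal, and choosing phases to lower the peak factor, can be sketched as follows. Random-phase restarts stand in here for the phase optimization used in practice:

    ```python
    import numpy as np

    T, fs = 20.0, 50.0                        # record length (s) and sample rate (Hz), assumed
    t = np.arange(0, T, 1 / fs)
    harmonics = {0: range(2, 21, 2),          # input 0: even harmonics of 1/T
                 1: range(3, 22, 2)}          # input 1: odd harmonics -> mutual orthogonality

    def multisine(ks, phases):
        x = sum(np.cos(2 * np.pi * k * t / T + p) for k, p in zip(ks, phases))
        return x / np.abs(x).max()            # scale to unit peak amplitude

    def peak_factor(x):                       # crest factor: peak relative to RMS
        return np.abs(x).max() / np.sqrt(np.mean(x ** 2))

    rng = np.random.default_rng(0)
    inputs = []
    for ks in harmonics.values():
        best = min((multisine(ks, rng.uniform(0, 2 * np.pi, len(ks)))
                    for _ in range(200)), key=peak_factor)
        inputs.append(best)

    print(np.dot(inputs[0], inputs[1]) / len(t))   # ~0: orthogonal in the time domain
    ```

    Because each input occupies its own set of harmonics, the inputs are also orthogonal in the frequency domain, which is what lets all effectors be excited simultaneously.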

  17. New generation of hydraulic pedotransfer functions for Europe

    PubMed Central

    Tóth, B; Weynants, M; Nemes, A; Makó, A; Bilas, G; Tóth, G

    2015-01-01

    A range of continental-scale soil datasets exists in Europe with different spatial representation and based on different principles. We developed comprehensive pedotransfer functions (PTFs) for applications principally on spatial datasets with continental coverage. The PTF development included the prediction of soil water retention at various matric potentials and prediction of parameters to characterize soil moisture retention and the hydraulic conductivity curve (MRC and HCC) of European soils. We developed PTFs with a hierarchical approach, determined by the input requirements. The PTFs were derived by using three statistical methods: (i) linear regression where there were quantitative input variables, (ii) a regression tree for qualitative, quantitative and mixed types of information and (iii) mean statistics of developer-defined soil groups (class PTF) when only qualitative input parameters were available. Data of the recently established European Hydropedological Data Inventory (EU-HYDI), which holds the most comprehensive geographical and thematic coverage of hydro-pedological data in Europe, were used to train and test the PTFs. The applied modelling techniques and the EU-HYDI allowed the development of hydraulic PTFs that are more reliable and applicable for a greater variety of input parameters than those previously available for Europe. Therefore the new set of PTFs offers tailored advanced tools for a wide range of applications in the continent. PMID:25866465
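
    The three estimation routes keyed to available inputs can be mimicked with scikit-learn stand-ins; the synthetic records below are not EU-HYDI data, and the predictors and target are illustrative:

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    df = pd.DataFrame({                      # toy stand-in for EU-HYDI training records
        "sand": rng.uniform(5, 90, 200),
        "clay": rng.uniform(2, 50, 200),
        "texture_class": rng.integers(0, 4, 200),
    })
    df["theta_fc"] = 0.45 - 0.002 * df.sand + 0.003 * df.clay   # synthetic target

    # (i) quantitative inputs available -> linear regression PTF
    ptf_lin = LinearRegression().fit(df[["sand", "clay"]], df.theta_fc)
    # (ii) mixed quantitative/qualitative inputs -> regression tree PTF
    ptf_tree = DecisionTreeRegressor(max_depth=4).fit(
        df[["sand", "clay", "texture_class"]], df.theta_fc)
    # (iii) only a class label available -> class-mean lookup ("class PTF")
    class_ptf = df.groupby("texture_class").theta_fc.mean()
    ```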

  18. Combining control input with flight path data to evaluate pilot performance in transport aircraft.

    PubMed

    Ebbatson, Matt; Harris, Don; Huddlestone, John; Sears, Rodney

    2008-11-01

    When deriving an objective assessment of piloting performance from flight data records, it is common to employ metrics which purely evaluate errors in flight path parameters. The adequacy of pilot performance is evaluated from the flight path of the aircraft. However, in large jet transport aircraft these measures may be insensitive and require supplementing with frequency-based measures of control input parameters. Flight path and control input data were collected from pilots undertaking a jet transport aircraft conversion course during a series of symmetric and asymmetric approaches in a flight simulator. The flight path data were analyzed for deviations around the optimum flight path while flying an instrument landing approach. Manipulation of the flight controls was subject to analysis using a series of power spectral density measures. The flight path metrics showed no significant differences in performance between the symmetric and asymmetric approaches. However, control input frequency domain measures revealed that the pilots employed highly different control strategies in the pitch and yaw axes. The results demonstrate that to evaluate pilot performance fully in large aircraft, it is necessary to employ performance metrics targeted at both the outer control loop (flight path) and the inner control loop (flight control) parameters in parallel, evaluating both the product and process of a pilot's performance.
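
    A hedged sketch of the frequency-based control-input measures: estimate the power spectral density of a recorded control trace with Welch's method and reduce it to scalar activity metrics. The logging rate and the random stand-in trace are assumptions:

    ```python
    import numpy as np
    from scipy.signal import welch

    fs = 20.0                                     # simulator logging rate (Hz, assumed)
    rng = np.random.default_rng(0)
    column = rng.standard_normal(int(120 * fs))   # stand-in for a recorded column input

    f, Pxx = welch(column, fs=fs, nperseg=512)    # power spectral density of the input

    df_f = f[1] - f[0]
    total_power = (Pxx * df_f).sum()              # overall control activity
    centroid = (f * Pxx * df_f).sum() / total_power   # mean control frequency: a simple
                                                      # scalar that separates control strategies
    print(total_power, centroid)
    ```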

  19. Assessment of input uncertainty by seasonally categorized latent variables using SWAT

    USDA-ARS?s Scientific Manuscript database

    Watershed processes have been explored with sophisticated simulation models for the past few decades. It has been stated that uncertainty attributed to alternative sources such as model parameters, forcing inputs, and measured data should be incorporated during the simulation process. Among varyin...

  20. TIM Version 3.0 beta Technical Description and User Guide - Appendix B - Example input file for TIMv3.0

    EPA Pesticide Factsheets

    Terrestrial Investigation Model, TIM, has several appendices to its user guide. This is the appendix that includes an example input file in its preserved format. Both parameters and comments defining them are included.

  1. Organic and low input farming: Pros and cons for soil health

    USDA-ARS?s Scientific Manuscript database

    Organic and low input farming practices have both advantages and disadvantages in building soil health and maintaining productivity. Examining the effects of farming practices on soil health parameters can aid in developing whole system strategies that promote sustainability. Application of specific...

  2. A reduced adaptive observer for multivariable systems. [using reduced dynamic ordering

    NASA Technical Reports Server (NTRS)

    Carroll, R. L.; Lindorff, D. P.

    1973-01-01

    An adaptive observer for multivariable systems is presented for which the dynamic order of the observer is reduced, subject to mild restrictions. The observer structure depends directly upon the multivariable structure of the system rather than on a transformation to a single-output system. The number of adaptive gains is at most the sum of the order of the system and the number of input parameters being adapted. Moreover, for the relatively frequent cases in which the number of required adaptive gains is less than the sum of the system order and the number of input parameters, the number of these gains is easily determined by inspection of the system structure. This adaptive observer possesses all the properties ascribed to the single-input single-output adaptive observer. Like the other adaptive observers, it requires some restriction on the allowable system command input to guarantee convergence of the adaptive algorithm, but the restriction is more lenient than that required by the full-order multivariable observer. This reduced observer is not restricted to cycle systems.

  3. Higher Plants in life support systems: design of a model and plant experimental compartment

    NASA Astrophysics Data System (ADS)

    Hezard, Pauline; Farges, Berangere; Sasidharan L, Swathy; Dussap, Claude-Gilles

    The development of closed ecological life support systems (CELSS) requires full control and efficient engineering to fulfill the common objectives of water and oxygen regeneration, CO2 elimination, and food production. Most proposed CELSS contain higher plants, for which a growth chamber and a control system are needed. Inside the compartment, the development of the higher plants must be understood and modeled in order to design and control the compartment as a function of operating variables. Plant behavior must be analyzed at several sub-process scales: (i) architecture and morphology describe the plant shape and yield the morphological parameters (leaf area, stem length, number of meristems, etc.) characteristic of the life-cycle stages; (ii) the physiology and metabolism of the different organs make it possible to assess plant composition from the plant's input and output rates (oxygen, carbon dioxide, water, and nutrients); (iii) finally, the physical processes are light interception, gas exchange, sap conduction, and root uptake, which control the energy available from photosynthesis and the input and output rates. These three sub-processes are modeled as a system of equations using environmental and plant parameters such as light intensity, temperature, pressure, humidity, CO2 and oxygen partial pressures, nutrient solution composition, total leaf surface and leaf area index, chlorophyll content, stomatal conductance, water potential, organ biomass distribution and composition, etc. The most challenging issue is to develop a comprehensive and operative mathematical model that assembles these different sub-processes in a unique framework. In order to assess the parameters for testing a model, a polyvalent growth chamber is necessary; it should provide a controlled environment in which to test and understand the physiological response and determine the control strategy. The final aim of this model is environmental control of plant behavior, which requires extended knowledge of the plant response to environmental variations. This requires a large number of experiments, which would be easier to perform in a high-throughput system.

  4. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    EPA Science Inventory

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  5. META II Complex Systems Design and Analysis (CODA)

    DTIC Science & Technology

    2011-08-01

    [Record text is table-of-contents residue; recoverable items: section 3.8.7 "Variables, Parameters and Constraints"; section 3.8.8 "Objective"; Figure 7 "Inputs, States, Outputs and Parameters of System Requirements Specifications"; a design rule based on device parameters; Figure 35 "AEE Device Design Rules (excerpt)".]

  6. Critical research issues in development of biomathematical models of fatigue and performance.

    PubMed

    Dinges, David F

    2004-03-01

    This article reviews the scientific research needed to ensure the continued development, validation, and operational transition of biomathematical models of fatigue and performance. These models originated from the need to ascertain the formal underlying relationships among sleep and circadian dynamics in the control of alertness and neurobehavioral performance capability. Priority should be given to research that further establishes their basic validity, including the accuracy of the core mathematical formulae and parameters that instantiate the interactions of sleep/wake and circadian processes. Since individuals can differ markedly and reliably in their responses to sleep loss and to countermeasures for it, models must incorporate estimates of these inter-individual differences, and research should identify predictors of them. To ensure models accurately predict recovery of function with sleep of varying durations, dose-response curves for recovery of performance as a function of prior sleep homeostatic load and the number of days of recovery are needed. It is also necessary to establish whether the accuracy of models is affected by using work/rest schedules as surrogates for sleep/wake inputs to models. Given the importance of light as both a circadian entraining agent and an alerting agent, research should determine the extent to which light input could incrementally improve model predictions of performance, especially in persons exposed to night work, jet lag, and prolonged work. Models seek to estimate behavioral capability and/or the relative risk of adverse events in a fatigued state. Research is needed on how best to scale and interpret metrics of behavioral capability, and incorporate factors that amplify or diminish the relationship between model predictions of performance and risk outcomes.

  7. Program document for Energy Systems Optimization Program 2 (ESOP2). Volume 1: Engineering manual

    NASA Technical Reports Server (NTRS)

    Hamil, R. G.; Ferden, S. L.

    1977-01-01

    The Energy Systems Optimization Program, which is used to provide analyses of Modular Integrated Utility Systems (MIUS), is discussed. Modifications to the input format to allow modular inputs in specified blocks of data are described. An optimization feature which enables the program to search automatically for the minimum value of one parameter while varying the value of other parameters is reported. New program option flags for prime mover analyses and solar energy for space heating and domestic hot water are also covered.

  8. Level density inputs in nuclear reaction codes and the role of the spin cutoff parameter

    DOE PAGES

    Voinov, A. V.; Grimes, S. M.; Brune, C. R.; ...

    2014-09-03

    Here, the proton spectrum from the 57Fe(α,p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant temperature model functions obtained by fitting them to neutron resonance spacing and to discrete levels, and using a spin cutoff parameter with much weaker excitation energy dependence than predicted by the Fermi-gas model.
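    For orientation, the two level-density parameterizations compared here have standard closed forms (Gilbert-Cameron-type expressions; the authors' exact fitted forms may differ). A sketch in LaTeX, where U is the shifted excitation energy, a the level-density parameter, T the nuclear temperature, and σ the spin cutoff parameter:

```latex
% Back-shifted Fermi-gas level density (standard textbook form):
\rho_{\mathrm{FG}}(U) \;=\; \frac{\exp\!\left(2\sqrt{aU}\right)}{12\sqrt{2}\,\sigma\, a^{1/4}\, U^{5/4}}
% Constant-temperature level density:
\rho_{\mathrm{CT}}(E) \;=\; \frac{1}{T}\,\exp\!\left(\frac{E-E_{0}}{T}\right)
% Spin distribution multiplying either form; the spin cutoff parameter
% \sigma sets its width, which is why it enters the Hauser-Feshbach analysis:
f(J) \;=\; \frac{2J+1}{2\sigma^{2}}\,\exp\!\left[-\frac{(J+1/2)^{2}}{2\sigma^{2}}\right]
```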

  9. Nonlinear ARMA models for the D(st) index and their physical interpretation

    NASA Technical Reports Server (NTRS)

    Vassiliadis, D.; Klimas, A. J.; Baker, D. N.

    1996-01-01

    Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear Autoregressive Moving Average (ARMA) model to the nonlinear damped oscillator physical model. The oscillator parameters, the growth and decay, the oscillation frequencies and the coupling strength to the input are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of the phase space.
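    To make the filter-to-oscillator conversion concrete, the sketch below fits a second-order autoregressive model with an exogenous input to synthetic data and maps the coefficients to damped-oscillator parameters through a central finite-difference correspondence. It is a minimal illustration under assumed discretization conventions, not the authors' actual algorithm or data:

```python
import numpy as np

# Synthetic stand-in for a Dst-like index x driven by a solar-wind input u.
rng = np.random.default_rng(0)
dt = 1.0                      # sampling interval (hours, assumed)
n = 2000
u = rng.normal(size=n)
x = np.zeros(n)
for t in range(2, n):         # "true" AR(2)-with-input process
    x[t] = 1.80 * x[t-1] - 0.82 * x[t-2] + 0.5 * u[t-1]

# Least-squares fit of  x_t = a1*x_{t-1} + a2*x_{t-2} + b*u_{t-1}
A = np.column_stack([x[1:-1], x[:-2], u[1:-1]])
a1, a2, b = np.linalg.lstsq(A, x[2:], rcond=None)[0]

# Map AR coefficients to damped-oscillator parameters via a central
# finite-difference discretization of  x'' + 2*gamma*x' + w0^2*x = c*u:
gamma = (1.0 + a2) / ((1.0 - a2) * dt)          # decay rate
w0sq = (2.0 - a1 * (1.0 + gamma * dt)) / dt**2  # squared natural frequency
print(f"a1={a1:.3f} a2={a2:.3f} -> gamma={gamma:.4f}, w0^2={w0sq:.4f}")
```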

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Mark A.; Bigelow, Matthew; Gilkey, Jeff C.

    The Super Strypi SWIL is a six degree-of-freedom (6DOF) simulation for the Super Strypi Launch Vehicle that includes a subset of the Super Strypi NGC software (guidance, ACS and sequencer). Aerodynamic and propulsive forces, mass properties, ACS (attitude control system) parameters, guidance parameters and Monte-Carlo parameters are defined in input files. Output parameters are saved to a Matlab mat file.

  11. Land and Water Use Characteristics and Human Health Input Parameters for use in Environmental Dosimetry and Risk Assessments at the Savannah River Site. 2016 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, G. Tim; Hartman, Larry; Stagich, Brooke

    Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of applicant site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991 and 2010. They are being updated in this report. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.

  12. Land and Water Use Characteristics and Human Health Input Parameters for use in Environmental Dosimetry and Risk Assessments at the Savannah River Site 2017 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, T.; Stagich, B.

    Operations at the Savannah River Site (SRS) result in releases of relatively small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991, 2008, 2010, and 2016 and are being concurred with or updated in this report. These parameters include local characteristics of meat, milk, and vegetable production; river recreational activities; and meat, milk, and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.

  13. Sensitivity analysis of the electrostatic force distance curve using Sobol’s method and design of experiments

    NASA Astrophysics Data System (ADS)

    Alhossen, I.; Villeneuve-Faure, C.; Baudoin, F.; Bugarin, F.; Segonds, S.

    2017-01-01

    Previous studies have demonstrated that the electrostatic force distance curve (EFDC) is a relevant way of probing injected charge in 3D. However, the EFDC needs a thorough investigation to be accurately analyzed and to provide information about charge localization. Interpreting the EFDC in terms of charge distribution is not straightforward from an experimental point of view. In this paper, a sensitivity analysis of the EFDC is performed using buried electrodes as a first approximation. In particular, the influence of input factors such as the electrode width, depth and applied potential is investigated. To reach this goal, the EFDC is fitted to a law described by four parameters, called the logistic law, and the influence of the electrode parameters on the law parameters has been investigated. Then, two methods are applied—Sobol's method and the factorial design of experiments—to quantify the effect of each factor on each parameter of the logistic law. Complementary results are obtained from both methods, demonstrating that the EFDC is not the result of the superposition of the contribution of each electrode parameter, but that it exhibits a strong contribution from electrode parameter interaction. Furthermore, thanks to these results, a matricial model has been developed to predict EFDCs for any combination of electrode characteristics. A good correlation is observed with the experiments, and this is promising for charge investigation using an EFDC.
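    A minimal sketch of the Sobol part of such an analysis, assuming the open-source SALib package is available and using a stand-in response in place of a fitted logistic-law parameter (the factor names, bounds, and toy function are illustrative assumptions, not the paper's values):

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Illustrative electrode factors: width (um), depth (um), applied potential (V).
problem = {
    "num_vars": 3,
    "names": ["width", "depth", "potential"],
    "bounds": [[0.1, 5.0], [0.0, 2.0], [1.0, 10.0]],
}

def logistic_param(x):
    """Stand-in for one fitted logistic-law parameter, with an interaction term."""
    w, d, v = x
    return v / (1.0 + np.exp(-(w - 2.0))) + 0.3 * w * d

X = saltelli.sample(problem, 1024)            # Saltelli cross-sampling design
Y = np.apply_along_axis(logistic_param, 1, X)
Si = sobol.analyze(problem, Y)
print(Si["S1"])   # first-order indices
print(Si["ST"])   # total-order indices; ST >> S1 signals interactions
```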

  14. Cell death, perfusion and electrical parameters are critical in models of hepatic radiofrequency ablation

    PubMed Central

    Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.

    2015-01-01

    Abstract Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose of this is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to obtain solutions. A comprehensive literature review was performed to obtain ranges over which the model parameters are expected to vary, crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models, but the other parameters are only of second order, providing a significant simplification. PMID:26000972
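    The Morris method itself is simple enough to sketch directly: it averages one-at-a-time "elementary effects" over random points in parameter space, yielding a mean absolute effect (importance) and a standard deviation (interaction/nonlinearity) per parameter. Below is a bare-bones radial-design variant with a toy model, not the paper's RFA model or its literature-derived ranges:

```python
import numpy as np

def morris_screening(f, bounds, r=50, seed=0):
    """Crude Morris elementary-effects screening.

    f      : model, maps a 1-D parameter vector to a scalar output
    bounds : (k, 2) array of [low, high] per parameter
    r      : number of random base points
    Returns (mu_star, sigma): mean |effect| and std of effects per parameter.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    k = len(bounds)
    span = bounds[:, 1] - bounds[:, 0]
    delta = 0.1                       # step as a fraction of each range
    effects = np.empty((r, k))
    for i in range(r):
        x = bounds[:, 0] + rng.uniform(0, 1 - delta, k) * span
        fx = f(x)
        for j in range(k):            # perturb one parameter at a time
            xp = x.copy()
            xp[j] += delta * span[j]
            effects[i, j] = (f(xp) - fx) / delta
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# Toy stand-in: output depends strongly on p0, weakly on p1, not at all on p2.
f = lambda p: 10 * p[0] + np.sin(p[1]) + 0.0 * p[2]
mu_star, sigma = morris_screening(f, [[0, 1], [0, 1], [0, 1]])
print(mu_star, sigma)
```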

  15. Transformation of Galilean satellite parameters to J2000

    NASA Astrophysics Data System (ADS)

    Lieske, J. H.

    1998-09-01

    The so-called galsat software has the capability of computing Earth-equatorial coordinates of Jupiter's Galilean satellites in an arbitrary reference frame, not just that of B1950. The 50 parameters which define the theory of motion of the Galilean satellites (Lieske 1977, Astron. Astrophys. 56, 333-352) could also be transformed in a manner such that the same galsat computer program can be employed to compute rectangular coordinates with their values being in the J2000 system. One of the input parameters (ε_27) is related to the obliquity of the ecliptic and its value is normally zero in the B1950 frame. If that parameter is changed from 0 to -0.0002771, and if other input parameters are changed in a prescribed manner, then the same galsat software can be employed to produce ephemerides on the J2000 system for any of the ephemerides which employ the galsat parameters, such as those of Arlot (1982), Vasundhara (1994) and Lieske. In this paper we present the parameters whose values must be altered in order for the software to produce coordinates directly in the J2000 system.

  16. Sensitivity Analysis of Cf-252 (sf) Neutron and Gamma Observables in CGMF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Austin Lewis; Talou, Patrick; Stetcu, Ionel

    CGMF is a Monte Carlo code that simulates the decay of primary fission fragments by emission of neutrons and gamma rays, according to the Hauser-Feshbach equations. As the CGMF code was recently integrated into the MCNP6.2 transport code, great emphasis has been placed on providing optimal parameters to CGMF such that many different observables are accurately represented. Of these observables, the prompt neutron spectrum, prompt neutron multiplicity, prompt gamma spectrum, and prompt gamma multiplicity are crucial for accurate transport simulations of criticality and nonproliferation applications. This contribution to the ongoing efforts to improve CGMF presents a study of the sensitivity of various neutron and gamma observables to several input parameters for Californium-252 spontaneous fission. Among the most influential parameters are those that affect the input yield distributions in fragment mass and total kinetic energy (TKE). A new scheme for representing Y(A,TKE) was implemented in CGMF using three fission modes, S1, S2 and SL. The sensitivity profiles were calculated for 17 total parameters, which show that the neutron multiplicity distribution is strongly affected by the TKE distribution of the fragments. The total excitation energy (TXE) of the fragments is shared according to a parameter RT, which is defined as the ratio of the light to heavy initial temperatures. The sensitivity profile of the neutron multiplicity shows a second order effect of RT on the mean neutron multiplicity. A final sensitivity profile was produced for the parameter alpha, which affects the spin of the fragments. Higher values of alpha lead to higher fragment spins, which inhibit the emission of neutrons. Understanding the sensitivity of the prompt neutron and gamma observables to the many CGMF input parameters provides a platform for the optimization of these parameters.

  17. Thermomechanical conditions and stresses on the friction stir welding tool

    NASA Astrophysics Data System (ADS)

    Atthipalli, Gowtam

    Friction stir welding has been commercially used as a joining process for aluminum and other soft materials. However, the use of this process in joining of hard alloys is still developing, primarily because of the lack of cost-effective, long-lasting tools. Here I have developed numerical models to understand the thermomechanical conditions experienced by the FSW tool and to improve its reusability. A heat transfer and visco-plastic flow model is used to calculate the torque and traverse force on the tool during FSW. The computed values of torque and traverse force are validated using the experimental results for FSW of AA7075, AA2524, AA6061 and Ti-6Al-4V alloys. The computed torque components are used to determine the optimum tool shoulder diameter based on the maximum use of torque and maximum grip of the tool on the plasticized workpiece material. The estimation of the optimum tool shoulder diameter for FSW of AA6061 and AA7075 was verified with experimental results. The computed values of traverse force and torque are used to calculate the maximum shear stress on the tool pin to determine the load bearing ability of the tool pin. The load bearing ability calculations are used to explain the failure of an H13 steel tool during welding of AA7075 and of commercially pure tungsten during welding of L80 steel. Artificial neural network (ANN) models are developed to predict the important FSW output parameters as functions of selected input parameters. These ANNs consider tool shoulder radius, pin radius, pin length, welding velocity, tool rotational speed and axial pressure as input parameters. The total torque, sliding torque, sticking torque, peak temperature, traverse force, maximum shear stress and bending stress are considered as the outputs for the ANN models. These output parameters are selected since they define the thermomechanical conditions around the tool during FSW. The developed ANN models are used to understand the effect of various input parameters on the total torque and traverse force during FSW of AA7075 and 1018 mild steel. The ANN models are also used to determine the tool safety factor for a wide range of input parameters. A numerical model is developed to calculate the strain and strain rates along the streamlines during FSW. The strain and strain rate values are calculated for FSW of AA2524. Three simplified models are also developed for quick estimation of output parameters such as the material velocity field, torque and peak temperature. The material velocity fields are computed by adopting an analytical method of calculating velocities for flow of a non-compressible fluid between two discs where one is rotating and the other is stationary. The peak temperature is estimated based on a non-dimensional correlation with dimensionless heat input. The dimensionless heat input is computed using known welding parameters and material properties. The torque is computed using an analytical function based on the shear strength of the workpiece material. These simplified models are shown to be able to predict these output parameters successfully.
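    A surrogate of the kind described (tool geometry and process settings in, torque and force out) can be sketched with a small feedforward network; here scikit-learn's MLPRegressor stands in for the author's ANN, and the training data are random placeholders rather than actual model runs:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Inputs: shoulder radius, pin radius, pin length, welding velocity,
# rotational speed, axial pressure (placeholder samples, arbitrary units).
X = rng.uniform([5, 2, 2, 1, 200, 10], [15, 6, 8, 10, 1200, 80], size=(500, 6))

# Placeholder "model runs": torque and traverse force from a made-up relation.
torque = 0.02 * X[:, 0] ** 2 * X[:, 4] / (1 + X[:, 3]) + rng.normal(0, 5, 500)
force = 0.5 * X[:, 3] * X[:, 5] + rng.normal(0, 2, 500)
Y = np.column_stack([torque, force])

ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
ann.fit(X, Y)
print(ann.predict(X[:3]))   # surrogate predictions for the first three cases
```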

  18. Modeling Vegetation Growth Impact on Groundwater Recharge

    NASA Astrophysics Data System (ADS)

    Anurag, H.; Ng, G. H. C.; Tipping, R.

    2017-12-01

    Vegetation growth is affected by variability in climate and land-cover/land-use over a range of temporal and spatial scales. Vegetation also modifies the water budget through interception and evapotranspiration and thus has a significant impact on groundwater recharge. Most groundwater recharge assessments represent vegetation using specified, static parameters, such as leaf-area index, but this neglects the effect of vegetation dynamics on recharge estimates. Our study addresses this gap by including vegetation growth in model simulations of recharge. We use NCAR's Community Land Model v4.5 with its BGC (biogeochemistry) module. It integrates prognostic vegetation growth with land-surface and subsurface hydrological processes and can thus capture the effect of vegetation on groundwater. A challenge, however, is the need to resolve uncertainties in model inputs ranging from vegetation growth parameters all the way down to the water table. We have compiled diverse data spanning meteorological inputs to subsurface geology and use these to implement ensemble model simulations to evaluate the possible effects of dynamic vegetation growth (versus specified, static vegetation parameterizations) on estimating groundwater recharge. We present preliminary results for select data-intensive test locations throughout the state of Minnesota (USA), which has a sharp east-west precipitation gradient that makes it an apt testbed for examining ecohydrologic relationships across different temperate climatic settings and ecosystems. Using the ensemble simulations, we examine the effect of seasonal to interannual variability of vegetation growth on recharge and water table depths, which has implications for predicting the combined impact of climate, vegetation, and geology on groundwater resources. Future work will include distributed model simulations over the entire state, as well as conditioning uncertain vegetation and subsurface parameters on remote sensing data and statewide water table records using data assimilation.

  19. Machining process influence on the chip form and surface roughness by neuro-fuzzy technique

    NASA Astrophysics Data System (ADS)

    Anicic, Obrad; Jović, Srđan; Aksić, Danilo; Skulić, Aleksandar; Nedić, Bogdan

    2017-04-01

    The main aim of the study was to analyze the influence of six machining parameters on chip shape formation and surface roughness during turning of Steel 30CrNiMo8. Three components of cutting forces were used as inputs together with cutting speed, feed rate, and depth of cut. It is crucial for engineers to use optimal machining parameters to get the best results and to keep tight control of the machining process, so the machining parameters for the optimal procedure need to be identified. An adaptive neuro-fuzzy inference system (ANFIS) was used to estimate the influence of the inputs on chip shape formation and surface roughness. According to the results, the cutting force in the direction of the depth of cut has the highest influence on the chip form (testing error 0.2562); this cutting force determines the depth of cut. The depth of cut itself has the highest influence on the surface roughness, the testing error for the cutting force in the direction of the depth of cut being 5.2753. Generally, the depth of cut and the cutting force that provides it are the most dominant factors for chip form and surface roughness: any small change in either can drastically affect the chip form or the surface roughness of the working material.

  20. Between-User Reliability of Tier 1 Exposure Assessment Tools Used Under REACH.

    PubMed

    Lamb, Judith; Galea, Karen S; Miller, Brian G; Hesse, Susanne; Van Tongeren, Martie

    2017-10-01

    When applying simple screening (Tier 1) tools to estimate exposure to chemicals in a given exposure situation under the Registration, Evaluation, Authorisation and restriction of CHemicals Regulation 2006 (REACH), users must select from several possible input parameters. Previous studies have suggested that results from exposure assessments using expert judgement and from the use of modelling tools can vary considerably between assessors. This study aimed to investigate the between-user reliability of Tier 1 tools. A remote-completion exercise and an in-person workshop were used to identify and evaluate tool parameters and factors such as user demographics that may be potentially associated with between-user variability. Participants (N = 146) generated dermal and inhalation exposure estimates (N = 4066) from specified workplace descriptions ('exposure situations') and Tier 1 tool combinations (N = 20). Interactions between users, tools, and situations were investigated and described. Systematic variation associated with individual users was minor compared with random between-user variation. Although variation was observed between choices made for the majority of input parameters, differing choices of Process Category ('PROC') code/activity descriptor and dustiness level had the greatest impact on the resultant exposure estimates. Exposure estimates ranging over several orders of magnitude were generated for the same exposure situation by different tool users. Such unpredictable between-user variation will reduce consistency within REACH processes and could result in under-estimation or overestimation of exposure, risking worker ill health or the implementation of unnecessary risk controls, respectively. Implementation of additional support and quality control systems for all tool users is needed to reduce between-assessor variation and so ensure both the protection of worker health and avoidance of unnecessary business risk management expenditure. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  1. Assessing the welfare of laboratory mice in their home environment using animal-based measures--a benchmarking tool.

    PubMed

    Spangenberg, Elin M F; Keeling, Linda J

    2016-02-01

    Welfare problems in laboratory mice can be a consequence of an ongoing experiment, or a characteristic of a particular genetic line, but in some cases, such as breeding animals, they are most likely to be a result of the design and management of the home cage. Assessment of the home cage environment is commonly performed using resource-based measures, like access to nesting material. However, animal-based measures (related to the health status and behaviour of the animals) can be used to assess the current welfare of animals regardless of the inputs applied (i.e. the resources or management). The aim of this study was to design a protocol for assessing the welfare of laboratory mice using only animal-based measures. The protocol, to be used as a benchmarking tool, assesses mouse welfare in the home cage and does not contain parameters related to experimental situations. It is based on parameters corresponding to the 12 welfare criteria established by the Welfare Quality® project. Selection of animal-based measures was performed by scanning existing published, web-based and informal protocols, and by choosing parameters that matched these criteria, were feasible in practice and, if possible, were already validated indicators of mouse welfare. The parameters should identify possible animal welfare problems and enable assessment directly in an animal room during cage cleaning procedures, without the need for extra equipment. Thermal comfort behaviours and positive emotional states are areas where more research is needed to find valid, reliable and feasible animal-based measures. © The Author(s) 2015.

  2. Computer program for analysis of high speed, single row, angular contact, spherical roller bearing, SASHBEAN. Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Aggarwal, Arun K.

    1993-01-01

    The computer program SASHBEAN (Sikorsky Aircraft Spherical Roller High Speed Bearing Analysis) analyzes and predicts the operating characteristics of a Single Row, Angular Contact, Spherical Roller Bearing (SRACSRB). The program runs on an IBM or IBM compatible personal computer, and for a given set of input data analyzes the bearing design for its ring deflections (axial and radial), roller deflections, contact areas and stresses, induced axial thrust, rolling element and cage rotation speeds, lubrication parameters, fatigue lives, and amount of heat generated in the bearing. The dynamic loading of rollers due to centrifugal forces and gyroscopic moments, which becomes quite significant at high speeds, is fully considered in this analysis. For a known application and its parameters, the program is also capable of performing steady-state and time-transient thermal analyses of the bearing system. The steady-state analysis capability allows the user to estimate the expected steady-state temperature map in and around the bearing under normal operating conditions. On the other hand, the transient analysis feature provides the user a means to simulate the 'lost lubricant' condition and predict a time-temperature history of various critical points in the system. The bearing's 'time-to-failure' estimate may also be made from this (transient) analysis by considering the bearing as failed when a certain temperature limit is reached in the bearing components. The program is fully interactive and allows the user to get started and access most of its features with minimal training. For the most part, the program is menu driven, and adequate help messages are provided to guide a new user through various menu options and data input screens. All input data, both for mechanical and thermal analyses, are read through graphical input screens, thereby eliminating any need of a separate text editor/word processor to edit/create data files. Provision is also available to select and view the contents of output files on the monitor screen if no paper printouts are required. A separate volume (Volume 2) of this documentation describes, in detail, the underlying mathematical formulations, assumptions, and solution algorithms of this program.

  3. XML-Based Generator of C++ Code for Integration With GUIs

    NASA Technical Reports Server (NTRS)

    Hua, Hook; Oyafuso, Fabiano; Klimeck, Gerhard

    2003-01-01

    An open source computer program has been developed to satisfy a need for simplified organization of structured input data for scientific simulation programs. Typically, such input data are parsed in from a flat American Standard Code for Information Interchange (ASCII) text file into computational data structures. Also typically, when a graphical user interface (GUI) is used, there is a need to completely duplicate the input information while providing it to a user in a more structured form. Heretofore, the duplication of the input information has entailed duplication of software efforts and increases in susceptibility to software errors because of the concomitant need to maintain two independent input-handling mechanisms. The present program implements a method in which the input data for a simulation program are completely specified in an Extensible Markup Language (XML)-based text file. The key benefit of XML is storing input data in a structured manner. More importantly, XML allows not just storing of data but also describing what each of the data items is. That XML file contains information useful for rendering the data by other applications. The program then generates data structures in the C++ language that are to be used in the simulation program. In this method, all input data are specified in one place only, and it is easy to integrate the data structures into both the simulation program and the GUI. XML-to-C is useful in two ways: 1. As an executable, it generates the corresponding C++ classes and 2. As a library, it automatically fills the objects with the input data values.

  4. Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features

    NASA Astrophysics Data System (ADS)

    Bouboulis, Pantelis; Chouvardas, Symeon; Theodoridis, Sergios

    2018-04-01

    We present a novel diffusion scheme for online kernel-based learning over networks. So far, a major drawback of any online learning algorithm operating in a reproducing kernel Hilbert space (RKHS) has been the need to update a growing number of parameters as time iterations evolve. Besides complexity, this leads to an increased need for communication resources in a distributed setting. In contrast, the proposed method approximates the solution as a fixed-size vector (of larger dimension than the input space) using Random Fourier Features. This paves the way to use standard linear combine-then-adapt techniques. To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented. Conditions for asymptotic convergence and boundedness of the networkwise regret are also provided. The simulated tests illustrate the performance of the proposed scheme.
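    The fixed-size approximation at the heart of the scheme is easy to sketch: random Fourier features map inputs into a D-dimensional space in which a Gaussian-kernel learner becomes a plain linear model with a fixed parameter vector. The single-node sketch below (no diffusion/network step) uses an assumed kernel bandwidth and feature count:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 2, 200            # input dimension, number of random features
sigma = 1.0              # assumed Gaussian-kernel bandwidth

# z(x) = sqrt(2/D) * cos(W x + b) approximates k(x,y) = exp(-||x-y||^2/(2 sigma^2))
W = rng.normal(0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)
z = lambda X: np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Online linear LMS in feature space: a fixed-size parameter vector theta,
# in contrast to a kernel expansion that grows with every new sample.
theta = np.zeros(D)
mu = 0.1                                  # LMS step size
for t in range(5000):
    x = rng.uniform(-3, 3, size=d)
    y = np.sin(x[0]) * np.cos(x[1])       # unknown target function
    zt = z(x[None, :])[0]
    theta += mu * (y - theta @ zt) * zt   # the local "adapt" step

x_test = np.array([[0.5, -1.0]])
print(z(x_test) @ theta, np.sin(0.5) * np.cos(-1.0))
```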

  5. Biofloc technology application in aquaculture to support sustainable development goals.

    PubMed

    Bossier, Peter; Ekasari, Julie

    2017-09-01

    Biofloc technology (BFT) application offers benefits in improving aquaculture production that could contribute to the achievement of sustainable development goals. This technology could result in higher productivity with less impact on the environment. Furthermore, biofloc systems may be developed and performed in integration with other food production, thus promoting productive integrated systems, aiming at producing more food and feed from the same area of land with fewer inputs. The biofloc technology is still in its infancy. Much more research is needed to optimise the system's operational parameters, e.g. in relation to nutrient recycling, MAMP production, and immunological effects. In addition, research findings will need to be communicated to farmers, as the implementation of biofloc technology will require upgrading their skills. © 2017 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  6. Force-directed visualization for conceptual data models

    NASA Astrophysics Data System (ADS)

    Battigaglia, Andrew; Sutter, Noah

    2017-03-01

    Conceptual data models are increasingly stored in an eXtensible Markup Language (XML) format because of its portability between different systems and the ability of databases to use this format for storing data. However, when attempting to capture business or design needs, an organized graphical format is preferred in order to facilitate communication and receive as much input as possible from users and subject-matter experts. Existing methods of achieving this conversion are not specific enough to capture all of the needs of conceptual data modeling and cannot handle a large number of relationships between entities. This paper describes an implementation for a modeling solution to clearly illustrate conceptual data models stored in XML formats in well organized and structured diagrams. A force layout with several different parameters is applied to the diagram to create both compact and easily traversable relationships between entities.

  7. Simulated lumped-parameter system reduced-order adaptive control studies

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.

    1981-01-01

    Two methods of interpreting the misbehavior of reduced order adaptive controllers are discussed. The first method is based on system input-output description and the second is based on state variable description. The implementation of the single input, single output, autoregressive, moving average system is considered.

  8. Processing Oscillatory Signals by Incoherent Feedforward Loops

    PubMed Central

    Zhang, Carolyn; You, Lingchong

    2016-01-01

    From the timing of amoeba development to the maintenance of stem cell pluripotency, many biological signaling pathways exhibit the ability to differentiate between pulsatile and sustained signals in the regulation of downstream gene expression. While the networks underlying this signal decoding are diverse, many are built around a common motif, the incoherent feedforward loop (IFFL), where an input simultaneously activates an output and an inhibitor of the output. With appropriate parameters, this motif can exhibit temporal adaptation, where the system is desensitized to a sustained input. This property serves as the foundation for distinguishing input signals with varying temporal profiles. Here, we use quantitative modeling to examine another property of IFFLs—the ability to process oscillatory signals. Our results indicate that the system’s ability to translate pulsatile dynamics is limited by two constraints. The kinetics of the IFFL components dictate the input range for which the network is able to decode pulsatile dynamics. In addition, a match between the network parameters and input signal characteristics is required for optimal “counting”. We elucidate one potential mechanism by which information processing occurs in natural networks, and our work has implications in the design of synthetic gene circuits for this purpose. PMID:27623175
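    A minimal IFFL can be written as two ODEs in which the input u activates both the output y and its inhibitor x; with a slow inhibitor, the output pulses and then adapts to a sustained input, while a pulse train repeatedly re-triggers it. The kinetics below are illustrative assumptions, not the paper's model:

```python
import numpy as np
from scipy.integrate import solve_ivp

def iffl(t, state, u):
    x, y = state
    ut = u(t)
    dx = 0.5 * ut - 0.5 * x                       # slow inhibitor, input-activated
    dy = 5.0 * ut / (1.0 + 10.0 * x) - 5.0 * y    # output: activated by u, repressed by x
    return [dx, dy]

sustained = lambda t: 1.0
pulsed = lambda t: 1.0 if (t % 10.0) < 1.0 else 0.0   # 1 unit on, 9 units off

for name, u in [("sustained", sustained), ("pulsed", pulsed)]:
    sol = solve_ivp(iffl, (0, 50), [0.0, 0.0], args=(u,), max_step=0.05)
    print(name, "peak y =", round(sol.y[1].max(), 3),
          "final y =", round(sol.y[1, -1], 3))
# Under the sustained input, y peaks and then adapts back toward a low level;
# the pulse train re-triggers the peak each cycle (the "counting" regime).
```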

  9. Ecophysiological parameters for Pacific Northwest trees.

    Treesearch

    Amy E. Hessl; Cristina Milesi; Michael A. White; David L. Peterson; Robert E. Keane

    2004-01-01

    We developed a species- and location-specific database of published ecophysiological variables typically used as input parameters for biogeochemical models of coniferous and deciduous forested ecosystems in the Western United States. Parameters are based on the requirements of Biome-BGC, a widely used biogeochemical model that was originally parameterized for the...

  10. EVALUATING THE SENSITIVITY OF A SUBSURFACE MULTICOMPONENT REACTIVE TRANSPORT MODEL WITH RESPECT TO TRANSPORT AND REACTION PARAMETERS

    EPA Science Inventory

    The input variables for a numerical model of reactive solute transport in groundwater include both transport parameters, such as hydraulic conductivity and infiltration, and reaction parameters that describe the important chemical and biological processes in the system. These pa...

  11. Comparison of the Diagnostic Accuracy of DSC- and Dynamic Contrast-Enhanced MRI in the Preoperative Grading of Astrocytomas.

    PubMed

    Nguyen, T B; Cron, G O; Perdrizet, K; Bezzina, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Sinclair, J; Thornhill, R E; Foottit, C; Zanette, B; Cameron, I G

    2015-11-01

    Dynamic contrast-enhanced MR imaging parameters can be biased by poor measurement of the vascular input function. We have compared the diagnostic accuracy of dynamic contrast-enhanced MR imaging by using a phase-derived vascular input function and "bookend" T1 measurements with DSC MR imaging for preoperative grading of astrocytomas. This prospective study included 48 patients with a new pathologic diagnosis of an astrocytoma. Preoperative MR imaging was performed at 3T, which included 2 injections of 5-mL gadobutrol for dynamic contrast-enhanced and DSC MR imaging. During dynamic contrast-enhanced MR imaging, both magnitude and phase images were acquired to estimate plasma volume obtained from phase-derived vascular input function (Vp_Φ) and volume transfer constant obtained from phase-derived vascular input function (K(trans)_Φ), as well as plasma volume obtained from magnitude-derived vascular input function (Vp_SI) and volume transfer constant obtained from magnitude-derived vascular input function (K(trans)_SI). From DSC MR imaging, corrected relative CBV was computed. Four ROIs were placed over the solid part of the tumor, and the highest value among the ROIs was recorded. A Mann-Whitney U test was used to test for differences between grades. Diagnostic accuracy was assessed by using receiver operating characteristic analysis. Vp_Φ and K(trans)_Φ values were lower for grade II compared with grade III astrocytomas (P < .05). Vp_SI and K(trans)_SI were not significantly different between grade II and grade III astrocytomas (P = .08-.15). Relative CBV and dynamic contrast-enhanced MR imaging parameters except for K(trans)_SI were lower for grade III compared with grade IV (P ≤ .05). In differentiating low- and high-grade astrocytomas, we found no statistically significant difference in diagnostic accuracy between relative CBV and dynamic contrast-enhanced MR imaging parameters. In the preoperative grading of astrocytomas, the diagnostic accuracy of dynamic contrast-enhanced MR imaging parameters is similar to that of relative CBV. © 2015 by American Journal of Neuroradiology.
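    The statistical machinery described (Mann-Whitney U tests between grades, receiver operating characteristic analysis for diagnostic accuracy) is straightforward to reproduce on placeholder numbers; the values below are synthetic, not the study's data:

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic per-patient maxima of a perfusion parameter (e.g. Vp) by grade.
low_grade = rng.lognormal(mean=0.0, sigma=0.4, size=20)    # grade II
high_grade = rng.lognormal(mean=0.7, sigma=0.4, size=28)   # grades III-IV

U, p = mannwhitneyu(low_grade, high_grade, alternative="two-sided")
print(f"Mann-Whitney U={U:.1f}, p={p:.4f}")

# ROC analysis: label 1 = high grade; the parameter value itself is the score.
labels = np.r_[np.zeros(20), np.ones(28)]
scores = np.r_[low_grade, high_grade]
print("AUC =", round(roc_auc_score(labels, scores), 3))
```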

  12. A tuning algorithm for model predictive controllers based on genetic algorithms and fuzzy decision making.

    PubMed

    van der Lee, J H; Svrcek, W Y; Young, B R

    2008-01-01

    Model Predictive Control is a valuable tool for the process control engineer in a wide variety of applications, and because of this the structure of an MPC can vary dramatically from application to application. There have been a number of works dedicated to MPC tuning for specific cases. Since MPCs can differ significantly, these tuning methods become inapplicable and a trial-and-error tuning approach must be used, which can be quite time consuming and can result in non-optimum tuning. In an attempt to resolve this, a generalized automated tuning algorithm for MPCs was developed. This approach is numerically based and combines a genetic algorithm with multi-objective fuzzy decision-making. The key advantages to this approach are that genetic algorithms are not problem specific and only need to be adapted to account for the number and ranges of tuning parameters for a given MPC. As well, multi-objective fuzzy decision-making can handle qualitative statements of what optimum control is, in addition to being able to use multiple inputs to determine tuning parameters that best match the desired results. This is particularly useful for multi-input, multi-output (MIMO) cases where the definition of "optimum" control is subject to the opinion of the control engineer tuning the system. A case study will be presented in order to illustrate the use of the tuning algorithm. This will include how different definitions of "optimum" control can arise, and how they are accounted for in the multi-objective decision making algorithm. The resulting tuning parameters from each of the definition sets will be compared, and in doing so show that the tuning parameters vary in order to meet each definition of optimum control, thus showing the generalized automated tuning algorithm approach for tuning MPCs is feasible.
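    The core loop of such a tuner can be sketched generically: a genetic algorithm proposes tuning-parameter vectors, each candidate is scored by simulating the closed loop, and a scalar fitness aggregates several objectives. The weighted-minimum aggregation below is a crude stand-in for the paper's fuzzy decision-making, and the cost function is a placeholder rather than a real MPC simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def closed_loop_scores(params):
    """Placeholder: would simulate the MPC loop and return per-objective
    scores in [0, 1] (e.g. settling time, overshoot, move suppression)."""
    p, m, w = params
    return np.array([np.exp(-abs(p - 20) / 10),   # prediction horizon near a sweet spot
                     np.exp(-abs(m - 3)),         # short control horizon preferred
                     1.0 / (1.0 + w)])            # mild move suppression preferred

def fitness(params):
    # Weighted minimum: the worst weighted objective dominates, a crude
    # surrogate for fuzzy multi-objective decision making.
    weights = np.array([1.0, 0.8, 0.5])
    return np.min(weights * closed_loop_scores(params))

lo, hi = np.array([5, 1, 0.0]), np.array([60, 10, 5.0])
pop = rng.uniform(lo, hi, size=(40, 3))                 # initial population
for gen in range(100):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[::-1][:20]]           # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(0, 20, 2)]
        child = np.where(rng.random(3) < 0.5, a, b)     # uniform crossover
        child += rng.normal(0, 0.05, 3) * (hi - lo)     # Gaussian mutation
        kids.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, kids])
best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best (prediction horizon, control horizon, move weight):", best.round(2))
```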

  13. Estimating the volume of supra-glacial melt lakes across Greenland: A study of uncertainties derived from multi-platform water-reflectance models

    NASA Astrophysics Data System (ADS)

    Cordero-Llana, L.; Selmes, N.; Murray, T.; Scharrer, K.; Booth, A. D.

    2012-12-01

    Large volumes of water are necessary to propagate cracks to the glacial bed via hydrofractures. Hydrological models have shown that lakes above a critical volume can supply the necessary water for this process, so the ability to measure water depth in lakes remotely is important for studying these processes. Previously, water depth has been derived from the optical properties of water using data from high-resolution optical satellite images such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), IKONOS and LANDSAT. These studies used water-reflectance models based on the Bouguer-Lambert-Beer law and lack any estimation of model uncertainties. We propose an optimized model based on Sneed and Hamilton's (2007) approach to estimate water depths in supraglacial lakes and undertake a robust analysis of the errors for the first time. We used atmospherically-corrected data from ASTER and MODIS as input to the water-reflectance model. Three physical parameters are needed: namely bed albedo, water attenuation coefficient and reflectance of optically-deep water. These parameters were derived for each wavelength using standard calibrations. As a reference dataset, we obtained lake geometries using ICESat measurements over empty lakes. Differences between modeled and reference depths are used in a minimization model to obtain parameters for the water-reflectance model, yielding optimized lake depth estimates. Our key contribution is the development of a Monte Carlo simulation to run the water-reflectance model, which allows us to quantify the uncertainties in water depth and hence water volume. This robust statistical analysis provides better understanding of the sensitivity of the water-reflectance model to the choice of input parameters, which should contribute to the understanding of the influence of surface-derived melt-water on ice sheet dynamics. Sneed, W.A. and Hamilton, G.S., 2007: Evolution of melt pond volume on the surface of the Greenland Ice Sheet. Geophysical Research Letters, 34, 1-4.
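    The underlying Bouguer-Lambert-Beer depth retrieval, and a Monte Carlo perturbation of its three physical inputs, can be sketched as follows. The depth equation is the standard Sneed and Hamilton (2007) form; the parameter distributions are illustrative assumptions:

```python
import numpy as np

def lake_depth(R_pix, A_d, R_inf, g):
    """Depth from pixel reflectance via the Bouguer-Lambert-Beer law:
    z = [ln(A_d - R_inf) - ln(R_pix - R_inf)] / g
    A_d   : lake-bottom (bed) albedo
    R_inf : reflectance of optically deep water
    g     : spectral attenuation coefficient of the water column (1/m)
    """
    return (np.log(A_d - R_inf) - np.log(R_pix - R_inf)) / g

rng = np.random.default_rng(0)
R_pix = 0.28                               # observed band reflectance, one pixel

# Monte Carlo over the three uncertain physical inputs (assumed ranges).
A_d = rng.normal(0.55, 0.05, 10_000)       # bed albedo
R_inf = rng.normal(0.05, 0.01, 10_000)     # optically deep water reflectance
g = rng.normal(0.80, 0.10, 10_000)         # attenuation coefficient

z = lake_depth(R_pix, A_d, R_inf, g)
z = z[np.isfinite(z) & (z > 0)]            # discard unphysical draws
print(f"depth: median {np.median(z):.2f} m, "
      f"90% interval [{np.percentile(z, 5):.2f}, {np.percentile(z, 95):.2f}] m")
```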

  14. An FDTD-based computer simulation platform for shock wave propagation in electrohydraulic lithotripsy.

    PubMed

    Yılmaz, Bülent; Çiftçi, Emre

    2013-06-01

    Extracorporeal Shock Wave Lithotripsy (ESWL) is based on disintegration of the kidney stone by delivering high-energy shock waves that are created outside the body and transmitted through the skin and body tissues. Nowadays high-energy shock waves are also used in orthopedic operations and investigated for use in the treatment of myocardial infarction and cancer. Because of these new application areas, novel lithotriptor designs are needed for different kinds of treatment strategies. In this study our aim was to develop a versatile computer simulation environment which would give device designers working on various medical applications that use the shock wave principle a substantial amount of flexibility while testing the effects of new parameters such as reflector size, material properties of the medium, water temperature, and different clinical scenarios. For this purpose, we created a finite-difference time-domain (FDTD)-based computational model in which most of the physical system parameters were defined as an input and/or as a variable in the simulations. We constructed a realistic computational model of a commercial electrohydraulic lithotriptor and optimized our simulation program using the results that were obtained by the manufacturer in an experimental setup. We then compared the simulation results with the results from an experimental setup in which the oxygen level in water was varied. Finally, we studied the effects of changing the input parameters like ellipsoid size and material, temperature change in the wave propagation media, and shock wave source point misalignment. The simulation results were consistent with the experimental results and the expected effects of variation in physical parameters of the system. The results of this study encourage further investigation and provide adequate evidence that the numerical modeling of a shock wave therapy system is feasible and can provide a practical means to test novel ideas in new device design procedures. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
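    The FDTD core of such a platform is a leapfrog update of coupled pressure and particle-velocity equations on a staggered grid. A one-dimensional, lossless sketch with assumed water-like parameters (not the authors' lithotriptor geometry) looks like this:

```python
import numpy as np

# 1-D linear acoustics: dp/dt = -rho*c^2 * dv/dx,  dv/dt = -(1/rho) * dp/dx
c = 1500.0                 # speed of sound in water (m/s)
rho = 1000.0               # density (kg/m^3)
dx = 1e-4                  # grid step (m)
dt = 0.5 * dx / c          # time step satisfying the CFL condition
nx, nt = 2000, 3000

p = np.zeros(nx)           # pressure on integer grid points
v = np.zeros(nx - 1)       # velocity on staggered half points

for n in range(nt):
    # Gaussian-pulse pressure source at the left boundary (spark stand-in).
    p[0] = 1e6 * np.exp(-((n * dt - 2e-6) / 5e-7) ** 2)
    v += -(dt / (rho * dx)) * (p[1:] - p[:-1])               # velocity update
    p[1:-1] += -(rho * c**2 * dt / dx) * (v[1:] - v[:-1])    # pressure update

print("peak pressure on grid: %.3e Pa at x = %.4f m" % (p.max(), p.argmax() * dx))
```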

  15. TAIR- TRANSONIC AIRFOIL ANALYSIS COMPUTER CODE

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.

    1994-01-01

    The Transonic Airfoil analysis computer code, TAIR, was developed to employ a fast, fully implicit algorithm to solve the conservative full-potential equation for the steady transonic flow field about an arbitrary airfoil immersed in a subsonic free stream. The full-potential formulation is considered exact under the assumptions of irrotational, isentropic, and inviscid flow. These assumptions are valid for a wide range of practical transonic flows typical of modern aircraft cruise conditions. The primary features of TAIR include: a new fully implicit iteration scheme which is typically many times faster than classical successive line overrelaxation algorithms; a new, reliable artificial density spatial differencing scheme treating the conservative form of the full-potential equation; and a numerical mapping procedure capable of generating curvilinear, body-fitted finite-difference grids about arbitrary airfoil geometries. Three aspects emphasized during the development of the TAIR code were reliability, simplicity, and speed. The reliability of TAIR comes from two sources: the new algorithm employed and the implementation of effective convergence monitoring logic. TAIR achieves ease of use by employing a "default mode" that greatly simplifies code operation, especially by inexperienced users, and many useful options including: several airfoil-geometry input options, flexible user controls over program output, and a multiple solution capability. The speed of the TAIR code is attributed to the new algorithm and the manner in which it has been implemented. Input to the TAIR program consists of airfoil coordinates, aerodynamic and flow-field convergence parameters, and geometric and grid convergence parameters. The airfoil coordinates for many airfoil shapes can be generated in TAIR from just a few input parameters. Most of the other input parameters have default values which allow the user to run an analysis in the default mode by specifying only a few input parameters. Output from TAIR may include aerodynamic coefficients, the airfoil surface solution, convergence histories, and printer plots of Mach number and density contour maps. The TAIR program is written in FORTRAN IV for batch execution and has been implemented on a CDC 7600 computer with a central memory requirement of approximately 155K (octal) of 60 bit words. The TAIR program was developed in 1981.

  16. Uncertainty quantification of Antarctic contribution to sea-level rise using the fast Elementary Thermomechanical Ice Sheet (f.ETISh) model

    NASA Astrophysics Data System (ADS)

    Bulthuis, Kevin; Arnst, Maarten; Pattyn, Frank; Favier, Lionel

    2017-04-01

    Uncertainties in sea-level rise projections are mostly due to uncertainties in Antarctic ice-sheet predictions (IPCC AR5 report, 2013), because key parameters related to the current state of the Antarctic ice sheet (e.g. sub-ice-shelf melting) and future climate forcing are poorly constrained. Here, we propose to improve the predictions of Antarctic ice-sheet behaviour using new uncertainty quantification methods. As opposed to ensemble modelling (Bindschadler et al., 2013) which provides a rather limited view on input and output dispersion, new stochastic methods (Le Maître and Knio, 2010) can provide deeper insight into the impact of uncertainties on complex system behaviour. Such stochastic methods usually begin with deducing a probabilistic description of input parameter uncertainties from the available data. Then, the impact of these input parameter uncertainties on output quantities is assessed by estimating the probability distribution of the outputs by means of uncertainty propagation methods such as Monte Carlo methods or stochastic expansion methods. The use of such uncertainty propagation methods in glaciology may be computationally costly because of the high computational complexity of ice-sheet models. This challenge emphasises the importance of developing reliable and computationally efficient ice-sheet models such as the f.ETISh ice-sheet model (Pattyn, 2015), a new fast thermomechanical coupled ice sheet/ice shelf model capable of handling complex and critical processes such as the marine ice-sheet instability mechanism. Here, we apply these methods to investigate the role of uncertainties in sub-ice-shelf melting, calving rates and climate projections in assessing Antarctic contribution to sea-level rise for the next centuries using the f.ETISh model. We detail the methods and show results that provide nominal values and uncertainty bounds for future sea-level rise as a reflection of the impact of the input parameter uncertainties under consideration, as well as a ranking of the input parameter uncertainties in the order of the significance of their contribution to uncertainty in future sea-level rise. In addition, we discuss how limitations posed by the available information (poorly constrained data) pose challenges that motivate our current research.
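    The propagation step described here can be sketched generically: draw the uncertain inputs from assumed distributions, evaluate the model for each draw, and summarize the output distribution together with a simple importance ranking. Every number below is a placeholder, not f.ETISh output:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
N = 20_000

# Assumed input-uncertainty distributions (illustrative only):
melt = rng.lognormal(np.log(10.0), 0.5, N)    # sub-ice-shelf melt (m/yr)
calving = rng.uniform(0.5, 2.0, N)            # calving-rate multiplier
forcing = rng.normal(3.0, 1.0, N)             # climate forcing (deg C)

def toy_slr(melt, calving, forcing):
    """Placeholder response surface standing in for the ice-sheet model."""
    return 0.02 * melt + 0.15 * calving + 0.10 * np.maximum(forcing, 0) ** 1.5

slr = toy_slr(melt, calving, forcing)
print(f"sea-level rise: mean {slr.mean():.2f} m, "
      f"5-95% [{np.percentile(slr, 5):.2f}, {np.percentile(slr, 95):.2f}] m")

# Crude importance ranking via rank correlation of each input with the output.
for name, xin in [("melt", melt), ("calving", calving), ("forcing", forcing)]:
    print(name, round(spearmanr(xin, slr)[0], 3))
```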

  17. Evaluation of Spectral and Prosodic Features of Speech Affected by Orthodontic Appliances Using the Gmm Classifier

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela

    2014-01-01

    The paper describes our experiment with using Gaussian mixture models (GMM) for classification of speech uttered by a person wearing orthodontic appliances. For the GMM classification, the input feature vectors comprise the basic and the complementary spectral properties as well as the supra-segmental parameters. Dependence of classification correctness on the number of parameters in the input feature vector and on the computational complexity is also evaluated. In addition, the influence of the initial parameter settings for the GMM training process was analyzed. The obtained recognition results are compared visually in the form of graphs as well as numerically in the form of tables and confusion matrices for the tested sentences uttered using three configurations of orthodontic appliances.
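    GMM classification of this kind typically fits one mixture per class to the training feature vectors and assigns a test utterance to the class with the higher summed log-likelihood. A minimal sketch with random placeholder features (not the paper's spectral/prosodic features), using scikit-learn:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder feature vectors for two classes (e.g. with/without appliance).
X0 = rng.normal(0.0, 1.0, size=(300, 12))
X1 = rng.normal(0.6, 1.2, size=(300, 12))

# One GMM per class; the number of mixtures is a tunable initialization choice.
gmm0 = GaussianMixture(n_components=8, covariance_type="diag",
                       random_state=0).fit(X0)
gmm1 = GaussianMixture(n_components=8, covariance_type="diag",
                       random_state=0).fit(X1)

def classify(frames):
    """Sum per-frame log-likelihoods over the utterance, pick the larger."""
    return int(gmm1.score_samples(frames).sum() >
               gmm0.score_samples(frames).sum())

test = rng.normal(0.6, 1.2, size=(50, 12))   # frames drawn like class 1
print("predicted class:", classify(test))
```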

  18. DC servomechanism parameter identification: a Closed Loop Input Error approach.

    PubMed

    Garrido, Ruben; Miranda, Roger

    2012-01-01

    This paper presents a Closed Loop Input Error (CLIE) approach for on-line parametric estimation of a continuous-time model of a DC servomechanism functioning in closed loop. A standard Proportional Derivative (PD) position controller stabilizes the loop without requiring knowledge of the servomechanism parameters. The analysis of the identification algorithm takes into account the control law employed for closing the loop. The model contains four parameters that depend on the servo inertia, viscous and Coulomb friction, as well as on a constant disturbance. Lyapunov stability theory permits assessing boundedness of the signals associated with the identification algorithm. Experiments on a laboratory prototype allow evaluation of the performance of the approach. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
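    The paper's CLIE scheme is an on-line, Lyapunov-analyzed algorithm; as a hedged illustration of the same four-parameter identification problem, the sketch below simulates a PD-stabilized servo and recovers the parameters by batch least squares on the closed-loop data (a simplification, not the CLIE update law itself):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1e-3, 5000

# Simulate a servo  J*dv/dt = k*u - f*v - Fc*sign(v) - d  under a PD loop.
J, k, f, Fc, d = 1e-3, 0.05, 2e-3, 8e-3, 2e-3
kp, kd = 5.0, 0.2
q = v = 0.0
qd = lambda t: 0.5 * np.sin(2 * np.pi * 1.0 * t)     # position reference
U, V, A = [], [], []
for i in range(n):
    t = i * dt
    u = kp * (qd(t) - q) - kd * v                    # PD control, model-free
    a = (k * u - f * v - Fc * np.sign(v) - d) / J    # acceleration
    U.append(u); V.append(v); A.append(a)
    v += a * dt
    q += v * dt

# Batch least squares on  dv/dt = (k/J)u - (f/J)v - (Fc/J)sign(v) - d/J
U, V, A = map(np.array, (U, V, A))
Phi = np.column_stack([U, -V, -np.sign(V), -np.ones(n)])
theta = np.linalg.lstsq(Phi, A, rcond=None)[0]
print("k/J, f/J, Fc/J, d/J estimated:", theta.round(3))
print("true values:                  ",
      np.array([k / J, f / J, Fc / J, d / J]).round(3))
```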

  19. Quantifying the importance of spatial resolution and other factors through global sensitivity analysis of a flood inundation model

    NASA Astrophysics Data System (ADS)

    Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2016-11-01

    Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resources on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and the location and time of when and where this output is most relevant.

  20. Impact of Rainfall Data Source on Hydrologic and Water Quality Model Response of a Coastal Plain Watershed

    USDA-ARS?s Scientific Manuscript database

    Hydrologic and water quality models are very sensitive to input parameter values, especially precipitation input data. With several different sources of precipitation data now available, it is quite difficult to determine which source is most appropriate under various circumstances. We used several ...
