Sample records for reference input parameter

  1. System and method for motor parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.

  2. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    NASA Astrophysics Data System (ADS)

    Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.

    2009-12-01

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. 
LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.
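The Lorentzian representation used in the GAMMA segment for GDR photo-absorption can be sketched directly. The parameter values below are illustrative for a medium-mass nucleus, not taken from the RIPL-3 tables:

```python
def gdr_lorentzian(e, e0, g0, s0):
    """Standard Lorentzian fit to a giant dipole resonance photo-absorption
    cross section: sigma(E) = s0 * (E*g0)^2 / ((E^2 - e0^2)^2 + (E*g0)^2).
    e, e0, g0 in MeV; s0 (peak cross section) in mb."""
    return s0 * (e * g0) ** 2 / ((e ** 2 - e0 ** 2) ** 2 + (e * g0) ** 2)

# Illustrative (non-tabulated) GDR parameters:
E0, G0, S0 = 16.5, 5.0, 220.0   # peak energy (MeV), width (MeV), peak cross section (mb)
peak = gdr_lorentzian(E0, E0, G0, S0)   # the fit peaks at E = E0 with value S0
```

The functional form peaks at E = E0 and falls off on both sides, which is why a (peak energy, width, peak cross section) triple suffices to tabulate each resonance.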

  3. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capote, R.; Herman, M.; Oblozinsky, P.

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through (http://www-nds.iaea.org/RIPL-3/). This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. 
LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.

  4. RIPL-Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capote, R.; Herman, M.

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. 
LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.

  5. Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces

    NASA Astrophysics Data System (ADS)

    Rinker, Jennifer M.

    2016-09-01

    This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of error between the model and the training data and in terms of the convergence. The Sobol SIs are calculated using the calibrated response surface, and the convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variance caused by the Kaimal length scale and nonstationarity parameter is negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity.
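The variance-decomposition step can be sketched with a pick-freeze Monte Carlo estimator evaluated on an inexpensive surrogate. The quadratic surface below is a hand-written stand-in for the paper's calibrated response surface, and the independent uniform inputs are an assumption:

```python
import random

def sobol_first_order(f, dim, n=20000, seed=1):
    """Saltelli-style pick-freeze estimate of first-order Sobol indices
    for a function of `dim` independent U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA, fB = [f(x) for x in A], [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    out = []
    for i in range(dim):
        # Sample B with column i taken from A: shares only input i with sample A
        fABi = [f(B[k][:i] + [A[k][i]] + B[k][i + 1:]) for k in range(n)]
        out.append(sum(fA[k] * (fABi[k] - fB[k]) for k in range(n)) / (n * var))
    return out

# Stand-in "response surface": load statistic vs. (mean wind speed, turbulence intensity)
surface = lambda x: 1.0 + 0.8 * x[0] + 2.0 * x[1] + 0.3 * x[0] * x[0]
s_u, s_ti = sobol_first_order(surface, 2)
```

Because the surrogate is cheap to evaluate, the tens of thousands of model calls the estimator needs cost almost nothing, which is the point of calibrating a response surface first.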

  6. Decision & Management Tools for DNAPL Sites: Optimization of Chlorinated Solvent Source and Plume Remediation Considering Uncertainty

    DTIC Science & Technology

    2010-09-01

    differentiated between source codes and input/output files. The text makes references to a REMChlor-GoldSim model. The text also refers to the REMChlor...To the extent possible, the instructions should be accurate and precise. The documentation should differentiate between describing what is actually...Windows XP operating system Model Input Parameters. The input parameters were identical to those utilized and reported by CDM (See Table 1 from

  7. The Absolute Stability Analysis in Fuzzy Control Systems with Parametric Uncertainties and Reference Inputs

    NASA Astrophysics Data System (ADS)

    Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei

    This study analyzes the absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. Stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibriums of error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. In our work, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant with the parametric robust Popov criterion unlike previous works. Moreover, a fuzzy current controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is specified with various equilibrium points of view in the simulation example. Finally, the comparisons are also given to show the effectiveness of the analysis method.

  8. VizieR Online Data Catalog: Planetary atmosphere radiative transport code (Garcia Munoz+ 2015)

    NASA Astrophysics Data System (ADS)

    Garcia Munoz, A.; Mills, F. P.

    2014-08-01

    Files are: * readme.txt * Input files: INPUT_hazeL.txt, INPUT_L13.txt, INPUT_L60.txt; they contain explanations of the input parameters. Copy INPUT_XXXX.txt into INPUT.dat to execute some of the examples described in the reference. * Files with scattering matrix properties: phF_hazeL.txt, phF_L13.txt, phF_L60.txt * Script for compilation in GFortran (myscript) (10 data files).

  9. Parametric analysis of parameters for electrical-load forecasting using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael

    1997-04-01

    Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.

  10. Sensitivity of grass and alfalfa reference evapotranspiration to weather station sensor accuracy

    USDA-ARS's Scientific Manuscript database

    A sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1991 to 2008 from an autom...
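A finite-difference version of such a sensitivity check can be sketched against the ASCE-EWRI standardized reference ET equation (daily grass reference, Cn = 900, Cd = 0.34). The weather values below are illustrative, not taken from the study's station record:

```python
import math

def asce_ref_et(t_c, rh_pct, u2, rn, g=0.0, elev_m=0.0, cn=900.0, cd=0.34):
    """ASCE-EWRI standardized reference ET (mm/day), daily grass surface.
    t_c: mean air temp (C); rh_pct: relative humidity (%); u2: 2-m wind (m/s);
    rn, g: net radiation and soil heat flux (MJ m-2 day-1)."""
    p = 101.3 * ((293.0 - 0.0065 * elev_m) / 293.0) ** 5.26   # barometric pressure, kPa
    gamma = 0.000665 * p                                      # psychrometric constant
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))       # saturation vapor pressure
    ea = es * rh_pct / 100.0                                  # actual vapor pressure
    delta = 4098.0 * es / (t_c + 237.3) ** 2                  # slope of sat. vp curve
    num = 0.408 * delta * (rn - g) + gamma * cn / (t_c + 273.0) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + cd * u2))

def relative_sensitivity(param, base, frac=0.05):
    """Relative change in ET per relative change in one input (forward difference)."""
    hi = dict(base)
    hi[param] *= 1.0 + frac
    return (asce_ref_et(**hi) - asce_ref_et(**base)) / (frac * asce_ref_et(**base))

base = dict(t_c=25.0, rh_pct=40.0, u2=3.0, rn=20.0)   # illustrative semi-arid day
```

Perturbing one input at a time while holding the others at their measured values is the same one-at-a-time scheme a station-sensor accuracy analysis uses, with sensor error bounds in place of the 5% perturbation assumed here.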

  11. Spatial variability in sensitivity of reference crop ET to accuracy of climate data in the Texas High Plains

    USDA-ARS's Scientific Manuscript database

    A detailed sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1995 to 2008, fro...

  12. PVWatts Version 1 Technical Reference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2013-10-01

    The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.
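The core calculation chain can be sketched in a few lines. This is a simplified stand-in for NREL's sub-models (the real calculator also handles irradiance transposition, cell-temperature modeling, and inverter part-load efficiency); the temperature coefficient and flat inverter efficiency below are typical assumed values, not PVWatts' documented defaults:

```python
def pv_ac_power(pdc0_w, poa_wm2, cell_temp_c, gamma_pdc=-0.005, eta_inv=0.96):
    """PVWatts-style power estimate: scale nameplate DC power by plane-of-array
    irradiance, correct for cell temperature, then apply an inverter efficiency.
    pdc0_w: DC nameplate (W); poa_wm2: POA irradiance (W/m^2)."""
    pdc = pdc0_w * (poa_wm2 / 1000.0) * (1.0 + gamma_pdc * (cell_temp_c - 25.0))
    return eta_inv * pdc

# A 4 kW system at standard test conditions (1000 W/m^2, 25 C cell temperature):
p_stc = pv_ac_power(4000.0, 1000.0, 25.0)   # 0.96 * 4000 = 3840 W
```

Hidden parameters of exactly this kind (the temperature coefficient, the assumed system losses) are what the technical reference documents.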

  13. Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT

    NASA Astrophysics Data System (ADS)

    Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang

    2015-03-01

    In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, the effects of which on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ± 0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (< 2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer by 4 cm than the true value, dose errors of up to 160% were found. The results answer an important question: how accurately each input parameter must be determined in order to obtain accurate organ dose results.
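The perturbation workflow (re-run the simulation with an error purposely added to one input, compare against the reference run) can be sketched generically. The toy dose model below is purely illustrative; it stands in for the Monte Carlo transport simulation, and its coefficients are invented:

```python
def dose_error_pct(model, ref_inputs, param, error):
    """Relative error (%) in the model output when `param` deviates from its
    reference value by `error`, all other inputs held at reference."""
    ref = model(**ref_inputs)
    perturbed = dict(ref_inputs)
    perturbed[param] += error
    return 100.0 * (model(**perturbed) - ref) / ref

# Toy stand-in: organ dose proportional to scan length, weakly dependent on HVL.
toy_dose = lambda hvl_mm_al, scan_len_cm: 0.02 * scan_len_cm * (1.0 + 0.05 * hvl_mm_al)
ref = dict(hvl_mm_al=7.0, scan_len_cm=30.0)
err_len = dose_error_pct(toy_dose, ref, "scan_len_cm", 3.0)
err_hvl = dose_error_pct(toy_dose, ref, "hvl_mm_al", 0.5)
```

Comparing the output errors produced by realistic bounds on each input is exactly how the study ranks which parameters (scan length, anatomical centering) dominate the dose uncertainty.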

  14. Estimating A Reference Standard Segmentation With Spatially Varying Performance Parameters: Local MAP STAPLE

    PubMed Central

    Commowick, Olivier; Akhondi-Asl, Alireza; Warfield, Simon K.

    2012-01-01

    We present a new algorithm, called local MAP STAPLE, to estimate from a set of multi-label segmentations both a reference standard segmentation and spatially varying performance parameters. It is based on a sliding window technique to estimate the segmentation and the segmentation performance parameters for each input segmentation. In order to allow for optimal fusion from the small amount of data in each local region, and to account for the possibility of labels not being observed in a local region of some (or all) input segmentations, we introduce prior probabilities for the local performance parameters through a new Maximum A Posteriori formulation of STAPLE. Further, we propose an expression to compute confidence intervals in the estimated local performance parameters. We carried out several experiments with local MAP STAPLE to characterize its performance and value for local segmentation evaluation. First, with simulated segmentations with known reference standard segmentation and spatially varying performance, we show that local MAP STAPLE performs better than both STAPLE and majority voting. Then we present evaluations with data sets from clinical applications. These experiments demonstrate that spatial adaptivity in segmentation performance is an important property to capture. We compared the local MAP STAPLE segmentations to STAPLE, and to previously published fusion techniques and demonstrate the superiority of local MAP STAPLE over other state-of-the-art algorithms. PMID:22562727

  15. User's Guide for the Agricultural Non-Point Source (AGNPS) Pollution Model Data Generator

    USGS Publications Warehouse

    Finn, Michael P.; Scheidt, Douglas J.; Jaromack, Gregory M.

    2003-01-01

    BACKGROUND Throughout this user guide, we refer to datasets that we used in conjunction with the development of this software for supporting cartographic research and producing the datasets to conduct research. However, this software can be used with these datasets or with more 'generic' versions of data of the appropriate type. For example, throughout the guide, we refer to national land cover data (NLCD) and digital elevation model (DEM) data from the U.S. Geological Survey (USGS) at a 30-m resolution, but any digital terrain model or land cover data at any appropriate resolution will produce results. Another key point to keep in mind is to use a consistent data resolution for all the datasets per model run. The U.S. Department of Agriculture (USDA) developed the Agricultural Nonpoint Source (AGNPS) pollution model of watershed hydrology in response to the complex problem of managing nonpoint sources of pollution. AGNPS simulates the behavior of runoff, sediment, and nutrient transport from watersheds that have agriculture as their prime use. The model operates on a cell basis and is a distributed parameter, event-based model. The model requires 22 input parameters. Output parameters are grouped primarily by hydrology, sediment, and chemical output (Young and others, 1995). Elevation, land cover, and soil are the base data from which to extract the 22 input parameters required by the AGNPS. For automatic parameter extraction, follow the general process described in this guide of extraction from the geospatial data through the AGNPS Data Generator to generate the input parameters required by the pollution model (Finn and others, 2002).

  16. Estimating historical atmospheric mercury concentrations from silver mining and their legacies in present-day surface soil in Potosí, Bolivia

    NASA Astrophysics Data System (ADS)

    Hagan, Nicole; Robins, Nicholas; Hsu-Kim, Heileen; Halabi, Susan; Morris, Mark; Woodall, George; Zhang, Tong; Bacon, Allan; Richter, Daniel De B.; Vandenberg, John

    2011-12-01

    Detailed Spanish records of mercury use and silver production during the colonial period in Potosí, Bolivia were evaluated to estimate atmospheric emissions of mercury from silver smelting. Mercury was used in the silver production process in Potosí and nearly 32,000 metric tons of mercury were released to the environment. AERMOD was used in combination with the estimated emissions to approximate historical air concentrations of mercury from colonial mining operations during 1715, a year of relatively low silver production. Source characteristics were selected from archival documents, colonial maps and images of silver smelters in Potosí and a base case of input parameters was selected. Input parameters were varied to understand the sensitivity of the model to each parameter. Modeled maximum 1-h concentrations were most sensitive to stack height and diameter, whereas an index of community exposure was relatively insensitive to uncertainty in input parameters. Modeled 1-h and long-term concentrations were compared to inhalation reference values for elemental mercury vapor. Estimated 1-h maximum concentrations within 500 m of the silver smelters consistently exceeded present-day occupational inhalation reference values. Additionally, the entire community was estimated to have been exposed to levels of mercury vapor that exceed present-day acute inhalation reference values for the general public. Estimated long-term maximum concentrations of mercury were predicted to substantially exceed the EPA Reference Concentration for areas within 600 m of the silver smelters. A concentration gradient predicted by AERMOD was used to select soil sampling locations along transects in Potosí. Total mercury in soils ranged from 0.105 to 155 mg kg-1, among the highest levels reported for surface soils in the scientific literature. 
The correlation between estimated air concentrations and measured soil concentrations will guide future research to determine the extent to which the current community of Potosí and vicinity is at risk of adverse health effects from historical mercury contamination.

  17. Image classification at low light levels

    NASA Astrophysics Data System (ADS)

    Wernick, Miles N.; Morris, G. Michael

    1986-12-01

    An imaging photon-counting detector is used to achieve automatic sorting of two image classes. The classification decision is formed on the basis of the cross correlation between a photon-limited input image and a reference function stored in computer memory. Expressions for the statistical parameters of the low-light-level correlation signal are given and are verified experimentally. To obtain a correlation-based system for two-class sorting, it is necessary to construct a reference function that produces useful information for class discrimination. An expression for such a reference function is derived using maximum-likelihood decision theory. Theoretically predicted results are used to compare on the basis of performance the maximum-likelihood reference function with Fukunaga-Koontz basis vectors and average filters. For each method, good class discrimination is found to result in milliseconds from a sparse sampling of the input image.
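The decision rule (correlate the photon-limited input against each stored reference function, pick the largest response) can be sketched as follows. The tiny binary patterns and plain matched-filter references are illustrative; the paper instead derives a maximum-likelihood reference function for the photon-count statistics:

```python
def classify(photon_counts, references):
    """Class sorting by cross correlation: return the label of the stored
    reference function with the largest correlation against the input.
    photon_counts: flat list of per-pixel photon counts."""
    def corr(img, ref):
        return sum(i * r for i, r in zip(img, ref))
    return max(references, key=lambda name: corr(photon_counts, references[name]))

# Two stored reference patterns and a sparse photon-limited input whose
# few detected photons fall on class "A" pixels:
refs = {"A": [1, 0, 1, 0, 1, 0], "B": [0, 1, 0, 1, 0, 1]}
label = classify([2, 0, 1, 0, 0, 0], refs)
```

Because the correlation is a simple sum over detected photons, the decision can be formed from a very sparse sampling of the input, which is what makes millisecond low-light classification feasible.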

  18. Reference equations of motion for automatic rendezvous and capture

    NASA Technical Reports Server (NTRS)

    Henderson, David M.

    1992-01-01

    The analysis presented in this paper defines the reference coordinate frames, equations of motion, and control parameters necessary to model the relative motion and attitude of spacecraft in close proximity with another space system during the Automatic Rendezvous and Capture phase of an on-orbit operation. The relative docking port target position vector and the attitude control matrix are defined based upon an arbitrary spacecraft design. These translation and rotation control parameters could be used to drive the error signal input to the vehicle flight control system. Measurements for these control parameters would become the bases for an autopilot or feedback control system (FCS) design for a specific spacecraft.
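The abstract does not give the equations themselves; the standard linearized model for relative motion in proximity operations is the Clohessy-Wiltshire (Hill) equations, whose closed-form position solution can be sketched as below (the paper's exact formulation may differ):

```python
import math

def cw_position(x0, y0, z0, vx0, vy0, vz0, n, t):
    """Closed-form Clohessy-Wiltshire solution for relative position after
    time t, for a chaser near a target in a circular orbit with mean motion
    n (rad/s). Axes: x radial, y along-track, z cross-track."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4.0 - 3.0 * c) * x0 + (s / n) * vx0 + (2.0 / n) * (1.0 - c) * vy0
    y = (6.0 * (s - n * t)) * x0 + y0 - (2.0 / n) * (1.0 - c) * vx0 \
        + ((4.0 * s - 3.0 * n * t) / n) * vy0
    z = c * z0 + (s / n) * vz0
    return x, y, z
```

The secular along-track term 6(sin nt - nt) x0 is why an uncontrolled radial offset drifts away from the target, and it is this kind of error growth that the docking-port position vector and attitude control matrix feed back to the flight control system to null.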

  19. PVWatts Version 5 Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2014-09-01

    The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.

  20. Short- and longtime stability of therapeutic ultrasound reference sources for dosimetry and exposimetry purposes

    NASA Astrophysics Data System (ADS)

    Haller, J.; Wilkens, V.

    2017-03-01

    The objective of this work was to create highly stable therapeutic ultrasound fields with well-known exposimetry and dosimetry parameters that are reproducible and hence predictable with well-known uncertainties. Such well-known and reproducible fields would allow validation and secondary calibrations of different measuring capabilities, which is already a widely accepted strategy for diagnostic fields. For this purpose, a reference setup was established that comprises two therapeutic ultrasound sources (one High-Intensity Therapeutic Ultrasound (HITU) source and one physiotherapy-like source), standard rf electronics for signal creation, and computer-controlled feedback to stabilize the input voltage. The short- and longtime stability of the acoustic output were evaluated: for the former, measurements over typical laboratory measurement time periods (i.e. some seconds or minutes) of the input voltage stability with and without feedback control were performed; for the latter, measurements of typical acoustical exposimetry parameters were performed bimonthly over one year. The measurement results show that the short- and longtime stability of the reference setup are very good and significantly improved in comparison to a setup without any feedback control.

  1. Design of Robust Controllers for a Multiple Input-Multiple Output Control System with Uncertain Parameters Application to the Lateral and Longitudinal Modes of the KC-135 Transport Aircraft

    DTIC Science & Technology

    1984-12-01

    input/output relationship. These are obtained from the design specifications (10:681-684). Note that the first digit of the subscript of bkj refers...to the output and the second digit to the input. Thus, bkj is a function of the response requirements on the output, Yk, due to the input, rj.

  2. Simulation verification techniques study. Task report 4: Simulation module performance parameters and performance standards

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Shuttle simulation software modules in the environment, crew station, vehicle configuration and vehicle dynamics categories are discussed. For each software module covered, a description of the module functions and operational modes, its interfaces with other modules, its stored data, inputs, performance parameters and critical performance parameters is given. Reference data sources which provide standards of performance are identified for each module. Performance verification methods are also discussed briefly.

  3. Estimating the uncertainty in thermochemical calculations for oxygen-hydrogen combustors

    NASA Astrophysics Data System (ADS)

    Sims, Joseph David

    The thermochemistry program CEA2 was combined with the statistical thermodynamics program PAC99 in a Monte Carlo simulation to determine the uncertainty in several CEA2 output variables due to uncertainty in thermodynamic reference values for the reactant and combustion species. In all, six typical performance parameters were examined, along with the required intermediate calculations (five gas properties and eight stoichiometric coefficients), for three hydrogen-oxygen combustors: a main combustor, an oxidizer preburner and a fuel preburner. The three combustors were analyzed in two different modes: design mode, where, for the first time, the uncertainty in thermodynamic reference values (taken from the literature) was considered, with inputs to CEA2 specified and hence free of uncertainty; and data reduction mode, where inputs to CEA2 did have uncertainty. The inputs to CEA2 were contrived experimental measurements intended to represent a typical combustor testing facility. In design mode, uncertainties in the performance parameters were on the order of 0.1% for the main combustor, 0.05% for the oxidizer preburner and 0.01% for the fuel preburner. Thermodynamic reference values for H2O were the dominant sources of uncertainty, as was the assigned enthalpy for liquid oxygen. In data reduction mode, uncertainties in performance parameters increased significantly as a result of the uncertainties in experimental measurements compared to uncertainties in thermodynamic reference values. Main combustor and fuel preburner theoretical performance values had uncertainties of about 0.5%, while the oxidizer preburner had nearly 2%. The associated experimentally determined performance values for all three combustors had uncertainties of 3% to 4%. The dominant sources of uncertainty in this mode were the propellant flowrates. These results apply only to hydrogen-oxygen combustors and should not be generalized to every propellant combination. 
Species for a hydrogen-oxygen system are relatively simple, thereby resulting in low thermodynamic reference value uncertainties. Hydrocarbon combustors, solid rocket motors and hybrid rocket motors have combustion gases containing complex molecules that will likely have thermodynamic reference values with large uncertainties. Thus, every chemical system should be analyzed in a similar manner as that shown in this work.
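    The Monte Carlo propagation of reference-value uncertainty described above can be sketched as follows; the performance function and the assumed 1-sigma uncertainty are placeholders for illustration, not CEA2/PAC99 thermochemistry.

```python
import math
import random
import statistics

def performance(h_ref):
    """Toy stand-in for a CEA2 output that depends on one thermodynamic
    reference value (illustrative function, not real thermochemistry)."""
    return 100.0 * math.sqrt(1.0 + h_ref / 50.0)

def monte_carlo(h_nominal=5.0, h_sigma=0.05, n=20000, seed=2):
    """Sample the reference value from its assumed distribution and
    propagate each draw through the calculation."""
    rng = random.Random(seed)
    out = [performance(rng.gauss(h_nominal, h_sigma)) for _ in range(n)]
    mean = statistics.fmean(out)
    return mean, statistics.stdev(out) / mean   # relative 1-sigma uncertainty

mean, rel_unc = monte_carlo()
```

    The relative uncertainty of the output is read directly from the spread of the sampled results, in the same spirit in which the study reads 0.1%-level uncertainties from its simulations.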

  4. Cryptographic Boolean Functions with Biased Inputs

    DTIC Science & Technology

    2015-07-31

    theory of random graphs developed by Erdős and Rényi [2]. The graph properties in a random graph expressed as such Boolean functions are used by...distributed Bernoulli variates with the parameter p. Since our scope is within the area of cryptography, we initiate an analysis of cryptographic...Boolean functions with biased inputs, which we refer to as µp-Boolean functions, is a common generalization of Boolean functions which stems from the

  5. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameters for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
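    A first-order variance-based sensitivity index of the kind used here can be estimated with a pick-freeze (Sobol) scheme. The linear-quadratic toy effect and the (α, β) distributions below are illustrative assumptions, not clinical values.

```python
import random

def effect(alpha, beta, dose=2.0):
    """Toy linear-quadratic effect E = alpha*d + beta*d^2 (illustrative)."""
    return alpha * dose + beta * dose * dose

def first_order_index(draw_alpha, draw_beta, vary="alpha", n=50000, seed=3):
    """Pick-freeze estimator of the first-order Sobol index:
    S_i = Cov(Y, Y_i') / Var(Y), where Y_i' shares only input i with Y."""
    rng = random.Random(seed)
    y, y_fix = [], []
    for _ in range(n):
        a, b = draw_alpha(rng), draw_beta(rng)
        y.append(effect(a, b))
        if vary == "alpha":                 # keep alpha, resample beta
            y_fix.append(effect(a, draw_beta(rng)))
        else:                               # keep beta, resample alpha
            y_fix.append(effect(draw_alpha(rng), b))
    m = sum(y) / n
    var = sum(v * v for v in y) / n - m * m
    cov = sum(u * v for u, v in zip(y, y_fix)) / n - m * (sum(y_fix) / n)
    return cov / var

S_alpha = first_order_index(lambda r: r.gauss(0.2, 0.04),
                            lambda r: r.gauss(0.03, 0.006))
```

    For this additive toy model the α term carries most of the output variance, so S_alpha is close to 1; ranking many such indices voxel-wise is what produces 3D sensitivity maps.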

  6. Dual-input two-compartment pharmacokinetic model of dynamic contrast-enhanced magnetic resonance imaging in hepatocellular carcinoma.

    PubMed

    Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun

    2016-04-07

    To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. A well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was also applied to observe the correlations among all equivalent parameters. The tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (Fp = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in the kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively). 
All equivalent pharmacokinetic parameters, except for ve, were correlated in the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual-input extended Tofts model, ve was significantly less than that in the dual-input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model and a dual-input 2CXM) can be used in assessing the microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.

  7. Removing flicker based on sparse color correspondences in old film restoration

    NASA Astrophysics Data System (ADS)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

    Archived film is an indispensable part of the long history of human civilization, and digital restoration of damaged film is now a mainstream practice. In this paper, we propose a technique based on sparse color correspondences to remove fading flicker from old films. Our model combines multiple frames to establish a simple correction model and includes three key steps. Firstly, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Secondly, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and experimental results show that it removes fading flicker efficiently.
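    The reference-frame correction step can be illustrated in miniature. This sketch fits a per-frame gain and offset to sparse correspondences by closed-form least squares; the paper's actual model additionally uses low-rank matrix factorization to handle the missing entries.

```python
def fit_gain_offset(xs, ys):
    """Least-squares fit of y ≈ a*x + b from sparse color correspondences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# A flickered frame whose intensities are a scaled and shifted copy
# of the reference frame (illustrative values)
ref = [10.0, 40.0, 80.0, 120.0, 200.0]
flickered = [0.7 * v + 12.0 for v in ref]

a, b = fit_gain_offset(flickered, ref)
corrected = [a * v + b for v in flickered]
```

    Applying the fitted gain and offset maps the flickered intensities back onto the reference frame, which is the role played by the reference-frame parameters in the full method.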

  8. Transformation of Galilean satellite parameters to J2000

    NASA Astrophysics Data System (ADS)

    Lieske, J. H.

    1998-09-01

    The so-called galsat software has the capability of computing Earth-equatorial coordinates of Jupiter's Galilean satellites in an arbitrary reference frame, not just that of B1950. The 50 parameters which define the theory of motion of the Galilean satellites (Lieske 1977, Astron. Astrophys. 56, 333-352) can also be transformed in a manner such that the same galsat computer program can be employed to compute rectangular coordinates with their values being in the J2000 system. One of the input parameters (varepsilon_{27}) is related to the obliquity of the ecliptic, and its value is normally zero in the B1950 frame. If that parameter is changed from 0 to -0.0002771, and if other input parameters are changed in a prescribed manner, then the same galsat software can be employed to produce ephemerides on the J2000 system for any of the ephemerides which employ the galsat parameters, such as those of Arlot (1982), Vasundhara (1994) and Lieske. In this paper we present the parameters whose values must be altered in order for the software to produce coordinates directly in the J2000 system.

  9. A strategy to complete databases with parameters of refined line shapes and its test for CO in He, Ar and Kr

    NASA Astrophysics Data System (ADS)

    Ngo, N. H.; Hartmann, J.-M.

    2017-12-01

    We propose a strategy to generate parameters of the Hartmann-Tran profile (HTp) by simultaneously using first principle calculations and broadening coefficients deduced from Voigt/Lorentz fits of experimental spectra. We start from reference absorptions simulated, at pressures between 10 and 950 Torr, using the HTp with parameters recently obtained from high quality experiments for the P(1) and P(17) lines of the 3-0 band of CO in He, Ar and Kr. Using requantized Classical Molecular Dynamics Simulations (rCMDS), we calculate spectra under the same conditions. We then correct them using a single parameter deduced from Lorentzian fits of both reference and calculated absorptions at a single pressure. The corrected rCMDS spectra are then simultaneously fitted using the HTp, yielding the parameters of this model and associated spectra. Comparisons between the retrieved and input (reference) HTp parameters show a quite satisfactory agreement. Furthermore, differences between the reference spectra and those computed with the HT model fitted to the corrected-rCMDS predictions are much smaller than those obtained with a Voigt line shape. Their full amplitudes are in most cases smaller than 1%, and often below 0.5%, of the peak absorption. This opens the route to completing spectroscopic databases using calculations and the very numerous broadening coefficients available from Voigt fits of laboratory spectra.

  10. Subcritical flutter testing and system identification

    NASA Technical Reports Server (NTRS)

    Houbolt, J. C.

    1974-01-01

    Treatment is given of system response evaluation, especially in application to subcritical flight and wind tunnel flutter testing of aircraft. An evaluation is made of various existing techniques, in conjunction with a companion survey which reports theoretical and analog experiments made to study the identification of system response characteristics. Various input excitations are considered, and new techniques for analyzing response are explored, particularly in reference to the prevalent practical case where unwanted input noise is present, such as that caused by gusts or wind tunnel turbulence. Further developments are also made of system parameter identification techniques.

  11. Empirical models for fitting of oral concentration time curves with and without an intravenous reference.

    PubMed

    Weiss, Michael

    2017-06-01

    Appropriate model selection is important in fitting oral concentration-time data due to the complex character of the absorption process. When IV reference data are available, the problem is the selection of an empirical input function (absorption model). In the present examples a weighted sum of inverse Gaussian density functions (IG) was found most useful. It is shown that alternative models (gamma and Weibull density) are only valid if the input function is log-concave. Furthermore, it is demonstrated for the first time that the sum-of-IGs model can also be applied to fit oral data directly (without IV data). In the present examples, a weighted sum of two or three IGs was sufficient. From the parameters of this function, the model-independent measures AUC and mean residence time can be calculated. It turned out that a good fit of the data in the terminal phase is essential to avoid biased parameter estimates. The time course of the fractional elimination rate and the concept of log-concavity have proved to be useful tools in model selection.
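    A weighted sum of inverse Gaussian densities is straightforward to evaluate and integrate. The weights and IG parameters below are made up for illustration; since each IG density integrates to one, the numerical AUC of the input function should equal the sum of the weights.

```python
import math

def inverse_gaussian(t, mu, lam):
    """Inverse Gaussian density for t > 0."""
    return math.sqrt(lam / (2.0 * math.pi * t ** 3)) * math.exp(
        -lam * (t - mu) ** 2 / (2.0 * mu ** 2 * t))

def input_function(t, components):
    """Weighted sum of IG densities; components = [(w, mu, lam), ...]."""
    return sum(w * inverse_gaussian(t, mu, lam) for w, mu, lam in components)

# Hypothetical absorption profile: a fast and a slow component
components = [(0.7, 0.5, 2.0), (0.3, 3.0, 6.0)]

dt = 0.001
ts = [dt * (k + 1) for k in range(40000)]            # integrate to t = 40
vals = [input_function(t, components) for t in ts]
auc = dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))  # trapezoidal rule
```

    With the weights summing to one, the numerical AUC comes out close to 1, matching the model-independent AUC read off from the fitted parameters.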

  12. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV

    NASA Astrophysics Data System (ADS)

    Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.

    2011-04-01

    When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach, as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest significant between-group differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, and thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions, were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. 
This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.

  13. BIREFRINGENT FILTER MODEL

    NASA Technical Reports Server (NTRS)

    Cross, P. L.

    1994-01-01

    Birefringent filters are often used as line-narrowing components in solid state lasers. The Birefringent Filter Model program generates a stand-alone model of a birefringent filter for use in designing and analyzing birefringent filters. It was originally developed to aid in the design of solid state lasers to be used on aircraft or spacecraft to perform remote sensing of the atmosphere. The model is general enough to allow the user to address problems such as temperature stability requirements, manufacturing tolerances, and alignment tolerances. The input parameters for the program are divided into 7 groups: 1) general parameters which refer to all elements of the filter; 2) wavelength related parameters; 3) filter, coating and orientation parameters; 4) input ray parameters; 5) output device specifications; 6) component related parameters; and 7) transmission profile parameters. The program can analyze a birefringent filter with up to 12 different components, and can calculate the transmission and summary parameters for multiple passes as well as a single pass through the filter. The Jones matrix, which is calculated from the input parameters of Groups 1 through 4, is used to calculate the transmission. Output files containing the calculated transmission or the calculated Jones matrix as a function of wavelength can be created. These output files can then be used as inputs for user-written programs, for example to plot the transmission or to calculate the eigen-transmittances and the corresponding eigen-polarizations of the Jones matrix. The Birefringent Filter Model is written in Microsoft FORTRAN 2.0. The program format is interactive. It was developed on an IBM PC XT equipped with an 8087 math coprocessor, and has a central memory requirement of approximately 154K. 
Since Microsoft FORTRAN 2.0 does not support complex arithmetic, matrix routines for addition, subtraction, and multiplication of complex, double precision variables are included. The Birefringent Filter Model was written in 1987.
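    The heart of such a model, computing transmission from a Jones matrix, can be sketched in a few lines (here in Python rather than FORTRAN). The single-plate geometry, parallel x-polarizers, and the 45° fast-axis angle are illustrative assumptions, not the program's 12-component capability.

```python
import cmath
import math

def matmul(p, q):
    """Product of two 2x2 complex matrices."""
    return [[p[0][0]*q[0][0] + p[0][1]*q[1][0], p[0][0]*q[0][1] + p[0][1]*q[1][1]],
            [p[1][0]*q[0][0] + p[1][1]*q[1][0], p[1][0]*q[0][1] + p[1][1]*q[1][1]]]

def rotation(th):
    c, s = math.cos(th), math.sin(th)
    return [[c, -s], [s, c]]

def transmission(delta, theta=math.pi / 4):
    """Intensity transmission of one birefringent plate (retardance delta,
    fast axis at angle theta) placed between parallel x-polarizers."""
    retarder = [[cmath.exp(-1j * delta / 2), 0.0],
                [0.0, cmath.exp(1j * delta / 2)]]
    plate = matmul(matmul(rotation(theta), retarder), rotation(-theta))
    return abs(plate[0][0]) ** 2     # x-polarized in, x-analyzed out

# At theta = 45 deg this reduces to the classic cos^2(delta/2) filter response
```

    Chaining several such plate matrices before taking the squared amplitude is how a multi-component filter (or multiple passes) would be handled.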

  14. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. 
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
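    The histogram technique and its spurious contribution can be demonstrated on a one-parameter toy problem. The target function, noise level, and bin counts are illustrative; the binned conditional mean plays the role of the optimal estimator, and the residual variance approximates the irreducible error.

```python
import math
import random

def irreducible_error(xs, ys, bins):
    """Histogram-based optimal estimator analysis: estimate the irreducible
    error as the mean squared deviation of y from the per-bin mean E[y|x]."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins
    sums = [0.0] * bins
    sqs = [0.0] * bins
    cnt = [0] * bins
    for x, y in zip(xs, ys):
        k = min(int((x - lo) / width), bins - 1)
        sums[k] += y
        sqs[k] += y * y
        cnt[k] += 1
    # pooled within-bin variance = E[(y - E[y|x])^2] over all samples
    err = sum(sq - s * s / c for s, sq, c in zip(sums, sqs, cnt) if c)
    return err / len(xs)

rng = random.Random(4)
xs = [rng.uniform(0.0, 1.0) for _ in range(100000)]
ys = [math.sin(6.0 * x) + rng.gauss(0.0, 0.1) for x in xs]

coarse = irreducible_error(xs, ys, bins=20)    # inflated by finite bin width
fine = irreducible_error(xs, ys, bins=100)     # close to the true 0.01
```

    The coarse binning adds a spurious contribution on top of the true irreducible error (the noise variance, 0.01 here); with several input parameters the bins empty out and this artifact grows, which is exactly the histogram limitation discussed above.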

  15. An adaptive Cartesian control scheme for manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of the auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.
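    The flavor of direct model reference adaptive control can be conveyed by the classic scalar MIT-rule example; this is a textbook sketch with illustrative constants, not the paper's Cartesian manipulator scheme.

```python
def mrac_mit(gamma=1.0, a=1.0, k=2.0, dt=0.01, steps=2000):
    """Scalar model reference adaptive control with the MIT rule:
    plant      y'  = -a*y + k*u        (gain k unknown to the controller)
    reference  ym' = -a*ym + r
    control    u = theta*r,  adaptation  theta' = -gamma*e*ym,  e = y - ym.
    """
    y = ym = theta = 0.0
    r = 1.0                                # constant reference command
    for _ in range(steps):
        u = theta * r
        e = y - ym
        y += dt * (-a * y + k * u)         # forward-Euler integration
        ym += dt * (-a * ym + r)
        theta += dt * (-gamma * e * ym)
    return y, ym, theta

y, ym, theta = mrac_mit()   # theta should adapt toward the ideal gain 1/k
```

    The adapted gain is driven directly by the tracking error, with no estimation of the plant parameters themselves, which is the defining feature of direct adaptation shared with the paper's scheme.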

  16. Observer-Based Adaptive NN Control for a Class of Uncertain Nonlinear Systems With Nonsymmetric Input Saturation.

    PubMed

    Yong-Feng Gao; Xi-Ming Sun; Changyun Wen; Wei Wang

    2017-07-01

    This paper is concerned with the problem of adaptive tracking control for a class of uncertain nonlinear systems with nonsymmetric input saturation and immeasurable states. A radial basis function neural network (NN) is employed to approximate the unknown functions, and an NN state observer is designed to estimate the immeasurable states. To analyze the effect of input saturation, an auxiliary system is employed. With the aid of the adaptive backstepping technique, an adaptive tracking control approach is developed. Under the proposed adaptive tracking controller, the boundedness of all the signals in the closed-loop system is achieved. Moreover, distinct from most of the existing references, the tracking error can be bounded by an explicit function of the design parameters and the saturation input error. Finally, an example is given to show the effectiveness of the proposed method.

  17. ARMA models for earthquake ground motions. Seismic safety margins research program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.

    1981-02-01

    Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
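    For the simplest member of this model class, an AR(1) process, the Yule-Walker estimate of the autoregressive coefficient reduces to the lag-1 sample autocorrelation. The simulation parameters below are illustrative, not fitted to any earthquake record.

```python
import random

def simulate_ar1(phi, n=20000, sigma=1.0, seed=5):
    """Simulate x_t = phi*x_{t-1} + eps_t, i.e. an ARMA(1,0) process."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, sigma))
    return x

def yule_walker_ar1(x):
    """Yule-Walker estimate for AR(1): the lag-1 sample autocorrelation."""
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

phi_hat = yule_walker_ar1(simulate_ar1(0.8))
```

    Once a model is fitted this way, driving it with fresh white noise simulates new ground-motion-like time series, which is the use proposed in the report.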

  18. Geochemical Data Package for Performance Assessment Calculations Related to the Savannah River Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplan, Daniel I.

    The Savannah River Site (SRS) disposes of low-level radioactive waste (LLW) and stabilizes high-level radioactive waste (HLW) tanks in the subsurface environment. Calculations used to establish the radiological limits of these facilities are referred to as Performance Assessments (PA), Special Analyses (SA), and Composite Analyses (CA). The objective of this document is to revise existing geochemical input values used for these calculations. This work builds on earlier compilations of geochemical data (2007, 2010), referred to as geochemical data packages. This work is being conducted as part of the on-going maintenance program of the SRS PA programs that periodically updates calculations and data packages when new information becomes available. Because application of values without full understanding of their original purpose may lead to misuse, this document also provides the geochemical conceptual model, the approach used for selecting the values, the justification for selecting data, and the assumptions made to assure that the conceptual and numerical geochemical models are reasonably conservative (i.e., bias the recommended input values to reflect conditions that will tend to predict the maximum risk to the hypothetical recipient). This document provides 1088 input parameters for geochemical parameters describing transport processes for 64 elements (>740 radioisotopes) potentially occurring within eight subsurface disposal or tank closure areas: Slit Trenches (ST), Engineered Trenches (ET), Low Activity Waste Vault (LAWV), Intermediate Level Vaults (ILV), Naval Reactor Component Disposal Areas (NRCDA), Components-in-Grout (CIG) Trenches, Saltstone Facility, and Closed Liquid Waste Tanks. The geochemical parameters described here are the distribution coefficient (Kd value), the apparent solubility concentration (ks value), and the cementitious leachate impact factor.

  19. Performance Optimizing Adaptive Control with Time-Varying Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hashemi, Kelley E.

    2017-01-01

    This paper presents a new adaptive control approach that involves a performance optimization objective. The control synthesis involves the design of a performance optimizing adaptive controller from a subset of control inputs. The resulting effect of the performance optimizing adaptive controller is to modify the initial reference model into a time-varying reference model which satisfies the performance optimization requirement obtained from an optimal control problem. The time-varying reference model modification is accomplished by the real-time solutions of the time-varying Riccati and Sylvester equations coupled with the least-squares parameter estimation of the sensitivities of the performance metric. The effectiveness of the proposed method is demonstrated by an application of maneuver load alleviation control for a flexible aircraft.

  20. Digital adaptive controllers for VTOL vehicles. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Stein, G.; Pratt, S. G.

    1979-01-01

    The VTOL approach and landing test (VALT) adaptive software is documented. Two self-adaptive algorithms, one based on an implicit model reference design and the other on an explicit parameter estimation technique were evaluated. The organization of the software, user options, and a nominal set of input data are presented along with a flow chart and program listing of each algorithm.

  1. Developments in Sensitivity Methodologies and the Validation of Reactor Physics Calculations

    DOE PAGES

    Palmiotti, Giuseppe; Salvatores, Massimo

    2012-01-01

    Sensitivity methodologies have a remarkable record of success in the reactor physics field. Sensitivity coefficients can be used for different objectives, such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluation of the representativity of an experiment with respect to a reference design configuration. A review of the methods used is provided, and several examples illustrate the success of the methodology in reactor physics. A new application, the improvement of basic nuclear parameters using integral experiments, is also described.

  2. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of the sensitivity analysis. The effect of the correlation strength among input variables on the sensitivity analysis is also assessed.

  3. Novel adaptive neural control design for a constrained flexible air-breathing hypersonic vehicle based on actuator compensation

    NASA Astrophysics Data System (ADS)

    Bu, Xiangwei; Wu, Xiaoyan; He, Guangjun; Huang, Jiaqi

    2016-03-01

    This paper investigates the design of a novel adaptive neural controller for the longitudinal dynamics of a flexible air-breathing hypersonic vehicle with control input constraints. To reduce the complexity of controller design, the vehicle dynamics is decomposed into the velocity subsystem and the altitude subsystem. For each subsystem, only one neural network is utilized to approximate the lumped unknown function. By employing a minimal-learning parameter method to estimate the norm of the ideal weight vectors rather than their elements, only two adaptive parameters are required for neural approximation, so the computational burden is lower than that of neural back-stepping schemes. To deal with the control input constraints, additional systems are exploited to compensate for the actuators. Lyapunov synthesis proves that all the closed-loop signals involved are uniformly ultimately bounded. Finally, simulation results show that the adopted compensation scheme can tackle actuator constraints effectively and, moreover, that velocity and altitude can stably track their reference trajectories even when the physical limitations on the control inputs are in effect.

  4. Recent advances in parametric neuroreceptor mapping with dynamic PET: basic concepts and graphical analyses.

    PubMed

    Seo, Seongho; Kim, Su Jin; Lee, Dong Soo; Lee, Jae Sung

    2014-10-01

    Tracer kinetic modeling in dynamic positron emission tomography (PET) has been widely used to investigate the characteristic distribution patterns or dysfunctions of neuroreceptors in brain diseases. Its practical goal has progressed from regional data quantification to parametric mapping, which produces images of kinetic-model parameters by fully exploiting the spatiotemporal information in dynamic PET data. Graphical analysis (GA) is a major parametric mapping technique that is independent of any particular compartmental model configuration, robust to noise, and computationally efficient. In this paper, we provide an overview of recent advances in the parametric mapping of neuroreceptor binding based on GA methods. The associated basic concepts in tracer kinetic modeling are presented, including commonly used compartment models and the major parameters of interest. Technical details of GA approaches for reversible and irreversible radioligands are described, considering both plasma-input and reference-tissue-input models. Their statistical properties are discussed in view of parametric imaging.
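    For reversible tracers with a plasma input, the classic Logan graphical analysis estimates the total distribution volume VT as the asymptotic slope of the transformed data. A sketch on synthetic one-tissue-compartment data (all parameter values are illustrative):

```python
import numpy as np

# synthetic one-tissue-compartment data: dCT/dt = K1*Cp - k2*CT
K1, k2, lam = 0.5, 0.2, 0.05            # true VT = K1/k2 = 2.5
t = np.linspace(0.1, 90.0, 400)          # minutes
Cp = np.exp(-lam * t)                    # mono-exponential plasma input
CT = K1 * (np.exp(-lam * t) - np.exp(-k2 * t)) / (k2 - lam)

def cumtrapz(y, x):
    """Cumulative trapezoidal integral, starting at zero."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

# Logan plot: y = int(CT)/CT vs x = int(Cp)/CT; late-time slope -> VT
x = cumtrapz(Cp, t) / CT
y = cumtrapz(CT, t) / CT
late = t > 40.0                          # linear (asymptotic) portion
slope, intercept = np.polyfit(x[late], y[late], 1)
```

    For this model the Logan relation is exact (intercept -1/k2), so the fitted slope recovers VT up to numerical integration error.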

  5. STS-9 BET products

    NASA Technical Reports Server (NTRS)

    Findlay, J. T.; Kelly, G. M.; Heck, M. L.; Mcconnell, J. G.; Henry, M. W.

    1984-01-01

    The final products generated for STS-9, which landed on December 8, 1983, are reported. The trajectory reconstruction utilized an anchor epoch of GMT corresponding to an initial altitude of h = 356 kft, selected in view of the limited tracking coverage available. The final state utilized IMU2 measurements and was based on processing radar tracking from six C-band stations and a single S-band station, plus six photo-theodolite cameras in the vicinity of Runway 17 at Edwards Air Force Base. The final atmosphere (FLAIR9/UN=581199C) was based on a composite of the remotely measured data and the 1978 Air Force Reference Atmosphere model. The Extended BET is available as STS9BET/UN=274885C. The AEROBET and MMLE input files created are discussed. Plots of the more relevant parameters from the AEROBET (reel number NL0624) are included. Input parameters, final residual plots, a trajectory listing, and data archival information are defined.

  6. Conditional parametric models for storm sewer runoff

    NASA Astrophysics Data System (ADS)

    Jonsdottir, H.; Nielsen, H. Aa; Madsen, H.; Eliasson, J.; Palsson, O. P.; Nielsen, M. K.

    2007-05-01

    The method of conditional parametric modeling is introduced for flow prediction in a sewage system. It is well known that in hydrological modeling the response (runoff) to an input (precipitation) varies depending on soil moisture and several other factors; consequently, nonlinear input-output models are needed. The model formulation described in this paper is similar to traditional linear models such as the finite impulse response (FIR) and autoregressive exogenous (ARX) models, except that the parameters vary as functions of some external variables. The parameter variation is modeled by local lines, using kernels for local linear regression; as such, the method might be referred to as a nearest-neighbor method. The results achieved in this study were compared to results from the conventional linear methods, FIR and ARX. The increase in the coefficient of determination is substantial, and the new approach conserves the mass balance better. Hence, this new approach looks promising for various hydrological models and analyses.
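    The idea of FIR coefficients that vary with an external variable can be sketched with kernel-weighted least squares (a local-constant simplification of the paper's local-line fits; names and data are illustrative):

```python
import numpy as np

def cond_fir(z0, x, y, z, lags=2, bw=0.1):
    """Estimate FIR coefficients h(z0) by least squares, with each sample
    kernel-weighted by the distance of its external variable z to z0."""
    n = len(y)
    # regressor rows [x[t], x[t-1], ...] for t = lags-1 .. n-1
    X = np.column_stack([x[lags - 1 - k: n - k] for k in range(lags)])
    yv, zv = y[lags - 1:], z[lags - 1:]
    w = np.exp(-0.5 * ((zv - z0) / bw) ** 2)        # Gaussian kernel
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * yv, rcond=None)
    return theta

# synthetic runoff-like data whose impulse response depends on z
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)            # "precipitation"
z = rng.uniform(size=5000)               # external variable, e.g. soil moisture
y = np.empty(5000)
y[0] = 0.0
y[1:] = (1 + z[1:]) * x[1:] + 0.5 * z[1:] * x[:-1]
y += 0.01 * rng.standard_normal(5000)

h = cond_fir(0.5, x, y, z)               # true h(0.5) = [1.5, 0.25]
```

    Evaluating `cond_fir` over a grid of z values traces out how the impulse response varies with the external variable, which is the essence of the conditional parametric formulation.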

  7. Direct adaptive control of manipulators in Cartesian space

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    A new adaptive control scheme for direct control of the manipulator end-effector to achieve trajectory tracking in Cartesian space is developed in this article. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of adaptive feedforward control and the inclusion of the auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for on-line implementation with high sampling rates. The control scheme is applied to a two-link manipulator for illustration.

  8. A Test Facility for the Calibration of Pressure and Acceleration Transducers by a Continuous Sweep Method.

    DTIC Science & Technology

    1976-03-01

    350Pa and 35MPa (0.05 lb/sqin and 5000 lb/sqin) and accelerometers with range maxima between 1.0g sub n and 100g sub n . Both types of transducer are...calibrated by subjecting them and an accurate reference transducer to a continuous sweep of input parameter. Graphs are drawn by an X-Y recorder of

  9. Enhancing Access to Drought Information Using the CUAHSI Hydrologic Information System

    NASA Astrophysics Data System (ADS)

    Schreuders, K. A.; Tarboton, D. G.; Horsburgh, J. S.; Sen Gupta, A.; Reeder, S.

    2011-12-01

    The National Drought Information System (NIDIS) Upper Colorado River Basin pilot study is investigating and establishing capabilities for better dissemination of drought information for early warning and management. As part of this study we are using and extending functionality from the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) to provide better access to drought-related data in the Upper Colorado River Basin. The CUAHSI HIS is a federated system for sharing hydrologic data. It is comprised of multiple data servers, referred to as HydroServers, that publish data in a standard XML format called Water Markup Language (WaterML), using web services referred to as WaterOneFlow web services. HydroServers can also publish geospatial data using Open Geospatial Consortium (OGC) web map, feature and coverage services and are capable of hosting web and map applications that combine geospatial datasets with observational data served via web services. HIS also includes a centralized metadata catalog that indexes data from registered HydroServers and a data access client referred to as HydroDesktop. For NIDIS, we have established a HydroServer to publish drought index values as well as the input data used in drought index calculations. Primary input data required for drought index calculation include streamflow, precipitation, reservoir storages, snow water equivalent, and soil moisture. We have developed procedures to redistribute the input data to the time and space scales chosen for drought index calculation, namely half monthly time intervals for HUC 10 subwatersheds. 
The spatial redistribution approaches used for each input parameter depend on the spatial linkages for that parameter; i.e., the redistribution procedure for streamflow depends on the upstream/downstream connectivity of the stream network, while the precipitation redistribution procedure depends on elevation to account for orographic effects. A set of drought indices is then calculated from the redistributed data. We have created automated data and metadata harvesters that periodically scan and harvest new data from each of the input databases and calculate extensions to the resulting derived data sets, ensuring that the data available on the drought server are kept up to date. This paper describes this system, showing how it facilitates the integration of data from multiple sources to inform the planning and management of water resources during drought. The system may be accessed at http://drought.usu.edu.

  10. Venus Global Reference Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.

    2017-01-01

    Venus Global Reference Atmospheric Model (Venus-GRAM) is an engineering-level atmospheric model developed by MSFC that is widely used for diverse mission applications, including systems design, performance analysis, and operations planning for aerobraking, entry, descent and landing, and aerocapture. It is not a forecast model. Outputs include density, temperature, pressure, wind components, and chemical composition, and the model provides dispersions of thermodynamic parameters, winds, and density. Optional trajectory and auxiliary profile input files are supported. Venus-GRAM has been used in multiple studies and proposals, including the NASA Engineering and Safety Center (NESC) Autonomous Aerobraking study and various Discovery proposals. Released in 2005, it is available at https://software.nasa.gov/software/MFS-32314-1.

  11. Research of misalignment between dithered ring laser gyro angle rate input axis and dither axis

    NASA Astrophysics Data System (ADS)

    Li, Geng; Wu, Wenqi; FAN, Zhenfang; LU, Guangfeng; Hu, Shaomin; Luo, Hui; Long, Xingwu

    2014-12-01

    The strap-down inertial navigation system (SINS), especially one composed of dithered ring laser gyroscopes (DRLGs), provides high reliability and performance for moving vehicles. However, the mechanical dither used to eliminate the lock-in effect introduces a vibration disturbance into the INS and leads to dither-coupling problems in the inertial measurement unit (IMU) gyroscope triad, which limits further application. Among the DRLG errors between the true and measured rotation rates, the one most frequently considered is the misalignment between the input reference axis, which is perpendicular to the mounting surface, and the gyro angular-rate input axis. The misalignment angle between the DRLG dither axis and the gyro angular-rate input axis, by contrast, is often ignored by researchers, even though it is amplified by dither coupling and can have negative effects, especially in high-accuracy SINS. To study the problem more clearly, the concept of misalignment between the DRLG dither axis and the gyro angular-rate input axis is investigated. Since this misalignment is of the order of 10-3 rad or smaller, the best way to measure it is with the DRLG itself, using an angle exciter as an auxiliary. In this paper, the concept of dither-axis misalignment is first explained explicitly; on this basis, the frequency of the angle exciter is introduced as a reference parameter. When the DRLG is mounted on the angle exciter at a certain angle, the projections of the angle-exciter rotation rate and of the mechanical oscillation rate onto the gyro input axis are both sensed by the DRLG. If the dither axis is misaligned with the gyro input axis, four major frequencies are detected: the angle-exciter frequency, the dither mechanical frequency, and the sum and difference of these two frequencies. The amplitude spectrum of the DRLG output signal is then obtained using a LabVIEW program. If only the angle-exciter and dither mechanical frequencies are present, the misalignment may be too small to detect; otherwise, the amplitudes of the sum and difference frequencies reveal the misalignment angle between the gyro angular-rate input axis and the dither axis. Finally, related parameters such as the frequency and amplitude of the angle exciter and the sample rate are calculated and the results analyzed. Simulation and experimental results prove the effectiveness of the proposed method.
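    The signature described, energy at the sum and difference of the exciter and dither frequencies, can be illustrated with a synthetic amplitude spectrum (frequencies and amplitudes below are made up for illustration):

```python
import numpy as np

fs, T = 2000.0, 2.0                   # sample rate (Hz), record length (s)
t = np.arange(0, T, 1 / fs)
f_exc, f_dither = 50.0, 300.0         # angle-exciter and dither frequencies
eps = 0.01                            # small coupling from axis misalignment

# gyro output: two main tones plus weak sum/difference tones
s = (np.sin(2 * np.pi * f_exc * t) + np.sin(2 * np.pi * f_dither * t)
     + eps * np.sin(2 * np.pi * (f_dither + f_exc) * t)
     + eps * np.sin(2 * np.pi * (f_dither - f_exc) * t))

# one-sided amplitude spectrum
spec = np.abs(np.fft.rfft(s)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def amp(f):
    """Amplitude at the spectral bin nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]
```

    With no misalignment (eps = 0) only the two main tones survive; a nonzero sum/difference amplitude is the indicator exploited by the method.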

  12. Optical sensor in planar configuration based on multimode interference

    NASA Astrophysics Data System (ADS)

    Blahut, Marek

    2017-08-01

    In the paper, a numerical analysis of optical sensors based on multimode interference in a planar one-dimensional step-index configuration is presented. The structure consists of single-mode input and output waveguides and a multimode waveguide that guides only a few modes. The material parameters discussed correspond to an SU8 polymer waveguide on an SiO2 substrate. The optical system described is designed for the analysis of biological substances.

  13. Effects of control inputs on the estimation of stability and control parameters of a light airplane

    NASA Technical Reports Server (NTRS)

    Cannaday, R. L.; Suit, W. T.

    1977-01-01

    The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of the parameter estimates as it relates to the inputs. These consistencies were compared by using the ensemble variance and the estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive, but the sequence of rudder input followed by aileron input, or aileron followed by rudder, gave more consistent estimates than rudder or aileron inputs alone. Also, square-wave inputs appeared to provide slightly better consistency in the parameter estimates than sine-wave inputs.
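    The Cramer-Rao comparison used in the study can be illustrated for a linear-in-parameters model, where the bound on the parameter variances is sigma^2 diag((X^T X)^-1) and richer or longer inputs tighten it (a generic sketch, not the aircraft model):

```python
import numpy as np

def crlb(X, sigma):
    """Cramer-Rao lower bound on the parameter variances for
    y = X @ theta + e, with e ~ N(0, sigma^2 I)."""
    return sigma ** 2 * np.diag(np.linalg.inv(X.T @ X))

# square-wave input and its lagged copy as a simple two-parameter design
k = np.arange(200)
u = np.where(np.sin(2 * np.pi * k / 50) >= 0, 1.0, -1.0)
X = np.column_stack([u[1:], u[:-1]])

b1 = crlb(X, sigma=0.1)                  # bound for one record
b2 = crlb(np.vstack([X, X]), sigma=0.1)  # twice the data halves the bound
```

    Comparing such bounds across candidate input forms is the standard way to rank their information content before flight.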

  14. Optimization of multilayer neural network parameters for speaker recognition

    NASA Astrophysics Data System (ADS)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e., to determine whether the voice of an unknown speaker (the wanted person) belongs to a group of reference speakers from the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network was used for speaker identification in this research. An artificial neural network (ANN) needs parameters such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by these parameter settings, and different roles require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find the parameters for the neural network with the highest precision and shortest validation time. The input data of the neural networks are Mel-frequency cepstral coefficients (MFCCs), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The training, testing and validation data sets were split 70/15/15 %. The result of the research described in this article is a set of parameter settings for the multilayer neural network for four speakers.

  15. Control and optimization system

    DOEpatents

    Xinsheng, Lou

    2013-02-12

    A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  16. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
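    The conventional 3-2-1-1 input mentioned above is a multistep signal whose step durations are 3, 2, 1 and 1 base pulse widths with alternating sign; a minimal sketch of generating it (names are illustrative):

```python
import numpy as np

def multistep(pattern=(3, 2, 1, 1), pulse=1.0, dt=0.01, amp=1.0):
    """Build an alternating-sign multistep input; pattern gives each
    step's duration in multiples of the base pulse width (s)."""
    segs = []
    sign = 1.0
    for p in pattern:
        n = int(round(p * pulse / dt))
        segs.append(np.full(n, sign * amp))
        sign = -sign
    return np.concatenate(segs)
```

    A doublet is simply `multistep(pattern=(1, 1))`; the optimal inputs in the paper replace these heuristic patterns with designs computed from the a priori model.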

  17. Calculation of Stress Intensity Factors for Interfacial Cracks in Fiber Metal Laminates

    NASA Technical Reports Server (NTRS)

    Wang, John T.

    2009-01-01

    Stress intensity factors for interfacial cracks in Fiber Metal Laminates (FML) are computed by using the displacement ratio method recently developed by Sun and Qian (1997, Int. J. Solids. Struct. 34, 2595-2609). Various FML configurations with single and multiple delaminations subjected to different loading conditions are investigated. The displacement ratio method requires the total energy release rate, bimaterial parameters, and relative crack surface displacements as input. Details of generating the energy release rates, defining bimaterial parameters with anisotropic elasticity, and selecting proper crack surface locations for obtaining relative crack surface displacements are discussed in the paper. Even though the individual energy release rates are nonconvergent, mesh-size-independent stress intensity factors can be obtained. This study also finds that the selection of reference length can affect the magnitudes and the mode mixity angles of the stress intensity factors; thus, it is important to report the reference length used with the calculated stress intensity factors.

  18. Automatic dynamic range adjustment for ultrasound B-mode imaging.

    PubMed

    Lee, Yeonhwa; Kang, Jinbum; Yoo, Yangmo

    2015-02-01

    In medical ultrasound imaging, dynamic range (DR) is defined as the difference between the maximum and minimum values of the signal to be displayed, and it is one of the most essential parameters determining image quality. Typically, DR is given a fixed value and adjusted manually by operators, which leads to low clinical productivity and high user dependency. Furthermore, in 3D ultrasound imaging, DR values cannot be adjusted during 3D data acquisition. A histogram matching method, which equalizes the histogram of an input image based on that of a reference image, can be applied to determine the DR value; however, it can lead to an over-contrasted image. In this paper, a new automatic dynamic range adjustment (ADRA) method is presented that adaptively adjusts the DR value by making input images similar to a reference image. The proposed ADRA method uses the distance ratio between the log average and each extreme value of a reference image. To evaluate the performance of the ADRA method, the similarity between the reference and input images was measured by computing a correlation coefficient (CC). In in vivo experiments, applying the ADRA method increased the CC values from 0.6872 to 0.9870 and from 0.9274 to 0.9939 for kidney and liver data, respectively, compared to the fixed-DR case. In addition, the proposed ADRA method outperformed the histogram matching method on in vivo liver and kidney data. When using 3D abdominal data with 70 frames, the CC value from the ADRA method was only slightly increased (by 0.6%), but the proposed method showed improved image quality in the c-plane compared to its fixed counterpart, which suffered from a shadow artifact. These results indicate that the proposed method can enhance image quality in 2D and 3D ultrasound B-mode imaging by improving the similarity between the reference and input images while eliminating unnecessary manual interaction by the user.
Copyright © 2014 Elsevier B.V. All rights reserved.
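    One plausible reading of the distance-ratio idea is to carry the reference image's log-domain distances between its log average and its extremes over to the input image's log average; the following is a sketch under that assumption, not the paper's exact formulation:

```python
import numpy as np

def adra_window(ref, img, eps=1e-6):
    """Place a display window around the input image's log average,
    reusing the reference image's distances from its own log average
    to its extremes (one possible reading of the ADRA idea)."""
    ref_log, img_log = np.log(ref + eps), np.log(img + eps)
    d_lo = ref_log.mean() - ref_log.min()   # distance to lower extreme
    d_hi = ref_log.max() - ref_log.mean()   # distance to upper extreme
    m = img_log.mean()
    return m - d_lo, m + d_hi               # [lo, hi] window in log domain
```

    By construction, an input identical to the reference reproduces the reference's own extremes, and any input is windowed with the reference's dynamic range centred on its own log average.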

  19. The radiometric characteristics of KOMPSAT-3A by using reference radiometric tarps and ground measurement

    NASA Astrophysics Data System (ADS)

    Yeom, Jong-Min

    2016-09-01

    In this study, we performed the vicarious radiometric calibration of the KOMPSAT-3A multispectral bands by using the 6S radiative transfer model, radiometric tarps, and MFRSR measurements. Furthermore, to prepare accurate input parameters, we measured the BRDF of the radiometric tarps with a hyperspectral gonioradiometer to compensate for the difference in observation geometry between the satellite and the ASD FieldSpec 3. We also measured the point spread function (PSF) by using a bright star and corrected the multispectral bands with a Wiener filter. For accurate atmospheric constituents such as aerosol optical depth, column water vapour, and total ozone, we used the MFRSR instrument and estimated the related optical depth of each gas. Based on these input parameters to the 6S radiative transfer model, we simulated the top-of-atmosphere (TOA) radiance observed by KOMPSAT-3A and matched it up with the digital numbers. Consequently, DN-to-radiance coefficients were determined by the aforementioned methods, with reasonable statistical results.
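    The final DN-to-radiance step reduces to a linear fit between matched digital numbers and simulated TOA radiances; a generic sketch with made-up numbers (not KOMPSAT-3A calibration values):

```python
import numpy as np

# matched pairs: DN from the sensor, TOA radiance simulated with 6S
dn = np.array([120.0, 310.0, 540.0, 760.0, 980.0])
rad = np.array([10.1, 25.9, 45.2, 63.4, 81.8])   # W m^-2 sr^-1 um^-1

# radiance = gain * DN + offset, solved by least squares
A = np.column_stack([dn, np.ones_like(dn)])
(gain, offset), *_ = np.linalg.lstsq(A, rad, rcond=None)
```

    The fitted gain and offset are the calibration coefficients; residuals of the fit give the "statistical results" used to judge the calibration quality.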

  20. Mathematical modeling of a four-stroke resonant engine for micro and mesoscale applications

    NASA Astrophysics Data System (ADS)

    Preetham, B. S.; Anderson, M.; Richards, C.

    2014-12-01

    In order to mitigate frictional and leakage losses in small-scale engines, a compliant engine design is proposed in which the piston-in-cylinder arrangement is replaced by a flexible cavity. A physics-based nonlinear lumped-parameter model is derived to predict the performance of a prototype engine. The model showed that engine performance depends on input parameters such as heat input, heat loss, and the load on the engine. A sample simulation for a reference engine with an octane fuel/air ratio of 0.043 resulted in an indicated thermal efficiency of 41.2%. For a fixed fuel/air ratio, higher output power is obtained for smaller loads and vice versa. The heat loss from the engine and the work done on the engine during the intake stroke are found to decrease the indicated thermal efficiency. The ratio of friction work to indicated work in the prototype engine is about 8%, which is smaller than in traditional reciprocating engines.

  1. Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic

    NASA Astrophysics Data System (ADS)

    Haag, T.; Herrmann, J.; Hanss, M.

    2010-10-01

    For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (a measurement or the output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out by using only feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, along with the reference output; an inversion of the system equations is not necessary. The advance presented in this paper is the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion, which is likewise based only on the output data of the transformation method. For a frequency response function of a mechanical system, this criterion allows the identification process to be restricted to the frequency range where its solution can be guaranteed. Finally, the practicability of the method is demonstrated by covering the measured output of a fluid-filled piping system with the corresponding uncertain FE model in a conservative way.
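    The transformation method referenced above propagates fuzzy-valued parameters through the model by evaluating it on corner combinations of each alpha-cut interval (shown here in its reduced form, which is exact for monotonic models; a minimal sketch):

```python
import itertools
import numpy as np

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at level alpha."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def propagate(f, tris, n_alpha=5):
    """Reduced-transformation-method propagation: evaluate f on all
    corner combinations of the parameter alpha-cut intervals and
    return (alpha, min, max) triples describing the fuzzy output."""
    out = []
    for alpha in np.linspace(0.0, 1.0, n_alpha):
        cuts = [alpha_cut(t, alpha) for t in tris]
        vals = [f(*c) for c in itertools.product(*cuts)]
        out.append((alpha, min(vals), max(vals)))
    return out
```

    Because only feedforward evaluations of f appear, the same machinery supports the paper's inverse identification: candidate fuzzy parameters are propagated forward and compared against the reference output.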

  2. Estimating the Celestial Reference Frame via Intra-Technique Combination

    NASA Astrophysics Data System (ADS)

    Iddink, Andreas; Artz, Thomas; Halsig, Sebastian; Nothnagel, Axel

    2016-12-01

    One of the primary goals of Very Long Baseline Interferometry (VLBI) is the determination of the International Celestial Reference Frame (ICRF). Currently the third realization of the internationally adopted CRF, the ICRF3, is under preparation. In this process, various optimizations are planned to realize a CRF that does not benefit only from the increased number of observations since the ICRF2 was published. The new ICRF can also benefit from an intra-technique combination as is done for the Terrestrial Reference Frame (TRF). Here, we aim at estimating an optimized CRF by means of an intra-technique combination. The solutions are based on the input to the official combined product of the International VLBI Service for Geodesy and Astrometry (IVS), also providing the radio source parameters. We discuss the differences in the setup using a different number of contributions and investigate the impact on TRF and CRF as well as on the Earth Orientation Parameters (EOPs). Here, we investigate the differences between the combined CRF and the individual CRFs from the different analysis centers.

  3. Anthropogenic activities impact on atmospheric environmental quality in a gas-flaring community: application of fuzzy logic modelling concept.

    PubMed

    Akintola, Olayiwola Akin; Sangodoyin, Abimbola Yisau; Agunbiade, Foluso Oyedotun

    2018-05-24

    We present a modelling concept for evaluating the impacts of anthropogenic activities, suspected to stem from gas flaring, on the quality of the atmosphere, using domestic roof-harvested rainwater (DRHRW) as an indicator. We analysed seven metals (Cu, Cd, Pb, Zn, Fe, Ca, and Mg) and six water quality parameters (acidity, PO₄³⁻, SO₄²⁻, NO₃⁻, Cl⁻, and pH). These were used as input parameters at 12 sampling points in a gas-flaring environment (Port Harcourt, Nigeria), with Ibadan as a reference. We formulated the results of these input parameters into membership-function fuzzy matrices based on four degrees of impact (extremely high, high, medium, and low), using regulatory limits as criteria. From the product of the membership-function matrices and weight matrices, we generated indices that classified the degree of anthropogenic impact on the sites, placing the investigated (gas-flaring) environment between medium and high impact, compared to the reference (residential) environment, which was classified between low and medium impact. The major contaminants of concern found in the harvested rainwater were Pb and Cd. There is an urgent need to stop gas-flaring activities in the Port Harcourt area in particular, and in the Niger Delta region of Nigeria in general, to minimise the health hazards that people living in the area currently face. The fuzzy methodology also indicated that the water cannot safely support potable uses and should not be consumed without purification, owing to the impact of anthropogenic activities in the area, but it may be useful for other domestic purposes.

  4. Global retrieval of soil moisture and vegetation properties using data-driven methods

    NASA Astrophysics Data System (ADS)

    Rodriguez-Fernandez, Nemesio; Richaume, Philippe; Kerr, Yann

    2017-04-01

    Data-driven methods such as neural networks (NNs) are a powerful tool to retrieve soil moisture from multi-wavelength remote sensing observations at global scale. In this presentation we will review a number of recent results regarding the retrieval of soil moisture with the Soil Moisture and Ocean Salinity (SMOS) satellite, either using SMOS brightness temperatures as input data for the retrieval or using SMOS soil moisture retrievals as the reference dataset for the training. The presentation will discuss several possibilities for both the input datasets and the datasets to be used as reference for the supervised learning phase. Regarding the input datasets, it will be shown that NNs take advantage of the synergy of SMOS data and data from other sensors such as the Advanced Scatterometer (ASCAT, active microwaves) and MODIS (visible and infrared). NNs have also been successfully used to construct long time series of soil moisture from the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) and SMOS. A NN with input data from AMSR-E observations and SMOS soil moisture as the reference for the training was used to construct a dataset sharing a similar climatology and without a significant bias with respect to SMOS soil moisture. Regarding the reference data to train the data-driven retrievals, we will show different possibilities depending on the application. Using actual in situ measurements is challenging at global scale due to the sparse distribution of sensors. In contrast, in situ measurements have been successfully used to retrieve SM at continental scale in North America, where the density of in situ measurement stations is high. Using global land surface models to train the NN constitutes an interesting alternative for implementing new remote sensing surface datasets. In addition, these datasets can be used to perform data assimilation into the model used as reference for the training.
This approach has recently been tested at the European Centre for Medium-Range Weather Forecasts (ECMWF). Finally, retrievals using radiative transfer models can also be used as a reference SM dataset for the training phase. This approach was used to retrieve soil moisture from AMSR-E, as mentioned above, and also to implement the official European Space Agency (ESA) SMOS soil moisture product in near-real time. We will finish with a discussion of the retrieval of vegetation parameters from SMOS observations using data-driven methods.

  5. Multinational Experiment 7. Outcome 3 - Cyber Domain. Objective 3.3: Concept Framework Version 3.0

    DTIC Science & Technology

    2012-10-03

    experimentation in order to give some parameters for Decision Makers’ actions. A.5 DIFFERENT LEGAL FRAMEWORKS The juridical framework to which we refer, in...material effects (e.g. psychological impact), economic et al, or, especially in the military field, it may affect Operational Security (OPSEC). 7...not expected at all to be run as a mechanistic tool that produces univocal outputs on the base of juridically qualified inputs, making unnecessary

  6. A simple method for simulating wind profiles in the boundary layer of tropical cyclones

    DOE PAGES

    Bryan, George H.; Worsnop, Rochelle P.; Lundquist, Julie K.; ...

    2016-11-01

    A method to simulate characteristics of wind speed in the boundary layer of tropical cyclones in an idealized manner is developed and evaluated. The method can be used in a single-column modelling set-up with a planetary boundary-layer parametrization, or within large-eddy simulations (LES). The key step is to include terms in the horizontal velocity equations representing advection and centrifugal acceleration in tropical cyclones that occurs on scales larger than the domain size. Compared to other recently developed methods, which require two input parameters (a reference wind speed, and radius from the centre of a tropical cyclone) this new method also requires a third input parameter: the radial gradient of reference wind speed. With the new method, simulated wind profiles are similar to composite profiles from dropsonde observations; in contrast, a classic Ekman-type method tends to overpredict inflow-layer depth and magnitude, and two recently developed methods for tropical cyclone environments tend to overpredict near-surface wind speed. When used in LES, the new technique produces vertical profiles of total turbulent stress and estimated eddy viscosity that are similar to values determined from low-level aircraft flights in tropical cyclones. Lastly, temporal spectra from LES produce an inertial subrange for frequencies ≳0.1 Hz, but only when the horizontal grid spacing ≲20 m.

  7. A Simple Method for Simulating Wind Profiles in the Boundary Layer of Tropical Cyclones

    NASA Astrophysics Data System (ADS)

    Bryan, George H.; Worsnop, Rochelle P.; Lundquist, Julie K.; Zhang, Jun A.

    2017-03-01

A method to simulate characteristics of wind speed in the boundary layer of tropical cyclones in an idealized manner is developed and evaluated. The method can be used in a single-column modelling set-up with a planetary boundary-layer parametrization, or within large-eddy simulations (LES). The key step is to include terms in the horizontal velocity equations representing advection and centrifugal acceleration in tropical cyclones that occur on scales larger than the domain size. Compared to other recently developed methods, which require two input parameters (a reference wind speed and the radius from the centre of a tropical cyclone), this new method also requires a third input parameter: the radial gradient of reference wind speed. With the new method, simulated wind profiles are similar to composite profiles from dropsonde observations; in contrast, a classic Ekman-type method tends to overpredict inflow-layer depth and magnitude, and two recently developed methods for tropical cyclone environments tend to overpredict near-surface wind speed. When used in LES, the new technique produces vertical profiles of total turbulent stress and estimated eddy viscosity that are similar to values determined from low-level aircraft flights in tropical cyclones. Temporal spectra from LES produce an inertial subrange for frequencies ≳ 0.1 Hz, but only when the horizontal grid spacing is ≲ 20 m.

  8. Robust model reference adaptive output feedback tracking for uncertain linear systems with actuator fault based on reinforced dead-zone modification.

    PubMed

    Bagherpoor, H M; Salmasi, Farzad R

    2015-07-01

In this paper, robust model reference adaptive tracking controllers are considered for Single-Input Single-Output (SISO) and Multi-Input Multi-Output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed, for SISO and MIMO systems respectively, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero. In the conventional approach, adaptation of the controller parameters ceases inside the dead-zone region, which preserves system stability but leaves a residual tracking error. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead zone, which results in full reference tracking. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed approach. Closed-loop stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can assure that all the signals of the closed-loop system are bounded under faulty conditions. Finally, the validity and performance of the new schemes are illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
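
The contrast between the conventional and the reinforced dead-zone logic can be sketched in a few lines. This is a schematic illustration of the idea summarized above, not the authors' exact adaptive laws; the gains, the regressor value, and the dead-zone width `e0` are all hypothetical.

```python
def deadzone_update(theta, gamma, e, phi, e0):
    """Conventional dead-zone rule: adaptation of the controller
    parameter theta ceases when the tracking error e lies inside the
    dead zone |e| <= e0 (stability is preserved, but a residual
    tracking error remains)."""
    if abs(e) <= e0:
        return theta
    return theta - gamma * e * phi


def reinforced_control(u_nominal, e, k, e0):
    """Reinforced modification: inside the dead zone the control
    signal is augmented with an additive error-feedback term, which
    drives the residual tracking error toward zero."""
    if abs(e) <= e0:
        return u_nominal - k * e
    return u_nominal


# Inside the dead zone (|e| <= e0): parameters frozen, control reinforced
theta_in = deadzone_update(1.0, 1.0, 0.01, 2.0, e0=0.05)   # stays 1.0
u_in = reinforced_control(0.5, 0.01, 10.0, e0=0.05)        # ≈ 0.4

# Outside the dead zone: normal gradient adaptation, no extra term
theta_out = deadzone_update(1.0, 1.0, 0.10, 2.0, e0=0.05)  # ≈ 0.8
u_out = reinforced_control(0.5, 0.10, 10.0, e0=0.05)       # unchanged 0.5
```

The point of the reinforcement is visible even at this level: freezing adaptation alone leaves whatever error fits inside the dead zone, while the added error-feedback term keeps acting on it.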

  9. A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision

    NASA Technical Reports Server (NTRS)

    Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.

    1998-01-01

We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty, assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the 46% calculated 1σ model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle.
These discrepancies unambiguously indicate model formulation problems and provide a measure of model performance which can be used in attempts to improve such models.
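
The Latin Hypercube Sampling step described above can be sketched generically: divide each input dimension into as many equal-probability strata as there are runs, draw one point per stratum, and shuffle the stratum order independently per dimension. The routine below is a minimal sketch; the three "reaction rate" inputs and their log-normal spreads are hypothetical stand-ins for the model's actual inputs, not values from the paper.

```python
import numpy as np
from statistics import NormalDist

def latin_hypercube(n_samples, n_params, seed=0):
    """Unit-hypercube LHS: one point per stratum in every dimension,
    with the stratum order permuted independently across dimensions."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(1e-12, 1.0, size=(n_samples, n_params))
    pts = (np.arange(n_samples)[:, None] + u) / n_samples
    for j in range(n_params):
        pts[:, j] = pts[rng.permutation(n_samples), j]
    return pts

# 419 input-parameter sets for three hypothetical reaction rates, each
# log-normally distributed about its nominal value (a 1-sigma spread of
# 0.2 in log10 space -- illustrative numbers only).
inv_cdf = np.vectorize(NormalDist().inv_cdf)   # uniform -> standard normal
unit = latin_hypercube(419, 3)
nominal = np.array([1.0e-12, 5.0e-11, 2.0e-10])
rates = nominal * 10.0 ** (0.2 * inv_cdf(unit))
```

Compared with plain random sampling, each of the 419 runs is guaranteed to probe a distinct probability stratum of every input, which is why LHS covers the input space efficiently with a modest ensemble size.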

  10. Effect of heat and moisture transport and storage properties of building stones on the hygrothermal performance of historical building envelopes

    NASA Astrophysics Data System (ADS)

Koňáková, Dana; Kočí, Václav; Žumár, Jaromír; Keppert, Martin; Holčapek, Ondřej; Vejmelková, Eva; Černý, Robert

    2016-12-01

The heat and moisture transport and storage parameters of three different natural stones used on the Czech territory since medieval times are determined experimentally, together with the basic physical properties and mechanical parameters. The measured data are applied as input parameters in the computational modeling of hygrothermal performance of building envelopes made of the analyzed stones. Test reference year climatic data of three different locations within the Czech Republic are used as boundary conditions on the exterior side. Using the simulated hygric and thermal performance of particular stone walls, their applicability is assessed in relation to the geographical and climatic conditions. The obtained results indicate that all three investigated stones are highly resistant to weather conditions, freeze/thaw cycles in particular.

  11. Noninvasive k3 estimation method for slow dissociation PET ligands: application to [11C]Pittsburgh compound B.

    PubMed

    Sato, Koichi; Fukushi, Kiyoshi; Shinotoh, Hitoshi; Shimada, Hitoshi; Hirano, Shigeki; Tanaka, Noriko; Suhara, Tetsuya; Irie, Toshiaki; Ito, Hiroshi

    2013-11-16

Recently, we reported an information density theory and a three-parameter analysis with a shorter scan than the conventional method (3P+) for the amyloid-binding ligand [11C]Pittsburgh compound B (PIB), as an example of a non-highly reversible positron emission tomography (PET) ligand. This article describes an extension of 3P+ analysis to noninvasive '3P++' analysis (3P+ plus the use of a reference tissue for the input function). In 3P++ analysis for [11C]PIB, the cerebellum was used as the reference tissue (negligible specific binding). Fifteen healthy subjects (NC) and fifteen Alzheimer's disease (AD) patients participated. The k3 (index of receptor density) values were estimated from 40-min PET data with a three-parameter reference tissue model and were compared with those from 40-min 3P+ analysis as well as standard 90-min four-parameter (4P) analysis with an arterial input function. Simulation studies were performed to explain the k3 biases observed in 3P++ analysis. Good model fits of the 40-min PET data were observed in both reference and target regions-of-interest (ROIs). High linear intra-subject (inter-15-ROI) correlations of k3 between the 3P++ (Y-axis) and 3P+ (X-axis) analyses were shown in one NC (r2 = 0.972 and slope = 0.845) and in one AD (r2 = 0.982, slope = 0.655), whereas inter-subject k3 correlations in a target region (left lateral temporal cortex) from 30 subjects (15 NC + 15 AD) were somewhat lower (r2 = 0.739 and slope = 0.461). Similar results were shown between the 3P++ and 4P analyses: r2 = 0.953 for intra-subject k3 in NC, r2 = 0.907 for that in AD and r2 = 0.711 for inter-30-subject k3. Simulation studies showed that the lower inter-subject k3 correlations and significant negative k3 biases were not due to instability of the 3P++ analysis but rather to inter-subject variation of both k2 (index of brain-to-blood transport) and k3 (not completely negligible) in the reference region.
In [11C]PIB, the applicability of 3P++ analysis may be restricted to intra-subject comparisons such as follow-up studies. The 3P++ method itself is thought to be robust and may be more applicable to other non-highly reversible PET ligands with an ideal reference tissue.

  12. Low-noise pulse conditioner

    DOEpatents

    Bird, David A.

    1983-01-01

A low-noise pulse conditioner is provided for driving electronic digital processing circuitry directly from differentially induced input pulses. The circuit uses a unique differential-to-peak detector circuit to generate a dynamic reference signal proportional to the input peak voltage. The input pulses are compared with the reference signal in an input network which operates in full differential mode with only a passive input filter. This reduces the introduction of circuit-induced noise, or jitter, generated in the ground-referenced input elements normally used in pulse-conditioning circuits, especially speed transducer processing circuits.

  13. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

This paper proposes a novel model-free trajectory tracking approach for multiple-input multiple-output (MIMO) systems based on the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed directly, without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems assigned to each separate single-input single-output control channel, which ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.
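
The iterative part of such a scheme, refining a reference input over repeated executions until the output tracks a desired trajectory, can be illustrated with a generic serial ILC update on a toy plant. This sketch is not the primitive-based, model-free decomposition of the paper; the first-order plant, learning gain, and horizon are hypothetical.

```python
import numpy as np

def plant(u, a=0.5):
    """Toy SISO plant y[t] = a*y[t-1] + u[t-1] (one step of delay)."""
    y = np.zeros(len(u))
    for t in range(1, len(u)):
        y[t] = a * y[t - 1] + u[t - 1]
    return y

N = 50
r = np.sin(2 * np.pi * np.arange(N) / N)   # desired output trajectory
u = np.zeros(N)                            # initial reference input
beta = 0.5                                 # learning gain

for _ in range(300):                       # repeated executions of the task
    e = r - plant(u)
    u[:-1] += beta * e[1:]                 # anticipatory ILC update

final_error = np.max(np.abs(r - plant(u)))
```

Because the correction at time t uses the error one step ahead (matching the plant's delay), the tracking error contracts from trial to trial, and after a few hundred executions the learned reference input reproduces the desired trajectory essentially exactly.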

  14. Suitability of [18F]altanserin and PET to determine 5-HT2A receptor availability in the rat brain: in vivo and in vitro validation of invasive and non-invasive kinetic models.

    PubMed

    Kroll, Tina; Elmenhorst, David; Matusch, Andreas; Wedekind, Franziska; Weisshaupt, Angela; Beer, Simone; Bauer, Andreas

    2013-08-01

While the selective 5-hydroxytryptamine type 2a receptor (5-HT2AR) radiotracer [18F]altanserin is well established in humans, the present study evaluated its suitability for quantifying cerebral 5-HT2ARs with positron emission tomography (PET) in albino rats. Ten Sprague-Dawley rats underwent 180-min PET scans with arterial blood sampling. Reference tissue methods were evaluated on the basis of invasive kinetic models with metabolite-corrected arterial input functions. In vivo 5-HT2AR quantification with PET was validated by in vitro autoradiographic saturation experiments in the same animals. Overall brain uptake of [18F]altanserin was reliably quantified by both invasive and non-invasive models with the cerebellum as reference region, as shown by the linear correlation of outcome parameters. Unlike in humans, no lipophilic metabolites occurred, so that brain activity derived solely from the parent compound. PET data correlated very well with in vitro autoradiographic data from the same animals. [18F]Altanserin PET is a reliable tool for in vivo quantification of 5-HT2AR availability in albino rats. Models based on both blood input and reference tissue describe the radiotracer kinetics adequately. Low cerebral tracer uptake might, however, cause restrictions in experimental usage.

  15. Development of Object-Based Teleoperator Control for Unstructured Applications

    DTIC Science & Technology

    1996-12-01

4-23 5.1. Module Sampling Rates of Test Set #5 in Appendix C 5-7 A.1. PUMA 560 D-H parameters ... A-2 A.2. ROBOTICA Input... June, 1996. 33. Schneider, D. L., EENG 540 Class Notes, 1994. 34. Nethery, John, Robotica: User's guide and reference manual, University of Illinois... In the case of the PUMA robot, the overall forward kinematics were first computed using the ROBOTICA software [34], then some of the joints were set to be

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebetrau, A.M.

Work is underway at Pacific Northwest Laboratory (PNL) to improve the probabilistic analysis used to model pressurized thermal shock (PTS) incidents in reactor pressure vessels, and, further, to incorporate these improvements into the existing Vessel Integrity Simulation Analysis (VISA) code. Two topics related to work on input distributions in VISA are discussed in this paper. The first involves the treatment of flaw size distributions and the second concerns errors in the parameters in the (Guthrie) equation which is used to compute ΔRT_NDT, the shift in reference temperature for nil ductility transition.

  17. Recovering stellar population parameters via two full-spectrum fitting algorithms in the absence of model uncertainties

    NASA Astrophysics Data System (ADS)

    Ge, Junqiang; Yan, Renbin; Cappellari, Michele; Mao, Shude; Li, Hongyu; Lu, Youjun

    2018-05-01

Using mock spectra based on the Vazdekis/MILES library fitted within the wavelength region 3600-7350 Å, we analyze the bias and scatter in the resulting physical parameters induced by the choice of fitting algorithm and observational uncertainties, while avoiding the effects of model uncertainties. We consider two full-spectrum fitting codes, pPXF and STARLIGHT, in fitting for stellar population age, metallicity, mass-to-light ratio, and dust extinction. With pPXF we find that both the bias μ in the population parameters and the scatter σ in the recovered logarithmic values follow the expected trend μ ∝ σ ∝ 1/(S/N). The bias increases for younger ages and systematically makes recovered ages older, M*/Lr larger and metallicities lower than the true values. For reference, at S/N=30, and for the worst case (t = 10^8 yr), the bias is 0.06 dex in M*/Lr and 0.03 dex in both age and [M/H]. There is no significant dependence on either E(B-V) or the shape of the error spectrum. Moreover, the results are consistent for both our 1-SSP and 2-SSP tests. With the STARLIGHT algorithm, we find trends similar to pPXF when the input E(B-V)<0.2 mag. However, with larger input E(B-V), the biases of the output parameters do not converge to zero even at the highest S/N and are strongly affected by the shape of the error spectra. This effect is particularly dramatic for the youngest age (t = 10^8 yr), for which all population parameters can be strongly different from the input values, with significantly underestimated dust extinction and [M/H], and larger ages and M*/Lr. Results degrade when moving from our 1-SSP to the 2-SSP tests. The STARLIGHT convergence to the true values can be improved by switching to the "slow mode", which increases the number of Markov chains and annealing loops. For the same input spectrum, pPXF is about two orders of magnitude faster than STARLIGHT's "default mode" and about three orders of magnitude faster than STARLIGHT's "slow mode".

  18. TU-H-207A-02: Relative Importance of the Various Factors Influencing the Accuracy of Monte Carlo Simulated CT Dose Index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marous, L; Muryn, J; Liptak, C

    2016-06-15

Purpose: Monte Carlo simulation is a frequently used technique for assessing patient dose in CT. The accuracy of a Monte Carlo program is often validated using the standard CT dose index (CTDI) phantoms by comparing simulated and measured CTDI100. To achieve good agreement, many input parameters in the simulation (e.g., energy spectrum and effective beam width) need to be determined. However, not all the parameters have equal importance. Our aim was to assess the relative importance of the various factors that influence the accuracy of simulated CTDI100. Methods: A Monte Carlo program previously validated for a clinical CT system was used to simulate CTDI100. For the standard CTDI phantoms (32 and 16 cm in diameter), CTDI100 values from central and four peripheral locations at 70 and 120 kVp were first simulated using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which intentional errors were introduced into the input parameters, the effects of which on simulated CTDI100 were analyzed. Results: At 38.4-mm collimation, errors in effective beam width of up to 5.0 mm showed negligible effects on simulated CTDI100 (<1.0%). Likewise, errors in acrylic density of up to 0.01 g/cm³ resulted in small CTDI100 errors (<2.5%). In contrast, errors in spectral HVL produced more significant effects: slight deviations (±0.2 mm Al) produced errors up to 4.4%, whereas more extreme deviations (±1.4 mm Al) produced errors as high as 25.9%. Lastly, ignoring the CT table introduced errors up to 13.9%. Conclusion: Monte Carlo simulated CTDI100 is insensitive to errors in effective beam width and acrylic density, but sensitive to errors in spectral HVL. To obtain accurate results, the CT table should not be ignored.
This work was supported by a Faculty Research and Development Award from Cleveland State University.

  19. MRAC Control with Prior Model Knowledge for Asymmetric Damaged Aircraft

    PubMed Central

    Zhang, Jing

    2015-01-01

This paper develops a novel state-tracking multivariable model reference adaptive control (MRAC) technique that utilizes prior knowledge of plant models to recover the control performance of an aircraft with asymmetric structural damage. A modification of the linear model representation is given. With prior knowledge of the structural damage, a polytope linear parameter-varying (LPV) model is derived to cover all damage conditions of concern. An MRAC method is developed for the polytope model, for which stability and asymptotic error convergence are theoretically proved. The proposed technique reduces the number of parameters to be adapted, thus decreasing the computational cost and requiring less input information. The method is validated by simulations on the NASA generic transport model (GTM) with damage. PMID:26180839

  20. Optimization Under Uncertainty for Electronics Cooling Design

    NASA Astrophysics Data System (ADS)

    Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.

Optimization under uncertainty is a powerful methodology used to produce robust, reliable designs. Employed when the input quantities of interest are uncertain, it produces output uncertainties, helping the designer choose input parameters that result in satisfactory thermal solutions. Apart from providing basic statistical information such as the mean and standard deviation of the output quantities, auxiliary data from an uncertainty-based optimization, such as local and global sensitivities, help the designer decide which input parameter(s) the output quantity of interest is most sensitive to. This supports the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem: finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
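
A minimal Monte Carlo version of this workflow, propagating input uncertainty through a thermal model and then ranking the inputs by sensitivity, might look as follows. The junction-temperature model and all distributions are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000

# Hypothetical uncertain inputs (lognormal keeps them positive):
P = 10.0 * np.exp(rng.normal(0.0, 0.02, n))     # heat load [W], ~2% spread
h = 50.0 * np.exp(rng.normal(0.0, 0.20, n))     # heat-transfer coeff. [W/m^2-K], ~20%
Rjc = 0.5 * np.exp(rng.normal(0.0, 0.04, n))    # junction-to-case resistance [K/W]
A = 0.01                                        # heat-sink area [m^2], fixed

# Output quantity of interest: junction temperature at 25 C ambient
Tj = 25.0 + P * (Rjc + 1.0 / (h * A))

mean, std = Tj.mean(), Tj.std()
# Crude global-sensitivity proxy: |correlation| of each input with the output
sens = {name: abs(np.corrcoef(x, Tj)[0, 1])
        for name, x in [("P", P), ("h", h), ("Rjc", Rjc)]}
```

In this illustrative setup the ~20% spread on the heat-transfer coefficient dominates the output variance, so the sensitivity ranking tells the designer to tighten that input first, exactly the kind of guidance the abstract describes.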

  1. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
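
A toy version of GA-based input selection can be sketched with a bit-mask genome, using a linear least-squares fit as a fast stand-in for the neural network inside the fitness evaluation (training a network per candidate works the same way, only slower). The data, rates, and population sizes below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "measured parameters": only inputs 0 and 1 actually drive y
X = rng.normal(size=(120, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=120)
Xtr, ytr, Xva, yva = X[:80], y[:80], X[80:], y[80:]

def fitness(mask):
    """Negative validation MSE of a least-squares fit on the selected
    inputs, with a small parsimony penalty per selected input."""
    if not mask.any():
        return -np.mean(yva ** 2)
    coef, *_ = np.linalg.lstsq(Xtr[:, mask], ytr, rcond=None)
    mse = np.mean((yva - Xva[:, mask] @ coef) ** 2)
    return -mse - 0.01 * mask.sum()

pop = rng.random((20, 10)) < 0.5
pop[0] = True                              # seed one all-inputs individual
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argmax(scores)].copy()  # elitism: keep the best as-is
    new = []
    for _ in range(len(pop) - 1):
        # tournament selection + uniform crossover + bit-flip mutation
        i, j, k, l = rng.integers(0, len(pop), 4)
        p1 = pop[i] if scores[i] > scores[j] else pop[j]
        p2 = pop[k] if scores[k] > scores[l] else pop[l]
        child = np.where(rng.random(10) < 0.5, p1, p2)
        child ^= rng.random(10) < 0.05
        new.append(child)
    pop = np.array([elite] + new)

best = pop[np.argmax([fitness(m) for m in pop])]
selected = np.flatnonzero(best)
```

Because dropping a truly informative input inflates the validation error far more than the parsimony penalty, the surviving mask retains the informative inputs while pressure from the penalty prunes the irrelevant ones.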

  2. Fracture and Viscoelastic Characteristics of the Human Cervical Spine,

    DTIC Science & Technology

    1986-01-01

to which the three hydraulic actuators are attached. Parameters ES and TS are input by the user to define the reference point T. The reference point T...

  3. Common source cascode amplifiers for integrating IR-FPA applications

    NASA Technical Reports Server (NTRS)

    Woolaway, James T.; Young, Erick T.

    1989-01-01

Space-based astronomical infrared measurements place stringent performance requirements on infrared detector arrays and their associated readout circuitry. To evaluate the usefulness of commercial CMOS technology for astronomical readout applications, a theoretical and experimental evaluation was performed on source-follower and common-source cascode integrating amplifiers. Theoretical analysis indicates that when the input amplifier integration capacitance is limited by the detector's capacitance, the input-referred rms noise electrons of each amplifier should be equivalent. For conditions of input-gate-limited capacitance, the source follower should provide lower noise. Measurements of test circuits containing both source-follower and common-source cascode circuits showed substantially lower input-referred noise for the common-source cascode input circuits. Noise measurements yielded 4.8 input-referred rms noise electrons for an 8.5-minute integration. The signal and noise gain of the common-source cascode amplifier appears to offer substantial advantages in achieving predicted noise levels.

  4. Systematic flood modelling to support flood-proof urban design

    NASA Astrophysics Data System (ADS)

    Bruwier, Martin; Mustafa, Ahmed; Aliaga, Daniel; Archambeau, Pierre; Erpicum, Sébastien; Nishida, Gen; Zhang, Xiaowei; Pirotton, Michel; Teller, Jacques; Dewals, Benjamin

    2017-04-01

Urban flood risk is influenced by many factors such as hydro-meteorological drivers, existing drainage systems, and the vulnerability of population and assets. The urban fabric itself also has a complex influence on inundation flows. In this research, we performed a systematic analysis of how various characteristics of urban patterns control inundation flow within the urban area and upstream of it. An urban generator tool was used to generate over 2,250 synthetic urban networks of 1 km². This tool is based on the procedural modelling presented by Parish and Müller (2001), which was adapted to generate a broader variety of urban networks. Nine input parameters were used to control the urban geometry. Three of them define the average length, orientation and curvature of the streets. Two orthogonal major roads, whose width constitutes the fourth input parameter, act as constraints to generate the urban network. The width of secondary streets is given by the fifth input parameter. Each parcel generated by the street network, sized according to a parcel mean area parameter, can be either a park or a building parcel depending on the park ratio parameter. Three setback parameters constrain the exact location of the building within a building parcel. For each synthetic urban network, detailed two-dimensional inundation maps were computed with a hydraulic model. The computational efficiency was enhanced by means of a porosity model. This enables the use of a coarser computational grid, while preserving information on the detailed geometry of the urban network (Sanders et al. 2008). These porosity parameters reflect not only the void fraction, which influences the storage capacity of the urban area, but also the influence of buildings on flow conveyance (dynamic effects). A sensitivity analysis was performed based on the inundation maps to highlight the respective impact of each input parameter characterizing the urban networks.
The findings of the study pinpoint which properties of urban networks have a major influence on urban inundation flow, enabling better-informed flood-proof urban design. References: Parish, Y. I. H., Müller, P. 2001. Procedural modeling of cities. SIGGRAPH, pp. 301-308. Sanders, B.F., Schubert, J.E., Gallegos, H.A., 2008. Integral formulation of shallow-water equations with anisotropic porosity for urban flood modeling. Journal of Hydrology 362, 19-38. Acknowledgements: The research was funded through the ARC grant for Concerted Research Actions, financed by the Wallonia-Brussels Federation.

  5. Estimating the volume of supra-glacial melt lakes across Greenland: A study of uncertainties derived from multi-platform water-reflectance models

    NASA Astrophysics Data System (ADS)

    Cordero-Llana, L.; Selmes, N.; Murray, T.; Scharrer, K.; Booth, A. D.

    2012-12-01

Large volumes of water are necessary to propagate cracks to the glacial bed via hydrofracture. Hydrological models have shown that lakes above a critical volume can supply the necessary water for this process, so the ability to measure lake water depth remotely is important for studying these processes. Previously, water depth has been derived from the optical properties of water using data from high-resolution optical satellites such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), IKONOS and LANDSAT. These studies used water-reflectance models based on the Bouguer-Lambert-Beer law and lack any estimation of model uncertainties. We propose an optimized model based on Sneed and Hamilton's (2007) approach to estimate water depths in supraglacial lakes and undertake a robust analysis of the errors for the first time. We used atmospherically corrected data from ASTER and MODIS as input to the water-reflectance model. Three physical parameters are needed: the bed albedo, the water attenuation coefficient and the reflectance of optically deep water. These parameters were derived for each wavelength using standard calibrations. As a reference dataset, we obtained lake geometries using ICESat measurements over empty lakes. Differences between modeled and reference depths are used in a minimization model to obtain parameters for the water-reflectance model, yielding optimized lake depth estimates. Our key contribution is the development of a Monte Carlo simulation to run the water-reflectance model, which allows us to quantify the uncertainties in water depth and hence water volume. This robust statistical analysis provides a better understanding of the sensitivity of the water-reflectance model to the choice of input parameters, which should contribute to the understanding of the influence of surface-derived meltwater on ice sheet dynamics. Sneed, W.A.
and Hamilton, G.S., 2007: Evolution of melt pond volume on the surface of the Greenland Ice Sheet. Geophysical Research Letters, 34, 1-4.
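
The Beer-law depth retrieval at the core of such models, together with the Monte Carlo perturbation of its physical parameters, can be sketched as below. This follows the general single-band form of the Sneed and Hamilton approach; the albedo, attenuation coefficient, and deep-water reflectance values are hypothetical.

```python
import numpy as np

def lake_depth(R, albedo, R_inf, g):
    """Invert R(z) = R_inf + (albedo - R_inf) * exp(-g z) for depth z:
    z = [ln(albedo - R_inf) - ln(R - R_inf)] / g."""
    return (np.log(albedo - R_inf) - np.log(R - R_inf)) / g

# Forward-model a pixel of known depth, then recover it
z_true = 3.0
R_pix = 0.05 + (0.45 - 0.05) * np.exp(-0.8 * z_true)
z_est = lake_depth(R_pix, albedo=0.45, R_inf=0.05, g=0.8)

# Monte Carlo: perturb two physical parameters to get a depth
# (and hence volume) uncertainty for the same observed reflectance
rng = np.random.default_rng(7)
n = 10000
albedo_mc = rng.normal(0.45, 0.02, n)   # uncertain bed albedo
g_mc = rng.normal(0.80, 0.05, n)        # uncertain attenuation coefficient
depths = lake_depth(R_pix, albedo_mc, R_inf=0.05, g=g_mc)
depth_sigma = depths.std()
```

Running the inversion over the sampled parameter sets turns a single depth estimate into a depth distribution, which is exactly the uncertainty quantification step the abstract describes.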

  6. Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1993-01-01

The objective of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single-pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming, with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple-input design capability, with optional inclusion of a constraint that only one control moves at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open-loop model parameters from closed-loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications of the new technique which demonstrate the quality and expanded capabilities of the input designs it produces. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.

  7. Integration of altitude and airspeed information into a primary flight display via moving-tape formats: Evaluation during random tracking task

    NASA Technical Reports Server (NTRS)

    Abbott, Terence S.; Nataupsky, Mark; Steinmetz, George G.

    1987-01-01

    A ground-based aircraft simulation study was conducted to determine the effects on pilot preference and performance of integrating airspeed and altitude information into an advanced electronic primary flight display via moving-tape (linear moving scale) formats. Several key issues relating to the implementation of moving-tape formats were examined in this study: tape centering, tape orientation, and trend information. The factor of centering refers to whether the tape was centered about the actual airspeed or altitude or about some other defined reference value. Tape orientation refers to whether the represented values are arranged in descending or ascending order. Two pilots participated in this study, with each performing 32 runs along seemingly random, previously unknown flight profiles. The data taken, analyzed, and presented consisted of path performance parameters, pilot-control inputs, and electrical brain response measurements.

  8. Influence of primary fragment excitation energy and spin distributions on fission observables

    NASA Astrophysics Data System (ADS)

    Litaize, Olivier; Thulliez, Loïc; Serot, Olivier; Chebboubi, Abdelaziz; Tamagno, Pierre

    2018-03-01

    Fission observables in the case of 252Cf(sf) are investigated by exploring several models involved in the excitation energy sharing and spin-parity assignment between primary fission fragments. In a first step, the parameters used in the FIFRELIN Monte Carlo code "reference route" are presented: two parameters for the mass-dependent temperature ratio law and two constant spin cut-off parameters for the light and heavy fragment groups, respectively. These parameters determine the initial fragment entry zone in excitation energy and spin-parity (E*, Jπ). They are chosen to reproduce the light and heavy average prompt neutron multiplicities. When these target observables are achieved, all other fission observables can be predicted. We show here the influence of the input parameters on the saw-tooth curve, and we discuss the influence of a mass- and energy-dependent spin cut-off model on gamma-ray-related fission observables. The part of the model involving level densities, neutron transmission coefficients or photon strength functions remains unchanged.

  9. Analysis of the Impact of Realistic Wind Size Parameter on the Delft3D Model

    NASA Astrophysics Data System (ADS)

    Washington, M. H.; Kumar, S.

    2017-12-01

    The wind size parameter, which is the distance from the center of the storm to the location of the maximum winds, is currently a constant in the Delft3D model. As a result, the Delft3D model's predictions of water levels during a storm surge are inaccurate compared to the observed data. To address this issue, an algorithm to calculate a realistic wind size parameter for a given hurricane was designed and implemented using the observed water-level data for Hurricane Matthew. A performance evaluation experiment was conducted to demonstrate the accuracy of the model's prediction of water levels using the realistic wind size input parameter compared to the default constant wind size parameter for Hurricane Matthew, with the water-level data observed from October 4, 2016 to October 9, 2016 from the National Oceanic and Atmospheric Administration (NOAA) as a baseline. The experimental results demonstrate that the Delft3D water-level output for the realistic wind size parameter, compared to the default constant size parameter, matches the NOAA reference water-level data more accurately.

  10. Full wave modulator-demodulator amplifier apparatus. [for generating rectified output signal

    NASA Technical Reports Server (NTRS)

    Black, J. M. (Inventor)

    1974-01-01

    A full-wave modulator-demodulator apparatus is described, including an operational amplifier having a first input terminal coupled to a circuit input terminal, and a second input terminal alternately coupled to the circuit input terminal and circuit ground by a switching circuit responsive to a phase reference signal, whereby the operational amplifier is alternately switched between a non-inverting mode and an inverting mode. The switching circuit includes three field-effect transistors operatively associated to provide the desired switching function in response to an alternating reference signal of the same frequency as an AC input signal applied to the circuit input terminal.

  11. Analysis of the LSC microbunching instability in MaRIE linac reference design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yampolsky, Nikolai

    In this report we estimate the effect of the microbunching instability in the MaRIE XFEL linac. The reference design for the linac is described in a separate report. The parameters of the L1, L2, and L3 linacs as well as the BC1 and BC2 bunch compressors were the same as in the referenced report. The beam dynamics was assumed to be linear along the accelerator (which is a reasonable assumption for estimating the effect of the microbunching instability). The parameters of the bunch also match the parameters described in the referenced report. Additionally, it was assumed that the beam radius is equal to R = 100 m and does not change along the linac. This assumption needs to be revisited in later studies. The beam dynamics during acceleration was accounted for in the matrix formalism using a Matlab code. The input parameters for the linacs are: RF peak gradient, RF frequency, RF phase, linac length, and initial beam energy. The energy gain and the imposed chirp are calculated self-consistently from the RF parameters. The bunch compressors are accounted for in the matrix formalism as well. Each chicane is characterized by the beam energy and the R56 matrix element. It was confirmed that the linac and beam parameters described previously provide two-stage bunch compression with compression ratios of 10 and 20, resulting in a bunch with a 3 kA peak current.
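    In the linear matrix picture used here, the first-order compression in a chicane follows from the energy chirp h and the R56 matrix element. A minimal sketch with illustrative numbers (not the MaRIE design values):

```python
def compression_ratio(h, r56):
    """Linear bunch-compression ratio C = sigma_i / sigma_f = 1 / |1 + h * r56|.

    h   : energy chirp d(delta)/dz imposed by off-crest RF [1/m]
    r56 : chicane momentum-compaction matrix element [m]
    """
    return 1.0 / abs(1.0 + h * r56)

# Two hypothetical stages tuned to give compression ratios of 10 and 20.
c1 = compression_ratio(-9.0, 0.1)    # 1 / (1 - 0.9)  -> 10
c2 = compression_ratio(-19.0, 0.05)  # 1 / (1 - 0.95) -> 20
total = c1 * c2                      # overall compression ratio of 200
```

    The same cascading logic underlies the two-stage 10 x 20 compression quoted in the abstract, though the actual chirps and R56 values there come from the self-consistent RF calculation.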

  12. Stable modeling based control methods using a new RBF network.

    PubMed

    Beyhan, Selami; Alci, Musa

    2010-10-01

    This paper presents a novel model with radial basis functions (RBFs), which is applied successively for online stable identification and control of nonlinear discrete-time systems. First, the proposed model is utilized for direct inverse modeling of the plant to generate the control input, where it is assumed that inverse plant dynamics exist. Second, it is employed for system identification to generate a sliding-mode control input. Finally, the network is employed to tune PID (proportional + integral + derivative) controller parameters automatically. The adaptive learning rate (ALR), which is employed in the gradient descent (GD) method, provides the global convergence of the modeling errors. Using the Lyapunov stability approach, the boundedness of the tracking errors and the system parameters is shown both theoretically and in real time. To show the superiority of the new model with RBFs, its tracking results are compared with the results of a conventional sigmoidal multi-layer perceptron (MLP) neural network and the new model with sigmoid activation functions. To show the real-time capability of the new model, the proposed network is employed for online identification and control of a cascaded parallel two-tank liquid-level system. Even in the presence of large disturbances, the proposed model with RBFs generates a suitable control input to track the reference signal better than the other methods in both simulations and real time. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
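    The core mechanism can be illustrated with a generic Gaussian RBF model whose output weights are adapted online by gradient descent. This is a sketch of the general technique only: it uses a fixed learning rate rather than the paper's ALR, and all structure and values are illustrative.

```python
import math
import random

class RBFNet:
    """Gaussian radial-basis-function model, linear in its output weights."""

    def __init__(self, centers, width, lr):
        self.c, self.width, self.lr = centers, width, lr
        self.w = [0.0] * len(centers)

    def _phi(self, x):
        return [math.exp(-((x - c) / self.width) ** 2) for c in self.c]

    def predict(self, x):
        return sum(w * p for w, p in zip(self.w, self._phi(x)))

    def update(self, x, y):
        """One gradient-descent step on the squared prediction error."""
        p = self._phi(x)
        e = y - sum(w_i * p_i for w_i, p_i in zip(self.w, p))
        for i, p_i in enumerate(p):
            self.w[i] += self.lr * e * p_i
        return e

# Online identification of an unknown static nonlinearity y = sin(2x).
random.seed(0)
net = RBFNet(centers=[c / 4.0 for c in range(-4, 5)], width=0.3, lr=0.2)
for _ in range(5000):
    x = random.uniform(-1.0, 1.0)
    net.update(x, math.sin(2.0 * x))
```

    Because the model is linear in its weights, each update is a simple LMS-style correction, which is what makes Lyapunov-based boundedness arguments tractable for this model class.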

  13. Single-particle strength from nucleon transfer in oxygen isotopes: Sensitivity to model parameters

    NASA Astrophysics Data System (ADS)

    Flavigny, F.; Keeley, N.; Gillibert, A.; Obertelli, A.

    2018-03-01

    In the analysis of transfer reaction data to extract nuclear structure information, the choice of input parameters to the reaction model, such as distorting potentials and overlap functions, has a significant impact. In this paper we consider a set of data for the (d, t) and (d, 3He) reactions on 14,16,18O as a well-delimited subject for a study of the sensitivity of such analyses to different choices of distorting potentials and overlap functions, with particular reference to a previous investigation of the variation of valence nucleon correlations as a function of the difference in nucleon separation energy ΔS = |Sp − Sn| [Phys. Rev. Lett. 110, 122503 (2013), 10.1103/PhysRevLett.110.122503].

  14. Applying an orographic precipitation model to improve mass balance modeling of the Juneau Icefield, AK

    NASA Astrophysics Data System (ADS)

    Roth, A. C.; Hock, R.; Schuler, T.; Bieniek, P.; Aschwanden, A.

    2017-12-01

    Mass loss from glaciers in Southeast Alaska is expected to alter downstream ecological systems as runoff patterns change. To investigate these potential changes under future climate scenarios, distributed glacier mass balance modeling is required. However, the spatial resolution gap between global or regional climate models and the requirements of glacier mass balance modeling studies must be addressed first. We have used a linear theory (LT) orographic precipitation model to downscale precipitation from both the Weather Research and Forecasting (WRF) model and ERA-Interim to the Juneau Icefield region over the period 1979-2013. This implementation of the LT model uses a unique parameterization that relies on the specification of snow fall speed and rain fall speed as tuning parameters to calculate the cloud time delay, τ. We assessed the LT model results by considering winter precipitation, so that the effect of melt was minimized. The downscaled precipitation pattern produced by the LT model captures the orographic precipitation pattern absent from the coarse-resolution WRF and ERA-Interim precipitation fields. Observational data constraints limited our ability to determine a unique parameter combination and calibrate the LT model to glaciological observations. We established a reference run of parameter values based on the literature and performed a sensitivity analysis of the effects of the LT model parameters, horizontal resolution, and climate input data on the average winter precipitation. The results of the reference run showed reasonable agreement with the available glaciological measurements. The precipitation pattern produced by the LT model was consistent regardless of parameter combination, horizontal resolution, and climate input data, but the precipitation amount varied strongly with these factors.
Due to the consistency of the winter precipitation pattern and the uncertainty in precipitation amount, we suggest a precipitation index map approach to be used in combination with a distributed mass balance model for future mass balance modeling studies of the Juneau Icefield. The LT model has potential to be used in other regions in Alaska and elsewhere with strong orographic effects for improved glacier mass balance modeling and/or hydrological modeling.
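    A 1-D sketch of a linear-theory orographic precipitation model in the spirit of Smith and Barstad (2004) shows how the fall speed enters as a tuning parameter through the fallout delay. The airflow-dynamics term is dropped for simplicity, and all parameter values are illustrative assumptions, not those calibrated for the Juneau Icefield.

```python
import numpy as np

def lt_precip_1d(h, dx, u_wind, cw, hw, fall_speed, tau_c):
    """1-D linear orographic precipitation (airflow-dynamics term neglected).

    The fallout delay tau_f = hw / fall_speed, so the snow/rain fall speed
    is the tuning knob for the cloud time delay, mirroring the LT model.
    """
    n = len(h)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)        # wavenumber [rad/m]
    sigma = u_wind * k                               # intrinsic frequency
    tau_f = hw / fall_speed                          # hydrometeor fallout time [s]
    src = cw * 1j * sigma * np.fft.fft(h)            # condensation source ~ U dh/dx
    p_hat = src / ((1 + 1j * sigma * tau_c) * (1 + 1j * sigma * tau_f))
    return np.maximum(np.real(np.fft.ifft(p_hat)), 0.0)  # no negative precipitation

# Gaussian ridge on a 500 km domain at 1 km resolution (illustrative values).
x = np.arange(500) * 1.0e3
ridge = 1500.0 * np.exp(-((x - 250.0e3) / 20.0e3) ** 2)
precip = lt_precip_1d(ridge, dx=1.0e3, u_wind=10.0, cw=0.002,
                      hw=2500.0, fall_speed=1.5, tau_c=1000.0)
```

    Slower fall speeds lengthen tau_f, advecting the precipitation maximum farther downwind of the windward slope; this is the mechanism by which the fall-speed tuning reshapes the downscaled pattern.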

  15. Preliminary investigation of the effects of eruption source parameters on volcanic ash transport and dispersion modeling using HYSPLIT

    NASA Astrophysics Data System (ADS)

    Stunder, B.

    2009-12-01

    Atmospheric transport and dispersion (ATD) models are used in real time at Volcanic Ash Advisory Centers to predict the location of airborne volcanic ash at a future time because of the hazardous nature of volcanic ash. Transport and dispersion models usually do not include eruption column physics, but start with an idealized eruption column. Eruption source parameters (ESP) input to the models typically include column top, eruption start time and duration, volcano latitude and longitude, ash particle size distribution, and total mass emission. An example based on the Okmok, Alaska, eruption of July 12-14, 2008, was used to qualitatively estimate the effect of various model inputs on transport and dispersion simulations using the NOAA HYSPLIT model. Variations included changing the ash column top and bottom, eruption start time and duration, particle size specifications, simulations with and without gravitational settling, and the effect of different meteorological model data. Graphical ATD model output of ash concentration from the various runs was qualitatively compared. Some parameters, such as eruption duration and ash column depth, had a large effect, while simulations using only small particles or changing the particle shape factor had much less of an effect. Some other variations, such as using only large particles, had a small effect for the first day or so after the eruption, then a larger effect on subsequent days. Example probabilistic output will be shown for an ensemble of dispersion model runs with various model inputs. Model output such as this may be useful as a means to account for some of the uncertainties in the model input. To improve volcanic ash ATD models, a reference database for volcanic eruptions is needed, covering many volcanoes. The database should include three major components: (1) eruption source, (2) ash observations, and (3) meteorological analyses.
In addition, information on aggregation or other ash particle transformation processes would be useful.

  16. Low-noise pulse conditioner

    DOEpatents

    Bird, D.A.

    1981-06-16

    A low-noise pulse conditioner is provided for driving electronic digital processing circuitry directly from differentially induced input pulses. The circuit uses a unique differential-to-peak detector circuit to generate a dynamic reference signal proportional to the input peak voltage. The input pulses are compared with the reference signal in an input network which operates in full differential mode with only a passive input filter. This reduces the introduction of circuit-induced noise, or jitter, generated in ground referenced input elements normally used in pulse conditioning circuits, especially speed transducer processing circuits. This circuit may be used for conditioning the sensor signal from the Fidler coil in a gas centrifuge for separation of isotopic gaseous mixtures.

  17. Noninvasive k3 estimation method for slow dissociation PET ligands: application to [11C]Pittsburgh compound B

    PubMed Central

    2013-01-01

    Background Recently, we reported an information density theory and an analysis of a three-parameter plus shorter-scan-than-conventional method (3P+) for the amyloid-binding ligand [11C]Pittsburgh compound B (PIB) as an example of a non-highly reversible positron emission tomography (PET) ligand. This article describes an extension of 3P+ analysis to noninvasive ‘3P++’ analysis (3P+ plus use of a reference tissue for the input function). Methods In 3P++ analysis for [11C]PIB, the cerebellum was used as a reference tissue (negligible specific binding). Fifteen healthy subjects (NC) and fifteen Alzheimer's disease (AD) patients participated. The k3 (index of receptor density) values were estimated with 40-min PET data and a three-parameter reference tissue model and were compared with those from 40-min 3P+ analysis as well as standard 90-min four-parameter (4P) analysis with an arterial input function. Simulation studies were performed to explain the k3 biases observed in 3P++ analysis. Results Good model fits of 40-min PET data were observed in both reference and target regions of interest (ROIs). High linear intra-subject (inter-15-ROI) correlations of k3 between 3P++ (Y-axis) and 3P+ (X-axis) analyses were shown in one NC (r2 = 0.972 and slope = 0.845) and in one AD (r2 = 0.982, slope = 0.655), whereas inter-subject k3 correlations in a target region (left lateral temporal cortex) from 30 subjects (15 NC + 15 AD) were somewhat lower (r2 = 0.739 and slope = 0.461). Similar results were shown between 3P++ and 4P analyses: r2 = 0.953 for intra-subject k3 in NC, r2 = 0.907 for that in AD, and r2 = 0.711 for inter-30-subject k3. Simulation studies showed that the lower inter-subject k3 correlations and significant negative k3 biases were due not to instability of the 3P++ analysis but rather to inter-subject variation of both k2 (index of brain-to-blood transport) and k3 (not completely negligible) in the reference region. 
Conclusions In [11C]PIB, the applicability of 3P++ analysis may be restricted to intra-subject comparisons such as follow-up studies. The 3P++ method itself is thought to be robust and may be more applicable to other non-highly reversible PET ligands with an ideal reference tissue. PMID:24238306

  18. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters

    DOE PAGES

    Covey, Curt; Lucas, Donald D.; Tannahill, John; ...

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high-dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM's behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid-scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT's ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
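    A minimal sketch of the generic Morris one-at-a-time screening algorithm (not the authors' CAM setup): r random trajectories over a k-dimensional unit hypercube cost only r·(k+1) model runs, linear in k.

```python
import random
import statistics

def morris_screen(f, k, r=20, levels=4, seed=1):
    """Morris one-at-a-time (MOAT) screening.

    Runs r random trajectories of k+1 model evaluations each. Returns
    (mu_star, sigma): the mean absolute elementary effect and its spread
    for each of the k inputs; large sigma flags nonlinearity/interaction.
    """
    random.seed(seed)
    delta = levels / (2.0 * (levels - 1))
    grid = [j / (levels - 1) for j in range(levels)]
    effects = [[] for _ in range(k)]
    for _ in range(r):
        # Random base point on the level grid, leaving room for a +delta step.
        x = [random.choice([g for g in grid if g + delta <= 1.0]) for _ in range(k)]
        y = f(x)
        for i in random.sample(range(k), k):   # perturb one input at a time
            x[i] += delta
            y_new = f(x)
            effects[i].append((y_new - y) / delta)
            y = y_new
    mu_star = [statistics.mean(abs(e) for e in ee) for ee in effects]
    sigma = [statistics.stdev(ee) for ee in effects]
    return mu_star, sigma
```

    For a toy model 2*x0 + x1*x2, MOAT gives x0 a large mu_star with near-zero sigma (purely linear), flags the x1-x2 interaction through nonzero sigma, and assigns an inert x3 zero effect, which is exactly the behavior that distinguishes it from elementary one-at-a-time variation.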

  19. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qin, Qing; Wang, Jiang; Yu, Haitao

    Mathematical models provide a mathematical description of neuron activity, which can help to better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that, under three different acupuncture stimulus frequencies, the estimated input parameters differ markedly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  20. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    NASA Astrophysics Data System (ADS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-06-01

    Mathematical models provide a mathematical description of neuron activity, which can help to better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that, under three different acupuncture stimulus frequencies, the estimated input parameters differ markedly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
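    The forward half of such a pipeline can be sketched generically: simulate an LIF neuron driven by a noisy input, then recover Gamma ISI characteristics by moment matching. This is a toy stand-in for the paper's state-space estimator, and all parameter values are hypothetical.

```python
import random
import statistics

def lif_spike_train(i_mean, i_std, n_steps=200000, dt=0.001,
                    tau=0.02, v_th=1.0, v_reset=0.0, seed=7):
    """Simulate a leaky integrate-and-fire neuron with noisy input current.

    Returns the list of inter-spike intervals (ISIs) in seconds.
    """
    random.seed(seed)
    v, t_last, isis = 0.0, 0.0, []
    for step in range(1, n_steps + 1):
        i_t = i_mean + i_std * random.gauss(0.0, 1.0)   # noisy drive
        v += dt * (-v / tau + i_t)                      # leaky integration
        if v >= v_th:                                   # threshold crossing
            t = step * dt
            isis.append(t - t_last)
            t_last, v = t, v_reset
    return isis

def gamma_moments(isis):
    """Method-of-moments Gamma fit: (shape, scale) from the ISI sample."""
    m = statistics.mean(isis)
    var = statistics.variance(isis)
    return m * m / var, var / m        # shape * scale equals the mean ISI
```

    Inverting this map, i.e. going from estimated Gamma characteristics back to the LIF input parameters, is the role of the conversion formulas described in the abstract.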

  1. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225) and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and some others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  2. Predictors of early person reference development: maternal language input, attachment and neurodevelopmental markers.

    PubMed

    Lemche, Erwin; Joraschky, Peter; Klann-Delius, Gisela

    2013-12-01

    In a longitudinal natural language development study in Germany, the acquisition of verbal symbols for present persons, absent persons, inanimate things and the mother-toddler dyad was investigated. Following the notion that verbal referent use is more developed in ostensive contexts, symbolic play situations were coded for verbal person reference by means of noun and pronoun use. Depending on attachment classifications at twelve months of age, effects of attachment classification and maternal language input were studied up to 36 months at four time points. Hierarchical regression analyses revealed that, except for mother absence, maternal verbal referent input rates at 17 and 36 months were stronger predictors for all referent types than any of the attachment organizations, or any other social or biological predictor variable. Attachment effects accounted for up to 9.8% of unique variance in the person reference variables. Perinatal and familial measures predicted person references depending on reference type. The results of this investigation indicate that mother-reference, self-reference and thing-reference develop in similar quantities measured from the 17-month time point, but are dependent on attachment quality. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Assessment of the possible contribution of space ties on-board GNSS satellites to the terrestrial reference frame

    NASA Astrophysics Data System (ADS)

    Bruni, Sara; Rebischung, Paul; Zerbini, Susanna; Altamimi, Zuheir; Errico, Maddalena; Santi, Efisio

    2018-04-01

    The realization of the international terrestrial reference frame (ITRF) is currently based on the data provided by four space geodetic techniques. The accuracy of the different technique-dependent materializations of the frame physical parameters (origin and scale) varies according to the nature of the relevant observables and to the impact of technique-specific errors. A reliable computation of the ITRF requires combining the different inputs, so that the strengths of each technique can compensate for the weaknesses of the others. This combination, however, can only be performed by providing some additional information which allows tying together the independent technique networks. At present, the links used for that purpose are topometric surveys (local/terrestrial ties) available at ITRF sites hosting instruments of different techniques. In principle, a possible alternative could be offered by spacecraft accommodating the positioning payloads of multiple geodetic techniques, realizing their co-location in orbit (space ties). In this paper, the GNSS-SLR space ties on-board GPS and GLONASS satellites are thoroughly examined in the framework of global reference frame computations. The investigation focuses on the quality of the realized physical frame parameters. According to the achieved results, the space ties on-board GNSS satellites cannot, at present, substitute for terrestrial ties in the computation of the ITRF. The study is completed by a series of synthetic simulations investigating the impact that substantial improvements in the volume and quality of SLR observations to GNSS satellites would have on the precision of the GNSS frame parameters.

  4. Characterization and Uncertainty Analysis of a Reference Pressure Measurement System for Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Amer, Tahani; Tripp, John; Tcheng, Ping; Burkett, Cecil; Sealey, Bradley

    2004-01-01

    This paper presents the calibration results and uncertainty analysis of a high-precision reference pressure measurement system currently used in wind tunnels at the NASA Langley Research Center (LaRC). Sensors, calibration standards, and measurement instruments are subject to errors due to aging, drift with time, environmental effects, transportation, the mathematical model, the calibration experimental design, and other factors. Errors occur at every link in the chain of measurements and data reduction, from the sensor to the final computed results. At each link of the chain, bias and precision uncertainties must be separately estimated for facility use, and are combined to produce overall calibration and prediction confidence intervals for the instrument, typically at a 95% confidence level. The uncertainty analysis and calibration experimental designs used herein, based on techniques developed at LaRC, employ replicated experimental designs for efficiency, separate estimation of bias and precision uncertainties, and detection of significant parameter drift with time. Final results, including calibration confidence intervals and prediction intervals given as functions of the applied inputs rather than as a fixed percentage of the full-scale value, are presented. System uncertainties are propagated beginning with the initial reference pressure standard, through to the calibrated instrument as a working standard in the facility. Among the several parameters that can affect the overall results are operating temperature, atmospheric pressure, humidity, and facility vibration. Effects of factors such as initial zeroing and temperature are investigated. The effects of the identified parameters on system performance and accuracy are discussed.
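    The separate-then-combine treatment of bias and precision uncertainty can be sketched as a root-sum-square of a bias limit and a t-scaled precision term, a generic textbook form rather than the LaRC procedure itself; the coverage factor and numbers below are illustrative.

```python
import math
import statistics

def calibration_interval(readings, bias_u, t95=2.0):
    """Combine bias and precision uncertainty by root-sum-square.

    bias_u : bias (systematic) uncertainty limit of the reference standard
    t95    : coverage factor for ~95% confidence (illustrative value)
    Returns (mean reading, overall ~95% uncertainty of the mean).
    """
    mean = statistics.mean(readings)
    s = statistics.stdev(readings)                    # precision (random) scatter
    precision_u = t95 * s / math.sqrt(len(readings))  # precision part of interval
    return mean, math.sqrt(bias_u ** 2 + precision_u ** 2)
```

    Replicated calibration designs of the kind described above serve precisely to estimate the two terms separately, so that drift shows up in the bias term rather than inflating the precision estimate.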

  5. Hybrid supervisory control using recurrent fuzzy neural network for tracking periodic inputs.

    PubMed

    Lin, F J; Wai, R J; Hong, C M

    2001-01-01

    A hybrid supervisory control system using a recurrent fuzzy neural network (RFNN) is proposed to control the mover of a permanent magnet linear synchronous motor (PMLSM) servo drive for the tracking of periodic reference inputs. First, the field-oriented mechanism is applied to formulate the dynamic equation of the PMLSM. Then, a hybrid supervisory control system, which combines a supervisory control system and an intelligent control system, is proposed to control the mover of the PMLSM for periodic motion. The supervisory control law is designed based on the uncertainty bounds of the controlled system to stabilize the system states around a predefined bound region. Since the supervisory control law induces excessive and chattering control effort, the intelligent control system is introduced to smooth and reduce the control effort when the system states are inside the predefined bound region. In the intelligent control system, the RFNN control is the main tracking controller, which is used to mimic an ideal control law, and a compensated control is proposed to compensate for the difference between the ideal control law and the RFNN control. The RFNN has the merits of fuzzy inference, dynamic mapping and fast convergence speed. In addition, an online parameter training methodology, which is derived using the Lyapunov stability theorem and the gradient descent method, is proposed to increase the learning capability of the RFNN. The proposed hybrid supervisory control system using the RFNN can track various periodic reference inputs effectively with robust control performance.

  6. Assessment of the Potential Impacts of Wheat Plant Traits across Environments by Combining Crop Modeling and Global Sensitivity Analysis

    PubMed Central

    Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine

    2016-01-01

    A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼ 82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of +/- 20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. PMID:26799483

  7. Assessment of the Potential Impacts of Wheat Plant Traits across Environments by Combining Crop Modeling and Global Sensitivity Analysis.

    PubMed

    Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine

    2016-01-01

    A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼ 82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of +/- 20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement.

  8. Human health risk assessment of lead from mining activities at semi-arid locations in the context of total lead exposure.

    PubMed

    Zheng, Jiajia; Huynh, Trang; Gasparon, Massimo; Ng, Jack; Noller, Barry

    2013-12-01

    Lead from historical mining and mineral processing activities may pose potential human health risks if materials with high concentrations of bioavailable lead minerals are released to the environment. Since the Joint Expert Committee on Food Additives of the Food and Agriculture Organization/World Health Organization withdrew the Provisional Tolerable Weekly Intake of lead in 2011, an alternative method was required for lead exposure assessment. This study evaluated the potential lead hazard to young children (0-7 years) at a historical mining location in a semi-arid area using the U.S. EPA Integrated Exposure Uptake Biokinetic (IEUBK) Model with selected site-specific input data. The study assessed lead exposure via the inhalation pathway for children living in a location affected by lead mining activities, with specific reference to semi-arid conditions, and compared it with the ingestion pathway by using the physiologically based extraction test for gastro-intestinal simulation. Sensitivity analysis for the major IEUBK input parameters was conducted, and three groups of input parameters were classified according to the predicted blood lead concentrations. The modelled lead absorption attributed to the inhalation route was lower than 2 % (mean ± SE, 0.9 % ± 0.1 %) of all lead intake routes, demonstrating it to be a less significant exposure pathway for children's blood lead than ingestion. Whilst dermal exposure was negligible, diet and ingestion of soil and dust were the dominant parameters for children's blood lead prediction. The exposure assessment identified the changing role of dietary intake as house lead loadings varied. Recommendations were also made to conduct comprehensive site-specific human health risk assessments in future studies of lead exposure under a semi-arid climate.
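    The pathway comparison reported above (inhalation below 2 % of total uptake) amounts to weighting each intake route by its absorption fraction and expressing it as a share of total absorbed lead. A toy illustration with invented numbers, not IEUBK defaults or site data:

```python
# Illustrative daily lead intakes and absorption fractions per pathway.
# All values are invented for the sketch, not IEUBK inputs.
pathways = {
    "diet":       {"intake_ug_day": 5.00, "absorption": 0.50},
    "soil_dust":  {"intake_ug_day": 8.00, "absorption": 0.30},
    "water":      {"intake_ug_day": 1.00, "absorption": 0.50},
    "inhalation": {"intake_ug_day": 0.15, "absorption": 0.32},
}

# Absorbed lead per pathway and its percentage share of the total.
uptake = {k: v["intake_ug_day"] * v["absorption"] for k, v in pathways.items()}
total = sum(uptake.values())
share = {k: 100.0 * u / total for k, u in uptake.items()}
```

With these illustrative numbers the inhalation share comes out under 1 %, mirroring the qualitative conclusion that ingestion pathways dominate predicted blood lead.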

  9. System and methods for reducing harmonic distortion in electrical converters

    DOEpatents

    Kajouke, Lateef A; Perisic, Milun; Ransom, Ray M

    2013-12-03

    Systems and methods are provided for delivering energy using an energy conversion module. An exemplary method for delivering energy from an input interface to an output interface using an energy conversion module coupled between the input interface and the output interface comprises the steps of: determining an input voltage reference for the input interface based on a desired output voltage and a measured voltage at the output interface; determining a duty cycle control value based on a ratio of the input voltage reference to the measured voltage; and operating one or more switching elements of the energy conversion module to deliver energy from the input interface to the output interface with a duty cycle influenced by the duty cycle control value.
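    The duty-cycle step claimed above reduces to a clamped voltage ratio. A behavioural sketch, in which the parameter names and the zero-voltage fallback are our assumptions rather than the patent's:

```python
def duty_cycle(v_in_ref, v_measured, d_min=0.0, d_max=1.0):
    """Duty cycle control value as the ratio of the input voltage reference to
    the measured output voltage, clamped to a valid switching range."""
    if v_measured <= 0.0:
        return d_max  # assumed saturation when no output voltage is sensed
    return min(d_max, max(d_min, v_in_ref / v_measured))
```

For example, a 120 V reference against a 400 V measured bus yields a control value of 0.3, and the clamp prevents ratios above 1 from commanding an invalid duty cycle.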

  10. Reference tissue quantification of DCE-MRI data without a contrast agent calibration

    NASA Astrophysics Data System (ADS)

    Walker-Samuel, Simon; Leach, Martin O.; Collins, David J.

    2007-02-01

    The quantification of dynamic contrast-enhanced (DCE) MRI data conventionally requires a conversion from signal intensity to contrast agent concentration by measuring a change in the tissue longitudinal relaxation rate, R1. In this paper, it is shown that the use of a spoiled gradient-echo acquisition sequence (optimized so that signal intensity scales linearly with contrast agent concentration), in conjunction with a reference tissue-derived vascular input function (VIF), avoids the need for the conversion to Gd-DTPA concentration. This study evaluates how to optimize such sequences and which dynamic time-series parameters are most suitable for this type of analysis. It is shown that signal difference and relative enhancement provide useful alternatives when full contrast agent quantification cannot be achieved, but that pharmacokinetic parameters derived from both contain sources of error (such as those caused by differences between reference tissue and region-of-interest proton density and native T1 values). It is shown in a rectal cancer study that these sources of uncertainty are smaller when using signal difference, compared with relative enhancement (15 ± 4% compared with 33 ± 4%). Both of these uncertainties are of the order of those associated with the conversion to Gd-DTPA concentration, according to literature estimates.
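    The two dynamic time-series parameters compared in the study are simple functions of the baseline and post-contrast signals; a minimal sketch:

```python
def signal_difference(s_t, s0):
    """Absolute enhancement over the pre-contrast baseline signal."""
    return s_t - s0

def relative_enhancement(s_t, s0):
    """Enhancement normalised by the baseline, which couples the estimate more
    strongly to baseline errors (proton density, native T1) than the signal
    difference does."""
    return (s_t - s0) / s0
```

For a baseline of 100 and a post-contrast signal of 150 these give 50 and 0.5 respectively; the study's point is that error propagation through the two differs (15 ± 4% versus 33 ± 4%).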

  11. Investigation of Helicon discharges as RF coupling concept of negative hydrogen ion sources

    NASA Astrophysics Data System (ADS)

    Briefi, S.; Fantz, U.

    2013-02-01

    The ITER reference source for H- and D- requires a high RF input power (up to 90 kW per driver). To reduce the demands on the RF circuit, it is highly desirable to reduce the power consumption while retaining the values of the relevant plasma parameters, namely the positive ion density and the atomic hydrogen density. Helicon plasmas are a promising alternative RF coupling concept, but they are typically generated in long thin discharge tubes using rare gases and an RF frequency of 13.56 MHz. Hence the applicability to the ITER reference source geometry and frequency, and to the utilization of hydrogen/deuterium, has to be proven. In this paper the strategy of the approach for using Helicon discharges for ITER reference source parameters is introduced, and the first promising measurements, carried out in a small laboratory experiment, are presented. With increasing RF power a mode transition to the Helicon regime was observed for argon and argon/hydrogen mixtures. In pure hydrogen/deuterium the mode transition could not yet be achieved as the available RF power is too low. In deuterium a special feature of Helicon discharges, the so-called low-field peak, could be observed at a moderate B-field of 3 mT.

  12. Fuzzy logic controller optimization

    DOEpatents

    Sepe, Jr., Raymond B; Miller, John Michael

    2004-03-23

    A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.

  13. Machine learning classifiers for glaucoma diagnosis based on classification of retinal nerve fibre layer thickness parameters measured by Stratus OCT.

    PubMed

    Bizios, Dimitrios; Heijl, Anders; Hougaard, Jesper Leth; Bengtsson, Boel

    2010-02-01

    To compare the performance of two machine learning classifiers (MLCs), artificial neural networks (ANNs) and support vector machines (SVMs), with input based on retinal nerve fibre layer thickness (RNFLT) measurements by optical coherence tomography (OCT), in the diagnosis of glaucoma, and to assess the effects of different input parameters. We analysed Stratus OCT data from 90 healthy persons and 62 glaucoma patients. Performance of MLCs was compared using conventional OCT RNFLT parameters plus novel parameters such as minimum RNFLT values, 10th and 90th percentiles of measured RNFLT, and transformations of A-scan measurements. For each input parameter and MLC, the area under the receiver operating characteristic curve (AROC) was calculated. There were no statistically significant differences between ANNs and SVMs. The best AROCs for both ANN (0.982, 95% CI: 0.966-0.999) and SVM (0.989, 95% CI: 0.979-1.0) were based on input of transformed A-scan measurements. Our SVM trained on this input performed better than ANNs or SVMs trained on any of the single RNFLT parameters (p ≤ 0.038). The performance of ANNs and SVMs trained on minimum thickness values and the 10th and 90th percentiles was at least as good as that of ANNs and SVMs with input based on the conventional RNFLT parameters. No differences between ANN and SVM were observed in this study. Both MLCs performed very well, with similar diagnostic performance. Input parameters have a larger impact on diagnostic performance than the type of machine classifier. Our results suggest that parameters based on transformed A-scan thickness measurements of the RNFL processed by machine classifiers can improve OCT-based glaucoma diagnosis.
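    The AROC values compared above can be computed without any library via the Mann-Whitney formulation: the probability that a randomly chosen patient score exceeds a randomly chosen healthy score, ties counting half. A small sketch with invented scores, not the study's data:

```python
def auroc(scores_pos, scores_neg):
    """Area under the ROC curve as the fraction of (diseased, healthy) score
    pairs ranked correctly, with ties counted as half a win."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

Perfectly separated score lists give an AROC of 1.0, and a classifier whose scores interleave with the controls drops toward 0.5.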

  14. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in the hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input is represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied to an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty has a considerable effect on the model predictions and parameter distributions. Our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. We conclude that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.

  15. Method and apparatus for measuring response time

    DOEpatents

    Johanson, Edward W.; August, Charles

    1985-01-01

    A method of measuring the response time of an electrical instrument which generates an output signal in response to the application of a specified input, wherein the output signal varies as a function of time and when subjected to a step input approaches a steady-state value, comprises the steps of: (a) applying a step input of predetermined value to the electrical instrument to generate an output signal; (b) simultaneously starting a timer; (c) comparing the output signal to a reference signal to generate a stop signal when the output signal is substantially equal to the reference signal, the reference signal being a specified percentage of the steady-state value of the output signal corresponding to the predetermined value of the step input; and (d) applying the stop signal when generated to stop the timer.
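    The four claimed steps amount to timing the first crossing of a reference threshold after the step is applied. A behavioural sketch; a first-order instrument is assumed for the example, so stopping at 63.2% of steady state recovers the time constant:

```python
import math

def response_time(output, reference_fraction, steady_state, dt=1e-4, t_max=10.0):
    """Steps (a)-(d): apply the step at t = 0, start a timer, and stop it when
    the output first reaches a reference equal to a specified fraction of the
    steady-state value. Returns None if the reference is never reached."""
    reference = reference_fraction * steady_state
    t = 0.0
    while t < t_max:
        if output(t) >= reference:
            return t
        t += dt
    return None

# Example: a first-order instrument with time constant tau under a unit step
# reaches 1 - exp(-1) ~ 63.2% of steady state at t = tau.
tau = 0.5
t63 = response_time(lambda t: 1.0 - math.exp(-t / tau),
                    1.0 - math.exp(-1.0), 1.0)
```

The hardware in the patent does the comparison with an analog comparator rather than sampling, but the stop condition is the same.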

  16. Method and apparatus for measuring response time

    DOEpatents

    Johanson, E.W.; August, C.

    1983-08-11

    A method of measuring the response time of an electrical instrument which generates an output signal in response to the application of a specified input, wherein the output signal varies as a function of time and when subjected to a step input approaches a steady-state value, comprises the steps of: (a) applying a step input of predetermined value to the electrical instrument to generate an output signal; (b) simultaneously starting a timer; (c) comparing the output signal to a reference signal to generate a stop signal when the output signal is substantially equal to the reference signal, the reference signal being a specified percentage of the steady-state value of the output signal corresponding to the predetermined value of the step input; and (d) applying the stop signal when generated to stop the timer.

  17. An update to the analysis of the Canadian Spatial Reference System

    NASA Astrophysics Data System (ADS)

    Ferland, R.; Piraszewski, M.; Craymer, M.

    2015-12-01

    The primary objective of the Canadian Spatial Reference System (CSRS) is to provide users access to a consistent geo-referencing infrastructure over the Canadian landmass. Global Navigation Satellite System (GNSS) positioning accuracy requirements range from the meter level to the mm level (e.g. crustal deformation). The highest level of the Canadian infrastructure consists of a network of continuously operating GPS and GNSS receivers, referred to as active control stations. The network includes all Canadian public active control stations, some bordering US CORS and Alaska stations, Greenland active control stations, as well as a selection of IGS reference frame stations. The Bernese analysis software is used for the daily processing and the combination into weekly solutions, which form the basis for this analysis. IGS weekly final orbit, Earth Rotation Parameter (ERP) and coordinate products are used in the processing. For the more demanding users, the time-dependent changes of station coordinates are often more important. All station coordinate estimates and related covariance information are used in this analysis. For each input solution, a variance factor, translation, rotation and scale (and if needed their rates), or subsets of these, are estimated. In the combination of these weekly solutions, station positions and velocities are estimated. Since the time series from the stations in these networks often experience changes in behavior, new (or reused) parameters are generally introduced in these situations. As is often the case with real data, unrealistic coordinates may occur; automatic detection and removal of outliers is used in these cases. For the transformation, position and velocity parameters, loose a priori estimates and uncertainties are provided. Alignment to the latest IGb08 realization of ITRF using the usual Helmert transformation is also performed during the adjustment.
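    The Helmert alignment mentioned in the last step is a seven-parameter similarity transformation. A sketch of its small-angle geodetic form; note that sign conventions for the rotations differ between standards, so the matrix below is one common choice, not necessarily the one used for the CSRS adjustment:

```python
import numpy as np

def helmert7(xyz, tx, ty, tz, scale_ppb, rx, ry, rz):
    """Seven-parameter Helmert transformation in small-angle form:
    translation (metres), scale (parts per billion) and rotations (radians),
    applied as x' = x + T + s*x + R*x."""
    xyz = np.asarray(xyz, float)
    R = np.array([[0.0, -rz,  ry],
                  [ rz, 0.0, -rx],
                  [-ry,  rx, 0.0]])
    return xyz + np.array([tx, ty, tz]) + scale_ppb * 1e-9 * xyz + R @ xyz
```

With all parameters zero the point is unchanged; a 1000 ppb scale applied to a geocentric coordinate of 10^6 m contributes 1 mm per metre of the scale, i.e. 1 m here, on top of any translation.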

  18. Description and availability of the SMARTS spectral model for photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Myers, Daryl R.; Gueymard, Christian A.

    2004-11-01

    The limited spectral response range of photovoltaic (PV) devices requires that device performance be characterized with respect to widely varying terrestrial solar spectra. The FORTRAN code "Simple Model for Atmospheric Transmission of Sunshine" (SMARTS) was developed for various clear-sky solar renewable energy applications. The model is partly based on parameterizations of transmittance functions in the MODTRAN/LOWTRAN band model family of radiative transfer codes. SMARTS computes spectra with a resolution of 0.5 nanometers (nm) below 400 nm, 1.0 nm from 400 nm to 1700 nm, and 5 nm from 1700 nm to 4000 nm. Fewer than 20 input parameters are required to compute spectral irradiance distributions, including spectral direct beam, total, and diffuse hemispherical radiation, and up to 30 other spectral parameters. A spreadsheet-based graphical user interface can be used to simplify the construction of input files for the model. The model is the basis for new terrestrial reference spectra developed by the American Society for Testing and Materials (ASTM) for photovoltaic and materials degradation applications. We describe the model's accuracy, functionality, and the availability of source and executable code. Applications to PV rating and efficiency and the combined effects of spectral selectivity and varying atmospheric conditions are briefly discussed.

  19. Optimized Short-Term Load Forecasting for Anomalous Load Based on Feed-Forward Backpropagation

    NASA Astrophysics Data System (ADS)

    Mulyadi, Y.; Abdullah, A. G.; Rohmah, K. A.

    2017-03-01

    This paper presents Short-Term Load Forecasting (STLF) using an artificial neural network, specifically a feed-forward backpropagation algorithm optimized to reduce the resulting error value. The forecasting target is holiday load, which has no consistent pattern and differs from the weekday pattern; in other words, the holiday load pattern is anomalous. Under these conditions, forecasting accuracy decreases, so a method capable of reducing the error in anomalous load forecasting is needed. The learning process of the algorithm is supervised, so several parameters are set before the computation is performed. The momentum constant is set at 0.8, which serves as a reference value because it shows the strongest tendency to converge. The learning rate is selected to two decimal digits. In addition, the numbers of hidden layers and input components are also tested over several variations. The test results lead to the conclusion that the number of hidden layers affects forecasting accuracy, and that test duration is determined by the number of iterations performed on the input data until the maximum parameter value is reached.
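    The momentum-based update at the heart of the optimized backpropagation can be isolated as follows. This is a one-dimensional toy using the paper's momentum constant of 0.8 and an invented learning rate, not the STLF network itself:

```python
def gd_momentum(grad, x0, lr=0.05, momentum=0.8, steps=200):
    """Gradient descent with a momentum term, the update rule behind momentum
    backpropagation: v <- m*v - lr*grad(x); x <- x + v."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(x)
        x = x + v
    return x

# Minimise the toy objective f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
x_star = gd_momentum(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

The momentum term accumulates past gradients, which is why a well-chosen constant such as 0.8 tends to converge faster than plain gradient descent on smooth error surfaces.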

  20. Learning of spatio-temporal codes in a coupled oscillator system.

    PubMed

    Orosz, Gábor; Ashwin, Peter; Townley, Stuart

    2009-07-01

    In this paper, we consider a learning strategy that allows one to transmit information between two coupled phase oscillator systems (called teaching and learning systems) via frequency adaptation. The dynamics of these systems can be modeled with reference to a number of partially synchronized cluster states and transitions between them. Forcing the teaching system by steady but spatially nonhomogeneous inputs produces cyclic sequences of transitions between the cluster states, that is, information about inputs is encoded via a "winnerless competition" process into spatio-temporal codes. The large variety of codes can be learned by the learning system that adapts its frequencies to those of the teaching system. We visualize the dynamics using "weighted order parameters (WOPs)" that are analogous to "local field potentials" in neural systems. Since spatio-temporal coding is a mechanism that appears in olfactory systems, the developed learning rules may help to extract information from these neural ensembles.
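    The "weighted order parameters" used for visualization generalize the standard Kuramoto order parameter, which already distinguishes a synchronized cluster from incoherent phases; a minimal unweighted version:

```python
import cmath
import math

def order_parameter(phases):
    """Kuramoto order parameter r = |mean(exp(i*theta))|: r = 1 for a fully
    synchronized cluster and r ~ 0 for evenly spread phases."""
    z = sum(cmath.exp(1j * th) for th in phases) / len(phases)
    return abs(z)

r_sync = order_parameter([0.7, 0.7, 0.7])                        # one tight cluster
r_spread = order_parameter([k * math.pi / 2 for k in range(4)])  # four-way spread
```

The WOPs of the paper weight the phase terms per cluster, so transitions between partially synchronized cluster states show up as changes in the weighted sums, much as local field potentials summarize neural populations.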

  1. A parameter estimation subroutine package

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.; Nead, M. W.

    1978-01-01

    Linear least squares estimation and regression analyses continue to play a major role in orbit determination and related areas. In this report we document a library of FORTRAN subroutines that have been developed to facilitate analyses of a variety of estimation problems. Our purpose is to present an easy to use, multi-purpose set of algorithms that are reasonably efficient and which use a minimal amount of computer storage. Subroutine inputs, outputs, usage and listings are given along with examples of how these routines can be used. The following outline indicates the scope of this report: Section (1) introduction with reference to background material; Section (2) examples and applications; Section (3) subroutine directory summary; Section (4) the subroutine directory user description with input, output, and usage explained; and Section (5) subroutine FORTRAN listings. The routines are compact and efficient and are far superior to the normal equation and Kalman filter data processing algorithms that are often used for least squares analyses.
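    The numerical advantage claimed over normal-equation processing comes from solving the least-squares problem by orthogonal factorization. A modern equivalent of such a library call, with NumPy's factorization-based solver standing in for the report's FORTRAN routines:

```python
import numpy as np

def least_squares_fit(A, y):
    """Solve min ||A x - y|| with an orthogonal-factorization solver instead of
    forming the normal equations A^T A x = A^T y, whose squared condition
    number is what degrades the normal-equation route numerically."""
    x, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(y, float), rcond=None)
    return x

# Fit a line y = a + b*t through noise-free data with a = 1, b = 2.
t = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([np.ones_like(t), t])
coef = least_squares_fit(A, 1.0 + 2.0 * t)
```

On well-conditioned problems both routes agree; the difference shows on ill-conditioned design matrices of the kind that arise in orbit determination.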

  2. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    NASA Astrophysics Data System (ADS)

    Tsaur, Ruey-Chyn

    2015-02-01

    In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed on the results.

  3. Partial and total actuator faults accommodation for input-affine nonlinear process plants.

    PubMed

    Mihankhah, Amin; Salmasi, Farzad R; Salahshoor, Karim

    2013-05-01

    In this paper, a new fault-tolerant control system is proposed for input-affine nonlinear plants based on the Model Reference Adaptive System (MRAS) structure. The proposed method has the capability to accommodate both partial and total actuator failures along with bounded external disturbances. In this methodology, the conventional MRAS control law is modified by augmenting two compensating terms. One of these terms is added to eliminate the nonlinear dynamics, while the other compensates for the adverse effects of total actuator faults and external disturbances. In addition, no Fault Detection and Diagnosis (FDD) unit is needed in the proposed method. Moreover, the control structure has good robustness against parameter variations. The performance of this scheme was evaluated on a CSTR system and the results were satisfactory. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Surface wave excitation study

    NASA Astrophysics Data System (ADS)

    Burke, G. J.; King, R. J.; Miller, E. K.

    1984-09-01

    Relative communication efficiency (RCE), as defined by Fenwick and Weeks, compares the field of a test antenna to that of a reference antenna at the same location for equal input power to each antenna. Thus, RCE is similar to power gain but is definable in the presence of ground. The effectiveness of antennas in launching TM surface waves was compared. Antennas considered included the vertical dipole, monopole on a ground stake, monopole on a radial-wire ground screen, Beverage antenna and vertical half rhombic. Since the performance of these antennas is strongly dependent on parameters such as the number of wires in a ground screen or the length of a Beverage antenna, results are presented with parameters varying over a reasonable range. Thus, antenna performance can be weighed against the effort and limitations of construction.

  5. A meta-learning system based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain

    2004-04-01

    The design of an efficient machine learning process through self-adaptation is a great challenge. The goal of meta-learning is to build a self-adaptive learning system that is constantly adapting to its specific (and dynamic) environment. To that end, the meta-learning mechanism must improve its bias dynamically by updating the current learning strategy in accordance with its available experiences or meta-knowledge. We suggest using genetic algorithms as the basis of an adaptive system. In this work, we propose a meta-learning system based on a combination of the a priori and a posteriori concepts. A priori refers to input information and knowledge available at the beginning, used to build and evolve one or more sets of parameters by exploiting the context of the system's information. The self-learning component is based on genetic algorithms and neural Darwinism. A posteriori refers to the implicit knowledge discovered by estimation of the future states of parameters and is also applied to the finding of optimal parameter values. The in-progress research presented here suggests a framework for the discovery of knowledge that can support human experts in their intelligence information assessment tasks. The conclusion presents avenues for further research in genetic algorithms and their capability to learn to learn.

  6. Direct model reference adaptive control of a flexible robotic manipulator

    NASA Technical Reports Server (NTRS)

    Meldrum, D. R.

    1985-01-01

    Quick, precise control of a flexible manipulator in a space environment is essential for future Space Station repair and satellite servicing. Numerous control algorithms have proven successful in controlling rigid manipulators with colocated sensors and actuators; however, few have been tested on a flexible manipulator with noncolocated sensors and actuators. In this thesis, a model reference adaptive control (MRAC) scheme based on command generator tracker theory is designed for a flexible manipulator. Quicker, more precise tracking results are expected over nonadaptive control laws for this MRAC approach. Equations of motion in modal coordinates are derived for a single-link, flexible manipulator with an actuator at the pinned end and a sensor at the free end. An MRAC is designed with the objective of controlling the torquing actuator so that the tip position follows a trajectory that is prescribed by the reference model. An appealing feature of this direct MRAC law is that it allows the reference model to have fewer states than the plant itself. Direct adaptive control also adjusts the controller parameters directly with knowledge of only the plant output and input signals.

  7. Validation and quantification of [18F]altanserin binding in the rat brain using blood input and reference tissue modeling

    PubMed Central

    Riss, Patrick J; Hong, Young T; Williamson, David; Caprioli, Daniele; Sitnikov, Sergey; Ferrari, Valentina; Sawiak, Steve J; Baron, Jean-Claude; Dalley, Jeffrey W; Fryer, Tim D; Aigbirhio, Franklin I

    2011-01-01

    The 5-hydroxytryptamine type 2a (5-HT2A) selective radiotracer [18F]altanserin has been subjected to a quantitative micro-positron emission tomography study in Lister Hooded rats. Metabolite-corrected plasma input modeling was compared with reference tissue modeling using the cerebellum as reference tissue. [18F]altanserin showed sufficient brain uptake in a distribution pattern consistent with the known distribution of 5-HT2A receptors. Full binding saturation and displacement was documented, and no significant uptake of radioactive metabolites was detected in the brain. Blood input as well as reference tissue models were equally appropriate to describe the radiotracer kinetics. [18F]altanserin is suitable for quantification of 5-HT2A receptor availability in rats. PMID:21750562

  8. Deep supervised dictionary learning for no-reference image quality assessment

    NASA Astrophysics Data System (ADS)

    Huang, Yuge; Liu, Xuesong; Tian, Xiang; Zhou, Fan; Chen, Yaowu; Jiang, Rongxin

    2018-03-01

    We propose a deep convolutional neural network (CNN) for general no-reference image quality assessment (NR-IQA), i.e., accurate prediction of image quality without a reference image. The proposed model consists of three components: a local feature extractor that is a fully convolutional network; an encoding module with an inherent dictionary that aggregates local features into a fixed-length global quality-aware image representation; and a regression module that maps the representation to an image quality score. Our model can be trained in an end-to-end manner, and all of the parameters, including the weights of the convolutional layers, the dictionary, and the regression weights, are simultaneously learned from the loss function. In addition, the model can predict quality scores for input images of arbitrary sizes in a single step. We tested our method on commonly used image quality databases and showed that its performance is comparable with that of state-of-the-art general-purpose NR-IQA algorithms.

  9. Data-driven model reference control of MIMO vertical tank systems with model-free VRFT and Q-Learning.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian

    2018-02-01

    This paper proposes a combined Virtual Reference Feedback Tuning-Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and it is referred to as a mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach. Learning convergence of the Q-learning schemes generally depends, among other settings, on the efficient exploration of the state-action space. Handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can ensure an initial stabilizing controller to be learned from few input-output data and it can be next used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. This data is used to learn significantly superior nonlinear state feedback neural networks controllers for model reference tracking, using the proposed Batch Fitted Q-learning iterative tuning strategy, motivating the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach is experimentally validated for water level control of a multi-input multi-output nonlinear constrained coupled two-tank system. Discussions on the observed control behavior are offered. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Functional correlates of the lateral and medial entorhinal cortex: objects, path integration and local-global reference frames.

    PubMed

    Knierim, James J; Neunuebel, Joshua P; Deshmukh, Sachin S

    2014-02-05

    The hippocampus receives its major cortical input from the medial entorhinal cortex (MEC) and the lateral entorhinal cortex (LEC). It is commonly believed that the MEC provides spatial input to the hippocampus, whereas the LEC provides non-spatial input. We review new data which suggest that this simple dichotomy between 'where' versus 'what' needs revision. We propose a refinement of this model, which is more complex than the simple spatial-non-spatial dichotomy. MEC is proposed to be involved in path integration computations based on a global frame of reference, primarily using internally generated, self-motion cues and external input about environmental boundaries and scenes; it provides the hippocampus with a coordinate system that underlies the spatial context of an experience. LEC is proposed to process information about individual items and locations based on a local frame of reference, primarily using external sensory input; it provides the hippocampus with information about the content of an experience.

  11. Functional correlates of the lateral and medial entorhinal cortex: objects, path integration and local–global reference frames

    PubMed Central

    Knierim, James J.; Neunuebel, Joshua P.; Deshmukh, Sachin S.

    2014-01-01

    The hippocampus receives its major cortical input from the medial entorhinal cortex (MEC) and the lateral entorhinal cortex (LEC). It is commonly believed that the MEC provides spatial input to the hippocampus, whereas the LEC provides non-spatial input. We review new data which suggest that this simple dichotomy between ‘where’ versus ‘what’ needs revision. We propose a refinement of this model, which is more complex than the simple spatial–non-spatial dichotomy. MEC is proposed to be involved in path integration computations based on a global frame of reference, primarily using internally generated, self-motion cues and external input about environmental boundaries and scenes; it provides the hippocampus with a coordinate system that underlies the spatial context of an experience. LEC is proposed to process information about individual items and locations based on a local frame of reference, primarily using external sensory input; it provides the hippocampus with information about the content of an experience. PMID:24366146

  12. Variable frequency microprocessor clock generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branson, C.N.

    A microprocessor-based system is described comprising: a digital central microprocessor provided with a clock input and having a rate of operation determined by the frequency of a clock signal input thereto; memory means operably coupled to the central microprocessor for storing programs respectively including a plurality of instructions and addressable by the central microprocessor; a first peripheral device operably connected to the central microprocessor, the first peripheral device being addressable by the central microprocessor for control thereby; a system clock generator for generating a digital reference clock signal having a reference frequency rate; and frequency rate reduction circuit means connected between the clock generator and the clock input of the central microprocessor for selectively dividing the reference clock signal to generate a microprocessor clock signal as an input to the central microprocessor for clocking the central microprocessor.
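    Functionally, the claimed frequency rate reduction circuit is a selectable divide-by-N stage between the reference clock and the CPU clock input. A minimal behavioral sketch (the edge indices and divisor are arbitrary):

```python
def divide_clock(ref_edges, n):
    # pass every n-th reference clock edge through as a microprocessor clock edge
    return [edge for i, edge in enumerate(ref_edges) if i % n == 0]

ref = list(range(12))           # reference clock edge indices
cpu = divide_clock(ref, 4)      # CPU clocked at one quarter of the reference rate
```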

  13. Patient-specific pharmacokinetic parameter estimation on dynamic contrast-enhanced MRI of prostate: Preliminary evaluation of a novel AIF-free estimation method.

    PubMed

    Ginsburg, Shoshana B; Taimen, Pekka; Merisaari, Harri; Vainio, Paula; Boström, Peter J; Aronen, Hannu J; Jambor, Ivan; Madabhushi, Anant

    2016-12-01

    To develop and evaluate a prostate-based method (PBM) for estimating pharmacokinetic parameters on dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) by leveraging inherent differences in pharmacokinetic characteristics between the peripheral zone (PZ) and transition zone (TZ). This retrospective study, approved by the Institutional Review Board, included 40 patients who underwent a multiparametric 3T MRI examination and subsequent radical prostatectomy. A two-step PBM for estimating pharmacokinetic parameters exploited the inherent differences in pharmacokinetic characteristics associated with the TZ and PZ. First, the reference region model was implemented to estimate ratios of Ktrans between normal TZ and PZ. Subsequently, the reference region model was leveraged again to estimate values for Ktrans and ve for every prostate voxel. The parameters of PBM were compared with those estimated using an arterial input function (AIF) derived from the femoral arteries. The ability of the parameters to differentiate prostate cancer (PCa) from benign tissue was evaluated on a voxel and lesion level. Additionally, the effect of temporal downsampling of the DCE MRI data was assessed. Significant differences (P < 0.05) in PBM Ktrans between PCa lesions and benign tissue were found in 26/27 patients with TZ lesions and in 33/38 patients with PZ lesions; significant differences in AIF-based Ktrans occurred in 26/27 and 30/38 patients, respectively. The 75th and 100th percentiles of Ktrans and ve estimated using PBM positively correlated with lesion size (P < 0.05). Pharmacokinetic parameters estimated via PBM outperformed AIF-based parameters in PCa detection. J. Magn. Reson. Imaging 2016;44:1405-1414. © 2016 International Society for Magnetic Resonance in Medicine.
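    The reference region model used in both estimation steps admits a linearised form that can be fitted voxel-wise by ordinary least squares, which is what makes AIF-free estimation tractable. The sketch below is self-consistent rather than physiological: it generates a synthetic tissue curve from an assumed reference-tissue curve under one common linearisation (three lumped coefficients, the first playing the role of the Ktrans ratio) and recovers the coefficients; the curve shapes and coefficient values are illustrative assumptions, not the paper's model.

```python
import numpy as np

def cumtrap(y, t):
    # running trapezoidal integral of y(t)
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * np.diff(t) * (y[1:] + y[:-1]))
    return out

t = np.linspace(0.0, 5.0, 501)                # minutes
c_rr = (t / 0.5) * np.exp(1.0 - t / 0.5)      # synthetic reference-tissue curve
R_true, b_true, c_true = 1.8, 0.6, -0.4       # first coefficient = Ktrans ratio (assumed form)

# generate C_TOI consistent with the linearised reference-region relation
# C_TOI = R*C_RR + b*int(C_RR) + c*int(C_TOI), via implicit trapezoid steps
c_toi = np.zeros_like(t)
i_rr = cumtrap(c_rr, t)
i_toi = 0.0
for i in range(1, len(t)):
    dt = t[i] - t[i - 1]
    rhs = R_true * c_rr[i] + b_true * i_rr[i] + c_true * (i_toi + 0.5 * dt * c_toi[i - 1])
    c_toi[i] = rhs / (1.0 - 0.5 * dt * c_true)
    i_toi += 0.5 * dt * (c_toi[i - 1] + c_toi[i])

# fit the three lumped coefficients by linear least squares
Amat = np.column_stack([c_rr, cumtrap(c_rr, t), cumtrap(c_toi, t)])
coef, *_ = np.linalg.lstsq(Amat, c_toi, rcond=None)
```

    Because generation and fitting use the same trapezoid integrals, the least-squares solution recovers the assumed coefficients essentially exactly, which is the property the voxel-wise fit relies on.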

  14. Patient-Specific Pharmacokinetic Parameter Estimation on Dynamic Contrast-Enhanced MRI of Prostate: Preliminary Evaluation of a Novel AIF-Free Estimation Method

    PubMed Central

    Ginsburg, Shoshana B.; Taimen, Pekka; Merisaari, Harri; Vainio, Paula; Boström, Peter J.; Aronen, Hannu J.; Jambor, Ivan; Madabhushi, Anant

    2017-01-01

    Purpose: To develop and evaluate a prostate-based method (PBM) for estimating pharmacokinetic parameters on dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) by leveraging inherent differences in pharmacokinetic characteristics between the peripheral zone (PZ) and transition zone (TZ). Materials and Methods: This retrospective study, approved by the Institutional Review Board, included 40 patients who underwent a multiparametric 3T MRI examination and subsequent radical prostatectomy. A two-step PBM for estimating pharmacokinetic parameters exploited the inherent differences in pharmacokinetic characteristics associated with the TZ and PZ. First, the reference region model was implemented to estimate ratios of Ktrans between normal TZ and PZ. Subsequently, the reference region model was leveraged again to estimate values for Ktrans and ve for every prostate voxel. The parameters of PBM were compared with those estimated using an arterial input function (AIF) derived from the femoral arteries. The ability of the parameters to differentiate prostate cancer (PCa) from benign tissue was evaluated on a voxel and lesion level. Additionally, the effect of temporal downsampling of the DCE MRI data was assessed. Results: Significant differences (P < 0.05) in PBM Ktrans between PCa lesions and benign tissue were found in 26/27 patients with TZ lesions and in 33/38 patients with PZ lesions; significant differences in AIF-based Ktrans occurred in 26/27 and 30/38 patients, respectively. The 75th and 100th percentiles of Ktrans and ve estimated using PBM positively correlated with lesion size (P < 0.05). Conclusion: Pharmacokinetic parameters estimated via PBM outperformed AIF-based parameters in PCa detection. PMID:27285161

  15. Modeling uncertainty: quicksand for water temperature modeling

    USGS Publications Warehouse

    Bartholow, John M.

    2003-01-01

    Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.

  16. Track/train dynamics test report transfer function test. Volume 1: Test

    NASA Technical Reports Server (NTRS)

    Vigil, R. A.

    1975-01-01

    A description is presented of the transfer function test performed on an open hopper freight car loaded with 80 tons of coal. Test data and a post-test update of the requirements document and test procedure are presented. Included are a statement of the test objective, a description of the test configurations, test facilities, test methods, data acquisition/reduction operations, and a chronological test summary. An index to the data for the three test configurations (X, Y, and Z-axis tests) is presented along with test sequence, run number, test reference, and input parameters.

  17. Uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model at multiple flux tower sites

    USGS Publications Warehouse

    Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.

    2016-01-01

    Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements on water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as input land surface temperature for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. 
This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting that the systematic error or bias of the SSEBop model is within the normal range. This finding implies that the simplified parameterization of the SSEBop model did not significantly affect the accuracy of the ET estimates while increasing the ease of model setup for operational applications. The sensitivity analysis indicated that the SSEBop model is most sensitive to the input variables land surface temperature (LST) and reference ET (ETo), and to the parameters differential temperature (dT) and maximum ET scalar (Kmax), particularly during the non-growing season and in dry areas. In summary, the uncertainty assessment verifies that the SSEBop model is a reliable and robust method for large-area ET estimation. The SSEBop model estimates can be further improved by reducing errors in two input variables (ETo and LST) and two key parameters (Kmax and dT).
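    The SSEBop inputs and parameters named above (LST, ETo, dT, Kmax) combine, in one commonly cited form of the model, into a clamped linear ET fraction between a cold bound Tc and a hot bound Tc + dT. The sketch below uses assumed temperatures and the usual scaling structure, not the study's calibrated values:

```python
import numpy as np

def ssebop_eta(lst_k, tc_k, dt_k, eto_mm, kmax=1.25):
    # ET fraction from the hot/cold bounds: Th = Tc + dT, ETf = (Th - LST) / dT
    etf = np.clip((tc_k + dt_k - lst_k) / dt_k, 0.0, 1.0)
    # actual ET scales the reference ET by ETf and the maximum ET scalar Kmax
    return etf * kmax * eto_mm

# a pixel at the cold bound evaporates at the maximum rate; at the hot bound, zero
et_cold = ssebop_eta(300.0, 300.0, 20.0, 5.0)   # LST == Tc
et_hot = ssebop_eta(320.0, 300.0, 20.0, 5.0)    # LST == Tc + dT
```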

  18. Net thrust calculation sensitivity of an afterburning turbofan engine to variations in input parameters

    NASA Technical Reports Server (NTRS)

    Hughes, D. L.; Ray, R. J.; Walton, J. T.

    1985-01-01

    The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
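    The influence-coefficient procedure described here can be sketched directly: perturb each input by 1.0 percent, record the percent change in computed thrust, then root-sum-square the products of influence coefficients and measurement accuracies to get an overall accuracy estimate. The thrust relation and accuracy figures below are hypothetical placeholders, not the F404 calculation:

```python
import numpy as np

def net_thrust(p):
    # hypothetical net-thrust relation; names and exponents are illustrative only
    return p["wf"] ** 0.8 * p["p_t8"] ** 0.5 * (1.0 - 0.1 * p["t_amb"] / 288.15)

nominal = {"wf": 2.0, "p_t8": 150.0, "t_amb": 288.15}
f0 = net_thrust(nominal)

# influence coefficient: percent change in thrust per 1 percent change in input
ic = {}
for k, v in nominal.items():
    pert = dict(nominal)
    pert[k] = v * 1.01
    ic[k] = (net_thrust(pert) - f0) / f0 / 0.01

# overall accuracy: root-sum-square of (influence coefficient x accuracy in %)
acc = {"wf": 0.5, "p_t8": 0.25, "t_amb": 0.2}   # assumed 1-sigma accuracies, percent
overall = np.sqrt(sum((ic[k] * acc[k]) ** 2 for k in nominal))
```

    For a power-law dependence the influence coefficient approximates the exponent, which is a handy sanity check on the perturbation bookkeeping.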

  19. A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves

    NASA Astrophysics Data System (ADS)

    Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.

    2012-04-01

    The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which are the input data that most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually conducted with a Monte Carlo approach which is computationally very demanding for non-linear complex models. 
Therefore, the global sensitivity index has been computed using a development of the model outputs in a neighborhood of the reference parameter values with a second-order approximation. The comparison of the three sensitivity indices proved that the approximation of the non-linear model with a second-order expansion is sufficient to show some differences between the local and the global indices. As a general result, the sensitivity analysis showed that most of the model outcomes are mainly sensitive to the present-day surface temperature and accumulation, which, in principle, can be measured more easily (e.g., with remote sensing techniques) than the other input parameters considered. On the other hand, the parameters to which the model is least sensitive are the basal sliding coefficient and the mean ice shelf viscosity.
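    The two kinds of indices can be sketched with finite differences: local indices from the gradient at the reference parameter values, and a global variance-based approximation from a second-order expansion (evaluated here for independent Gaussian inputs). The toy response function and input spreads are assumptions, not the ice sheet model:

```python
import numpy as np

def model(p):
    # toy two-input response, nonlinear in both inputs (illustrative only)
    acc, temp = p
    return acc ** 1.5 * np.exp(-0.1 * temp)

p0 = np.array([2.0, 5.0])      # reference parameter values
sig = np.array([0.2, 1.0])     # input standard deviations
h = 1e-4

# local sensitivity indices: finite-difference gradient at the reference point
grad = np.zeros(2)
for i in range(2):
    dp = np.zeros(2); dp[i] = h
    grad[i] = (model(p0 + dp) - model(p0 - dp)) / (2 * h)
local = np.abs(grad) * sig     # first-order contribution to output spread

# Hessian terms for a global, variance-based second-order approximation
H = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        di = np.zeros(2); di[i] = h
        dj = np.zeros(2); dj[j] = h
        H[i, j] = (model(p0 + di + dj) - model(p0 + di - dj)
                   - model(p0 - di + dj) + model(p0 - di - dj)) / (4 * h * h)

# Gaussian-input variance of the second-order expansion (includes joint terms)
var2 = np.sum(grad ** 2 * sig ** 2) + 0.5 * np.sum((H * np.outer(sig, sig)) ** 2)
```

    The first-order term alone is the linearised (local) picture; the Hessian term is what the second-order global index adds, including joint effects of parameter pairs.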

  20. KB3D Reference Manual. Version 1.a

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Siminiceanu, Radu; Carreno, Victor A.; Dowek, Gilles

    2005-01-01

    This paper is a reference manual describing the implementation of the KB3D conflict detection and resolution algorithm. The algorithm has been implemented in the Java and C++ programming languages. The reference manual gives a short overview of the detection and resolution functions, the structural implementation of the program, inputs and outputs to the program, and describes how the program is used. Inputs to the program can be rectangular coordinates or geodesic coordinates. The reference manual also gives examples of conflict scenarios and the resolution outputs the program produces.

  1. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    NASA Astrophysics Data System (ADS)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. 
While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
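    The core computation behind Fisher-information input shaping is: simulate the output sensitivities to each parameter along a candidate input, stack them into the Fisher information matrix, and rank candidate inputs by its log-determinant. The sketch below does this for an assumed first-order RC equivalent-circuit model and shows that a current probe near the RC corner frequency is more informative than one far above it; the model and parameter values are illustrative, not the dissertation's cells:

```python
import numpy as np

def simulate(theta, u, dt=1.0):
    # first-order RC equivalent-circuit terminal voltage (illustrative model)
    r0, r1, c1 = theta
    v1, out = 0.0, []
    for i_k in u:
        v1 += dt * (i_k / c1 - v1 / (r1 * c1))
        out.append(r0 * i_k + v1)
    return np.array(out)

def log_det_fim(theta, u, sigma=0.01):
    # finite-difference output sensitivities stacked into the Fisher matrix
    S = []
    for i in range(len(theta)):
        d = np.zeros(len(theta)); d[i] = 1e-6 * theta[i]
        S.append((simulate(theta + d, u) - simulate(theta - d, u)) / (2 * d[i]))
    S = np.array(S).T
    return np.linalg.slogdet(S.T @ S / sigma ** 2)[1]

theta = np.array([0.05, 0.02, 2000.0])    # r0 [ohm], r1 [ohm], c1 [F] -- assumed
t = np.arange(400.0)
u_fast = np.sin(2.0 * t)                  # well above the RC corner frequency
u_slow = np.sin(0.02 * t)                 # near the corner: excites r1 and c1
```

    Ranking candidate trajectories by `log_det_fim` is the D-optimality criterion that the input-shaping optimizations in the dissertation build on.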

  2. Hearing aid malfunction detection system

    NASA Technical Reports Server (NTRS)

    Kessinger, R. L. (Inventor)

    1977-01-01

    A malfunction detection system for detecting malfunctions in electrical signal processing circuits is disclosed. Malfunctions of a hearing aid in the form of frequency distortion and/or inadequate amplification by the hearing aid amplifier, as well as weakening of the hearing aid power supply are detectable. A test signal is generated and a timed switching circuit periodically applies the test signal to the input of the hearing aid amplifier in place of the input signal from the microphone. The resulting amplifier output is compared with the input test signal used as a reference signal. The hearing aid battery voltage is also periodically compared to a reference voltage. Deviations from the references beyond preset limits cause a warning system to operate.
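    The periodic self-test amounts to two comparisons against references with preset limits: the measured amplifier gain against a reference gain, and the battery voltage against a reference voltage. A minimal sketch with assumed limit values:

```python
def check_hearing_aid(test_in_rms, amp_out_rms, battery_v,
                      gain_ref=20.0, gain_tol=0.15, v_ref=1.4, v_min_frac=0.9):
    # compare measured gain and battery voltage against reference limits
    gain = amp_out_rms / test_in_rms
    warnings = []
    if abs(gain - gain_ref) > gain_tol * gain_ref:
        warnings.append("gain out of limits")
    if battery_v < v_min_frac * v_ref:
        warnings.append("battery low")
    return warnings
```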

  3. Wavelength meter having single mode fiber optics multiplexed inputs

    DOEpatents

    Hackel, R.P.; Paris, R.D.; Feldman, M.

    1993-02-23

    A wavelength meter having a single mode fiber optics input is disclosed. The single mode fiber enables a plurality of laser beams to be multiplexed to form a multiplexed input to the wavelength meter. The wavelength meter can provide a determination of the wavelength of any one or all of the plurality of laser beams by suitable processing. Another aspect of the present invention is that one of the laser beams could be a known reference laser having a predetermined wavelength. Hence, the improved wavelength meter can provide an on-line calibration capability with the reference laser input as one of the plurality of laser beams.

  4. Wavelength meter having single mode fiber optics multiplexed inputs

    DOEpatents

    Hackel, Richard P.; Paris, Robert D.; Feldman, Mark

    1993-01-01

    A wavelength meter having a single mode fiber optics input is disclosed. The single mode fiber enables a plurality of laser beams to be multiplexed to form a multiplexed input to the wavelength meter. The wavelength meter can provide a determination of the wavelength of any one or all of the plurality of laser beams by suitable processing. Another aspect of the present invention is that one of the laser beams could be a known reference laser having a predetermined wavelength. Hence, the improved wavelength meter can provide an on-line calibration capability with the reference laser input as one of the plurality of laser beams.

  5. Evaluation of severe accident risks: Quantification of major input parameters: MAACS (MELCOR Accident Consequence Code System) input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprung, J.L.; Jow, H-N; Rollstin, J.A.

    1990-12-01

    Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.

  6. Initial Navigation Alignment of Optical Instruments on GOES-R

    NASA Astrophysics Data System (ADS)

    Isaacson, P.; DeLuccia, F.; Reth, A. D.; Igli, D. A.; Carter, D.

    2016-12-01

    The GOES-R satellite is the first in NOAA's next-generation series of geostationary weather satellites. In addition to a number of space weather sensors, it will carry two principal optical earth-observing instruments, the Advanced Baseline Imager (ABI) and the Geostationary Lightning Mapper (GLM). During launch, currently scheduled for November of 2016, the alignment of these optical instruments is anticipated to shift from that measured during pre-launch characterization. While both instruments have image navigation and registration (INR) processing algorithms to enable automated geolocation of the collected data, the launch-derived misalignment may be too large for these approaches to function without an initial adjustment to calibration parameters. The parameters that may require adjustment are for Line of Sight Motion Compensation (LMC), and the adjustments will be estimated on orbit during the post-launch test (PLT) phase. We have developed approaches to estimate the initial alignment errors for both ABI and GLM image products. Our approaches involve comparison of ABI and GLM images collected during PLT to a set of reference ("truth") images using custom image processing tools and other software (the INR Performance Assessment Tool Set, or "IPATS") being developed for other INR assessments of ABI and GLM data. IPATS is based on image correlation approaches to determine offsets between input and reference images, and these offsets are the fundamental input to our estimate of the initial alignment errors. Initial testing of our alignment algorithms on proxy datasets lends high confidence that their application will determine the initial alignment errors to within sufficient accuracy to enable the operational INR processing approaches to proceed in a nominal fashion. We will report on the algorithms, implementation approach, and status of these initial alignment tools being developed for the GOES-R ABI and GLM instruments.
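    The IPATS correlation step, reduced to its essentials, locates the peak of the cross-correlation between an input image and a reference ("truth") image; the peak position is the (row, column) offset that feeds the alignment estimate. A sketch using FFT-based circular correlation on synthetic data (IPATS's actual matching is more elaborate and sub-pixel):

```python
import numpy as np

def image_offset(ref, img):
    # circular cross-correlation via FFT; peak location gives the (row, col) shift
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts past the array midpoint wrap around to negative offsets
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
truth = rng.random((64, 64))                          # reference image
observed = np.roll(truth, shift=(3, -5), axis=(0, 1))  # misaligned input image
```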

  7. Insolation-oriented model of photovoltaic module using Matlab/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, Huan-Liang

    2010-07-15

    This paper presents a novel model of a photovoltaic (PV) module which is implemented and analyzed using the Matlab/Simulink software package. Taking into account the effect of sunlight irradiance on the cell temperature, the proposed model takes ambient temperature as reference input and uses the solar insolation as a unique varying parameter. The cell temperature is then explicitly affected by the sunlight intensity. The output current and power characteristics are simulated and analyzed using the proposed PV model. The model has been verified through experimental measurement. The impact of solar irradiation on cell temperature makes the output characteristic more practical. In addition, the insolation-oriented PV model enables the dynamics of the PV power system to be analyzed and optimized more easily by applying the environmental parameters of ambient temperature and solar irradiance.
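    The ambient-temperature-to-cell-temperature step can be sketched with the standard NOCT relation, followed by a first-order temperature correction of the output current; the module constants below are generic assumptions, not the values in the paper's Simulink model:

```python
def cell_temp(t_amb_c, g_wm2, noct_c=45.0):
    # NOCT relation: cell temperature from ambient temperature and insolation
    return t_amb_c + (noct_c - 20.0) / 800.0 * g_wm2

def pv_current(g_wm2, t_amb_c, isc_stc=8.0, alpha=0.0005):
    # short-circuit current scaled by insolation, corrected for cell temperature
    tc = cell_temp(t_amb_c, g_wm2)
    return isc_stc * (g_wm2 / 1000.0) * (1.0 + alpha * (tc - 25.0))
```

    This is the sense in which ambient temperature is the reference input and insolation the single varying parameter: the cell temperature, and through it the output current, follows from the two together.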

  8. Using model order tests to determine sensory inputs in a motion study

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1977-01-01

    In the study of motion effects on tracking performance, a problem of interest is the determination of what sensory inputs a human uses in controlling his tracking task. In the approach presented here a simple canonical model (PID, or proportional, integral, derivative structure) is used to model the human's input-output time series. A study of significant changes in reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives and integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters are obtained which have the greatest effect on reducing the loss function significantly. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
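    The test itself reduces to comparing residual loss across nested least-squares fits: if adding a regressor (a candidate sensory input) barely reduces the loss, that input is judged unused. A synthetic sketch in which the simulated operator uses the error and its derivative but not its integral:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
e = np.sin(t) + 0.3 * np.sin(3.7 * t)               # error signal
de = np.gradient(e, t)                              # derivative cue
ie = np.concatenate([[0.0], np.cumsum(0.5 * np.diff(t) * (e[1:] + e[:-1]))])  # integral cue

# synthetic operator output: uses error and derivative but NOT the integral
y = 1.2 * e + 0.4 * de + 0.05 * rng.standard_normal(len(t))

def loss(cols):
    # residual sum of squares of the least-squares fit on the given regressors
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

l_p = loss([e])            # proportional only
l_pd = loss([e, de])       # add derivative
l_pid = loss([e, de, ie])  # add integral
```

    The derivative term produces a large drop in loss; the integral term produces a negligible one, so the procedure would conclude the derivative cue is used and the integral cue is not.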

  9. Modal Parameter Identification of a Flexible Arm System

    NASA Technical Reports Server (NTRS)

    Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard

    1998-01-01

    In this paper an experiment is designed for the modal parameter identification of a flexible arm system. This experiment uses a function generator to provide input signal and an oscilloscope to save input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of this mode. Then a least-squares technique is used to analyze the experimental input/output data to obtain the identified parameters for this mode. The identified results are compared with the analytical model obtained by applying finite element analysis.
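    The per-mode estimation step is a linear least-squares fit of the steady-state response at each drive frequency onto sine and cosine regressors, from which amplitude and phase follow. A sketch on a noiseless synthetic response (true amplitude 0.7, phase 0.5 rad, both assumed for illustration):

```python
import numpy as np

def sine_fit(t, y, w):
    # least-squares fit y ~ a*sin(wt) + b*cos(wt) at a known drive frequency w
    X = np.column_stack([np.sin(w * t), np.cos(w * t)])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)     # amplitude and phase

t = np.linspace(0, 20, 2000)
w = 3.0
y = 0.7 * np.sin(w * t + 0.5)                   # simulated steady-state response
amp, ph = sine_fit(t, y, w)
```

    Repeating the fit at several frequencies close to resonance gives the amplitude/phase data from which the modal parameters are identified.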

  10. Certification Testing Methodology for Composite Structure. Volume 2. Methodology Development

    DTIC Science & Technology

    1986-10-01

    parameter, sample size and fatigue test duration. The required inputs are: (1) residual strength Weibull shape parameter (ALPR), (2) fatigue life Weibull shape parameter (ALPL), (3) sample size (N), and (4) fatigue test duration (T). The recovered fragment of the interactive FORTRAN input routine reads:

        1 FORMAT(2X,'PLEASE INPUT STRENGTH ALPHA')
          READ(*,*) ALPR
          ALPRI = 1.0/ALPR
          WRITE(*,2)
        2 FORMAT(2X,'PLEASE INPUT LIFE ALPHA')
          READ(*,*) ALPL
          ALPLI = 1.0/ALPL
          WRITE(*,3)
        3 FORMAT(2X,'PLEASE INPUT SAMPLE SIZE')
          READ(*,*) N
          AN = N
          WRITE(*,4)
        4 FORMAT(2X,'PLEASE INPUT TEST DURATION')
          READ(*,*) T
          RALP = ALPL/ALPR
          ARGR = 1

  11. User's manual for a parameter identification technique. [with options for model simulation for fixed input forcing functions and identification from wind tunnel and flight measurements

    NASA Technical Reports Server (NTRS)

    Kanning, G.

    1975-01-01

    A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.

  12. Multi-Response Optimization of WEDM Process Parameters Using Taguchi Based Desirability Function Analysis

    NASA Astrophysics Data System (ADS)

    Majumder, Himadri; Maity, Kalipada

    2018-03-01

    Shape memory alloy has a unique capability to return to its original shape after physical deformation by applying heat, thermo-mechanical, or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also attained using Taguchi’s signal-to-noise ratio. A confirmation test was performed to validate the optimum machining parameter combination, affirming that DFA is a competent approach for selecting optimum input parameters for the desired response quality in WEDM of Ni-Ti shape memory alloy.
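    Desirability function analysis maps each response onto [0, 1] (larger-the-better for cutting speed, smaller-the-better for kerf width and surface roughness) and ranks parameter settings by the geometric mean of the individual desirabilities. The response values and bounds below are illustrative, not the paper's measurements:

```python
import numpy as np

def d_larger(y, lo, hi):
    # larger-the-better desirability (e.g. cutting speed)
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def d_smaller(y, lo, hi):
    # smaller-the-better desirability (e.g. kerf width, surface roughness)
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

def composite(ds):
    # composite desirability: geometric mean of the individual desirabilities
    return float(np.prod(ds) ** (1.0 / len(ds)))

# (cutting speed, kerf width, roughness) for two hypothetical parameter settings
runs = {"A": (2.1, 0.32, 1.9), "B": (2.6, 0.30, 1.6)}
score = {k: composite([d_larger(cs, 1.5, 3.0),
                       d_smaller(kw, 0.25, 0.40),
                       d_smaller(ra, 1.0, 2.5)])
         for k, (cs, kw, ra) in runs.items()}
```

    The setting with the highest composite score is selected as the optimum input parameter combination.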

  13. SPM analysis of parametric (R)-[11C]PK11195 binding images: plasma input versus reference tissue parametric methods.

    PubMed

    Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald

    2007-05-01

(R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).
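The linearised plasma-input (Logan) method mentioned above can be sketched for a one-tissue compartment model; the rate constants and plasma curve below are hypothetical, chosen only to show the late-time slope recovering the volume of distribution:

```python
import numpy as np

# One-tissue compartment model dCt/dt = K1*Cp - k2*Ct has total volume of
# distribution VT = K1/k2; all rate constants here are hypothetical.
K1, k2 = 0.1, 0.05                  # 1/min
t = np.linspace(0.0, 600.0, 6001)   # minutes
dt = t[1] - t[0]
Cp = np.exp(-0.05 * t)              # idealised metabolite-corrected plasma input

Ct = np.zeros_like(t)               # tissue time-activity curve (Euler integration)
for i in range(1, t.size):
    Ct[i] = Ct[i - 1] + dt * (K1 * Cp[i - 1] - k2 * Ct[i - 1])

# Logan plot: int(Ct)/Ct versus int(Cp)/Ct becomes linear with slope VT
int_Ct = np.cumsum(Ct) * dt
int_Cp = np.cumsum(Cp) * dt
late = (t > 200.0) & (t < 400.0)    # late-time linear portion
slope = np.polyfit(int_Cp[late] / Ct[late], int_Ct[late] / Ct[late], 1)[0]
```

For a one-tissue model the Logan relation int(Ct)/Ct = VT·int(Cp)/Ct − 1/k2 is exact, so the regression slope should closely recover VT = K1/k2 = 2.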

  14. TAILSIM Users Guide

    NASA Technical Reports Server (NTRS)

    Hiltner, Dale W.

    2000-01-01

The TAILSIM program uses a 4th-order Runge-Kutta method to integrate the standard aircraft equations-of-motion (EOM). The EOM determine three translational and three rotational accelerations about the aircraft's body axis reference system. The forces and moments that drive the EOM are determined from aerodynamic coefficients, dynamic derivatives, and control inputs. Values for these terms are determined from linear interpolation of tables that are a function of parameters such as angle-of-attack and surface deflections. Buildup equations combine these terms and dimensionalize them to generate the driving total forces and moments. Features that make TAILSIM applicable to studies of tailplane stall include modeling of the reversible control system, modeling of the pilot performing a load factor and/or airspeed command task, and modeling of vertical gusts. The reversible control system dynamics can be described as two hinged masses connected by a spring, resulting in a fifth-order system. The pilot model is a standard lead-lag form with a time delay, applied to an integrated pitch rate and/or airspeed error feedback. The time delay is implemented by a Pade approximation, while the commanded pitch rate is determined by a commanded load factor. Vertical gust inputs include a single 1-cosine gust and a continuous NASA Dryden gust model. These dynamic models, coupled with the use of a nonlinear database, allow the tailplane stall characteristics, elevator response, and resulting aircraft response to be modeled. A useful output capability of the TAILSIM program is the ability to display multiple post-run plot pages to allow a quick assessment of the time history response. There are 16 plot pages currently available to the user. Each plot page displays 9 parameters; each parameter can also be displayed individually, in a one-plot-per-page format. For a more refined display of the results, the program can also create files of tabulated data, which can then be used by other plotting programs. The TAILSIM program was written straightforwardly, assuming the user would want to change the database tables, the buildup equations, the output parameters, and the pilot model parameters. A separate database file and input file are automatically read in by the program. The use of an include file to set up all common blocks facilitates easy changing of parameter names and array sizes.
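The 4th-order Runge-Kutta scheme at the core of TAILSIM can be sketched as follows; the two-state linear system and its matrix are purely illustrative stand-ins for the six-degree-of-freedom EOM and force buildup:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Hypothetical stable, oscillatory two-state system (not TAILSIM's buildup):
A = np.array([[-1.0, 1.0],
              [-4.0, -2.0]])
f = lambda t, y: A @ y

y = np.array([1.0, 0.0])   # initial perturbation
h = 0.01                   # step size, s
for _ in range(1000):      # integrate 10 seconds; perturbation decays
    y = rk4_step(f, 0.0, y, h)
```

A single step on y' = y with h = 0.1 reproduces e^0.1 to about 1e-7, the expected 5th-order local accuracy.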

  15. A flatness-based control approach to drug infusion for cardiac function regulation

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Zervos, Nikolaos; Melkikh, Alexey

    2016-12-01

A new control method based on differential flatness theory is developed in this article, aiming at solving the problem of regulation of haemodynamic parameters. Control of the cardiac output (the volume of blood pumped out by the heart per unit of time) and of the arterial blood pressure is achieved through the administered infusion of cardiovascular drugs, such as dopamine and sodium nitroprusside. Time delays between the control inputs and the system's outputs are taken into account. Using the principle of dynamic extension, whereby certain control inputs and their derivatives are treated as additional state variables, a state-space description of the heart's function is obtained. It is proven that the dynamic model of the heart is differentially flat. This enables its transformation into a linear canonical and decoupled form, for which the design of a stabilizing feedback controller becomes possible. The proposed feedback controller is of proven stability and assures fast and accurate tracking of the reference setpoints by the outputs of the heart's dynamic model. Moreover, by using a Kalman filter-based disturbance estimator, it becomes possible to estimate in real time, and compensate for, the model uncertainty and external perturbation inputs that affect the heart's model.

  16. Precision digital pulse phase generator

    DOEpatents

    McEwan, T.E.

    1996-10-08

    A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code. 2 figs.

  17. Precision digital pulse phase generator

    DOEpatents

    McEwan, Thomas E.

    1996-01-01

    A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code.

  18. Mars Global Reference Atmospheric Model (Mars-GRAM 3.34): Programmer's Guide

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; James, Bonnie F.; Johnson, Dale L.

    1996-01-01

    This is a programmer's guide for the Mars Global Reference Atmospheric Model (Mars-GRAM 3.34). Included are a brief history and review of the model since its origin in 1988 and a technical discussion of recent additions and modifications. Examples of how to run both the interactive and batch (subroutine) forms are presented. Instructions are provided on how to customize output of the model for various parameters of the Mars atmosphere. Detailed descriptions are given of the main driver programs, subroutines, and associated computational methods. Lists and descriptions include input, output, and local variables in the programs. These descriptions give a summary of program steps and 'map' of calling relationships among the subroutines. Definitions are provided for the variables passed between subroutines through common lists. Explanations are provided for all diagnostic and progress messages generated during execution of the program. A brief outline of future plans for Mars-GRAM is also presented.

  19. Genetic Adaptive Control for PZT Actuators

    NASA Technical Reports Server (NTRS)

    Kim, Jeongwook; Stover, Shelley K.; Madisetti, Vijay K.

    1995-01-01

A piezoelectric transducer (PZT) is capable of providing linear motion if controlled correctly and could provide a replacement for traditional heavy and large servo systems using motors. This paper focuses on a genetic model reference adaptive control technique (GMRAC) for a PZT moving a mirror, where the goal is to keep the mirror velocity constant. Genetic algorithms (GAs) are an integral part of the GMRAC technique, acting as the search engine for an optimal PID controller. Two methods are suggested to control the actuator in this research: the first is to change the PID parameters, and the second is to add an additional reference input to the system. The simulation results of these two methods are compared. Simulated annealing (SA) is also used to solve the problem, and its results are compared with those of the GAs. The GAs show the best results in the simulations. The entire model is designed using the MathWorks' Simulink tool.

  20. Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities

    NASA Astrophysics Data System (ADS)

    Baylin-Stern, Adam C.

This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage and move from higher- to lower-emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use such estimates to inform the computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters of transcendental logarithmic (translog) production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work in the estimation of ESUBs from CIMS. Keywords: elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.

  1. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters, and the model, are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner bases, and prove useful theoretical properties of these parameter combinations. We prove that a set of M algebraically independent identifiable parameter combinations can be found using this algorithm, and that there exists a unique rational reparameterization of the input-output equations in terms of these parameter combinations. We also demonstrate the application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
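The phenomenon the paper treats can be checked with sympy on a standard two-compartment example (this illustrates unidentifiability and identifiable combinations, not the authors' Gröbner-basis algorithm or their biomodel):

```python
import sympy as sp

# Two-compartment model with input and output in compartment 1:
#   x1' = -(k01 + k21) x1 + k12 x2 + u,   x2' = k21 x1 - (k02 + k12) x2,   y = x1
# Its input-output equation  y'' + a1 y' + a2 y = u' + b0 u  has coefficients:
k01, k21, k12, k02 = sp.symbols('k01 k21 k12 k02', positive=True)
a1 = (k01 + k21) + (k02 + k12)
a2 = (k01 + k21) * (k02 + k12) - k12 * k21
b0 = k02 + k12

# Two distinct parameter vectors that share the identifiable combinations
# k01+k21 = 3, k02+k12 = 7 and k12*k21 = 6:
s1 = {k01: 1, k21: 2, k12: 3, k02: 4}
s2 = {k01: 2, k21: 1, k12: 6, k02: 1}

coeffs1 = [expr.subs(s1) for expr in (a1, a2, b0)]
coeffs2 = [expr.subs(s2) for expr in (a1, a2, b0)]
# identical IO coefficients => the individual rate constants are unidentifiable
```

Both parameter sets generate identical input-output coefficients even though every individual rate constant differs, so only the combinations are identifiable.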

  2. Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface

    NASA Astrophysics Data System (ADS)

    Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.

    2016-12-01

Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters, and it is crucial to choose accurate input parameters that also preserve the corresponding physics being simulated in the model. In order to effectively simulate real-world processes, the model's output data must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, the error between the simulation model outputs and the observed measurements, is minimized. We developed an auxiliary package which serves as a Python interface between the user and Dakota. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with one minimum; otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence in order to find the optimal input parameters. We were able to recover six initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone.
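The calibration loop described above can be sketched with scipy.optimize standing in for Dakota and a toy two-layer steady-state conduction column standing in for the permafrost model (all coefficients and "observations" below are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-layer steady-state heat conduction column: unit layer thicknesses,
# fixed surface and bottom temperatures (all values hypothetical).
T_TOP, T_BOT = -10.0, 0.0

def forward(kappa):
    """Forward model: heat flux and interface temperature from conductivities."""
    k1, k2 = kappa
    q = (T_BOT - T_TOP) / (1.0 / k1 + 1.0 / k2)  # flux through the column
    t_int = T_TOP + q / k1                       # temperature at the layer interface
    return np.array([q, t_int])

observed = forward([0.5, 2.0])       # synthetic "measurements" from known truth

def objective(kappa):
    """Sum-of-squares misfit between model outputs and observations."""
    return float(np.sum((forward(kappa) - observed) ** 2))

res = minimize(objective, x0=[1.0, 1.0], method='Nelder-Mead')
```

Because this toy objective has a single minimum, a local simplex search suffices and recovers both conductivities; multi-minima objectives are what call for the global methods (genetic optimization, mesh-based convergence) the abstract mentions.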

  3. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2015-01-01

Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  4. Testing the robustness of optimal access vessel fleet selection for operation and maintenance of offshore wind farms

    DOE PAGES

    Sperstad, Iver Bakken; Stålhane, Magnus; Dinwoodie, Iain; ...

    2017-09-23

Optimising the operation and maintenance (O&M) and logistics strategy of offshore wind farms implies the decision problem of selecting the vessel fleet for O&M. Different strategic decision support tools can be applied to this problem, but much uncertainty remains regarding both input data and modelling assumptions. Our paper aims to investigate and ultimately reduce this uncertainty by comparing four simulation tools, one mathematical optimisation tool and one analytic spreadsheet-based tool applied to select the O&M access vessel fleet that minimises the total O&M cost of a reference wind farm. The comparison shows that the tools generally agree on the optimal vessel fleet, but only partially agree on the relative ranking of the different vessel fleets in terms of total O&M cost. The robustness of the vessel fleet selection to various input data assumptions was tested, and the ranking was found to be particularly sensitive to the vessels' limiting significant wave height for turbine access. As this was also the parameter with the greatest discrepancy between the tools, accurate quantification and modelling of this parameter is crucial. The ranking is moderately sensitive to turbine failure rates and vessel day rates but less sensitive to electricity price and vessel transit speed.

  5. Testing the robustness of optimal access vessel fleet selection for operation and maintenance of offshore wind farms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sperstad, Iver Bakken; Stålhane, Magnus; Dinwoodie, Iain

Optimising the operation and maintenance (O&M) and logistics strategy of offshore wind farms implies the decision problem of selecting the vessel fleet for O&M. Different strategic decision support tools can be applied to this problem, but much uncertainty remains regarding both input data and modelling assumptions. Our paper aims to investigate and ultimately reduce this uncertainty by comparing four simulation tools, one mathematical optimisation tool and one analytic spreadsheet-based tool applied to select the O&M access vessel fleet that minimises the total O&M cost of a reference wind farm. The comparison shows that the tools generally agree on the optimal vessel fleet, but only partially agree on the relative ranking of the different vessel fleets in terms of total O&M cost. The robustness of the vessel fleet selection to various input data assumptions was tested, and the ranking was found to be particularly sensitive to the vessels' limiting significant wave height for turbine access. As this was also the parameter with the greatest discrepancy between the tools, accurate quantification and modelling of this parameter is crucial. The ranking is moderately sensitive to turbine failure rates and vessel day rates but less sensitive to electricity price and vessel transit speed.

  6. Design framework for spherical microphone and loudspeaker arrays in a multiple-input multiple-output system.

    PubMed

    Morgenstern, Hai; Rafaely, Boaz; Noisternig, Markus

    2017-03-01

Spherical microphone arrays (SMAs) and spherical loudspeaker arrays (SLAs) facilitate the study of room acoustics due to the three-dimensional analysis they provide. More recently, systems that combine both arrays, referred to as multiple-input multiple-output (MIMO) systems, have been proposed due to the added spatial diversity they facilitate. The literature provides frameworks for designing SMAs and SLAs separately, including error analysis from which the operating frequency range (OFR) of an array is defined. However, no such framework exists for the joint design of an SMA and an SLA that comprise a MIMO system. This paper develops a design framework for MIMO systems based on a model that addresses errors and highlights the importance of a matched design. Expanding on a free-field assumption, errors are incorporated separately for each array and error bounds are defined, facilitating error analysis for the system. The dependence of the error bounds on the SLA and SMA parameters is studied, and it is recommended that parameters be chosen to ensure matched OFRs of the arrays in MIMO system design. A design example is provided, demonstrating the superiority of a matched system over an unmatched system in the synthesis of directional room impulse responses.

  7. Local Sensitivity of Predicted CO 2 Injectivity and Plume Extent to Model Inputs for the FutureGen 2.0 site

    DOE PAGES

    Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...

    2014-12-31

Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and the monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and to quantify their impacts. We introduce an approach based on the local sensitivity coefficient (LSC), defined as the response of the output in percent, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has larger impacts on the output. The LSC is scalable by the error of an input parameter, and the composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We applied this method to the FutureGen 2.0 site in Morgan County, Illinois, USA, to investigate the sensitivity of the input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions was investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs, and 3/4 of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbour layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and for guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
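One plausible reading of the local sensitivity coefficient, the percent response of an output to a small fractional perturbation of one input, can be sketched as a finite difference; the injectivity-like model and its values below are hypothetical, not the FutureGen inputs:

```python
import numpy as np

def local_sensitivity(model, x0, i, rel=0.01):
    """Percent change in model output for a `rel` fractional change in input i
    (one reading of the local sensitivity coefficient; a sketch, not the
    paper's exact formulation)."""
    x = np.array(x0, dtype=float)
    base = model(x)
    x[i] *= (1.0 + rel)
    return (model(x) - base) / base * 100.0

# Hypothetical injectivity-like output: permeability * thickness * overpressure
model = lambda x: x[0] * x[1] * x[2]
x0 = [1e-13, 20.0, 5e5]

lscs = [local_sensitivity(model, x0, i) for i in range(3)]
composite = sum(lscs)   # composite sensitivity = sum of individual LSCs
```

Each input enters the toy model linearly, so each LSC is 1% of output per 1% of input, and the composite over all three inputs is 3%.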

  8. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    NASA Astrophysics Data System (ADS)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows the joint estimation of unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and of a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating unknown FE model parameters and unknown input excitations.
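The deterministic sampling device that lets the unscented Kalman filter avoid response sensitivities can be illustrated with a one-dimensional unscented transform (standard Julier-Uhlmann sigma points; this is the generic mechanism, not the paper's FE application):

```python
import numpy as np

# Propagate N(0, 1) through the nonlinearity y = x**2 using sigma points.
n, kappa = 1, 2.0                    # classic choice: n + kappa = 3
mean, var = 0.0, 1.0
spread = np.sqrt(n + kappa) * np.sqrt(var)

sigma_pts = np.array([mean, mean + spread, mean - spread])
weights = np.array([kappa / (n + kappa),
                    0.5 / (n + kappa),
                    0.5 / (n + kappa)])

y = sigma_pts ** 2                   # nonlinear "measurement" function
y_mean = np.dot(weights, y)          # sigma-point estimate of E[x**2]
```

For this quadratic the sigma-point mean matches E[x²] = 1 exactly; capturing such moments through nonlinearities with a handful of deterministic samples is the property the UKF exploits in place of linearization and sensitivity computation.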

  9. A Data Model Framework for the Characterization of a Satellite Data Handling Software

    NASA Astrophysics Data System (ADS)

    Camatto, Gianluigi; Tipaldi, Massimo; Bothmer, Wolfgang; Ferraguto, Massimo; Bruenjes, Bernhard

    2014-08-01

    This paper describes an approach for the modelling of the characterization and configuration data yielded when developing a Satellite Data Handling Software (DHSW). The model can then be used as an input for the preparation of the logical and physical representation of the Satellite Reference Database (SRDB) contents and related SW suite, an essential product that allows transferring the information between the different system stakeholders, but also to produce part of the DHSW documentation and artefacts. Special attention is given to the shaping of the general Parameter concept, which is shared by a number of different entities within a Space System.

  10. Computer program documentation for the dynamic analysis of a noncontacting mechanical face seal

    NASA Technical Reports Server (NTRS)

    Auer, B. M.; Etsion, I.

    1980-01-01

    A computer program is presented which achieves a numerical solution for the equations of motion of a noncontacting mechanical face seal. The flexibly-mounted primary seal ring motion is expressed by a set of second order differential equations for three degrees of freedom. These equations are reduced to a set of first order equations and the GEAR software package is used to solve the set of first order equations. Program input includes seal design parameters and seal operating conditions. Output from the program includes velocities and displacements of the seal ring about the axis of an inertial reference system. One example problem is described.
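The reduction of second-order equations of motion to a first-order system, followed by a stiff integrator, can be sketched with SciPy; its BDF method is a descendant of the GEAR family used in the report, and the single axial mode with the coefficients below is hypothetical, far simpler than the seal's three degrees of freedom:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 2nd-order axial mode of a seal ring:  m z'' + c z' + k z = 0.
m, c, k = 1.0, 0.8, 25.0   # mass, damping, stiffness (illustrative units)

def rhs(t, y):
    """First-order form: state y = [z, z'], so y' = [z', -(c z' + k z)/m]."""
    z, zdot = y
    return [zdot, -(c * zdot + k * z) / m]

sol = solve_ivp(rhs, (0.0, 20.0), [1e-3, 0.0],
                method='BDF', rtol=1e-8, atol=1e-12)
final_z = float(sol.y[0, -1])   # underdamped motion decays toward zero
```

With damping ratio ≈ 0.08 the envelope decays as e^(-0.4 t), so after 20 s the displacement has dropped by several orders of magnitude.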

  11. KALREF—A Kalman filter and time series approach to the International Terrestrial Reference Frame realization

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoping; Abbondanza, Claudio; Altamimi, Zuheir; Chin, T. Mike; Collilieux, Xavier; Gross, Richard S.; Heflin, Michael B.; Jiang, Yan; Parker, Jay W.

    2015-05-01

The current International Terrestrial Reference Frame is based on a piecewise linear site motion model and realized by reference epoch coordinates and velocities for a global set of stations. Although linear motions due to tectonic plates and glacial isostatic adjustment dominate geodetic signals, at today's millimeter precisions, nonlinear motions due to earthquakes, volcanic activities, ice mass losses, sea level rise, hydrological changes, and other processes become significant. Monitoring these (sometimes rapid) changes calls for consistent and precise realization of the terrestrial reference frame (TRF) quasi-instantaneously. Here, we use a Kalman filter and smoother approach to combine time series from four space geodetic techniques to realize an experimental TRF through weekly time series of geocentric coordinates. In addition to secular, periodic, and stochastic components for station coordinates, the Kalman filter state variables also include daily Earth orientation parameters and transformation parameters from the input data frames to the combined TRF. Local tie measurements among colocated stations are used at their known or nominal epochs of observation, with comotion constraints applied to almost all colocated stations. The filter/smoother approach unifies different geodetic time series in a single geocentric frame. Fragmented and multitechnique tracking records at colocation sites are bridged together to form longer and coherent motion time series. While the time series approach to the TRF reflects the reality of a changing Earth more closely than the linear approximation model, the filter/smoother is computationally powerful and flexible enough to facilitate the incorporation of other data types and more advanced characterization of the stochastic behavior of geodetic time series.
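A toy version of the time-series idea is a Kalman filter tracking one station coordinate as secular (linear) motion observed weekly with noise; the rates and variances below are invented, and this is far simpler than KALREF's multi-technique, multi-parameter state:

```python
import numpy as np

# Synthetic weekly coordinate series: secular motion plus observation noise.
rng = np.random.default_rng(0)
weeks = np.arange(200, dtype=float)
truth_vel = 0.5                                   # mm/week, hypothetical
obs = 10.0 + truth_vel * weeks + rng.normal(0.0, 2.0, weeks.size)

F = np.array([[1.0, 1.0], [0.0, 1.0]])            # state: [position, velocity]
H = np.array([[1.0, 0.0]])                        # we observe position only
Q = np.diag([1e-4, 1e-6])                         # small process noise
R = np.array([[4.0]])                             # obs variance (2 mm sigma)

x = np.array([[obs[0]], [0.0]])                   # initial state
P = np.diag([100.0, 1.0])                         # initial covariance
for z in obs[1:]:
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)         # update
    P = (np.eye(2) - K @ H) @ P

est_vel = float(x[1, 0])   # converges toward the secular rate
```

A smoother pass (as in KALREF) would additionally propagate late observations backward so early epochs benefit from the full record.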

  12. Xyce parallel electronic simulator : reference guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.

    2011-05-01

This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial; users who are new to circuit simulation are better served by the Xyce Users Guide. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. It is targeted specifically to run on large-scale parallel computing platforms but also runs well on a variety of architectures including single-processor workstations. It also aims to support a variety of devices and models specific to Sandia needs. This document is intended to complement the Xyce Users Guide. It contains comprehensive, detailed information about a number of topics pertinent to the usage of Xyce. Included in this document is a netlist reference for the input-file commands and elements supported within Xyce; a command line reference, which describes the available command line arguments for Xyce; and quick-references for users of other circuit codes, such as Orcad's PSpice and Sandia's ChileSPICE.

  13. The effect of changes in space shuttle parameters on the NASA/MSFC multilayer diffusion model predictions of surface HCl concentrations

    NASA Technical Reports Server (NTRS)

    Glasser, M. E.; Rundel, R. D.

    1978-01-01

    A method for formulating these changes into the model input parameters, using a preprocessor program run on a programmed data processor, was implemented. The results indicate that any changes in the input parameters are small enough to be negligible in comparison to the meteorological inputs and the limitations of the model, and that such changes will not substantially increase the number of meteorological cases for which the model will predict surface hydrogen chloride concentrations exceeding public safety levels.

  14. CD-HPF: New habitability score via data analytic modeling

    NASA Astrophysics Data System (ADS)

    Bora, K.; Saha, S.; Agrawal, S.; Safonova, M.; Routh, S.; Narasimhamurthy, A.

    2016-10-01

    The search for life on the planets outside the Solar System can be broadly classified into the following: looking for Earth-like conditions or the planets similar to the Earth (Earth similarity), and looking for the possibility of life in a form known or unknown to us (habitability). The two frequently used indices, Earth Similarity Index (ESI) and Planetary Habitability Index (PHI), describe heuristic methods to score habitability in the efforts to categorize different exoplanets (or exomoons). ESI, in particular, considers Earth as the reference frame for habitability, and is a quick screening tool to categorize and measure physical similarity of any planetary body with the Earth. The PHI assesses the potential habitability of any given planet, and is based on the essential requirements of known life: presence of a stable and protected substrate, energy, appropriate chemistry and a liquid medium. We propose here a different metric, a Cobb-Douglas Habitability Score (CDHS), based on Cobb-Douglas habitability production function (CD-HPF), which computes the habitability score by using measured and estimated planetary input parameters. As an initial set, we used radius, density, escape velocity and surface temperature of a planet. The values of the input parameters are normalized to the Earth Units (EU). The proposed metric, with exponents accounting for metric elasticity, is endowed with analytical properties that ensure global optima, and scales up to accommodate finitely many input parameters. The model is elastic, and, as we discovered, the standard PHI turns out to be a special case of the CDHS. Computed CDHS scores are fed to K-NN (K-Nearest Neighbor) classification algorithm with probabilistic herding that facilitates the assignment of exoplanets to appropriate classes via supervised feature learning methods, producing granular clusters of habitability. 
The proposed work describes a decision-theoretical model using the power of convex optimization and algorithmic machine learning.
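    As a rough illustration of the CD-HPF form, the habitability score is a product of the Earth-normalized inputs raised to elasticity exponents. The equal exponents below are placeholder assumptions, not the elasticities fitted in the paper:

    ```python
    def cdhs(radius, density, v_escape, temp_surf,
             alpha=0.25, beta=0.25, gamma=0.25, delta=0.25):
        """Cobb-Douglas habitability score. Inputs are in Earth Units (EU);
        exponents are the elasticities. Exponents summing to 1 give constant
        returns to scale (illustrative values only)."""
        assert abs(alpha + beta + gamma + delta - 1.0) < 1e-9
        return (radius ** alpha) * (density ** beta) * \
               (v_escape ** gamma) * (temp_surf ** delta)
    ```

    With all four inputs at 1 EU (i.e., Earth itself), the score is exactly 1, which makes the Earth the natural reference point of the metric.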

  15. A method of 3D object recognition and localization in a cloud of points

    NASA Astrophysics Data System (ADS)

    Bielicki, Jerzy; Sitnik, Robert

    2013-12-01

    The method proposed in this article is designed for the analysis of data in the form of point clouds obtained directly from 3D measurements. It is intended for use in end-user applications that can be integrated directly with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with a reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. The features utilized in the algorithm are based on parameters that qualitatively estimate the mean and Gaussian curvatures. Replacing differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and poor quality of the input data. Utilization of the FV subsets allows detection of partially occluded and cluttered objects in the scene, while the additional spatial information keeps the false positive rate at a reasonably low level.
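    A differentiation-free, averaged curvature-like feature of the kind the article relies on can be sketched with local PCA: the smallest covariance eigenvalue over the eigenvalue sum ("surface variation") is zero for a plane and grows with bending. This is a hypothetical stand-in, not the authors' exact feature definition:

    ```python
    import numpy as np

    def surface_variation(points):
        """Curvature-like feature for a local point neighborhood: the
        smallest PCA eigenvalue divided by the eigenvalue sum. Averaging
        over the neighborhood (via the covariance) replaces differentiation,
        so the measure tolerates noisy, discontinuous input."""
        pts = np.asarray(points, float)
        centered = pts - pts.mean(axis=0)
        cov = centered.T @ centered / len(pts)
        eigvals = np.linalg.eigvalsh(cov)      # ascending order
        return eigvals[0] / eigvals.sum()
    ```

    Evaluating this over neighborhoods of the scan yields per-point feature values that could be assembled into the spatially distributed FV descriptor.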

  16. Biomimetic Hybrid Feedback Feedforward Neural-Network Learning Control.

    PubMed

    Pan, Yongping; Yu, Haoyong

    2017-06-01

    This brief presents a biomimetic hybrid feedback feedforward neural-network learning control (NNLC) strategy, inspired by the human motor learning control mechanism, for a class of uncertain nonlinear systems. The control structure includes a proportional-derivative controller acting as a feedback servo machine and a radial-basis-function (RBF) NN acting as a feedforward predictive machine. Under sufficient constraints on the control parameters, the closed-loop system achieves semiglobal practical exponential stability, such that an accurate NN approximation is guaranteed in a local region along recurrent reference trajectories. Compared with existing NNLC methods, the novelties of the proposed method are: 1) the implementation of an adaptive NN control to guarantee that plant states are recurrent is not needed, since recurrent reference signals rather than plant states are utilized as NN inputs, which greatly simplifies the analysis and synthesis of the NNLC; and 2) the domain of NN approximation can be determined a priori by the given reference signals, which leads to an easy construction of the RBF-NNs. Simulation results have verified the effectiveness of this approach.
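    The feedback-plus-feedforward structure can be illustrated with a toy RBF network driven by the reference signal (not the plant state, mirroring the brief's design). The gains, basis widths, and the simple error-driven weight update below are hypothetical placeholders, not the brief's adaptation law:

    ```python
    import numpy as np

    def rbf_features(x, centers, width):
        # Gaussian radial basis functions evaluated at input x
        return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * width ** 2))

    def nnlc_step(e, de, x_ref, W, centers, width, kp=20.0, kd=5.0, lr=0.05):
        """One control step: PD feedback on the tracking error e, de plus
        an RBF-NN feedforward term evaluated at the reference input x_ref.
        Returns the control and the updated weight vector."""
        phi = rbf_features(x_ref, centers, width)
        u_ff = W @ phi               # feedforward prediction
        u_fb = kp * e + kd * de      # PD feedback servo
        W_new = W + lr * e * phi     # error-driven weight update (sketch)
        return u_fb + u_ff, W_new
    ```

    Because the RBF inputs are the known reference signals, the centers can be placed over the reference trajectory's range ahead of time, which is the point made in novelty 2).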

  17. Extension of the PC version of VEPFIT with input and output routines running under Windows

    NASA Astrophysics Data System (ADS)

    Schut, H.; van Veen, A.

    1995-01-01

    The fitting program VEPFIT has been extended with applications running under the Microsoft-Windows environment facilitating the input and output of the VEPFIT fitting module. We have exploited the Microsoft-Windows graphical users interface by making use of dialog windows, scrollbars, command buttons, etc. The user communicates with the program simply by clicking and dragging with the mouse pointing device. Keyboard actions are limited to a minimum. Upon changing one or more input parameters the results of the modeling of the S-parameter and Ps fractions versus positron implantation energy are updated and displayed. This action can be considered as the first step in the fitting procedure upon which the user can decide to further adapt the input parameters or to forward these parameters as initial values to the fitting routine. The modeling step has proven to be helpful for designing positron beam experiments.

  18. Dynamic optimization and adaptive controller design

    NASA Astrophysics Data System (ADS)

    Inamdar, S. R.

    2010-10-01

    In this work I present a new type of adaptive tracking controller that employs dynamic optimization to optimize the current value of the controller action for temperature control of a nonisothermal continuously stirred tank reactor (CSTR). We begin with a two-state model of the nonisothermal CSTR, consisting of the mass and heat balance equations, and then add cooling-system dynamics to eliminate input multiplicity. The initial design value is obtained using local stability of steady states, where the approach temperature for the cooling action is specified as a steady state and a design specification. Later we make a correction in the dynamics, where the material balance is manipulated to use feed concentration as a system parameter, as an adaptive control measure to avoid actuator saturation in the main control loop. The analysis leading to the design of the dynamic-optimization-based parameter-adaptive controller is presented. An important component of this mathematical framework is reference trajectory generation, which forms an adaptive control measure.
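    The two-state model referred to above is the standard pair of mass and energy balances with Arrhenius kinetics. A minimal sketch, with textbook-style illustrative parameter values (not the paper's), is:

    ```python
    import numpy as np

    def cstr_rhs(state, Tc, q=100.0, V=100.0, Cf=1.0, Tf=350.0,
                 k0=7.2e10, EoverR=8750.0, dH=-5e4, rho=1000.0,
                 Cp=0.239, UA=5e4):
        """Mass and energy balances for a nonisothermal CSTR.
        state = [C, T] (reactant concentration, reactor temperature);
        Tc is the coolant temperature, the manipulated input."""
        C, T = state
        k = k0 * np.exp(-EoverR / T)                 # Arrhenius rate
        dC = q / V * (Cf - C) - k * C                # mass balance
        dT = (q / V * (Tf - T)                       # heat balance
              + (-dH) / (rho * Cp) * k * C
              + UA / (V * rho * Cp) * (Tc - T))
        return np.array([dC, dT])
    ```

    With these values the point C = 0.5, T = 350 at Tc = 300 is close to a steady state; the paper's third state would add a first-order lag between the commanded and actual coolant temperature.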

  19. User's Guide for Monthly Vector Wind Profile Model

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1999-01-01

    The background, theoretical concepts, and methodology for construction of vector wind profiles based on a statistical model are presented. The derived monthly vector wind profiles are to be applied by the launch vehicle design community for establishing realistic estimates of critical vehicle design parameter dispersions related to wind profile dispersions. During initial studies a number of months are used to establish the model profiles that produce the largest monthly dispersions of ascent vehicle aerodynamic load indicators. The largest monthly dispersions for wind, which occur during the winter high-wind months, are used for establishing the design reference dispersions for the aerodynamic load indicators. This document includes a description of the computational process for the vector wind model including specification of input data, parameter settings, and output data formats. Sample output data listings are provided to aid the user in the verification of test output.

  20. Charge control microcomputer device for vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morishita, M.; Kouge, S.

    1986-08-26

    A charge control microcomputer device is described for a vehicle, comprising: an AC generator driven by an engine for generating an output current, the generator having armature coils and a field coil; a battery charged by a rectified output of the generator and generating a terminal voltage; a voltage regulator for controlling a current flowing in the field coil, to control an output voltage of the generator to a predetermined value; an engine controlling microcomputer for receiving engine parameter data from the engine, to control the operation of the engine; a charge control microcomputer for processing input data including data on at least one engine parameter output from the engine controlling microcomputer, and charge system data including at least one of battery terminal voltage data, generator voltage data and generator output current data, to provide a reference voltage for the voltage regulator.

  1. Controller design for a class of nonlinear MIMO coupled system using multiple models and second level adaptation.

    PubMed

    Pandey, Vinay Kumar; Kar, Indrani; Mahanta, Chitralekha

    2017-07-01

    In this paper, an adaptive control method using multiple models with second-level adaptation is proposed for a class of nonlinear multi-input multi-output (MIMO) coupled systems. Multiple estimation models are used to tune the unknown parameters at the first level. The second level of adaptation provides a single parameter vector for the controller. A feedback linearization technique is used to design a state feedback control. The efficacy of the designed controller is validated by conducting real-time experiments on a laboratory setup of the twin rotor MIMO system (TRMS). The TRMS setup is discussed in detail, and the experiments were performed for regulation and tracking problems for pitch and yaw control using different reference signals. An Extended Kalman Filter (EKF) has been used to observe the unavailable states of the TRMS. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Six-hourly time series of horizontal troposphere gradients in VLBI analysis

    NASA Astrophysics Data System (ADS)

    Landskron, Daniel; Hofmeister, Armin; Mayer, David; Böhm, Johannes

    2016-04-01

    Consideration of horizontal gradients is indispensable for high-precision VLBI and GNSS analysis. As a rule of thumb, all observations below 15 degrees elevation need to be corrected for the influence of azimuthal asymmetry on the delay times, which is mainly a product of the non-spherical shape of the atmosphere and ever-changing weather conditions. Based on the well-known gradient estimation model by Chen and Herring (1997), we developed an augmented gradient model with additional parameters which are determined from ray-traced delays for the complete history of VLBI observations. As input to the ray-tracer, we used operational and re-analysis data from the European Centre for Medium-Range Weather Forecasts. Finally, we applied those a priori gradient parameters to VLBI analysis along with other empirical gradient models and assessed their impact on baseline length repeatabilities as well as on celestial and terrestrial reference frames.
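    The Chen and Herring (1997) model referenced above maps north and east gradients into a slant-delay correction through a gradient mapping function. A sketch of that published form (C = 0.0032 is the value commonly used; units follow the gradient parameters G_N, G_E):

    ```python
    import math

    def gradient_delay(elev_rad, azim_rad, grad_n, grad_e, C=0.0032):
        """Azimuthal-asymmetry delay contribution after Chen and Herring (1997):
        delta = m_g(e) * (G_N cos a + G_E sin a), with the gradient mapping
        function m_g(e) = 1 / (sin e * tan e + C)."""
        mg = 1.0 / (math.sin(elev_rad) * math.tan(elev_rad) + C)
        return mg * (grad_n * math.cos(azim_rad) + grad_e * math.sin(azim_rad))
    ```

    The mapping function grows rapidly below about 15 degrees elevation, which is why low-elevation observations are the ones that must be corrected; the augmented model in this work adds further ray-tracing-determined parameters on top of this form.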

  3. Tracer Kinetic Analysis of (S)-¹⁸F-THK5117 as a PET Tracer for Assessing Tau Pathology.

    PubMed

    Jonasson, My; Wall, Anders; Chiotis, Konstantinos; Saint-Aubert, Laure; Wilking, Helena; Sprycha, Margareta; Borg, Beatrice; Thibblin, Alf; Eriksson, Jonas; Sörensen, Jens; Antoni, Gunnar; Nordberg, Agneta; Lubberink, Mark

    2016-04-01

    Because a correlation between tau pathology and the clinical symptoms of Alzheimer disease (AD) has been hypothesized, there is increasing interest in developing PET tracers that bind specifically to tau protein. The aim of this study was to evaluate tracer kinetic models for quantitative analysis and generation of parametric images for the novel tau ligand (S)-(18)F-THK5117. Nine subjects (5 with AD, 4 with mild cognitive impairment) received a 90-min dynamic (S)-(18)F-THK5117 PET scan. Arterial blood was sampled for measurement of blood radioactivity and metabolite analysis. Volume-of-interest (VOI)-based analysis was performed using plasma-input models (single-tissue and 2-tissue (2TCM) compartment models and plasma-input Logan) and reference tissue models (simplified reference tissue model (SRTM), reference Logan, and SUV ratio (SUVr)). Cerebellum gray matter was used as the reference region. Voxel-level analysis was performed using basis function implementations of SRTM, reference Logan, and SUVr. Regionally averaged voxel values were compared with VOI-based values from the optimal reference tissue model, and simulations were made to assess accuracy and precision. In addition to 90 min, initial 40- and 60-min data were analyzed. Plasma-input Logan distribution volume ratio (DVR)-1 values agreed well with 2TCM DVR-1 values (R(2) = 0.99, slope = 0.96). SRTM binding potential (BP(ND)) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 (R(2) = 1.00, slope ≈ 1.00), whereas SUVr(70-90)-1 values correlated less well and overestimated binding. Agreement between parametric methods and SRTM was best for reference Logan (R(2) = 0.99, slope = 1.03). SUVr(70-90)-1 values were almost 3 times higher than BP(ND) values in white matter and 1.5 times higher in gray matter. Simulations showed poorer accuracy and precision for SUVr(70-90)-1 values than for the other reference methods.
SRTM BP(ND) and reference Logan DVR-1 values were not affected by a shorter scan duration of 60 min. SRTM BP(ND) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 values. VOI-based data analyses indicated robust results for scan durations of 60 min. Reference Logan generated quantitative (S)-(18)F-THK5117 DVR-1 parametric images with the greatest accuracy and precision and with a much lower white-matter signal than seen with SUVr(70-90)-1 images. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  4. Predicting response before initiation of neoadjuvant chemotherapy in breast cancer using new methods for the analysis of dynamic contrast enhanced MRI (DCE MRI) data

    NASA Astrophysics Data System (ADS)

    DeGrandchamp, Joseph B.; Whisenant, Jennifer G.; Arlinghaus, Lori R.; Abramson, V. G.; Yankeelov, Thomas E.; Cárdenas-Rodríguez, Julio

    2016-03-01

    The pharmacokinetic parameters derived from dynamic contrast enhanced (DCE) MRI have shown promise as biomarkers for tumor response to therapy. However, standard methods of analyzing DCE MRI data (the Tofts model) require high temporal resolution, high signal-to-noise ratio (SNR), and the Arterial Input Function (AIF). Such models produce reliable biomarkers of response only when a therapy has a large effect on the parameters. We recently reported a method that addresses these limitations, the Linear Reference Region Model (LRRM). Like other reference region models, the LRRM needs no AIF. Additionally, the LRRM is more accurate and precise than standard methods at low SNR and low temporal resolution, suggesting LRRM-derived biomarkers could be better predictors. Here, the LRRM, Non-linear Reference Region Model (NRRM), Linear Tofts model (LTM), and Non-linear Tofts Model (NLTM) were used to estimate the RKtrans between muscle and tumor (or the Ktrans for Tofts) and the tumor kep,TOI for 39 breast cancer patients who received neoadjuvant chemotherapy (NAC). These parameters and the receptor statuses of each patient were used to construct cross-validated predictive models to classify patients as complete pathological responders (pCR) or non-complete pathological responders (non-pCR) to NAC. Model performance was evaluated using the area under the ROC curve (AUC). The AUC for receptor status alone was 0.62, while the best performances using predictors from the LRRM, NRRM, LTM, and NLTM were AUCs of 0.79, 0.55, 0.60, and 0.59, respectively. This suggests that the LRRM can be used to predict response to NAC in breast cancer.

  5. Consistent realization of Celestial and Terrestrial Reference Frames

    NASA Astrophysics Data System (ADS)

    Kwak, Younghee; Bloßfeld, Mathis; Schmid, Ralf; Angermann, Detlef; Gerstl, Michael; Seitz, Manuela

    2018-03-01

    The Celestial Reference System (CRS) is currently realized only by Very Long Baseline Interferometry (VLBI) because it is the space geodetic technique that enables observations in that frame. In contrast, the Terrestrial Reference System (TRS) is realized by means of the combination of four space geodetic techniques: Global Navigation Satellite System (GNSS), VLBI, Satellite Laser Ranging (SLR), and Doppler Orbitography and Radiopositioning Integrated by Satellite. The Earth orientation parameters (EOP) are the link between the two types of systems, CRS and TRS. The EOP series of the International Earth Rotation and Reference Systems Service were combined from specifically selected series of various analysis centers. Other EOP series were generated by a simultaneous estimation together with the TRF while the CRF was fixed. These computation approaches entail inherent inconsistencies between TRF, EOP, and CRF, also because the input data sets are different. A combined normal equation (NEQ) system, which consists of all the parameters, i.e., TRF, EOP, and CRF, would overcome such inconsistencies. In this paper, we simultaneously estimate TRF, EOP, and CRF from an inter-technique combined NEQ using the latest GNSS, VLBI, and SLR data (2005-2015). The results show that the selection of local ties is most critical to the TRF. The combination of pole coordinates is beneficial for the CRF, whereas the combination of ΔUT1 results in clear rotations of the estimated CRF. However, the standard deviations of the EOP and the CRF improve by the inter-technique combination, which indicates the benefits of a common estimation of all parameters. It became evident that the common determination of TRF, EOP, and CRF systematically influences future ICRF computations at the level of several μas. Moreover, the CRF is influenced by up to 50 μas if the station coordinates and EOP are dominated by the satellite techniques.

  6. Aircraft Hydraulic Systems Dynamic Analysis Component Data Handbook

    DTIC Science & Technology

    1980-04-01

    [Garbled table-of-contents and figure excerpt from the report; recoverable items: Section 13, Quincke tube (input parameters, hole locations, and prototype data); Section 14, heat exchanger; HSFR input data for a PULSCO-type acoustic filter, including modulus of elasticity. The Quincke tube is described as a means to dampen acoustic noise at resonance.]

  7. High stability wavefront reference source

    DOEpatents

    Feldman, M.; Mockler, D.J.

    1994-05-03

    A thermally and mechanically stable wavefront reference source which produces a collimated output laser beam is disclosed. The output beam comprises substantially planar reference wavefronts which are useful for aligning and testing optical interferometers. The invention receives coherent radiation from an input optical fiber, directs a diverging input beam of the coherent radiation to a beam folding mirror (to produce a reflected diverging beam), and collimates the reflected diverging beam using a collimating lens. In a class of preferred embodiments, the invention includes a thermally and mechanically stable frame comprising rod members connected between a front end plate and a back end plate. The beam folding mirror is mounted on the back end plate, and the collimating lens mounted to the rods between the end plates. The end plates and rods are preferably made of thermally stable metal alloy. Preferably, the input optical fiber is a single mode fiber coupled to an input end of a second single mode optical fiber that is wound around a mandrel fixedly attached to the frame of the apparatus. The output end of the second fiber is cleaved so as to be optically flat, so that the input beam emerging therefrom is a nearly perfect diverging spherical wave. 7 figures.

  8. High stability wavefront reference source

    DOEpatents

    Feldman, Mark; Mockler, Daniel J.

    1994-01-01

    A thermally and mechanically stable wavefront reference source which produces a collimated output laser beam. The output beam comprises substantially planar reference wavefronts which are useful for aligning and testing optical interferometers. The invention receives coherent radiation from an input optical fiber, directs a diverging input beam of the coherent radiation to a beam folding mirror (to produce a reflected diverging beam), and collimates the reflected diverging beam using a collimating lens. In a class of preferred embodiments, the invention includes a thermally and mechanically stable frame comprising rod members connected between a front end plate and a back end plate. The beam folding mirror is mounted on the back end plate, and the collimating lens mounted to the rods between the end plates. The end plates and rods are preferably made of thermally stable metal alloy. Preferably, the input optical fiber is a single mode fiber coupled to an input end of a second single mode optical fiber that is wound around a mandrel fixedly attached to the frame of the apparatus. The output end of the second fiber is cleaved so as to be optically flat, so that the input beam emerging therefrom is a nearly perfect diverging spherical wave.

  9. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
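    A toy version of the collect-then-select idea, assuming a single input dimension and wall-clock timing as the performance datum (the patent covers multiple input dimensions and generated dispatch code):

    ```python
    import time

    def build_selector(implementations, input_grid):
        """Benchmark each implementation of a function over a grid of input
        parameters and record the fastest choice per grid point. The returned
        mapping is the kind of table a generated selection routine could
        consult when dispatching calls."""
        best = {}
        for params in input_grid:
            timings = {}
            for name, fn in implementations.items():
                t0 = time.perf_counter()
                fn(params)
                timings[name] = time.perf_counter() - t0
            best[params] = min(timings, key=timings.get)
        return best
    ```

    In practice one would average repeated runs per grid point and interpolate between grid points rather than keying on exact parameter values.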

  10. Real-time edge-enhanced optical correlator

    NASA Technical Reports Server (NTRS)

    Liu, Tsuen-Hsi (Inventor); Cheng, Li-Jen (Inventor)

    1992-01-01

    Edge enhancement of an input image by four-wave mixing a first write beam with a second write beam in a photorefractive crystal, GaAs, was achieved for VanderLugt optical correlation with an edge enhanced reference image by optimizing the power ratio of a second write beam to the first write beam (70:1) and optimizing the power ratio of a read beam, which carries the reference image to the first write beam (100:701). Liquid crystal TV panels are employed as spatial light modulators to change the input and reference images in real time.

  11. Climate change effects on extreme flows of water supply area in Istanbul: utility of regional climate models and downscaling method.

    PubMed

    Kara, Fatih; Yucel, Ismail

    2015-09-01

    This study investigates the climate change impact on the changes of mean and extreme flows under current and future climate conditions in the Omerli Basin of Istanbul, Turkey. Outputs from 15 regional climate models from the EU-ENSEMBLES project and a downscaling method based on local implications from geophysical variables were used for the comparative analyses. An automated calibration algorithm is used to optimize the parameters of the Hydrologiska Byråns Vattenbalansavdelning (HBV) model for the study catchment using observed daily temperature and precipitation. The calibrated HBV model was implemented to simulate daily flows using precipitation and temperature data from climate models, with and without the downscaling method, for the reference (1960-1990) and scenario (2071-2100) periods. Flood indices were derived from daily flows, and their changes throughout the four seasons and the year were evaluated by comparing their values derived from simulations corresponding to the current and future climate. All climate models strongly underestimate precipitation, while downscaling improves their underestimation, particularly for extreme events. Depending on the precipitation input from climate models with and without downscaling, the HBV model also significantly underestimates daily mean and extreme flows through all seasons. However, this underestimation is markedly improved for all seasons, especially for spring and winter, through the use of downscaled inputs. Changes in extreme flows from the reference period to the future increased for the winter and spring and decreased for the fall and summer seasons. These changes were more significant with downscaled inputs. With respect to the current time, higher flow magnitudes for given return periods will be experienced in the future; hence, in the planning of the Omerli reservoir, effective storage and water use should be sustained.

  12. Correlated uncertainties in Monte Carlo reaction rate calculations

    NASA Astrophysics Data System (ADS)

    Longland, Richard

    2017-07-01

    Context. Monte Carlo methods have enabled nuclear reaction rates from uncertain inputs to be presented in a statistically meaningful manner. However, these uncertainties are currently computed assuming no correlations between the physical quantities that enter those calculations. This is not always an appropriate assumption. Astrophysically important reactions are often dominated by resonances, whose properties are normalized to a well-known reference resonance. This insight provides a basis from which to develop a flexible framework for including correlations in Monte Carlo reaction rate calculations. Aims: The aim of this work is to develop and test a method for including correlations in Monte Carlo reaction rate calculations when the input has been normalized to a common reference. Methods: A mathematical framework is developed for including correlations between input parameters in Monte Carlo reaction rate calculations. The magnitude of those correlations is calculated from the uncertainties typically reported in experimental papers, where full correlation information is not available. The method is applied to four illustrative examples: a fictional 3-resonance reaction, 27Al(p, γ)28Si, 23Na(p, α)20Ne, and 23Na(α, p)26Mg. Results: Reaction rates at low temperatures that are dominated by a few isolated resonances are found to be minimally impacted by correlation effects. However, reaction rates determined from many overlapping resonances can be significantly affected. Uncertainties in the 23Na(α, p)26Mg reaction, for example, increase by up to a factor of 5. This highlights the need to take correlation effects into account in reaction rate calculations, and provides insight into which cases are expected to be most affected by them. The impact of correlation effects on nucleosynthesis is also investigated.
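    One way to realize the shared-reference correlation numerically is to give every normalized resonance strength a common lognormal factor inherited from the reference resonance. The uncertainty split below is an illustrative construction under that assumption, not the paper's exact formalism:

    ```python
    import numpy as np

    def sample_correlated_strengths(means, frac_unc, ref_frac_unc, n=10000, seed=1):
        """Draw lognormal samples of resonance strengths that share a common
        normalization to one reference resonance. Each strength's log-scale
        uncertainty splits into an independent part and the reference's part,
        so all strengths move together when the reference fluctuates."""
        rng = np.random.default_rng(seed)
        means = np.asarray(means, float)
        sig = np.asarray(frac_unc, float)      # total fractional uncertainty
        sig_ref = ref_frac_unc                 # shared (reference) part
        sig_ind = np.sqrt(np.maximum(sig**2 - sig_ref**2, 0.0))
        z_ref = rng.standard_normal(n)         # common factor, one per sample
        z_ind = rng.standard_normal((n, means.size))
        logs = np.log(means) + z_ind * sig_ind + z_ref[:, None] * sig_ref
        return np.exp(logs)
    ```

    Feeding such correlated strength samples into the rate integral, instead of independent draws, is what changes the resulting rate uncertainty when many resonances contribute.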

  13. SCIENCE PARAMETRICS FOR MISSIONS TO SEARCH FOR EARTH-LIKE EXOPLANETS BY DIRECT IMAGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Robert A., E-mail: rbrown@stsci.edu

    2015-01-20

    We use N_t, the number of exoplanets observed in time t, as a science metric to study direct-search missions like Terrestrial Planet Finder. In our model, N has 27 parameters, divided into three categories: 2 astronomical, 7 instrumental, and 18 science-operational. For various ''27-vectors'' of those parameters chosen to explore parameter space, we compute design reference missions to estimate N_t. Our treatment includes the recovery of completeness c after a search observation, for revisits, solar and antisolar avoidance, observational overhead, and follow-on spectroscopy. Our baseline 27-vector has aperture D = 16 m, inner working angle IWA = 0.039'', mission time t = 0-5 yr, occurrence probability for Earth-like exoplanets η = 0.2, and typical values for the remaining 23 parameters. For the baseline case, a typical five-year design reference mission has an input catalog of ∼4700 stars with nonzero completeness, ∼1300 unique stars observed in ∼2600 observations, of which ∼1300 are revisits, and it produces N_1 ∼ 50 exoplanets after one year and N_5 ∼ 130 after five years. We explore offsets from the baseline for 10 parameters. We find that N depends strongly on IWA and only weakly on D. It also depends only weakly on zodiacal light for Z < 50 zodis, end-to-end efficiency for h > 0.2, and scattered starlight for ζ < 10^-10. We find that observational overheads, completeness recovery and revisits, solar and antisolar avoidance, and follow-on spectroscopy are all important factors in estimating N.

  14. Suggestions for CAP-TSD mesh and time-step input parameters

    NASA Technical Reports Server (NTRS)

    Bland, Samuel R.

    1991-01-01

    Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.

  15. Wide-temperature integrated operational amplifier

    NASA Technical Reports Server (NTRS)

    Mojarradi, Mohammad (Inventor); Levanas, Greg (Inventor); Chen, Yuan (Inventor); Cozy, Raymond S. (Inventor); Greenwell, Robert (Inventor); Terry, Stephen (Inventor); Blalock, Benjamin J. (Inventor)

    2009-01-01

    The present invention relates to a reference current circuit. The reference circuit comprises a low-level current bias circuit, a voltage proportional-to-absolute temperature generator for creating a proportional-to-absolute temperature voltage (VPTAT), and a MOSFET-based constant-IC regulator circuit. The MOSFET-based constant-IC regulator circuit includes a constant-IC input and constant-IC output. The constant-IC input is electrically connected with the VPTAT generator such that the voltage proportional-to-absolute temperature is the input into the constant-IC regulator circuit. Thus the constant-IC output maintains the constant-IC ratio across any temperature range.

  16. Unsteady hovering wake parameters identified from dynamic model tests, part 1

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Crews, S. T.

    1977-01-01

    The development of a 4-bladed model rotor is reported that can be excited with a simple eccentric mechanism in progressing and regressing modes with either harmonic or transient inputs. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch stirring transient inputs, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting the perturbation models from transients are demonstrated through these combined analytical and experimental studies.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardin, Ernest; Hadgu, Teklu; Greenberg, Harris

    This report is one follow-on to a study of reference geologic disposal design concepts (Hardin et al. 2011a). Based on an analysis of maximum temperatures, that study concluded that certain disposal concepts would require extended decay storage prior to emplacement, or the use of small waste packages, or both. The study used nominal values for thermal properties of host geologic media and engineered materials, demonstrating the need for uncertainty analysis to support the conclusions. This report is a first step that identifies the input parameters of the maximum temperature calculation, surveys published data on measured values, uses an analytical approach to determine which parameters are most important, and performs an example sensitivity analysis. Using results from this first step, temperature calculations planned for FY12 can focus on only the important parameters, and can use the uncertainty ranges reported here. The survey of published information on thermal properties of geologic media and engineered materials is intended to be sufficient for use in generic calculations to evaluate the feasibility of reference disposal concepts. A full compendium of literature data is beyond the scope of this report. The term “uncertainty” is used here to represent both measurement uncertainty and spatial variability, or variability across host geologic units. For the most important parameters (e.g., buffer thermal conductivity) the extent of literature data surveyed samples these different forms of uncertainty and variability. Finally, this report is intended to be one chapter or section of a larger FY12 deliverable summarizing all the work on design concepts and thermal load management for geologic disposal (M3FT-12SN0804032, due 15Aug2012).

  18. Spatial and temporal variability of reference evapotranspiration and influenced meteorological factors in the Jialing River Basin, China

    NASA Astrophysics Data System (ADS)

    Herath, Imali Kaushalya; Ye, Xuchun; Wang, Jianli; Bouraima, Abdel-Kabirou

    2018-02-01

    Reference evapotranspiration (ETr) is one of the important parameters in the hydrological cycle. The spatio-temporal variation of ETr and the meteorological parameters that influence it were investigated in the Jialing River Basin (JRB), China. ETr was estimated using the CROPWAT 8.0 computer model, based on the Penman-Monteith equation, for the period 1964-2014. Mean temperature (MT), relative humidity (RH), sunshine duration (SD), and wind speed (WS) were the main input parameters of CROPWAT, with data from 12 meteorological stations evaluated. Linear regression and Mann-Kendall methods were applied to study the spatio-temporal trends, while the inverse distance weighted (IDW) method was used to map the spatial distribution of ETr. Stepwise regression and partial correlation methods were used to identify the meteorological variables that most significantly influenced the changes in ETr. The highest annual ETr was found in the northern part of the basin, whereas the lowest rate was recorded in the western part. In autumn, the highest ETr was recorded in the southeastern part of the JRB. The annual ETr showed neither significant increasing nor decreasing trends. Except in summer, ETr is slightly increasing in all seasons. MT significantly increased, whereas SD and RH significantly decreased, during the 50-year period. Partial correlation and stepwise regression showed that the impact of meteorological parameters on ETr varies on an annual and seasonal basis, with SD, MT, and RH contributing to the changes in annual and seasonal ETr in the JRB.
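    The Mann-Kendall trend test used in the record above is straightforward to compute. The sketch below (illustrative Python, not the authors' code, and omitting the tie correction used in practice) shows the S statistic and its normal-approximation Z score:

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: sum of signs of all pairwise
    later-minus-earlier differences in the series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    return s

def mann_kendall_z(x):
    """Normal-approximation Z score (no tie correction);
    |Z| > 1.96 indicates a significant trend at the 5% level."""
    n = len(x)
    s = mann_kendall_s(x)
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / var ** 0.5
    if s < 0:
        return (s + 1) / var ** 0.5
    return 0.0

# A strictly increasing series attains the maximum S = n(n-1)/2.
print(mann_kendall_s([1, 2, 3, 4, 5]))  # → 10.0
```

    A monotone series of 30 points gives Z well above 1.96, i.e. a significant trend, which is the kind of decision the study draws per station and season.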

  19. Numerical assessment of the performance of a scalp-implantable antenna: effects of head anatomy and dielectric parameters.

    PubMed

    Kiourti, Asimina; Nikita, Konstantina S

    2013-04-01

    We numerically assess the effects of head properties (anatomy and dielectric parameters) on the performance of a scalp-implantable antenna for telemetry in the Medical Implant Communications Service band (402.0-405.0 MHz). Safety issues and performance (resonance, radiation) are analyzed for an experimentally validated implantable antenna (volume of 203.6 mm(3)), considering five head models (3- and 5-layer spherical; 6-, 10-, and 13-tissue anatomical) and seven scenarios (variations of ±20% in the reference permittivity and conductivity values). Simulations are carried out at 403.5 MHz using the finite-difference time-domain method. The anatomy of the head model around the implantation site is found to mainly affect antenna performance, whereas overall tissue anatomy and dielectric parameters are less significant. Compared to the reference dielectric parameter scenario within the 3-layer spherical head, maximum variations of -19.9%, +3.7%, -55.1%, and -39.2% are computed in the maximum allowable net input power imposed by the IEEE Std C95.1-1999 and Std C95.1-2005 safety guidelines, return loss, and maximum far-field gain, respectively. Compliance with the recent IEEE Std C95.1-2005 is found to be almost insensitive to head properties, in contrast with IEEE Std C95.1-1999. Taking tissue property uncertainties into account is highlighted as crucial for implantable antenna design and performance assessment. Bioelectromagnetics 34:167-179, 2013. © 2012 Wiley Periodicals, Inc.

  20. Dynamic Range and Sensitivity Requirements of Satellite Ocean Color Sensors: Learning from the Past

    NASA Technical Reports Server (NTRS)

    Hu, Chuanmin; Feng, Lian; Lee, Zhongping; Davis, Curtiss O.; Mannino, Antonio; McClain, Charles R.; Franz, Bryan A.

    2012-01-01

    Sensor design and mission planning for satellite ocean color measurements require careful consideration of the signal dynamic range and sensitivity (specifically here, signal-to-noise ratio or SNR) so that small changes in ocean properties (e.g., surface chlorophyll-a concentrations or Chl) can be quantified while most measurements are not saturated. Past and current sensors used different signal levels, formats, and conventions to specify these critical parameters, making it difficult to make cross-sensor comparisons or to establish standards for future sensor design. The goal of this study is to quantify these parameters under uniform conditions for widely used past and current sensors in order to provide a reference for the design of future ocean color radiometers. Using measurements from the Moderate Resolution Imaging Spectroradiometer onboard the Aqua satellite (MODISA) under various solar zenith angles (SZAs), typical (L(sub typical)) and maximum (L(sub max)) at-sensor radiances from the visible to the shortwave IR were determined. The L(sub typical) values at an SZA of 45 deg were used as constraints to calculate SNRs of 10 multiband sensors at the same L(sub typical) radiance input and 2 hyperspectral sensors at a similar radiance input. The calculations were based on clear-water scenes with an objective method of selecting pixels with minimal cross-pixel variations to assure target homogeneity. Among the widely used ocean color sensors that have routine global coverage, MODISA ocean bands (1 km) showed 2-4 times higher SNRs than the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) (1 km) and comparable SNRs to the Medium Resolution Imaging Spectrometer (MERIS)-RR (reduced resolution, 1.2 km), leading to different levels of precision in the retrieved Chl data product. MERIS-FR (full resolution, 300 m) showed SNRs lower than MODISA and MERIS-RR, the trade-off for its gain in spatial resolution.
SNRs of all MODISA ocean bands and SeaWiFS bands (except the SeaWiFS near-IR bands) exceeded those from prelaunch sensor specifications after adjusting the input radiance to L(sub typical). The tabulated L(sub typical), L(sub max), and SNRs of the various multiband and hyperspectral sensors under the same or similar radiance input provide references to compare sensor performance in product precision and to help design future missions such as the Geostationary Coastal and Air Pollution Events (GEO-CAPE) mission and the Pre-Aerosol-Clouds-Ecosystems (PACE) mission currently being planned by the U.S. National Aeronautics and Space Administration (NASA).
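    The SNR estimation described above, signal as the mean and noise as the cross-pixel standard deviation of a homogeneous clear-water block, rescaled to a common reference radiance, can be sketched as follows. This is an assumption-laden toy, not the paper's processing: the linear SNR-radiance scaling (detector-noise-dominated case) and the synthetic scene are illustrative only.

```python
import numpy as np

def scene_snr(radiance_block, l_typical):
    """Estimate an SNR from a homogeneous (clear-water) pixel block:
    signal = block mean, noise = cross-pixel standard deviation, then
    rescale to a common reference radiance (assumes SNR scales linearly
    with radiance, i.e. constant detector noise -- an approximation)."""
    block = np.asarray(radiance_block, dtype=float)
    snr_at_scene = block.mean() / block.std(ddof=1)
    return snr_at_scene * (l_typical / block.mean())

# Synthetic homogeneous scene: radiance ~ 80 with noise sigma ~ 0.4,
# i.e. a true SNR near 200 at the scene radiance.
rng = np.random.default_rng(0)
block = rng.normal(loc=80.0, scale=0.4, size=500)
snr = scene_snr(block, l_typical=80.0)
```

    Evaluating different sensors' blocks at the same l_typical is what makes the resulting SNRs comparable across instruments.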

  1. Model-Free control performance improvement using virtual reference feedback tuning and reinforcement Q-learning

    NASA Astrophysics Data System (ADS)

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian

    2017-04-01

    This paper proposes the combination of two model-free controller tuning techniques, namely linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning, referred to as a new mixed VRFT-Q learning approach. VRFT is first used to find a stabilising feedback controller using input-output experimental data from the process in a model reference tracking setting. Reinforcement Q-learning is next applied in the same setting using input-state experimental data collected under perturbed VRFT to ensure good exploration. The Q-learning controller, learned with a batch fitted Q iteration algorithm, uses two neural networks: one for the Q-function estimator and one for the controller. The VRFT-Q learning approach is validated on position control of a two-degrees-of-motion open-loop stable multi-input multi-output (MIMO) aerodynamic system (AS). Extensive simulations for the two independent control channels of the MIMO AS show that the Q-learning controllers clearly improve performance over the VRFT controllers.
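    The VRFT step, fitting a controller by least squares against a virtual reference computed from recorded input-output data, can be illustrated with a scalar linear sketch. The toy first-order plant and the error-plus-state controller class below are hypothetical stand-ins, not the paper's neural-network setup:

```python
import numpy as np

def vrft_fit(u, y, m):
    """Virtual Reference Feedback Tuning, minimal linear sketch: with
    reference model y(k+1) = m * r(k), build the virtual reference
    r(k) = y(k+1)/m and virtual error e(k) = r(k) - y(k), then
    least-squares fit a controller u(k) = th_e * e(k) + th_y * y(k)
    to the recorded plant inputs."""
    r = y[1:] / m
    e = r - y[:-1]
    Phi = np.column_stack([e, y[:-1]])
    theta, *_ = np.linalg.lstsq(Phi, u, rcond=None)
    return theta  # [th_e, th_y]

# Toy first-order plant y(k+1) = 0.5 y(k) + u(k); for reference model
# m = 0.8 the ideal controller is u = 0.8 e + 0.3 y, which VRFT recovers.
rng = np.random.default_rng(0)
u = rng.standard_normal(400)
y = np.zeros(401)
for k in range(400):
    y[k + 1] = 0.5 * y[k] + u[k]
theta = vrft_fit(u, y, m=0.8)
print(np.round(theta, 3))  # → [0.8 0.3]
```

    The appeal, as in the paper, is that no plant model is identified: the controller parameters come directly from one batch of experimental data.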

  2. Robust predictive control with optimal load tracking for critical applications. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tse, J.; Bentsman, J.; Miller, N.

    1994-09-01

    This report derives a multi-input multi-output (MIMO) version of a two-degree-of-freedom receding-horizon control law based on mixed H{sub 2}/H{infinity} minimization. First, the integrand in the frequency domain representation of the MIMO performance criterion is decomposed into disturbance and reference spectra. Then the controller is derived which minimizes the peak of the disturbance spectrum and the integral of the reference spectrum on the unit circle. The resulting two-degree-of-freedom MIMO control strategy, referred to as the minimax predictive multivariable control (MPC), is shown to have worst-case-disturbance-rejection and robust-stability properties superior to those of purely H{sub 2}-optimal controllers, such as Generalized Predictive Controlmore » (GPC), for identical horizons. An attractive feature of the receding horizon structure of MPC is that it can, in ways similar to GPC, directly incorporate input constraints and pre-programmed reference inputs, which are nontrivial tasks in the standard H{infinity} design.« less

  3. Systems and methods for predicting materials properties

    DOEpatents

    Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano

    2007-11-06

    Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.

  4. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the computational parameters of the GATE, PHITS, and FLUKA MC simulation codes, established for uniform scanning proton beams, need to be re-evaluated for spot scanning. This means that the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes, using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm3, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm3 voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and minimized computational time. Our results indicate that the optimized parameters differ from those for uniform scanning, suggesting that a gold standard for setting computational parameters cannot be established across all proton therapy applications, since the impact of the parameter settings depends on the proton irradiation technique.
We therefore conclude that customization parameters must be set with reference to the optimized parameters of the corresponding irradiation technique in order to render them useful for achieving artifact-free MC simulation for use in computational experiments and clinical treatments.
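    Proton-range comparisons such as those above are commonly made at a fixed distal dose level. A widely used convention (chosen here for illustration; the study's exact range definition may differ) is R80, the depth at which the PDD falls to 80% of its maximum beyond the Bragg peak:

```python
import numpy as np

def distal_range(depth, dose, level=0.8):
    """Depth (same units as `depth`) at which the PDD falls to `level`
    of its maximum on the distal side of the Bragg peak, by linear
    interpolation. Assumes the curve does fall below the target level
    within the array."""
    depth = np.asarray(depth, dtype=float)
    dose = np.asarray(dose, dtype=float)
    i_peak = int(np.argmax(dose))
    target = level * dose[i_peak]
    distal_dose = dose[i_peak:]
    distal_depth = depth[i_peak:]
    # First index past the peak where dose drops below the target level.
    j = int(np.argmax(distal_dose < target))
    d0, d1 = distal_dose[j - 1], distal_dose[j]
    z0, z1 = distal_depth[j - 1], distal_depth[j]
    return z0 + (target - d0) * (z1 - z0) / (d1 - d0)

# Synthetic Bragg-like curve: peak at 150 mm with a sharp distal fall-off.
z = np.linspace(0, 200, 401)
pdd = np.exp(-((z - 150.0) / 8.0) ** 2)
print(round(distal_range(z, pdd), 1))  # → 153.8
```

    Comparing this single scalar per parameter set against the FLUKA reference is a compact way to express the "accuracy of the proton range" criterion used above.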

  5. Sensitivity analysis and nonlinearity assessment of steam cracking furnace process

    NASA Astrophysics Data System (ADS)

    Rosli, M. N.; Sudibyo; Aziz, N.

    2017-11-01

    In this paper, sensitivity analysis and nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables, and to identify the interactions between parameters. The result of the factorial design is used as a screening step to reduce the number of parameters and, consequently, the complexity of the model. It shows that four of the six input parameters are significant. After the screening is completed, step tests are performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.
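    The screening idea, estimating each factor's main effect from a two-level design and keeping only the large ones, can be sketched as follows (a toy full factorial with a synthetic response, rather than the paper's fractional design and furnace model):

```python
import numpy as np
from itertools import product

def main_effects(design, response):
    """Main effect of each factor in a 2-level design: mean response at
    the high (+1) level minus mean response at the low (-1) level."""
    design = np.asarray(design, dtype=float)
    response = np.asarray(response, dtype=float)
    return np.array([response[col > 0].mean() - response[col < 0].mean()
                     for col in design.T])

# Toy full 2^3 factorial on a synthetic process: the response depends
# strongly on factor 0, not at all on factor 1, weakly on factor 2.
design = np.array(list(product([-1, 1], repeat=3)))
y = 5.0 * design[:, 0] + 0.1 * design[:, 2] + 10.0
effects = main_effects(design, y)
# effects ~ [10.0, 0.0, 0.2]: factor 1 would be screened out.
```

    A fractional design applies the same effect estimate to a carefully chosen subset of the runs, trading some confounding of interactions for far fewer experiments.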

  6. Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of IMX 101 Components

    DTIC Science & Technology

    2017-05-01

    ERDC/EL TR-17-7. Environmental Security Technology Certification Program (ESTCP), May 2017. Evaluation of Uncertainty in Constituent Input Parameters. The Training Range Environmental Evaluation and Characterization System (TREECS™) was applied to a groundwater site and a surface water site to evaluate the sensitivity

  7. Piloted Parameter Identification Flight Test Maneuvers for Closed Loop Modeling of the F-18 High Alpha Research Vehicle (HARV)

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1996-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the NASA 1A control law. Each maneuver is to be realized by the pilot applying square wave inputs to specific pilot station controls. Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.

  8. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
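    A minimal stochastic EnKF analysis step, the building block such studies extend with time-correlated parameter noise, looks like the following generic textbook sketch (not the authors' implementation; a two-state toy with one observed component):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_std, H, rng):
    """Stochastic EnKF analysis step: nudge each forecast member toward a
    perturbed observation using the ensemble-estimated Kalman gain.
    ensemble: (n_members, n_state); H: (n_obs, n_state) observation operator."""
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)           # state anomalies
    Y = X @ H.T                                    # observation-space anomalies
    P_yy = Y.T @ Y / (n - 1) + obs_std**2 * np.eye(H.shape[0])
    P_xy = X.T @ Y / (n - 1)
    K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
    perturbed = obs + obs_std * rng.standard_normal((n, H.shape[0]))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(1)
ens = rng.normal(0.0, 2.0, size=(200, 2))          # prior: mean 0, std 2
H = np.array([[1.0, 0.0]])                         # observe first component only
post = enkf_update(ens, obs=np.array([3.0]), obs_std=0.5, H=H, rng=rng)
# post[:, 0] is pulled from the prior mean ~0 toward the observation 3.0.
```

    The paper's contribution sits in the forecast step, replacing the deterministic mechanical-input-power forecast with a time-correlated stochastic process, while an update of this general shape is reused unchanged.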

  9. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE PAGES

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    2017-10-31

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.

  10. Automated Knowledge Discovery From Simulators

    NASA Technical Reports Server (NTRS)

    Burl, Michael; DeCoste, Dennis; Mazzoni, Dominic; Scharenbroich, Lucas; Enke, Brian; Merline, William

    2007-01-01

    A computational method, SimLearn, has been devised to facilitate efficient knowledge discovery from simulators. Simulators are complex computer programs used in science and engineering to model diverse phenomena such as fluid flow, gravitational interactions, coupled mechanical systems, and nuclear, chemical, and biological processes. SimLearn uses active-learning techniques to efficiently address the "landscape characterization problem." In particular, SimLearn tries to determine which regions in "input space" lead to a given output from the simulator, where "input space" refers to an abstraction of all the variables going into the simulator, e.g., initial conditions, parameters, and interaction equations. Landscape characterization can be viewed as an attempt to invert the forward mapping of the simulator and recover the inputs that produce a particular output. Given that a single simulation run can take days or weeks to complete even on a large computing cluster, SimLearn attempts to reduce costs by reducing the number of simulations needed to effect discoveries. Unlike conventional data-mining methods that are applied to static predefined datasets, SimLearn involves an iterative process in which a most informative dataset is constructed dynamically by using the simulator as an oracle. On each iteration, the algorithm models the knowledge it has gained through previous simulation trials and then chooses which simulation trials to run next. Running these trials through the simulator produces new data in the form of input-output pairs. The overall process is embodied in an algorithm that combines support vector machines (SVMs) with active learning. SVMs use learning from examples (the examples are the input-output pairs generated by running the simulator) and a principle called maximum margin to derive predictors that generalize well to new inputs. 
In SimLearn, the SVM plays the role of modeling the knowledge that has been gained through previous simulation trials. Active learning is used to determine which new input points would be most informative if their output were known. The selected input points are run through the simulator to generate new information that can be used to refine the SVM. The process is then repeated. SimLearn carefully balances exploration (semi-randomly searching around the input space) versus exploitation (using the current state of knowledge to conduct a tightly focused search). During each iteration, SimLearn uses not one, but an ensemble of SVMs. Each SVM in the ensemble is characterized by different hyper-parameters that control various aspects of the learned predictor - for example, whether the predictor is constrained to be very smooth (nearby points in input space lead to similar output predictions) or whether the predictor is allowed to be "bumpy." The various SVMs will have different preferences about which input points they would like to run through the simulator next. SimLearn includes a formal mechanism for balancing the ensemble SVM preferences so that a single choice can be made for the next set of trials.
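    The selection step described above rests on a simple primitive: score candidate inputs by how much the ensemble disagrees on them, then query the simulator at the most contested points. A query-by-committee sketch (toy linear "predictors" stand in for trained SVMs; the SimLearn internals are not reproduced here):

```python
import numpy as np

def most_informative(candidates, committee, k=5):
    """Query-by-committee selection: score each candidate input by the
    disagreement (variance) across an ensemble of predictors, and return
    the indices of the k candidates the committee disagrees on most."""
    preds = np.stack([f(candidates) for f in committee])  # (n_models, n_pts)
    disagreement = preds.var(axis=0)
    return np.argsort(disagreement)[::-1][:k]

# Toy committee: three predictors that agree near x = 0 and diverge
# as |x| grows (the default argument pins each slope to its lambda).
committee = [lambda x, a=a: a * x for a in (0.8, 1.0, 1.2)]
cands = np.linspace(-1.0, 1.0, 21)
picks = most_informative(cands, committee, k=2)
# The extreme points x = -1 and x = +1 are selected: that is where the
# models diverge most, so a simulator run there is most informative.
```

    Running the simulator at the picked points, retraining the ensemble, and repeating is the iterative loop the record describes.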

  11. A global sensitivity analysis of crop virtual water content

    NASA Astrophysics Data System (ADS)

    Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.

    2015-12-01

    The concepts of virtual water and water footprint are becoming widely used in the scientific literature and are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, assessments of data sensitivity to model parameters, performed at the global scale, are not available. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (wheat, rice, maize, and soybean) at a global high-resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at 5×5 arc minute resolution and improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are applied to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of the crop planting date.
VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for other crops. The sensitivity to the reference evapotranspiration is highly variable with the considered crop and ranges from positive values (for soybean), to negative values (for rice and maize) and near-zero values for wheat. This variability reflects the different yield response factors of crops, which expresses their tolerance to water stress.
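    The sensitivity index defined above, relative change of the output divided by relative change of one input, reduces to a few lines. The VWC-like model below is a hypothetical stand-in for the full soil-water-balance model (VWC = evapotranspiration / yield):

```python
def sensitivity_index(model, params, name, delta=0.01):
    """Local one-at-a-time sensitivity: relative change in the model
    output divided by the relative change in one input parameter
    (here a +1% perturbation), all other parameters held fixed."""
    base = model(**params)
    perturbed = dict(params, **{name: params[name] * (1 + delta)})
    return ((model(**perturbed) - base) / base) / delta

# Hypothetical VWC-like model: output = evapotranspiration / yield.
vwc = lambda et, crop_yield: et / crop_yield
print(round(sensitivity_index(vwc, {"et": 500.0, "crop_yield": 4.0}, "et"), 3))          # → 1.0
print(round(sensitivity_index(vwc, {"et": 500.0, "crop_yield": 4.0}, "crop_yield"), 3))  # → -0.99
```

    The signs mirror the paper's "direct" versus "inverse" sensitivities: a ratio model is directly sensitive to its numerator and inversely sensitive to its denominator.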

  12. Structured perceptual input imposes an egocentric frame of reference-pointing, imagery, and spatial self-consciousness.

    PubMed

    Marcel, Anthony; Dobel, Christian

    2005-01-01

    Perceptual input imposes and maintains an egocentric frame of reference, which enables orientation. When blindfolded, people tended to mistake the assumed intrinsic axes of symmetry of their immediate environment (a room) for their own egocentric relation to features of the room. When asked to point to the door and window, known to be at mid-points of facing (or adjacent) walls, they pointed with their arms at 180 degrees (or 90 degrees) angles, irrespective of where they thought they were in the room. People did the same when requested to imagine the situation. They justified their responses (inappropriately) by logical necessity or a structural description of the room rather than (appropriately) by relative location of themselves and the reference points. In eight experiments, we explored the effect on this in perception and imagery of: perceptual input (without perceptibility of the target reference points); imaging oneself versus another person; aids to explicit spatial self-consciousness; order of questions about self-location; and the relation of targets to the axes of symmetry of the room. The results indicate that, if one is deprived of structured perceptual input, as well as losing one's bearings, (a) one is likely to lose one's egocentric frame of reference itself, and (b) instead of pointing to reference points, one demonstrates their structural relation by adopting the intrinsic axes of the environment as one's own. This is prevented by providing noninformative perceptual input or by inducing subjects to imagine themselves from the outside, which makes explicit the fact of their being located relative to the world. The role of perceptual contact with a structured world is discussed in relation to sensory deprivation and imagery, appeal is made to Gibson's theory of joint egoreception and exteroception, and the data are related to recent theories of spatial memory and navigation.

  13. An integrated pan-tropical biomass map using multiple reference datasets.

    PubMed

    Avitabile, Valerio; Herold, Martin; Heuvelink, Gerard B M; Lewis, Simon L; Phillips, Oliver L; Asner, Gregory P; Armston, John; Ashton, Peter S; Banin, Lindsay; Bayol, Nicolas; Berry, Nicholas J; Boeckx, Pascal; de Jong, Bernardus H J; DeVries, Ben; Girardin, Cecile A J; Kearsley, Elizabeth; Lindsell, Jeremy A; Lopez-Gonzalez, Gabriela; Lucas, Richard; Malhi, Yadvinder; Morel, Alexandra; Mitchard, Edward T A; Nagy, Laszlo; Qie, Lan; Quinones, Marcela J; Ryan, Casey M; Ferry, Slik J W; Sunderland, Terry; Laurin, Gaia Vaglio; Gatti, Roberto Cazzolla; Valentini, Riccardo; Verbeeck, Hans; Wijaya, Arief; Willcock, Simon

    2016-04-01

    We combined two existing datasets of vegetation aboveground biomass (AGB) (Proceedings of the National Academy of Sciences of the United States of America, 108, 2011, 9899; Nature Climate Change, 2, 2012, 182) into a pan-tropical AGB map at 1-km resolution using an independent reference dataset of field observations and locally calibrated high-resolution biomass maps, harmonized and upscaled to 14 477 1-km AGB estimates. Our data fusion approach uses bias removal and weighted linear averaging that incorporates and spatializes the biomass patterns indicated by the reference data. The method was applied independently in areas (strata) with homogeneous error patterns of the input (Saatchi and Baccini) maps, which were estimated from the reference data and additional covariates. Based on the fused map, we estimated AGB stock for the tropics (23.4 N-23.4 S) of 375 Pg dry mass, 9-18% lower than the Saatchi and Baccini estimates. The fused map also showed differing spatial patterns of AGB over large areas, with higher AGB density in the dense forest areas in the Congo basin, Eastern Amazon and South-East Asia, and lower values in Central America and in most dry vegetation areas of Africa than either of the input maps. The validation exercise, based on 2118 estimates from the reference dataset not used in the fusion process, showed that the fused map had a RMSE 15-21% lower than that of the input maps and, most importantly, nearly unbiased estimates (mean bias 5 Mg dry mass ha(-1) vs. 21 and 28 Mg ha(-1) for the input maps). The fusion method can be applied at any scale including the policy-relevant national level, where it can provide improved biomass estimates by integrating existing regional biomass maps as input maps and additional, country-specific reference datasets. © 2015 John Wiley & Sons Ltd.
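    The bias-removal-plus-weighted-averaging fusion can be sketched per stratum as follows. The weights here are inverse error variances, an assumption consistent with, but simpler than, the paper's reference-data-driven weighting:

```python
import numpy as np

def fuse_maps(maps, biases, error_vars):
    """Fuse co-registered biomass maps: subtract each map's estimated
    bias, then average with weights inversely proportional to each
    map's error variance (estimated from reference data)."""
    debiased = [np.asarray(m, dtype=float) - b for m, b in zip(maps, biases)]
    w = 1.0 / np.asarray(error_vars, dtype=float)
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, debiased))

# Two toy 1x2 "maps" of AGB in Mg/ha with known biases against reference data.
map_a = np.array([[120.0, 80.0]])   # overestimates by +20, low noise
map_b = np.array([[105.0, 65.0]])   # overestimates by +5, noisier
fused = fuse_maps([map_a, map_b], biases=[20.0, 5.0], error_vars=[1.0, 4.0])
# Both debiased maps agree, so the fused map is [[100., 60.]].
```

    Applying this independently within each stratum of homogeneous error patterns, as the record describes, lets the weights and biases vary regionally.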

  14. INDES User's guide multistep input design with nonlinear rotorcraft modeling

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The INDES computer program, a multistep input design program used as part of a data processing technique for rotorcraft systems identification, is described. Flight test inputs based on INDES improve the accuracy of parameter estimates. The input design algorithm, program input, and program output are presented.

  15. Incorporating uncertainty in RADTRAN 6.0 input files.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John

    Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the parameter shape and minimum and maximum of the distribution, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provide installation instructions as well as a description and user guide for the uncertainty engine.

  16. Input that Contradicts Young Children's Strategy for Mapping Novel Words Affects Their Phonological and Semantic Interpretation of Other Novel Words.

    ERIC Educational Resources Information Center

    Jarvis, Lorna Hernandez; Merriman, William E.; Barnett, Michelle; Hanba, Jessica; Van Haitsma, Kylee S.

    2004-01-01

    Children tend to choose an entity they cannot already label, rather than one they can, as the likely referent of a novel noun. The effect of input that contradicts this strategy on the interpretation of other novel nouns was investigated. In pre- and posttests, 4-year-olds were asked to judge whether novel nouns referred to "name-similar" familiar…

  17. MIA-Clustering: a novel method for segmentation of paleontological material.

    PubMed

    Dunmore, Christopher J; Wollny, Gert; Skinner, Matthew M

    2018-01-01

    Paleontological research increasingly uses high-resolution micro-computed tomography (μCT) to study the inner architecture of modern and fossil bone material to answer important questions regarding vertebrate evolution. This non-destructive method allows for the measurement of otherwise inaccessible morphology. Digital measurement is predicated on the accurate segmentation of modern or fossilized bone from other structures imaged in μCT scans, as errors in segmentation can result in inaccurate calculations of structural parameters. Several approaches to image segmentation have been proposed with varying degrees of automation, ranging from completely manual segmentation to the selection of input parameters required for computational algorithms. Many of these segmentation algorithms provide speed and reproducibility at the cost of the flexibility that manual segmentation provides. In particular, the segmentation of modern and fossil bone in the presence of materials such as desiccated soft tissue, soil matrix or precipitated crystalline material can be difficult. Here we present a free, open-source segmentation application capable of segmenting modern and fossil bone that also reduces subjective user decisions to a minimum. We compare the effectiveness of this algorithm with another leading method by using both to measure the parameters of a reference object of known dimensions, as well as to segment an example problematic fossil scan. The results demonstrate that the medical image analysis-clustering method produces accurate segmentations and offers more flexibility than those of equivalent precision. Its free availability, flexibility in dealing with non-bone inclusions and limited need for user input give it broad applicability in anthropological, anatomical, and paleontological contexts.

  18. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

    The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and 2) how consistently the methods rank these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
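
    A first-order Sobol' index of the kind compared in such studies can be estimated with the classic pick-freeze (Saltelli-style) sampling scheme: evaluate the model on two independent sample matrices A and B and on hybrid matrices that take one column from B. The sketch below is a generic pure-Python illustration with inputs uniform on [0, 1]; the function name and sample sizes are assumptions, not the study's SAC-SMA setup:

```python
import random

def sobol_first_order(f, dim, n, seed=0):
    """Estimate first-order Sobol' indices S_i = V_i / Var(f) with the
    pick-freeze estimator (a minimal sketch, not the study's code)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    S = []
    for i in range(dim):
        # A_B^i: rows of A with column i replaced by the value from B
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # first-order index: E[f(B) * (f(A_B^i) - f(A))] / Var(f)
        S.append(sum(fB[k] * (fABi[k] - fA[k]) for k in range(n)) / (n * var))
    return S
```

    For the linear test function f(x) = 3·x₁ + x₂ the exact indices are 0.9 and 0.1, which the estimator recovers to within sampling noise.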

  19. Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.

    2015-12-01

    For computationally expensive climate models, Monte-Carlo approaches of exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  20. The IVS data input to ITRF2014

    NASA Astrophysics Data System (ADS)

    Nothnagel, Axel; Alef, Walter; Amagai, Jun; Andersen, Per Helge; Andreeva, Tatiana; Artz, Thomas; Bachmann, Sabine; Barache, Christophe; Baudry, Alain; Bauernfeind, Erhard; Baver, Karen; Beaudoin, Christopher; Behrend, Dirk; Bellanger, Antoine; Berdnikov, Anton; Bergman, Per; Bernhart, Simone; Bertarini, Alessandra; Bianco, Giuseppe; Bielmaier, Ewald; Boboltz, David; Böhm, Johannes; Böhm, Sigrid; Boer, Armin; Bolotin, Sergei; Bougeard, Mireille; Bourda, Geraldine; Buttaccio, Salvo; Cannizzaro, Letizia; Cappallo, Roger; Carlson, Brent; Carter, Merri Sue; Charlot, Patrick; Chen, Chenyu; Chen, Maozheng; Cho, Jungho; Clark, Thomas; Collioud, Arnaud; Colomer, Francisco; Colucci, Giuseppe; Combrinck, Ludwig; Conway, John; Corey, Brian; Curtis, Ronald; Dassing, Reiner; Davis, Maria; de-Vicente, Pablo; De Witt, Aletha; Diakov, Alexey; Dickey, John; Diegel, Irv; Doi, Koichiro; Drewes, Hermann; Dube, Maurice; Elgered, Gunnar; Engelhardt, Gerald; Evangelista, Mark; Fan, Qingyuan; Fedotov, Leonid; Fey, Alan; Figueroa, Ricardo; Fukuzaki, Yoshihiro; Gambis, Daniel; Garcia-Espada, Susana; Gaume, Ralph; Gaylard, Michael; Geiger, Nicole; Gipson, John; Gomez, Frank; Gomez-Gonzalez, Jesus; Gordon, David; Govind, Ramesh; Gubanov, Vadim; Gulyaev, Sergei; Haas, Ruediger; Hall, David; Halsig, Sebastian; Hammargren, Roger; Hase, Hayo; Heinkelmann, Robert; Helldner, Leif; Herrera, Cristian; Himwich, Ed; Hobiger, Thomas; Holst, Christoph; Hong, Xiaoyu; Honma, Mareki; Huang, Xinyong; Hugentobler, Urs; Ichikawa, Ryuichi; Iddink, Andreas; Ihde, Johannes; Ilijin, Gennadiy; Ipatov, Alexander; Ipatova, Irina; Ishihara, Misao; Ivanov, D. 
V.; Jacobs, Chris; Jike, Takaaki; Johansson, Karl-Ake; Johnson, Heidi; Johnston, Kenneth; Ju, Hyunhee; Karasawa, Masao; Kaufmann, Pierre; Kawabata, Ryoji; Kawaguchi, Noriyuki; Kawai, Eiji; Kaydanovsky, Michael; Kharinov, Mikhail; Kobayashi, Hideyuki; Kokado, Kensuke; Kondo, Tetsuro; Korkin, Edward; Koyama, Yasuhiro; Krasna, Hana; Kronschnabl, Gerhard; Kurdubov, Sergey; Kurihara, Shinobu; Kuroda, Jiro; Kwak, Younghee; La Porta, Laura; Labelle, Ruth; Lamb, Doug; Lambert, Sébastien; Langkaas, Line; Lanotte, Roberto; Lavrov, Alexey; Le Bail, Karine; Leek, Judith; Li, Bing; Li, Huihua; Li, Jinling; Liang, Shiguang; Lindqvist, Michael; Liu, Xiang; Loesler, Michael; Long, Jim; Lonsdale, Colin; Lovell, Jim; Lowe, Stephen; Lucena, Antonio; Luzum, Brian; Ma, Chopo; Ma, Jun; Maccaferri, Giuseppe; Machida, Morito; MacMillan, Dan; Madzak, Matthias; Malkin, Zinovy; Manabe, Seiji; Mantovani, Franco; Mardyshkin, Vyacheslav; Marshalov, Dmitry; Mathiassen, Geir; Matsuzaka, Shigeru; McCarthy, Dennis; Melnikov, Alexey; Michailov, Andrey; Miller, Natalia; Mitchell, Donald; Mora-Diaz, Julian Andres; Mueskens, Arno; Mukai, Yasuko; Nanni, Mauro; Natusch, Tim; Negusini, Monia; Neidhardt, Alexander; Nickola, Marisa; Nicolson, George; Niell, Arthur; Nikitin, Pavel; Nilsson, Tobias; Ning, Tong; Nishikawa, Takashi; Noll, Carey; Nozawa, Kentarou; Ogaja, Clement; Oh, Hongjong; Olofsson, Hans; Opseth, Per Erik; Orfei, Sandro; Pacione, Rosa; Pazamickas, Katherine; Petrachenko, William; Pettersson, Lars; Pino, Pedro; Plank, Lucia; Ploetz, Christian; Poirier, Michael; Poutanen, Markku; Qian, Zhihan; Quick, Jonathan; Rahimov, Ismail; Redmond, Jay; Reid, Brett; Reynolds, John; Richter, Bernd; Rioja, Maria; Romero-Wolf, Andres; Ruszczyk, Chester; Salnikov, Alexander; Sarti, Pierguido; Schatz, Raimund; Scherneck, Hans-Georg; Schiavone, Francesco; Schreiber, Ulrich; Schuh, Harald; Schwarz, Walter; Sciarretta, Cecilia; Searle, Anthony; Sekido, Mamoru; Seitz, Manuela; Shao, Minghui; Shibuya, Kazuo; Shu, 
Fengchun; Sieber, Moritz; Skjaeveland, Asmund; Skurikhina, Elena; Smolentsev, Sergey; Smythe, Dan; Sousa, Don; Sovers, Ojars; Stanford, Laura; Stanghellini, Carlo; Steppe, Alan; Strand, Rich; Sun, Jing; Surkis, Igor; Takashima, Kazuhiro; Takefuji, Kazuhiro; Takiguchi, Hiroshi; Tamura, Yoshiaki; Tanabe, Tadashi; Tanir, Emine; Tao, An; Tateyama, Claudio; Teke, Kamil; Thomas, Cynthia; Thorandt, Volkmar; Thornton, Bruce; Tierno Ros, Claudia; Titov, Oleg; Titus, Mike; Tomasi, Paolo; Tornatore, Vincenza; Trigilio, Corrado; Trofimov, Dmitriy; Tsutsumi, Masanori; Tuccari, Gino; Tzioumis, Tasso; Ujihara, Hideki; Ullrich, Dieter; Uunila, Minttu; Venturi, Tiziana; Vespe, Francesco; Vityazev, Veniamin; Volvach, Alexandr; Vytnov, Alexander; Wang, Guangli; Wang, Jinqing; Wang, Lingling; Wang, Na; Wang, Shiqiang; Wei, Wenren; Weston, Stuart; Whitney, Alan; Wojdziak, Reiner; Yatskiv, Yaroslav; Yang, Wenjun; Ye, Shuhua; Yi, Sangoh; Yusup, Aili; Zapata, Octavio; Zeitlhoefler, Reinhard; Zhang, Hua; Zhang, Ming; Zhang, Xiuzhong; Zhao, Rongbing; Zheng, Weimin; Zhou, Ruixian; Zubko, Nataliya

    2015-01-01

    Very Long Baseline Interferometry (VLBI) is a primary space-geodetic technique for determining precise coordinates on the Earth, for monitoring the variable Earth rotation and orientation with highest precision, and for deriving many other parameters of the Earth system. The International VLBI Service for Geodesy and Astrometry (IVS, http://ivscc.gsfc.nasa.gov/) is a service of the International Association of Geodesy (IAG) and the International Astronomical Union (IAU). The datasets published here are the results of individual VLBI sessions in the form of normal equations in SINEX 2.0 format (http://www.iers.org/IERS/EN/Organization/AnalysisCoordinator/SinexFormat/sinex.html; the SINEX 2.0 description is attached as a PDF) provided by the IVS as the input for the next release of the International Terrestrial Reference Frame (ITRF): ITRF2014. This is a new version of the ITRF2008 release (Bockmann et al., 2009). For each session file, the normal equation systems contain elements for the coordinate components of all stations having participated in the respective session as well as for the Earth orientation parameters (x-pole, y-pole, UT1 and its time derivatives, plus offsets dX, dY to the IAU2006 precession-nutation components; https://www.iau.org/static/resolutions/IAU2006_Resol1.pdf). The terrestrial part is free of datum. The data sets are the result of a weighted combination of the input of several IVS Analysis Centers. The IVS contribution to ITRF2014 is described in Bachmann et al. (2015); Schuh and Behrend (2012) provide a general overview of the VLBI method, and details on the internal data handling can be found in Behrend (2013).

  1. Stability, Consistency and Performance of Distribution Entropy in Analysing Short Length Heart Rate Variability (HRV) Signal.

    PubMed

    Karmakar, Chandan; Udhayakumar, Radhagayathri K; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu

    2017-01-01

    Distribution entropy (DistEn) is a recently developed measure of complexity used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m and the number of bins M, which replaces the tolerance parameter r used by the existing approximate entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length HRV signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variations and performance of DistEn in distinguishing physiological (elderly from young) and pathological (healthy from arrhythmia) conditions with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and best performance in differentiating physiological and pathological conditions across varying input parameters among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series.
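
    The DistEn definition the parameters m and M refer to (embed the series in m dimensions, histogram the pairwise Chebyshev distances into M bins, take the normalized Shannon entropy) can be sketched as follows. This is an illustrative reimplementation from the published definition, not the authors' code:

```python
import math

def dist_en(x, m=2, M=64):
    """Distribution entropy of series x: normalized Shannon entropy of
    the empirical distribution of pairwise Chebyshev distances between
    m-dimensional embedding vectors, over M histogram bins."""
    n = len(x) - m + 1
    vecs = [x[i:i + m] for i in range(n)]
    # all pairwise Chebyshev (max-coordinate) distances
    d = [max(abs(a - b) for a, b in zip(vecs[i], vecs[j]))
         for i in range(n) for j in range(i + 1, n)]
    lo, hi = min(d), max(d)
    width = (hi - lo) / M or 1.0        # degenerate case: all distances equal
    counts = [0] * M
    for v in d:
        counts[min(int((v - lo) / width), M - 1)] += 1
    total = len(d)
    p = [c / total for c in counts if c]
    return -sum(q * math.log2(q) for q in p) / math.log2(M)
```

    A constant series yields DistEn = 0, and the normalization by log₂(M) keeps the value in [0, 1] regardless of the bin count.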

  2. Achromatic self-referencing interferometer

    DOEpatents

    Feldman, Mark

    1994-01-01

    A self-referencing Mach-Zehnder interferometer for accurately measuring laser wavefronts over a broad wavelength range (for example, 600 nm to 900 nm). The apparatus directs a reference portion of an input beam to a reference arm and a measurement portion of the input beam to a measurement arm, recombines the output beams from the reference and measurement arms, and registers the resulting interference pattern ("first" interferogram) at a first detector. Optionally, subportions of the measurement portion are diverted to second and third detectors, which respectively register intensity and interferogram signals that can be processed to reduce the first interferogram's sensitivity to input noise. The reference arm includes a spatial filter producing a high-quality spherical beam from the reference portion, a tilted wedge plate compensating for off-axis aberrations in the spatial filter output, and a mirror collimating the radiation transmitted through the tilted wedge plate. The apparatus includes a thermally and mechanically stable baseplate which supports all reference arm optics, or at least the spatial filter, tilted wedge plate, and the collimator. The tilted wedge plate is mounted adjustably with respect to the spatial filter and collimator, so that it can be maintained in an orientation in which it does not introduce significant wavefront errors into the beam propagating through the reference arm. The apparatus is polarization insensitive and has an equal path length configuration enabling measurement of radiation from broadband as well as closely spaced laser line sources.

  3. An efficient recursive least square-based condition monitoring approach for a rail vehicle suspension system

    NASA Astrophysics Data System (ADS)

    Liu, X. Y.; Alfi, S.; Bruni, S.

    2016-06-01

    A model-based condition monitoring strategy for the railway vehicle suspension is proposed in this paper. The approach is based on a recursive least square (RLS) algorithm focusing on the deterministic 'input-output' model. RLS has a Kalman filtering feature and is able to identify unknown parameters from a noisy dynamic system by memorising the correlation properties of variables. The identification of suspension parameters is achieved by machine learning of the relationship between excitation and response in a vehicle dynamic system. A fault detection method for the vertical primary suspension is illustrated as an instance of this condition monitoring scheme. Simulation results from the rail vehicle dynamics software 'ADTreS' are utilised as 'virtual measurements', considering a trailer car of the Italian ETR500 high-speed train. Field test data from an E464 locomotive are also employed to validate the feasibility of this strategy for real applications. Results of the parameter identification indicate that the estimated suspension parameters are consistent with or close to the reference values. These results provide supporting evidence that this fault diagnosis technique is capable of paving the way for future vehicle condition monitoring systems.
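
    The RLS core of such an input-output identification scheme is standard: for each regressor vector φ and measurement y, update the gain K = Pφ/(λ + φᵀPφ), the parameters θ ← θ + K(y − φᵀθ), and the covariance P ← (P − KφᵀP)/λ. A minimal pure-Python sketch (the function signature and initialization constants are assumptions, not the paper's implementation):

```python
def rls(phis, ys, dim, lam=1.0, delta=1000.0):
    """Recursive least squares with forgetting factor lam.
    phis: list of regressor vectors, ys: measurements.
    Returns the identified parameter vector theta."""
    theta = [0.0] * dim
    # large initial covariance encodes weak confidence in theta = 0
    P = [[delta if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for phi, y in zip(phis, ys):
        Pphi = [sum(P[i][j] * phi[j] for j in range(dim)) for i in range(dim)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(dim))
        K = [v / denom for v in Pphi]                  # gain vector
        err = y - sum(phi[i] * theta[i] for i in range(dim))
        theta = [theta[i] + K[i] * err for i in range(dim)]
        # P <- (P - K phi^T P) / lam  (P stays symmetric)
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(dim)]
             for i in range(dim)]
    return theta
```

    On noiseless data y = 2x + 3 with regressors [x, 1], the estimate converges to [2, 3] after a handful of samples, mirroring how the estimated suspension parameters converge toward their reference values.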

  4. Classification of video sequences into chosen generalized use classes of target size and lighting level.

    PubMed

    Leszczuk, Mikołaj; Dudek, Łukasz; Witkowski, Marcin

    The VQiPS (Video Quality in Public Safety) Working Group, supported by the U.S. Department of Homeland Security, has been developing a user guide for public safety video applications. According to VQiPS, five parameters have particular importance in influencing the ability to achieve a recognition task. They are: usage time-frame, discrimination level, target size, lighting level, and level of motion. These parameters form what are referred to as Generalized Use Classes (GUCs). The aim of our research was to develop algorithms that would automatically assist classification of input sequences into one of the GUCs. The target size and lighting level parameters were addressed. The experiment described reveals the experts' ambiguity and hesitation during the manual target size determination process. However, the automatic methods developed for target size classification make it possible to determine GUC parameters with 70% compliance to the end-users' opinion. Lighting levels of the entire sequence can be classified with an efficiency reaching 93%. To make the algorithms available for use, a test application has been developed. It is able to process video files and display classification results, with a very simple user interface requiring only minimal user interaction.

  5. Forecasting of cyanobacterial density in Torrão reservoir using artificial neural networks.

    PubMed

    Torres, Rita; Pereira, Elisa; Vasconcelos, Vítor; Teles, Luís Oliva

    2011-06-01

    The ability of general regression neural networks (GRNN) to forecast the density of cyanobacteria in the Torrão reservoir (Tâmega river, Portugal), in a period of 15 days, based on three years of collected physical and chemical data, was assessed. Several models were developed and 176 were selected based on their correlation values for the verification series. A time lag of 11 was used, equivalent to one sample (periods of 15 days in the summer and 30 days in the winter). Several combinations of the series were used. Input and output data collected from three depths of the reservoir were applied (surface, euphotic zone limit and bottom). The model that presented a higher average correlation value presented the correlations 0.991; 0.843; 0.978 for training, verification and test series. This model had the three series independent in time: first test series, then verification series and, finally, training series. Only six input variables were considered significant to the performance of this model: ammonia, phosphates, dissolved oxygen, water temperature, pH and water evaporation, physical and chemical parameters referring to the three depths of the reservoir. These variables are common to the next four best models produced and, although these included other input variables, their performance was not better than the selected best model.
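
    A GRNN of the kind applied here is, at its core, a Nadaraya-Watson kernel regressor in Specht's formulation: the prediction is a Gaussian-kernel-weighted average of the training targets. A minimal sketch of the model family (function names, a single output, and the bandwidth sigma are assumptions; this is not the authors' trained model):

```python
import math

def grnn_predict(X_train, y_train, x, sigma=1.0):
    """General regression neural network prediction at point x:
    Gaussian-weighted average of training targets, with bandwidth
    (smoothing parameter) sigma."""
    weights = []
    for xi in X_train:
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))   # squared distance
        weights.append(math.exp(-d2 / (2.0 * sigma ** 2)))
    s = sum(weights)
    return sum(w * y for w, y in zip(weights, y_train)) / s
```

    With a small sigma the model reproduces training points almost exactly; with a larger sigma it smooths across them, which is the trade-off tuned when selecting among candidate forecasting models.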

  6. On Time Delay Margin Estimation for Adaptive Control and Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2011-01-01

    This paper presents methods for estimating the time delay margin for adaptive control of input delay systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent an adaptive law by a locally bounded linear approximation within a small time window. The time delay margin of this input delay system represents a local stability measure and is computed analytically by three methods: Pade approximation, the Lyapunov-Krasovskii method, and the matrix measure method. These methods are applied to the standard model-reference adaptive control, the σ-modification adaptive law, and the optimal control modification adaptive law. The windowing analysis results in non-unique estimates of the time delay margin since it is dependent on the length of a time window and parameters which vary from one time window to the next. The optimal control modification adaptive law overcomes this limitation in that, as the adaptive gain tends to infinity and if the matched uncertainty is linear, the closed-loop input delay system tends to an LTI system. A lower bound of the time delay margin of this system can then be estimated uniquely without the need for the windowing analysis. Simulation results demonstrate the feasibility of the bounded linear stability method for time delay margin estimation.
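
    As a baseline for the quantity being estimated, the classical delay margin of a stable LTI loop is the phase margin divided by the gain-crossover frequency; the paper's Pade, Lyapunov-Krasovskii, and matrix measure estimates refine this idea for adaptive (time-varying) loops. A sketch of the textbook relation only (argument names are assumptions):

```python
import math

def delay_margin(phase_margin_deg, omega_crossover):
    """Classical LTI delay margin: the largest pure input delay tau such
    that the added phase lag tau * omega at gain crossover does not
    exhaust the phase margin, i.e. tau_max = PM (rad) / omega_c (rad/s)."""
    return math.radians(phase_margin_deg) / omega_crossover
```

    For example, a 45-degree phase margin at a crossover of 10 rad/s tolerates roughly 79 ms of loop delay.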

  7. High-Q resonant cavities for terahertz quantum cascade lasers.

    PubMed

    Campa, A; Consolino, L; Ravaro, M; Mazzotti, D; Vitiello, M S; Bartalini, S; De Natale, P

    2015-02-09

    We report on the realization and characterization of two different designs for resonant THz cavities, based on wire-grid polarizers as input/output couplers, and injected by a continuous-wave quantum cascade laser (QCL) emitting at 2.55 THz. A comparison between the measured resonator parameters and the expected theoretical values is reported. With an achieved quality factor Q ≈ 2.5 × 10^5, these cavities show resonant peaks as narrow as a few MHz, comparable with the typical Doppler linewidth of THz molecular transitions and slightly broader than the free-running QCL emission spectrum. The effects of the optical feedback from one cavity to the QCL are examined by using the other cavity as a frequency reference.

  8. Rapid Debris Analysis Project Task 3 Final Report - Sensitivity of Fallout to Source Parameters, Near-Detonation Environment Material Properties, Topography, and Meteorology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldstein, Peter

    2014-01-24

    This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.

  9. The dynamics of integrate-and-fire: mean versus variance modulations and dependence on baseline parameters.

    PubMed

    Pressley, Joanna; Troyer, Todd W

    2011-05-01

    The leaky integrate-and-fire (LIF) is the simplest neuron model that captures the essential properties of neuronal signaling. Yet common intuitions are inadequate to explain basic properties of LIF responses to sinusoidal modulations of the input. Here we examine responses to low and moderate frequency modulations of both the mean and variance of the input current and quantify how these responses depend on baseline parameters. Across parameters, responses to modulations in the mean current are low pass, approaching zero in the limit of high frequencies. For very low baseline firing rates, the response cutoff frequency matches that expected from membrane integration. However, the cutoff shows a rapid, supralinear increase with firing rate, with a steeper increase in the case of lower noise. For modulations of the input variance, the gain at high frequency remains finite. Here, we show that the low-frequency responses depend strongly on baseline parameters and derive an analytic condition specifying the parameters at which responses switch from being dominated by low versus high frequencies. Additionally, we show that the resonant responses for variance modulations have properties not expected for common oscillatory resonances: they peak at frequencies higher than the baseline firing rate and persist when oscillatory spiking is disrupted by high noise. Finally, the responses to mean and variance modulations are shown to have a complementary dependence on baseline parameters at higher frequencies, resulting in responses to modulations of Poisson input rates that are independent of baseline input statistics.
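
    The LIF dynamics under sinusoidal modulation of the mean input can be simulated directly with forward Euler integration; firing rates from such simulations are what the response-gain analyses above are built on. A minimal deterministic sketch (parameter names and values are assumptions, not the paper's; the paper additionally modulates input variance, i.e. noise, which this sketch omits):

```python
import math

def lif_rate(i_mean, i_amp, f_mod, t_end=1.0, dt=1e-5,
             tau=0.02, v_th=1.0, v_reset=0.0):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron,
    tau * dv/dt = -v + I(t), with sinusoidally modulated mean current
    I(t) = i_mean + i_amp * sin(2*pi*f_mod*t). Returns the firing rate
    (spikes per second) over [0, t_end]."""
    v, spikes, t = 0.0, 0, 0.0
    while t < t_end:
        i = i_mean + i_amp * math.sin(2.0 * math.pi * f_mod * t)
        v += dt / tau * (i - v)       # leaky integration step
        if v >= v_th:                 # threshold crossing: spike and reset
            spikes += 1
            v = v_reset
        t += dt
    return spikes / t_end
```

    Without modulation, a suprathreshold constant drive gives the textbook rate 1 / (tau * ln(I/(I - v_th))) (about 72 Hz for I = 2 with these constants), while a subthreshold drive produces no spikes at all.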

  10. Generalized compliant motion primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G. (Inventor)

    1994-01-01

    This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, which produce telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return spring motion input is provided by a restoration spring subsystem. The novel features of this invention include the use of a single general motion primitive at a remote site to permit the shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.

  11. Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters

    NASA Astrophysics Data System (ADS)

    Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei

    2018-05-01

    In this paper, we address the problem of adaptive bounded control for the trajectory tracking of a Quadrotor Aerial Vehicle (QAV) while the input saturations and uncertain parameters with the known bounds are simultaneously taken into account. First, to deal with the underactuated property of the QAV model, we decouple and construct the QAV model as a cascaded structure which consists of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of the smooth saturation function and smooth projection operator in the control design. Third, to ensure the stability of the overall system of the QAV, we develop the technique for the cascaded system in the presence of both the input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures the asymptotic convergence of the tracking errors to the origin. The simulation results are provided to illustrate the effectiveness of the proposed method.

  12. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L 0 ) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weight-averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation from -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon was presented (k c ) and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. 
Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC). This indicates that decreasing the uncertainty of the input parameters, rather than adding multiple phases or input parameters, will make the model more accurate.
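    The single-phase translation described above can be sketched as a short calculation: sum the FOD contributions of several streams (the multiphase prediction), then compare against one lumped stream built from mass-weighted-average L0 and k. The waste-stream values below are hypothetical illustrations, not data from the paper.

```python
import math

def fod_methane(mass, L0, k, years):
    """Cumulative methane after `years` from one waste stream by first-order
    decay: Q(t) = mass * L0 * (1 - exp(-k*t))."""
    return mass * L0 * (1.0 - math.exp(-k * years))

# Hypothetical streams (mass in Mg, L0 in m^3 CH4/Mg, k in 1/yr).
streams = [(1000.0, 90.0, 0.06), (500.0, 40.0, 0.02), (250.0, 150.0, 0.12)]

# Multiphase prediction: sum each stream's FOD contribution at t = 30 yr.
multiphase = sum(fod_methane(m, L0, k, 30.0) for m, L0, k in streams)

# Single-phase prediction: one lumped stream with mass-weighted-average L0, k.
total_mass = sum(m for m, _, _ in streams)
L0_avg = sum(m * L0 for m, L0, _ in streams) / total_mass
k_avg = sum(m * k for m, _, k in streams) / total_mass
single = fod_methane(total_mass, L0_avg, k_avg, 30.0)

rel_diff = (single - multiphase) / multiphase
print(multiphase, single, rel_diff)
```

    For these particular streams the lumped single-phase prediction lands within a few percent of the multiphase sum, consistent with the -7% to +6% band reported above.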

  13. Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.

    2014-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimate of the spread, or uncertainty, in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits; compared with the actual arrival times, an average absolute error of 10.0 hours (RMSE=11.4 hours) was found across all 28 ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the initial distribution of CME input parameters, particularly its mean and spread. When the observed arrival is not within the predicted range, prediction errors caused by the tested CME input parameters can still be ruled out. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. 
Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to the free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
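    The verification statistics quoted above (range coverage, average absolute error, RMSE) can be computed in a few lines. The ensembles below are hypothetical stand-ins, with each entry holding predicted-minus-observed arrival errors in hours, so the observed arrival sits at zero.

```python
import math

# Hypothetical ensembles: each list holds predicted-minus-observed arrival
# errors (hours); 0 inside [min, max] means the observation was "covered".
ensembles = [
    [-6.0, -2.0, 3.0, 8.0],    # observed arrival within the predicted range
    [4.0, 7.0, 12.0],          # all members late: observation outside range
    [-15.0, -9.0, -1.0, 2.0],  # covered
]

covered = sum(1 for e in ensembles if min(e) <= 0.0 <= max(e))

# Error of the ensemble-average prediction for each event.
avg_errors = [sum(e) / len(e) for e in ensembles]
mae = sum(abs(x) for x in avg_errors) / len(avg_errors)
rmse = math.sqrt(sum(x * x for x in avg_errors) / len(avg_errors))
print(covered, mae, rmse)
```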

  14. Measurand transient signal suppressor

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1994-01-01

    A transient signal suppressor for use in a control system adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values and is sustained for a selected discrete time interval is presented. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times, producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal sustained beyond the selected time interval.
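    The suppression logic above is essentially a debounce: only assert the output when the threshold crossing persists for a chosen number of samples. A minimal sketch (sampled-signal analogue of the patent's analog circuit, with hypothetical names):

```python
def suppress(samples, threshold, hold_count, rising=True):
    """Return the sample indices at which an output drive signal would fire:
    the input must stay past `threshold` (in the selected direction) for
    `hold_count` consecutive samples before the output is asserted."""
    fired = []
    run = 0
    for i, v in enumerate(samples):
        crossed = v > threshold if rising else v < threshold
        run = run + 1 if crossed else 0
        if run == hold_count:   # sustained: a non-transient signal
            fired.append(i)
    return fired

# A 2-sample spike (transient) is suppressed; a sustained crossing fires.
sig = [0, 0, 5, 5, 0, 0, 5, 5, 5, 5]
print(suppress(sig, threshold=3, hold_count=3))
```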

  15. Method and apparatus for large motor control

    DOEpatents

    Rose, Chris R [Santa Fe, NM; Nelson, Ronald O [White Rock, NM

    2003-08-12

    Apparatus and method for providing a digital signal processing approach to controlling the speed and phase of a motor. The method involves inputting a reference signal having a frequency and relative phase indicative of a time-based signal; modifying the reference signal to introduce a slew-rate-limited portion of each cycle of the reference signal; inputting a feedback signal having a frequency and relative phase indicative of the operation of the motor; modifying the feedback signal to introduce a slew-rate-limited portion of each cycle of the feedback signal; analyzing the modified reference signal and the modified feedback signal to determine their frequencies and the relative phase between them; and outputting control signals to the motor for adjusting its speed and phase based on the frequency and relative-phase determinations.
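    The slew-rate-limiting step can be illustrated with a simple per-sample clamp: limiting how fast the signal may change turns sharp edges into ramps whose threshold crossings can be timed precisely for the phase comparison. This is a generic sketch, not the patent's circuit.

```python
def slew_limit(signal, max_step):
    """Limit the per-sample change of `signal` to +/-max_step, turning sharp
    edges into ramps."""
    out = [signal[0]]
    for v in signal[1:]:
        prev = out[-1]
        step = max(-max_step, min(max_step, v - prev))
        out.append(prev + step)
    return out

square = [0, 0, 10, 10, 10, 0, 0]
print(slew_limit(square, 4))   # edges become 4-per-sample ramps
```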

  16. Stochastic Simulation Tool for Aerospace Structural Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F.; Moore, David F.

    2006-01-01

    Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a means to interrogate their structural design, based on their mathematical description of the design problem, using finite element analysis methods. The tool leverages the analyst's prior investment in finite element model development for a particular design: the original finite element model is treated as the baseline structural analysis model for the stochastic simulations to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly identifying the design input variables whose variability most influences the response output parameters.
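    The Monte Carlo idea described above can be sketched with a toy response in place of the finite element model: scatter each design input about its nominal value and see which one drives the output spread. The cantilever-like response and the tolerance values are illustrative assumptions, not MSC.Robust Design internals.

```python
import random
import statistics

random.seed(0)

def deflection(load, thickness):
    # Toy stand-in for a structural response: proportional to load / t^3.
    return load / thickness ** 3

N = 20000
# Vary one input at a time about its nominal value to see which input's
# scatter most influences the response output.
load_only = [deflection(random.gauss(100.0, 5.0), 2.0) for _ in range(N)]
thick_only = [deflection(100.0, random.gauss(2.0, 0.05)) for _ in range(N)]

s_load = statistics.stdev(load_only)
s_thick = statistics.stdev(thick_only)
print(s_load, s_thick)  # thickness dominates despite its smaller tolerance
```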

  17. Impact of clinical input variable uncertainties on ten-year atherosclerotic cardiovascular disease risk using new pooled cohort equations.

    PubMed

    Gupta, Himanshu; Schiros, Chun G; Sharifov, Oleg F; Jain, Apurva; Denney, Thomas S

    2016-08-31

    The recently released American College of Cardiology/American Heart Association (ACC/AHA) guideline recommends the Pooled Cohort equations for evaluating the atherosclerotic cardiovascular risk of individuals. The impact of clinical input variable uncertainties on the estimates of ten-year cardiovascular risk based on the ACC/AHA guidelines is not known. Using the publicly available National Health and Nutrition Examination Survey dataset (2005-2010), we computed maximum and minimum ten-year cardiovascular risks by assuming clinically relevant variations/uncertainties in the age input (0-1 year), ±10 % variation in total cholesterol, high-density lipoprotein cholesterol, and systolic blood pressure, and a uniform distribution of the variance of each variable. We analyzed the changes in risk category compared to the actual inputs at the 5 % and 7.5 % risk limits, as these limits define the thresholds for consideration of drug therapy in the new guidelines. The new Pooled Cohort equations for risk estimation were implemented in a custom software package. Based on our input variances, changes in risk category were possible in up to 24 % of the population cohort at both the 5 % and 7.5 % risk boundary limits. This trend was consistently noted across all subgroups except African American males, where most of the cohort had ≥7.5 % baseline risk regardless of the variation in the variables. The uncertainties in the input variables can alter the risk categorization. The impact of these variances on the ten-year risk needs to be incorporated into the patient/clinician discussion and clinical decision making. Incorporating good clinical practices for the measurement of critical clinical variables and robust standardization of laboratory parameters to more stringent reference standards is extremely important for successful implementation of the new guidelines. 
Furthermore, the ability to customize the risk calculator inputs to better represent unique clinical circumstances specific to individual needs would be highly desirable in future versions of the risk calculator.
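    The min/max-risk computation described above can be sketched by sweeping each input over its uncertainty range and checking whether the risk interval straddles a category boundary. The logistic `risk_pct` below is a hypothetical stand-in, not the published ACC/AHA Pooled Cohort coefficients.

```python
import math
from itertools import product

def risk_pct(age, tc, hdl, sbp):
    """Hypothetical stand-in for a pooled-cohort-style risk score (logistic in
    a linear predictor). NOT the real ACC/AHA coefficients."""
    x = 0.06 * age + 0.008 * tc - 0.02 * hdl + 0.012 * sbp - 9.0
    return 100.0 / (1.0 + math.exp(-x))

def risk_range(age, tc, hdl, sbp):
    """Min/max risk over the clinically relevant input uncertainties:
    +0..1 year in age and +/-10% in TC, HDL-C and SBP."""
    risks = [risk_pct(age + da, tc * ftc, hdl * fh, sbp * fs)
             for da, ftc, fh, fs in product((0.0, 1.0), (0.9, 1.1),
                                            (0.9, 1.1), (0.9, 1.1))]
    return min(risks), max(risks)

base = risk_pct(60, 200, 50, 130)
lo, hi = risk_range(60, 200, 50, 130)
crosses_5pct = lo < 5.0 <= hi   # risk category could flip at the 5% boundary
print(round(base, 2), round(lo, 2), round(hi, 2), crosses_5pct)
```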

  18. MTPA control of mechanical sensorless IPMSM based on adaptive nonlinear control.

    PubMed

    Najjar-Khodabakhsh, Abbas; Soltani, Jafar

    2016-03-01

    In this paper, an adaptive nonlinear control scheme is proposed for implementing a maximum torque per ampere (MTPA) control strategy for an interior permanent magnet synchronous motor (IPMSM) drive. The control scheme is developed in the rotor d-q axis reference frame using the adaptive input-output state feedback linearization (AIOFL) method. The drive system control stability is supported by Lyapunov theory. The motor inductances are estimated online by an estimation law obtained by AIOFL, and the estimation errors of these parameters are proven to converge asymptotically to zero. Based on minimizing the motor current amplitude, the MTPA control strategy is performed using a nonlinear optimization technique while considering the online reference torque. The motor reference torque is generated by a conventional rotor speed PI controller. By performing the MTPA control strategy, the generated online motor d-q reference currents are used in the AIOFL controller to obtain the SV-PWM reference voltages and the online estimation of the motor d-q inductances. In addition, the stator resistance is estimated online using a conventional PI controller. Moreover, the rotor position is detected using the online estimation of the stator flux and of the motor q-axis inductance. Simulation and experimental results prove the effectiveness and capability of the proposed control method.
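    The MTPA condition itself has a well-known closed form for an IPMSM: for a fixed stator current magnitude, the torque-maximizing d-axis current follows from the torque equation T = 1.5·p·(ψf·iq + (Ld−Lq)·id·iq). The sketch below uses hypothetical machine parameters (not the paper's drive) and cross-checks the closed form against a brute-force sweep of the current angle.

```python
import math

# Hypothetical IPMSM parameters (SI units); Lq > Ld for an interior PM rotor.
psi_f, Ld, Lq, pole_pairs = 0.1, 0.003, 0.007, 4
Is = 10.0  # stator current magnitude (A)

def torque(id_, iq):
    return 1.5 * pole_pairs * (psi_f * iq + (Ld - Lq) * id_ * iq)

# Closed-form MTPA d-axis current: root of 2*dL*id^2 + psi_f*id - dL*Is^2 = 0
# with dL = Ld - Lq (negative), taking the negative-id branch.
dL = Ld - Lq
id_mtpa = (-psi_f + math.sqrt(psi_f**2 + 8.0 * dL**2 * Is**2)) / (4.0 * dL)
iq_mtpa = math.sqrt(Is**2 - id_mtpa**2)

# Brute-force check: sweep the current angle and keep the best torque.
best = max(torque(-Is * math.sin(b), Is * math.cos(b))
           for b in (i * math.pi / 2000 for i in range(1000)))

print(id_mtpa, torque(id_mtpa, iq_mtpa), best)
```

    With Lq > Ld the optimum uses a negative id, adding reluctance torque on top of the magnet torque; id = 0 (surface-PM style control) gives strictly less torque for the same current.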

  19. Automatic control of the NMB level in general anaesthesia with a switching total system mass control strategy.

    PubMed

    Teixeira, Miguel; Mendonça, Teresa; Rocha, Paula; Rabiço, Rui

    2014-12-01

    This paper presents a model-based switching control strategy to drive the neuromuscular blockade (NMB) level of patients undergoing general anesthesia to a predefined reference. A single-input single-output Wiener system with only two parameters is used to model the effect of two different muscle relaxants, atracurium and rocuronium, and a switching controller is designed based on a bank of total system mass control laws. Each such law is tuned for an individual model from a bank chosen to represent the behavior of the whole population. The control law applied at each instant corresponds to the model whose NMB response is closest to the patient's response. Moreover, a scheme to improve the reference tracking quality based on the analysis of the patient's response, as well as a comparison between the switching strategy and the Extended Kalman Filter (EKF) technique, are presented. The results are illustrated by means of several simulations, in which switching is shown to provide good results, both in theory and in practice, with desirable reference tracking. The reference tracking improvement scheme produces better reference tracking and showed better performance than the EKF. Based on these results, the switching control strategy with a bank of total system mass control laws proved robust enough to be used as an automatic control system for the NMB level.

  20. Calibration of Heat Stress Monitor and its Measurement Uncertainty

    NASA Astrophysics Data System (ADS)

    Ekici, Can

    2017-07-01

    The wet-bulb globe temperature (WBGT) equation is a heat stress index that gives information for workers in industrial areas. The WBGT equation is described in ISO Standard 7243 (ISO 7243 in Hot environments—estimation of the heat stress on working man, based on the WBGT index, ISO, Geneva, 1982). WBGT is the result of the combined quantitative effects of the natural wet-bulb temperature, the globe temperature, and the dry-bulb air temperature. WBGT is a calculated parameter: it uses input estimates, and a heat stress monitor measures these quantities. In this study, the calibration method of a heat stress monitor is described, and the model function for the measurement uncertainty is given. Sensitivity coefficients were derived according to the GUM. A two-pressure humidity generator was used to generate a controlled environment, and the heat stress monitor was calibrated inside the generator. The two-pressure humidity generator, located at the Turkish Standards Institution, was used as the reference device; it is traceable to national standards and includes reference Pt-100 temperature sensors. The reference sensor was sheltered with a wet wick for the calibration of the natural wet-bulb thermometer, and was centred in a black globe with a 150 mm diameter for the calibration of the black globe thermometer.
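    The ISO 7243 index is a fixed weighted sum, so its GUM sensitivity coefficients are simply the constant weights. A minimal sketch of the index and its combined standard uncertainty (the example input values and uncertainties are illustrative, not the paper's calibration data):

```python
import math

def wbgt_outdoor(t_nwb, t_g, t_a):
    """ISO 7243 WBGT with solar load: natural wet-bulb, globe, air temps (C)."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_a

def wbgt_indoor(t_nwb, t_g):
    """ISO 7243 WBGT without solar load."""
    return 0.7 * t_nwb + 0.3 * t_g

def u_wbgt_outdoor(u_nwb, u_g, u_a):
    """GUM combined standard uncertainty: the sensitivity coefficients of a
    linear model function are its constant weights."""
    return math.sqrt((0.7 * u_nwb) ** 2 + (0.2 * u_g) ** 2 + (0.1 * u_a) ** 2)

print(wbgt_outdoor(25.0, 40.0, 30.0))   # 0.7*25 + 0.2*40 + 0.1*30 = 28.5
print(u_wbgt_outdoor(0.1, 0.1, 0.1))
```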

  1. Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX

    DTIC Science & Technology

    2015-07-01

    exercise was to evaluate the importance of chemical-specific model input parameters, the impacts of their uncertainty, and the potential benefits of... chemical-specific inputs for RDX that were determined to be sensitive with relatively high uncertainty: these included the soil-water linear... Koc for organic chemicals. The EFS values provided for log Koc of RDX were 1.72 and 1.95. OBJECTIVE: TREECS™ (http://el.erdc.usace.army.mil/treecs

  2. Quantification of tumor perfusion using dynamic contrast-enhanced ultrasound: impact of mathematical modeling

    NASA Astrophysics Data System (ADS)

    Doury, Maxime; Dizeux, Alexandre; de Cesare, Alain; Lucidarme, Olivier; Pellot-Barakat, Claire; Bridal, S. Lori; Frouin, Frédérique

    2017-02-01

    Dynamic contrast-enhanced ultrasound has been proposed to monitor tumor therapy, as a complement to volume measurements. To assess the variability of perfusion parameters under ideal conditions, four consecutive test-retest studies were acquired in a mouse tumor model using controlled injections. The impact of mathematical modeling on parameter variability was then investigated. Coefficients of variation (CV) of parameters based on tissue blood volume (BV) and tissue blood flow (BF) were estimated inside 32 sub-regions of the tumors, comparing the log-normal (LN) model with a one-compartment model fed by an arterial input function (AIF) and improved by the introduction of a time delay parameter. Relative perfusion parameters were also estimated by normalization of the LN parameters and normalization of the one-compartment parameters estimated with the AIF, using a reference tissue (RT) region. A direct estimation (rRTd) of relative parameters, based on the one-compartment model without using the AIF, was also obtained by using the kinetics inside the RT region. The test-retest studies show that absolute regional parameters have high CV regardless of the approach, with median values of about 30% for BV and 40% for BF. The positive impact of normalization was established, showing coherent estimation of relative parameters with reduced CV (about 20% for BV and 30% for BF using the rRTd approach). These values were significantly lower (p < 0.05) than the CV of absolute parameters. The rRTd approach provided the smallest CV and should be preferred for estimating relative perfusion parameters.

  3. EDP Applications to Musical Bibliography: Input Considerations

    ERIC Educational Resources Information Center

    Robbins, Donald C.

    1972-01-01

    The application of Electronic Data Processing (EDP) has been a boon in the analysis and bibliographic control of music. However, an extra step of encoding must be undertaken for input of music. The best hope to facilitate musical input is the development of an Optical Character Recognition (OCR) music-reading machine. (29 references) (Author/NH)

  4. From Input to Intake: Towards a Brain-Based Perspective of Selective Attention.

    ERIC Educational Resources Information Center

    Sato, Edynn; Jacobs, Bob

    1992-01-01

    Addresses, from a neurobiological perspective, the input-intake distinction commonly made in applied linguistics and the role of selective attention in transforming input to intake. The study places primary emphasis upon a neural structure (the nucleus reticularis thalami) that appears to be essential for selective attention. (79 references)…

  5. A Design of Experiments Approach Defining the Relationships Between Processing and Microstructure for Ti-6Al-4V

    NASA Technical Reports Server (NTRS)

    Wallace, Terryl A.; Bey, Kim S.; Taminger, Karen M. B.; Hafley, Robert A.

    2004-01-01

    A study was conducted to evaluate the relative significance of input parameters on Ti-6Al-4V deposits produced by an electron beam freeform fabrication process under development at the NASA Langley Research Center. Five input parameters were chosen (beam voltage, beam current, translation speed, wire feed rate, and beam focus), and a design of experiments (DOE) approach was used to develop a set of 16 experiments to evaluate the relative importance of these parameters on the resulting deposits. Both single-bead and multi-bead stacks were fabricated using the 16 combinations, and the resulting heights and widths of the stack deposits were measured. The resulting microstructures were also characterized to determine the impact of these parameters on the size of the melt pool and heat-affected zone. The relative importance of each input parameter on the height and width of the multi-bead stacks is discussed.

  6. Personal identification based on blood vessels of retinal fundus images

    NASA Astrophysics Data System (ADS)

    Fukuta, Keisuke; Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Hara, Takeshi; Fujita, Hiroshi

    2008-03-01

    Biometric techniques have been implemented in place of conventional identification methods such as passwords in computers, automatic teller machines (ATMs), and entrance and exit management systems. We propose a personal identification (PI) system using color retinal fundus images, which are unique to each individual. The proposed identification procedure is based on comparison of an input fundus image with reference fundus images in a database. In the first step, registration between the input image and the reference image, including translational and rotational movement, is performed. The PI is based on a measure of similarity between blood vessel images generated from the input and reference images. The similarity measure is defined as the cross-correlation coefficient calculated from the pixel values. When the similarity is greater than a predetermined threshold, the input image is identified, meaning that the input and reference images are associated with the same person. Four hundred sixty-two fundus images, including forty-one same-person image pairs, were used for the evaluation of the proposed technique. The false rejection rate and the false acceptance rate were 9.9×10^-5 % and 4.3×10^-5 %, respectively. The results indicate that the proposed method has higher performance than other biometrics except DNA. For practical public application, a device that can easily capture retinal fundus images is needed. The proposed method applies not only to PI but also to a system that warns about misfiling of fundus images in medical facilities.
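    The matching step described above (cross-correlation of pixel values against a database, with a decision threshold) can be sketched on flattened vectors; the tiny vectors and the threshold value below are hypothetical, not the paper's images or operating point.

```python
import math

def corr(a, b):
    """Pearson cross-correlation coefficient between two equal-length
    pixel-value vectors (e.g. flattened vessel images)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def identify(input_vec, references, threshold=0.6):
    """Return the index of the best-matching reference if its similarity
    exceeds the threshold, else None (no identification)."""
    scores = [corr(input_vec, r) for r in references]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return best if scores[best] > threshold else None

refs = [[0, 1, 0, 2, 5, 1], [3, 3, 2, 0, 1, 0]]
probe = [0, 1, 1, 2, 5, 1]          # noisy version of refs[0]
print(identify(probe, refs))
```

    Raising the threshold trades false acceptances for false rejections, which is exactly the trade-off behind the two error rates quoted above.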

  7. How Much Comprehensible Input Did Heinrich Schliemann Get?

    ERIC Educational Resources Information Center

    Krashen, Stephen D.

    1991-01-01

    Examines Heinrich Schliemann's method of acquiring a second language primarily by means of conscious learning. It is revealed that Schliemann probably obtained a great deal of comprehensible input in English. (nine references) (GLR)

  8. F-18 High Alpha Research Vehicle (HARV) parameter identification flight test maneuvers for optimal input design validation and lateral control effectiveness

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1995-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open-loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points that define each input are included, along with plots of the input time histories.

  9. Comparison of theoretical and experimental thrust performance of a 1030:1 area ratio rocket nozzle at a chamber pressure of 2413 kN/sq m (350 psia)

    NASA Technical Reports Server (NTRS)

    Smith, Tamara A.; Pavli, Albert J.; Kacynski, Kenneth J.

    1987-01-01

    The Joint Army, Navy, NASA, Air Force (JANNAF) rocket-engine performance-prediction procedure is based on the use of various reference computer programs. One of the reference programs for nozzle analysis is the Two-Dimensional Kinetics (TDK) program. The purpose of this report is to calibrate the JANNAF procedure that has been incorporated into the December 1984 version of the TDK program for the high-area-ratio rocket-engine regime. The calibration was accomplished by modeling the performance of a 1030:1 rocket nozzle tested at NASA Lewis. A detailed description of the test conditions and TDK input parameters is given. The results indicate that the computer code predicts delivered vacuum specific impulse to within 0.12 to 1.9 percent of the experimental data. Vacuum thrust coefficient predictions were within ±1.3 percent of experimental results. Predictions of wall static pressure were within approximately ±5 percent of the measured values.

  10. Utilization of Global Reference Atmosphere Model (GRAM) for shuttle entry

    NASA Technical Reports Server (NTRS)

    Joosten, Kent

    1987-01-01

    At high latitudes, dispersions in values of density for the middle atmosphere from the Global Reference Atmosphere Model (GRAM) are observed to be large, particularly in the winter. Trajectories have been run from 28.5 deg to 98 deg. The critical part of the atmosphere for reentry is 250,000 to 270,000 ft. 250,000 ft is the altitude where the shuttle trajectory levels out. For ascending passes the critical region occurs near the equator. For descending entries the critical region is in northern latitudes. The computed trajectory is input to the GRAM, which computes means and deviations of atmospheric parameters at each point along the trajectory. There is little latitude dispersion for the ascending passes; the strongest source of deviations is seasonal; however, very wide seasonal and latitudinal deviations are exhibited for the descending passes at all orbital inclinations. For shuttle operations the problem is control to maintain the correct entry corridor and avoid either aerodynamic skipping or excessive heat loads.

  11. Data Assimilation - Advances and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented, and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.

  12. Evaluation of the Quality of Digital Elevation Models Derived by Interferometry

    NASA Astrophysics Data System (ADS)

    Gens, Rüdiger

    One of the most important uses of SAR interferometry is in the generation of digital elevation models (DEMs). However, a standard procedure for quality estimation of DEMs does not exist. This paper proposes a method of quality estimation using an adapted Monte Carlo simulation, which has the advantage that it could be used in areas where appropriate reference DEMs are not available. This paper also addresses interferometric processing, with special emphasis on the influence of the input parameters. Practical implementation of the proposed technique is shown on a data set from Lower Saxony in Germany. The error map generated, which is a measure of the quality of the DEM, is also presented. For further analysis of the critical aspects of quality, a reference DEM has also been used.

  13. Quantifying uncertainty and sensitivity in sea ice models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
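    The variance-based index behind such a sensitivity analysis, S_i = Var(E[Y|X_i]) / Var(Y), can be estimated by a brute-force double loop: fix one input, average the model over the others, and take the variance of those conditional means. The three-parameter toy model below is illustrative only, not the sea ice model.

```python
import random
import statistics

random.seed(1)

def model(x1, x2, x3):
    """Toy response standing in for a sea ice property; x1 dominates."""
    return 3.0 * x1 + x2 ** 2 + 0.1 * x3

def draw():
    return [random.uniform(-1.0, 1.0) for _ in range(3)]

def first_order_index(i, n_outer=300, n_inner=300):
    """First-order variance-based index S_i = Var(E[Y|X_i]) / Var(Y)."""
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = random.uniform(-1.0, 1.0)
        ys = []
        for _ in range(n_inner):
            x = draw()
            x[i] = xi          # condition on the i-th input
            ys.append(model(*x))
        cond_means.append(sum(ys) / n_inner)
        all_y.extend(ys)
    return statistics.pvariance(cond_means) / statistics.pvariance(all_y)

s = [first_order_index(i) for i in range(3)]
print([round(v, 2) for v in s])   # x1 explains most of the output variance
```

    Unlike one-at-a-time perturbation, this estimator also captures non-linear effects such as the x2² term, which is why it suits models with non-linear, non-additive behavior.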

  14. Nonlinear control of linear parameter varying systems with applications to hypersonic vehicles

    NASA Astrophysics Data System (ADS)

    Wilcox, Zachary Donald

    The focus of this dissertation is to design a controller for linear parameter varying (LPV) systems, apply it specifically to air-breathing hypersonic vehicles, and examine the interplay between control performance and structural dynamics design. Specifically, a Lyapunov-based continuous robust controller is developed that yields exponential tracking of a reference model despite the presence of bounded, nonvanishing disturbances. The hypersonic vehicle has time-varying parameters, specifically temperature profiles, and its dynamics can be reduced to an LPV system with additive disturbances. Since the HSV can be modeled as an LPV system, the proposed control design is directly applicable. The control performance is directly examined through simulations. A wide variety of applications exist that can be effectively modeled as LPV systems. In particular, flight systems have historically been modeled as LPV systems, and associated control tools have been applied such as gain scheduling, linear matrix inequalities (LMIs), linear fractional transformations (LFTs), and μ-synthesis. However, as flight environments and trajectories become more demanding, traditional LPV controllers may no longer be sufficient. In particular, hypersonic flight vehicles (HSVs) present an inherently difficult problem because of the nonlinear aerothermoelastic coupling effects in the dynamics. HSV flight conditions produce temperature variations that can alter both the structural dynamics and the flight dynamics. Starting with the full nonlinear dynamics, the aerothermoelastic effects are modeled by a temperature-dependent, parameter-varying state-space representation with added disturbances. The model includes an uncertain parameter varying state matrix, an uncertain parameter varying non-square (column-deficient) input matrix, and an additive bounded disturbance. In this dissertation, a robust dynamic controller is formulated for an uncertain and disturbed LPV system. 
The developed controller is then applied to an HSV model, and a Lyapunov analysis is used to prove global exponential reference model tracking in the presence of uncertainty in the state and input matrices and exogenous disturbances. Simulations with a spectrum of gains and temperature profiles on the full nonlinear dynamic model of the HSV are used to illustrate the performance and robustness of the developed controller. In addition, this work considers how the performance of the developed controller varies over a wide variety of control gains and temperature profiles, and how it can be optimized with respect to different performance metrics. Specifically, various temperature profile models and related nonlinear temperature-dependent disturbances are used to characterize the relative control performance and effort for each model. Examining such metrics as a function of temperature provides a potential inroad to examining the interplay between structural/thermal-protection design and control development, and has application for future HSV design and control implementation.

  15. Stability, Consistency and Performance of Distribution Entropy in Analysing Short Length Heart Rate Variability (HRV) Signal

    PubMed Central

    Karmakar, Chandan; Udhayakumar, Radhagayathri K.; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu

    2017-01-01

    Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters—the embedding dimension m and the number of bins M, which replaces the tolerance parameter r used by the existing approximate entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length HRV signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing the input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn, and its performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions, with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and the best performance in differentiating physiological and pathological conditions across input parameter settings among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series. PMID:28979215
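    The m and M parameters discussed above enter DistEn as follows: embed the series in m dimensions, histogram all pairwise Chebyshev distances into M bins, and take the normalised Shannon entropy of that histogram. A minimal sketch (my reading of the published algorithm, with unit delay and equal-width bins assumed):

```python
import math
import random
from itertools import combinations

def dist_en(x, m=2, M=64):
    """Distribution entropy: base-2 Shannon entropy, normalised by log2(M),
    of the M-bin histogram of all pairwise Chebyshev distances between
    m-dimensional (unit-delay) embedding vectors of the series."""
    vecs = [x[i:i + m] for i in range(len(x) - m + 1)]
    d = [max(abs(a - b) for a, b in zip(u, v))
         for u, v in combinations(vecs, 2)]
    d_max = max(d)
    if d_max == 0:
        return 0.0          # constant series: a single-bin distribution
    counts = [0] * M
    for v in d:
        counts[min(int(v / d_max * M), M - 1)] += 1
    n = len(d)
    p = [c / n for c in counts if c > 0]
    return -sum(q * math.log2(q) for q in p) / math.log2(M)

random.seed(7)
noise = [random.random() for _ in range(200)]
regular = [math.sin(0.2 * i) for i in range(200)]
print(dist_en(noise), dist_en(regular))
```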

  16. Application of artificial neural networks to assess pesticide contamination in shallow groundwater

    USGS Publications Warehouse

    Sahoo, G.B.; Ray, C.; Mehnert, E.; Keefer, D.A.

    2006-01-01

    In this study, a feed-forward back-propagation neural network (BPNN) was developed and applied to predict pesticide concentrations in groundwater monitoring wells. Pesticide concentration data are challenging to analyze because they tend to be highly censored. Input data to the neural network included the categorical indices of depth to aquifer material, pesticide leaching class, aquifer sensitivity to pesticide contamination, time (month) of sample collection, well depth, depth to water from land surface, and additional travel distance in the saturated zone (i.e., distance from land surface to midpoint of well screen). The output of the neural network was the total pesticide concentration detected in the well. The model predictions showed good agreement with the observed data in terms of correlation coefficient (R = 0.87) and pesticide detection efficiency (E = 89%), as well as a good match between the observed and predicted "class" groups. The relative importance of input parameters to pesticide occurrence in groundwater was examined in terms of R, E, mean error (ME), root mean square error (RMSE), and pesticide occurrence "class" groups by eliminating some key input parameters from the model. Well depth and time of sample collection were the most sensitive input parameters for predicting the pesticide contamination potential of a well. This suggests that wells tapping shallow aquifers are more vulnerable to pesticide contamination than wells tapping deeper aquifers. Pesticide occurrences during post-application months (June through October) were found to be 2.5 to 3 times higher than pesticide occurrences during other months (November through April). The BPNN was used to rank the input parameters with the highest potential to contaminate groundwater, including two original and five ancillary parameters. The two original parameters are depth to aquifer material and pesticide leaching class. When these two parameters were the only input parameters for the BPNN, they were not able to predict contamination potential. However, when they were used with other parameters, the predictive performance efficiency of the BPNN in terms of R, E, ME, RMSE, and pesticide occurrence "class" groups increased. Ancillary data include data collected during the study such as well depth and time of sample collection. The BPNN indicated that the ancillary data had more predictive power than the original data. The BPNN results will help researchers identify parameters to improve maps of aquifer sensitivity to pesticide contamination. © 2006 Elsevier B.V. All rights reserved.
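
As an illustration of the feed-forward back-propagation architecture described above (not the authors' network, whose inputs, layer sizes, and training details differ), a minimal one-hidden-layer sketch in NumPy:

```python
import numpy as np

def train_bpnn(X, y, hidden=8, lr=0.05, epochs=2000, seed=0):
    """Minimal feed-forward back-propagation network: one sigmoid
    hidden layer, linear output, trained by gradient descent on MSE.
    Illustrative sketch only; all hyperparameters are assumptions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden);      b2 = 0.0
    for _ in range(epochs):
        h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # hidden activations
        pred = h @ W2 + b2
        err = pred - y                             # output-layer error
        # Back-propagate the error through both layers.
        gW2 = h.T @ err / n; gb2 = err.mean()
        dh = np.outer(err, W2) * h * (1.0 - h)
        gW1 = X.T @ dh / n; gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: 1.0 / (1.0 + np.exp(-(Xq @ W1 + b1))) @ W2 + b2
```

In the study, the inputs would be the encoded categorical indices (leaching class, aquifer sensitivity, month, etc.) and the target the total pesticide concentration.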

  17. Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests

    NASA Technical Reports Server (NTRS)

    Douglas, Freddie; Bourgeois, Edit Kaminsky

    2005-01-01

    The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).

  18. The NASA MSFC Earth Global Reference Atmospheric Model-2007 Version

    NASA Technical Reports Server (NTRS)

    Leslie, F.W.; Justus, C.G.

    2008-01-01

    Reference or standard atmospheric models have long been used for design and mission planning of various aerospace systems. The NASA/Marshall Space Flight Center (MSFC) Global Reference Atmospheric Model (GRAM) was developed in response to the need for a design reference atmosphere that provides complete global geographical variability and complete altitude coverage (surface to orbital altitudes), as well as complete seasonal and monthly variability of the thermodynamic variables and wind components. A unique feature of GRAM is that, in addition to providing the geographical, height, and monthly variation of the mean atmospheric state, it includes the ability to simulate spatial and temporal perturbations in these atmospheric parameters (e.g. fluctuations due to turbulence and other atmospheric perturbation phenomena). A summary comparing GRAM features to the characteristics and features of other reference or standard atmospheric models can be found in the Guide to Reference and Standard Atmosphere Models. The original GRAM has undergone a series of improvements over the years, with recent additions and changes. The software program is called Earth-GRAM2007 to distinguish it from similar programs for other bodies (e.g. Mars, Venus, Neptune, and Titan). However, in order to make this Technical Memorandum (TM) more readable, the software will be referred to simply as GRAM07 or GRAM unless additional clarity is needed. Section 1 provides an overview of the basic features of GRAM07, including the newly added features. Section 2 provides a more detailed description of GRAM07 and how the model output is generated. Section 3 presents sample results. Appendices A and B describe the Global Upper Air Climatic Atlas (GUACA) data and the Global Gridded Upper Air Statistics (GGUAS) database. Appendix C provides instructions for compiling and running GRAM07. Appendix D gives a description of the required NAMELIST-format input. Appendix E gives sample output. Appendix F provides a list of available parameters to enable the user to generate special output. Appendix G gives an example and guidance on incorporating GRAM07 as a subroutine in other programs such as trajectory codes or orbital propagation routines.

  19. Comparisons of Solar Wind Coupling Parameters with Auroral Energy Deposition Rates

    NASA Technical Reports Server (NTRS)

    Elsen, R.; Brittnacher, M. J.; Fillingim, M. O.; Parks, G. K.; Germany, G. A.; Spann, J. F., Jr.

    1997-01-01

    Measurement of the global rate of energy deposition in the ionosphere via auroral particle precipitation is one of the primary goals of the Polar UVI program and is an important component of the ISTP program. The instantaneous rate of energy deposition for the entire month of January 1997 has been calculated by applying models to the UVI images and is presented by Fillingim et al. in this session. A number of parameters that predict the rate of coupling of solar wind energy into the magnetosphere have been proposed in the last few decades. Some of these parameters, such as the epsilon parameter of Perreault and Akasofu, depend on the instantaneous values in the solar wind. Other parameters depend on the integrated values of solar wind parameters, especially IMF Bz, e.g. applied flux, which predicts the net transfer of magnetic flux to the tail. While these parameters have often been used successfully in substorm studies, their validity in terms of global energy input has not yet been ascertained, largely because data such as those supplied by the ISTP program were lacking. We have calculated these and other energy coupling parameters for January 1997 using solar wind data provided by WIND and other solar wind monitors. The rates of energy input predicted by these parameters are compared to those measured through UVI data, and correlations are sought. Whether these parameters are better at providing an instantaneous rate of energy input or an average input over some time period is addressed. We also study whether either type of parameter provides better correlations if a time delay is introduced; if so, this time delay may provide a characteristic time for energy transport in the coupled solar wind-magnetosphere-ionosphere system.
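
For reference, the epsilon parameter mentioned above is conventionally computed from the solar wind speed, the IMF magnitude, and the IMF clock angle. The sketch below follows the usual SI-unit convention with a scale length l0 of about 7 Earth radii; the constant and exact form should be checked against the original Perreault-Akasofu definition.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability [H/m]
L0 = 7 * 6.371e6       # conventional scale length, ~7 Earth radii [m]

def epsilon_coupling(v, by, bz):
    """Perreault-Akasofu epsilon coupling parameter [W] (sketch).

    v        solar wind speed [m/s]
    by, bz   IMF GSM components [T]

    Usual convention: epsilon = (4*pi/mu0) * v * B^2 * sin^4(theta/2)
    * l0^2, with theta the IMF clock angle (0 for purely northward Bz).
    """
    b = math.hypot(by, bz)
    theta = math.atan2(by, bz)   # clock angle
    return (4 * math.pi / MU0) * v * b**2 * math.sin(theta / 2) ** 4 * L0**2
```

For typical substorm-time values (v = 400 km/s, Bz = -5 nT southward) this yields a coupling power on the order of 10^11 W; purely northward IMF gives zero.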

  20. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.

  1. Towards an SEMG-based tele-operated robot for masticatory rehabilitation.

    PubMed

    Kalani, Hadi; Moghimi, Sahar; Akbarzadeh, Alireza

    2016-08-01

    This paper proposes a real-time trajectory generation for a masticatory rehabilitation robot based on surface electromyography (SEMG) signals. We used two Gough-Stewart robots. The first robot was used as a rehabilitation robot, while the second robot was developed to model the human jaw system. The legs of the rehabilitation robot were controlled by the SEMG signals of a tele-operator to reproduce the masticatory motion in the human jaw, supposedly mounted on the moving platform, through predicting the location of a reference point. Actual jaw motions and the SEMG signals from the masticatory muscles were recorded and used as output and input, respectively. Three different methods, namely time-delayed neural networks, time-delayed fast orthogonal search, and the time-delayed Laguerre expansion technique, were employed and compared to predict the kinematic parameters. The optimal model structures as well as the input delays were obtained for each model and each subject through a genetic algorithm. Equations of motion were obtained by the virtual work method. A fuzzy method was employed to develop a fuzzy impedance controller. Moreover, a jaw model was developed to demonstrate the time-varying behavior of the muscle lengths during the rehabilitation process. The three modeling methods were capable of providing reasonably accurate estimations of the kinematic parameters, although the accuracy and training/validation speed of time-delayed fast orthogonal search were higher than those of the other two aforementioned methods. Also, during a simulation study, the fuzzy impedance scheme proved successful in controlling the moving platform for the accurate navigation of the reference point in the desired trajectory. SEMG has been widely used as a control command for prostheses and exoskeleton robots. However, in the current study, by employing the proposed rehabilitation robot, the complete continuous profile of the clenching motion was reproduced in the sagittal plane.
Copyright © 2016. Published by Elsevier Ltd.

  2. Numerical simulation of groundwater flow in Dar es Salaam Coastal Plain (Tanzania)

    NASA Astrophysics Data System (ADS)

    Luciani, Giulia; Sappa, Giuseppe; Cella, Antonella

    2016-04-01

    We present the results of a groundwater modeling study of the coastal aquifer of Dar es Salaam (Tanzania). Dar es Salaam is one of the fastest-growing coastal cities in Sub-Saharan Africa, with more than 4 million inhabitants and a population growth rate of about 8 per cent per year. The city faces periodic water shortages due to the lack of an adequate water supply network. These two factors have led, in the last ten years, to an increasing demand for groundwater, met by a large number of private wells drilled to satisfy human needs. A steady-state, three-dimensional groundwater model has been set up with the MODFLOW code and calibrated with the UCODE code for inverse modeling. The aim of the model was to characterize the groundwater flow system in the Dar es Salaam Coastal Plain. The inputs applied to the model included the net recharge rate, calculated from time series of precipitation data (1961-2012), estimates of average groundwater extraction, and estimates of groundwater recharge from zones outside the study area. Parametrization of the hydraulic conductivities was carried out with reference to the main geological features of the study area, based on available literature data and information. Boundary conditions were assigned based on hydrogeological boundaries. The conceptual model was defined in subsequent steps, which added some hydrogeological features and excluded others. Calibration was performed with UCODE 2014, using 76 measurements of hydraulic head taken in 2012 and referred to the same season. Data were weighted on the basis of the expected errors. Sensitivity analysis was performed during calibration and made it possible to identify which parameters could be estimated and which data could support parameter estimation. Calibration was evaluated based on statistical indices, maps of error distribution, and a test of independence of residuals. Further model analysis was performed after calibration to test model performance under a range of variations of the input variables.

  3. 1998 Idaho Public Library Statistics and Library Directory. A Compilation of Input and Output Measures and Other Statistics in Reference to Idaho's Public Libraries, Covering the Period October 1, 1997 to September 30, 1998.

    ERIC Educational Resources Information Center

    Nelson, Frank, Comp.

    This report is a compilation of input and output measures and other statistics in reference to Idaho's public libraries, covering the period from October 1997 through September 1998. The introductory sections include notes on the statistics, definitions of performance measures, Idaho public library rankings for fiscal year 1996, and a state map…

  4. Sculpt test problem analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sweetser, John David

    2013-10-01

    This report details Sculpt's implementation from a user's perspective. Sculpt is an automatic hexahedral mesh generation tool developed at Sandia National Labs by Steve Owen. 54 predetermined test cases are studied while varying the input parameters (Laplace iterations, optimization iterations, optimization threshold, number of processors) and measuring the quality of the resultant mesh. This information is used to determine the optimal input parameters to use for an unknown input geometry. The overall characteristics are covered in Chapter 1. The specific details of every case are then given in Appendix A. Finally, example Sculpt inputs are given in B.1 and B.2.

  5. Noise adaptation in integrate-and-fire neurons.

    PubMed

    Rudd, M E; Brown, L G

    1997-07-01

    The statistical spiking response of an ensemble of identically prepared stochastic integrate-and-fire neurons to a rectangular input current plus gaussian white noise is analyzed. It is shown that, on average, integrate-and-fire neurons adapt to the root-mean-square noise level of their input. This phenomenon is referred to as noise adaptation. Noise adaptation is characterized by a decrease in the average neural firing rate and an accompanying decrease in the average value of the generator potential, both of which can be attributed to noise-induced resets of the generator potential mediated by the integrate-and-fire mechanism. A quantitative theory of noise adaptation in stochastic integrate-and-fire neurons is developed. It is shown that integrate-and-fire neurons, on average, produce transient spiking activity whenever there is an increase in the level of their input noise. This transient noise response is either reduced or eliminated over time, depending on the parameters of the model neuron. Analytical methods are used to prove that nonleaky integrate-and-fire neurons totally adapt to any constant input noise level, in the sense that their asymptotic spiking rates are independent of the magnitude of their input noise. For leaky integrate-and-fire neurons, the long-run noise adaptation is not total, but the response to noise is partially eliminated. Expressions for the probability density function of the generator potential and the first two moments of the potential distribution are derived for the particular case of a nonleaky neuron driven by gaussian white noise of mean zero and constant variance. The functional significance of noise adaptation for the performance of networks comprising integrate-and-fire neurons is discussed.
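
The integrate-and-fire mechanism described above is straightforward to simulate. The following Euler-scheme sketch (all parameter names and values are assumptions for illustration, not the paper's model) shows the noise-induced spiking that underlies noise adaptation: with a subthreshold constant input, the neuron is silent without noise but fires once input noise is added.

```python
import numpy as np

def lif_firing_rate(i_const, sigma, tau=20e-3, v_th=1.0, dt=1e-4,
                    t_max=5.0, leaky=True, seed=0):
    """Euler simulation of a (leaky) integrate-and-fire neuron driven
    by a constant current plus Gaussian white noise; returns the mean
    firing rate in spikes/s. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        leak = -v / tau if leaky else 0.0
        v += (leak + i_const) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            spikes += 1
            v = 0.0   # reset of the generator potential on spiking
    return spikes / t_max
```

With `i_const = 30` the noise-free generator potential settles at `i_const * tau = 0.6`, below threshold, so any spiking at `sigma > 0` is purely noise-induced.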

  6. Land and Water Use Characteristics and Human Health Input Parameters for use in Environmental Dosimetry and Risk Assessments at the Savannah River Site. 2016 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, G. Tim; Hartman, Larry; Stagich, Brooke

    Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of applicant site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991 and 2010. They are being updated in this report. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.

  7. Land and Water Use Characteristics and Human Health Input Parameters for use in Environmental Dosimetry and Risk Assessments at the Savannah River Site 2017 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, T.; Stagich, B.

    Operations at the Savannah River Site (SRS) result in releases of relatively small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991, 2008, 2010, and 2016 and are being concurred with or updated in this report. These parameters include local characteristics of meat, milk, and vegetable production; river recreational activities; and meat, milk, and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.

  8. Knowledge system and method for simulating chemical controlled release device performance

    DOEpatents

    Cowan, Christina E.; Van Voris, Peter; Streile, Gary P.; Cataldo, Dominic A.; Burton, Frederick G.

    1991-01-01

    A knowledge system for simulating the performance of a controlled release device is provided. The system includes an input device through which the user selectively inputs one or more data parameters. The data parameters comprise first parameters including device parameters, media parameters, active chemical parameters and device release rate; and second parameters including the minimum effective inhibition zone of the device and the effective lifetime of the device. The system also includes a judgemental knowledge base which includes logic for 1) determining at least one of the second parameters from the release rate and the first parameters and 2) determining at least one of the first parameters from the other of the first parameters and the second parameters. The system further includes a device for displaying the results of the determinations to the user.

  9. Method of validating measurement data of a process parameter from a plurality of individual sensor inputs

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1998-01-01

    A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored, and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input, within a preset tolerance, against the initial average. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input, within a preset tolerance, against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from all of the sensors are compared against the last validated measurement, and the value from the sensor input that deviates the least from the last valid measurement is displayed.
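
The two-pass procedure above maps naturally onto a short function. The following Python sketch illustrates the described logic only; it is not the patented implementation, and the variable names and scalar tolerance are assumptions.

```python
def validate_sensors(inputs, tol, last_valid):
    """Two-pass validation of redundant sensor inputs (sketch).

    tol        preset deviation tolerance
    last_valid previous validated measurement, used as fallback
    Returns (validated_value, suspect_indices, fault_flag).
    """
    # Pass 1: deviation-check every input against the average of all inputs.
    avg1 = sum(inputs) / len(inputs)
    suspect = [i for i, x in enumerate(inputs) if abs(x - avg1) > tol]
    good = [x for i, x in enumerate(inputs) if i not in suspect]

    if len(good) >= 2:
        # Pass 2: re-average the good inputs and re-check them.
        avg2 = sum(good) / len(good)
        if all(abs(x - avg2) <= tol for x in good):
            return avg2, suspect, False

    # Validation fault: fall back to the input closest to the last
    # validated measurement.
    best = min(inputs, key=lambda x: abs(x - last_valid))
    return best, suspect, True
```

For example, with readings [10.0, 10.2, 9.9, 25.0] and a tolerance of 5, the outlier at index 3 is flagged suspect and the validated value is the average of the remaining three inputs.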

  10. Achromatic self-referencing interferometer

    DOEpatents

    Feldman, M.

    1994-04-19

    A self-referencing Mach-Zehnder interferometer is described for accurately measuring laser wavefronts over a broad wavelength range (for example, 600 nm to 900 nm). The apparatus directs a reference portion of an input beam to a reference arm and a measurement portion of the input beam to a measurement arm, recombines the output beams from the reference and measurement arms, and registers the resulting interference pattern ("first" interferogram) at a first detector. Optionally, subportions of the measurement portion are diverted to second and third detectors, which respectively register intensity and interferogram signals that can be processed to reduce the first interferogram's sensitivity to input noise. The reference arm includes a spatial filter producing a high-quality spherical beam from the reference portion, a tilted wedge plate compensating for off-axis aberrations in the spatial filter output, and a mirror collimating the radiation transmitted through the tilted wedge plate. The apparatus includes a thermally and mechanically stable baseplate which supports all reference arm optics, or at least the spatial filter, tilted wedge plate, and collimator. The tilted wedge plate is mounted adjustably with respect to the spatial filter and collimator, so that it can be maintained in an orientation in which it does not introduce significant wavefront errors into the beam propagating through the reference arm. The apparatus is polarization insensitive and has an equal-path-length configuration enabling measurement of radiation from broadband as well as closely spaced laser line sources. 3 figures.

  11. Statistics of optimal information flow in ensembles of regulatory motifs

    NASA Astrophysics Data System (ADS)

    Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan

    2018-02-01

    Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
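
For a discrete channel, the kind of input-only optimization of mutual information described above can be carried out numerically with the standard Blahut-Arimoto iteration. The sketch below is generic and illustrative; it is not the field-theoretic calculation of the paper.

```python
import numpy as np

def blahut_arimoto(P, iters=500):
    """Channel capacity (bits) of a discrete memoryless channel by the
    Blahut-Arimoto iteration, maximizing mutual information over the
    input distribution only. P has shape (nx, ny); rows are P(y|x)."""
    nx = P.shape[0]
    r = np.full(nx, 1.0 / nx)            # input distribution, init uniform
    for _ in range(iters):
        q = r @ P                        # induced output distribution
        # Per-input KL divergence D(P(.|x) || q), in bits.
        d = np.sum(np.where(P > 0, P * np.log2(P / q), 0.0), axis=1)
        r = r * 2.0 ** d                 # multiplicative update
        r /= r.sum()
    q = r @ P
    d = np.sum(np.where(P > 0, P * np.log2(P / q), 0.0), axis=1)
    return float(r @ d)                  # capacity = max_r I(X;Y)
```

For a binary symmetric channel with crossover probability 0.1, the iteration recovers the known capacity 1 - H(0.1) of about 0.531 bits.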

  12. Scientific and technical advisory committee review of the nutrient inputs to the watershed model

    USDA-ARS?s Scientific Manuscript database

    The following is a report by a STAC Review Team concerning the methods and documentation used by the Chesapeake Bay Partnership for evaluation of nutrient inputs to Phase 6 of the Chesapeake Bay Watershed Model. The “STAC Review of the Nutrient Inputs to the Watershed Model” (previously referred to...

  13. TKKMOD: A computer simulation program for an integrated wind diesel system. Version 1.0: Document and user guide

    NASA Astrophysics Data System (ADS)

    Manninen, L. M.

    1993-12-01

    The document describes TKKMOD, a simulation model developed at Helsinki University of Technology for a specific wind-diesel system layout, with special emphasis on the battery submodel and its use in simulation. The model has been included into the European wind-diesel modeling software package WDLTOOLS under the CEC JOULE project 'Engineering Design Tools for Wind-Diesel Systems' (JOUR-0078). WDLTOOLS serves as the user interface and processes the input and output data of different logistic simulation models developed by the project participants. TKKMOD cannot be run without this shell. The report only describes the simulation principles and model specific parameters of TKKMOD and gives model specific user instructions. The input and output data processing performed outside this model is described in the documentation of the shell. The simulation model is utilized for calculation of long-term performance of the reference system configuration for given wind and load conditions. The main results are energy flows, losses in the system components, diesel fuel consumption, and the number of diesel engine starts.

  14. Temperature effects on the universal equation of state of solids

    NASA Technical Reports Server (NTRS)

    Vinet, P.; Ferrante, J.; Smith, J. R.; Rose, J. H.

    1986-01-01

    Recently it has been argued based on theoretical calculations and experimental data that there is a universal form for the equation of state of solids. This observation was restricted to the range of temperatures and pressures such that there are no phase transitions. The use of this universal relation to estimate pressure-volume relations (i.e., isotherms) required three input parameters at each fixed temperature. It is shown that for many solids the input data needed to predict high temperature thermodynamical properties can be dramatically reduced. In particular, only four numbers are needed: (1) the zero pressure (P=0) isothermal bulk modulus; (2) its P=0 pressure derivative; (3) the P=0 volume; and (4) the P=0 thermal expansion; all evaluated at a single (reference) temperature. Explicit predictions are made for the high temperature isotherms, the thermal expansion as a function of temperature, and the temperature variation of the isothermal bulk modulus and its pressure derivative. These predictions are tested using experimental data for three representative solids: gold, sodium chloride, and xenon. Good agreement between theory and experiment is found.

  15. Temperature effects on the universal equation of state of solids

    NASA Technical Reports Server (NTRS)

    Vinet, Pascal; Ferrante, John; Smith, John R.; Rose, James H.

    1987-01-01

    Recently it has been argued based on theoretical calculations and experimental data that there is a universal form for the equation of state of solids. This observation was restricted to the range of temperatures and pressures such that there are no phase transitions. The use of this universal relation to estimate pressure-volume relations (i.e., isotherms) required three input parameters at each fixed temperature. It is shown that for many solids the input data needed to predict high temperature thermodynamical properties can be dramatically reduced. In particular, only four numbers are needed: (1) the zero pressure (P = 0) isothermal bulk modulus; (2) its P = 0 pressure derivative; (3) the P = 0 volume; and (4) the P = 0 thermal expansion; all evaluated at a single (reference) temperature. Explicit predictions are made for the high temperature isotherms, the thermal expansion as a function of temperature, and the temperature variation of the isothermal bulk modulus and its pressure derivative. These predictions are tested using experimental data for three representative solids: gold, sodium chloride, and xenon. Good agreement between theory and experiment is found.
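
The three-parameter isotherm referred to in these two records is commonly written in the Vinet form. A minimal sketch follows, with the zero-pressure isothermal bulk modulus, its pressure derivative, and the zero-pressure volume as inputs at one reference temperature; the exact form should be checked against the paper.

```python
import math

def vinet_pressure(v, v0, b0, b0_prime):
    """Universal (Vinet) isotherm P(V), a sketch of the three-parameter
    equation of state: b0 is the P=0 isothermal bulk modulus, b0_prime
    its P=0 pressure derivative, v0 the P=0 volume."""
    x = (v / v0) ** (1.0 / 3.0)          # linear compression ratio
    eta = 1.5 * (b0_prime - 1.0)
    return 3.0 * b0 * (1.0 - x) / x**2 * math.exp(eta * (1.0 - x))
```

At v = v0 the pressure is zero by construction; compression (v < v0) gives positive pressure and expansion gives negative pressure, as required of an isotherm.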

  16. Defining Geodetic Reference Frame using Matlab®: PlatEMotion 2.0

    NASA Astrophysics Data System (ADS)

    Cannavò, Flavio; Palano, Mimmo

    2016-03-01

    We describe the main features of the developed software tool, PlatE-Motion 2.0 (PEM2), which infers the Euler pole parameters by inverting the observed velocities at a set of sites located on a rigid block (inverse problem). PEM2 also allows calculating the expected velocity for any point on Earth given an Euler pole (direct problem). PEM2 is the updated version of a previous software tool initially developed for easy file exchange with the GAMIT/GLOBK software package. The tool is developed in the Matlab® framework and, like the previous version, includes a set of MATLAB functions (m-files), GUIs (fig-files), map data files (mat-files), a user's manual and some example input files. Changes in PEM2 include (1) bug fixes, (2) code improvements, (3) improved statistical analysis and (4) new input/output file formats. In addition, PEM2 can now be run under the majority of operating systems. The tool is open source and freely available to the scientific community.
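The direct problem solved here is plain rigid-plate kinematics: the linear velocity at a site is the cross product of the Euler rotation vector and the site position vector. A minimal sketch (not PEM2 code; a spherical Earth and an illustrative pole are assumed):

```python
import math

R_EARTH_M = 6.371e6  # mean Earth radius (m), spherical approximation

def to_xyz(lat_deg, lon_deg, r=R_EARTH_M):
    """Geocentric Cartesian coordinates of a point at radius r."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def plate_velocity_mm_per_yr(site_lat, site_lon, pole_lat, pole_lon,
                             rate_deg_per_myr):
    """Direct problem: site velocity v = omega x r on a rigid plate."""
    w = math.radians(rate_deg_per_myr) / 1e6     # rotation rate, rad/yr
    px, py, pz = to_xyz(pole_lat, pole_lon, r=1.0)  # unit pole vector
    wx, wy, wz = w * px, w * py, w * pz
    rx, ry, rz = to_xyz(site_lat, site_lon)
    # cross product omega x r, metres/yr converted to mm/yr
    return ((wy * rz - wz * ry) * 1e3,
            (wz * rx - wx * rz) * 1e3,
            (wx * ry - wy * rx) * 1e3)
```

For a pole at the geographic north pole rotating at 0.5 deg/Myr, a site on the equator moves eastward at about 56 mm/yr, consistent with typical plate rates.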

  17. An Intelligent Crop Planning Tool for Controlled Ecological Life Support Systems

    NASA Technical Reports Server (NTRS)

    Whitaker, Laura O.; Leon, Jorge

    1996-01-01

    This paper describes a crop planning tool developed for the Controlled Ecological Life Support Systems (CELSS) project which is in the research phases at various NASA facilities. The Crop Planning Tool was developed to assist in the understanding of the long term applications of a CELSS environment. The tool consists of a crop schedule generator as well as a crop schedule simulator. The importance of crop planning tools such as the one developed is discussed. The simulator is outlined in detail while the schedule generator is touched upon briefly. The simulator consists of data inputs, plant and human models, and various other CELSS activity models such as food consumption and waste regeneration. The program inputs such as crew data and crop states are discussed. References are included for all nominal parameters used. Activities including harvesting, planting, plant respiration, and human respiration are discussed using mathematical models. Plans provided to the simulator by the plan generator are evaluated for their 'fitness' to the CELSS environment with an objective function based upon daily reservoir levels. Sample runs of the Crop Planning Tool and future needs for the tool are detailed.

  18. Hierarchical design of an electro-hydraulic actuator based on robust LPV methods

    NASA Astrophysics Data System (ADS)

    Németh, Balázs; Varga, Balázs; Gáspár, Péter

    2015-08-01

    The paper proposes a hierarchical control design of an electro-hydraulic actuator, which is used to improve the roll stability of vehicles. The purpose of the control system is to generate a reference torque, which is required by the vehicle dynamic control. The control-oriented model of the actuator is formulated in two subsystems. The high-level hydromotor is described in a linear form, while the low-level spool valve is a polynomial system. These subsystems require different control strategies. At the high level, a linear parameter-varying control is used to guarantee performance specifications. At the low level, a control Lyapunov-function-based algorithm, which creates discrete control input values of the valve, is proposed. The interaction between the two subsystems is guaranteed by the spool displacement, which is the control input at the high level and must be tracked by the low-level control. The spool displacement has physical constraints, which must also be incorporated into the control design. The robust design of the high-level control incorporates the imprecision of the low-level control as an uncertainty of the system.

  19. Parameter Identification Flight Test Maneuvers for Closed Loop Modeling of the F-18 High Alpha Research Vehicle (HARV)

    NASA Technical Reports Server (NTRS)

    Batterson, James G. (Technical Monitor); Morelli, E. A.

    1996-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Thrust Vectoring (TV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.

  20. Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blankenship, Doug; Sonnenthal, Eric

    Archive contains thermal-mechanical simulation input/output files. Included are files which fall into the following categories: (1) spreadsheets with various input parameter calculations; (2) final simulation inputs; (3) native-state thermal-hydrological model input file folders; (4) native-state thermal-hydrological-mechanical model input files; (5) THM model stimulation cases. See the 'File Descriptions.xlsx' resource below for additional information on individual files.

  1. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
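The bias described above is the classic errors-in-variables attenuation: with measurement error of variance σu² in the input, the least-squares slope shrinks toward zero by the factor σx²/(σx² + σu²). A toy linear rainfall-runoff regression demonstrates it (all numbers invented for illustration; this is not the USGS model):

```python
import random

random.seed(1)

# True linear relation: runoff = a + b * rainfall
a_true, b_true = 2.0, 0.7
n = 20000

rain_true = [random.gauss(50.0, 10.0) for _ in range(n)]
runoff = [a_true + b_true * x + random.gauss(0.0, 1.0) for x in rain_true]
# Observed rainfall carries measurement error (the "input error"), sd = 5
rain_obs = [x + random.gauss(0.0, 5.0) for x in rain_true]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

b_clean = ols_slope(rain_true, runoff)  # close to the true 0.7
b_noisy = ols_slope(rain_obs, runoff)   # attenuated toward 0.7 * 100/125 = 0.56
```

Here σx² = 100 and σu² = 25, so the fitted slope on the erroneous input is biased down by the factor 0.8, exactly the inflation/bias mechanism the abstract analyzes.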

  2. Applications of Mars Global Reference Atmospheric Model (Mars-GRAM 2005) Supporting Mission Site Selection for Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.

    2008-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM 2005) is an engineering level atmospheric model widely used for diverse mission applications. An overview is presented of Mars-GRAM 2005 and its new features. One new feature of Mars-GRAM 2005 is the 'auxiliary profile' option. In this option, an input file of temperature and density versus altitude is used to replace mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. An auxiliary profile can be generated from any source of data or alternate model output. Auxiliary profiles for this study were produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5) model) and a global Thermal Emission Spectrometer (TES) database. The global TES database has been specifically generated for purposes of making Mars-GRAM auxiliary profiles. This database contains averages and standard deviations of temperature, density, and thermal wind components, averaged over 5-by-5 degree latitude-longitude bins and 15 degree L(s) bins, for each of three Mars years of TES nadir data. Results are presented using auxiliary profiles produced from the mesoscale model output and TES observed data for candidate Mars Science Laboratory (MSL) landing sites. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.

  3. Reliability of system for precise cold forging

    NASA Astrophysics Data System (ADS)

    Krušič, Vid; Rodič, Tomaž

    2017-07-01

    The influence of scatter of principal input parameters of the forging system on the dimensional accuracy of product and on the tool life for closed-die forging process is presented in this paper. Scatter of the essential input parameters for the closed-die upsetting process was adjusted to the maximal values that enabled the reliable production of a dimensionally accurate product at optimal tool life. An operating window was created in which exists the maximal scatter of principal input parameters for the closed-die upsetting process that still ensures the desired dimensional accuracy of the product and the optimal tool life. Application of the adjustment of the process input parameters is shown on the example of making an inner race of homokinetic joint from mass production. High productivity in manufacture of elements by cold massive extrusion is often achieved by multiple forming operations that are performed simultaneously on the same press. By redesigning the time sequences of forming operations at multistage forming process of starter barrel during the working stroke the course of the resultant force is optimized.

  4. Design of ultra-low power biopotential amplifiers for biosignal acquisition applications.

    PubMed

    Zhang, Fan; Holleman, Jeremy; Otis, Brian P

    2012-08-01

    Rapid development in miniature implantable electronics is expediting advances in neuroscience by allowing observation and control of neural activities. The first stage of an implantable biosignal recording system, a low-noise biopotential amplifier (BPA), is critical to the overall power and noise performance of the system. In order to integrate a large number of front-end amplifiers in multichannel implantable systems, the power consumption of each amplifier must be minimized. This paper introduces a closed-loop complementary-input amplifier, which has a bandwidth of 0.05 Hz to 10.5 kHz, an input-referred noise of 2.2 μVrms, and a power dissipation of 12 μW. As a point of comparison, a standard telescopic-cascode closed-loop amplifier with a 0.4 Hz to 8.5 kHz bandwidth, input-referred noise of 3.2 μVrms, and power dissipation of 12.5 μW is presented. Also for comparison, we show results from an open-loop complementary-input amplifier that exhibits an input-referred noise of 3.6 μVrms while consuming 800 nW of power. The two closed-loop amplifiers are fabricated in a 0.13 μm CMOS process. The open-loop amplifier is fabricated in a 0.5 μm SOI-BiCMOS process. All three amplifiers operate with a 1 V supply.
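A standard figure of merit for comparing such amplifiers (not quoted in the abstract) is the noise efficiency factor, NEF = Vni,rms · sqrt(2·Itot / (π·UT·4kT·BW)). The sketch below computes it from the quoted numbers for the closed-loop complementary-input amplifier, assuming the full 12 μW is drawn from the stated 1 V supply (so Itot ≈ 12 μA) and taking BW as the 10.5 kHz upper cutoff:

```python
import math

# Physical constants (SI) at room temperature
K_B = 1.380649e-23        # Boltzmann constant, J/K
Q = 1.602176634e-19       # electron charge, C
T = 300.0                 # temperature, K
U_T = K_B * T / Q         # thermal voltage, ~25.9 mV

def nef(v_ni_rms, i_tot, bw_hz):
    """Noise efficiency factor of an amplifier."""
    return v_ni_rms * math.sqrt(
        2.0 * i_tot / (math.pi * U_T * 4.0 * K_B * T * bw_hz))

# Closed-loop complementary-input amplifier: 2.2 uVrms, 12 uA, 10.5 kHz
nef_comp = nef(2.2e-6, 12e-6, 10.5e3)
```

With these assumptions the NEF comes out near 2.9; the exact published value may differ since the paper's own bandwidth and current accounting are not reproduced here.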

  5. Computer programs for producing single-event aircraft noise data for specific engine power and meteorological conditions for use with USAF (United States Air Force) community noise model (NOISEMAP)

    NASA Astrophysics Data System (ADS)

    Mohlman, H. T.

    1983-04-01

    The Air Force community noise prediction model (NOISEMAP) is used to describe the aircraft noise exposure around airbases and thereby aid airbase planners to minimize exposure and prevent community encroachment which could limit mission effectiveness of the installation. This report documents two computer programs (OMEGA 10 and OMEGA 11) which were developed to prepare aircraft flight and ground runup noise data for input to NOISEMAP. OMEGA 10 is for flight operations and OMEGA 11 is for aircraft ground runups. All routines in each program are documented at a level useful to a programmer working with the code or a reader interested in a general overview of what happens within a specific subroutine. Both programs input normalized, reference aircraft noise data; i.e., data at a standard reference distance from the aircraft, for several fixed engine power settings, a reference airspeed and standard day meteorological conditions. Both programs operate on these normalized, reference data in accordance with user-defined, non-reference conditions to derive single-event noise data for 22 distances (200 to 25,000 feet) in a variety of physical and psycho-acoustic metrics. These outputs are in formats ready for input to NOISEMAP.

  6. Torque ripple reduction of brushless DC motor based on adaptive input-output feedback linearization.

    PubMed

    Shirvani Boroujeni, M; Markadeh, G R Arab; Soltani, J

    2017-09-01

    Torque ripple reduction of Brushless DC Motors (BLDCs) is an interesting subject in variable speed AC drives. In this paper, a mathematical expression for the torque ripple harmonics is first obtained. Then, for a non-ideal BLDC motor with known back-EMF harmonic content, the calculation of the reference current amplitudes required to eliminate selected torque ripple harmonics is reviewed. In order to inject the reference harmonic currents into the motor windings, an Adaptive Input-Output Feedback Linearization (AIOFBL) control is proposed, which generates the reference voltages for the three-phase voltage source inverter in the stationary reference frame. Experimental results are presented to show the capability and validity of the proposed control method and are compared with the vector control in Multi-Reference Frame (MRF) and Pseudo-Vector Control (P-VC) method results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shepard, Kenneth L.; Sturcken, Noah Andrew

    Power controller includes an output terminal having an output voltage, at least one clock generator to generate a plurality of clock signals and a plurality of hardware phases. Each hardware phase is coupled to the at least one clock generator and the output terminal and includes a comparator. Each hardware phase is configured to receive a corresponding one of the plurality of clock signals and a reference voltage, combine the corresponding clock signal and the reference voltage to produce a reference input, generate a feedback voltage based on the output voltage, compare the reference input and the feedback voltage using the comparator and provide a comparator output to the output terminal, whereby the comparator output determines a duty cycle of the power controller. An integrated circuit including the power controller is also provided.

  8. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.

  9. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    PubMed

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.

  10. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. Complexity of the physics of the model and uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and off-shore wave conditions on nearshore numerical models are considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of outputs is performed by random sampling from the input probability distribution functions and running the model as required until the results converge. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than when using prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well.
Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
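The Monte Carlo step described above (sample the uncertain inputs from their distributions, run the model, collect output statistics) can be sketched with a cheap surrogate standing in for an expensive solver such as Delft3D. The surrogate model and all coefficients below are invented for illustration:

```python
import random
import statistics

random.seed(0)

def toy_wave_model(offshore_height, depth_error, boundary_flux):
    """Cheap stand-in for an expensive nearshore solver (illustrative only)."""
    return 0.6 * offshore_height + 0.05 * boundary_flux - 0.2 * depth_error

# Input uncertainty expressed as probability distributions
samples = []
for _ in range(10000):
    h = random.gauss(2.0, 0.3)      # offshore wave height (m)
    d = random.gauss(0.0, 0.1)      # bathymetry error (m)
    q = random.uniform(-1.0, 1.0)   # lateral (Neumann) boundary condition
    samples.append(toy_wave_model(h, d, q))

# Output statistics: mean nearshore wave height and its uncertainty
mean_h = statistics.mean(samples)
sd_h = statistics.pstdev(samples)
```

Because the surrogate is linear, the analytic answer is known (mean 1.2, standard deviation about 0.183), which makes it easy to check that the sampling has converged; a real study would monitor convergence of the statistics instead.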

  11. The effects of parameter variation on MSET models of the Crystal River-3 feedwater flow system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miron, A.

    1998-04-01

    In this paper we develop further the results reported in Reference 1 to include a systematic study of the effects of varying MSET models and model parameters for the Crystal River-3 (CR) feedwater flow system. The study used archived CR process computer files from November 1-December 15, 1993 that were provided by Florida Power Corporation engineers Fairman Bockhorst and Brook Julias. The results support the conclusion that an optimal MSET model, properly trained and deriving its inputs in real-time from no more than 25 of the sensor signals normally provided to a PWR plant process computer, should be able to reliably detect anomalous variations in the feedwater flow venturis of less than 0.1% and in the absence of a venturi sensor signal should be able to generate a virtual signal that will be within 0.1% of the correct value of the missing signal.

  12. Online plasma calculator

    NASA Astrophysics Data System (ADS)

    Wisniewski, H.; Gourdain, P.-A.

    2017-10-01

    APOLLO is an online, Linux based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a webserver where a FastCGI protocol computes key plasma parameters including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by Java-based plugins. The FastCGI also speeds up calculations over PHP-based systems. APOLLO is built upon the WT library, which turns any web browser into a versatile, fast graphic user interface. All values with units are expressed in SI units except temperature, which is in electron-volts. SI units were chosen over cgs units because of the gradual shift to using SI units within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the proper equations used to calculate the plasma parameters. This system is intended to be used by undergraduates taking plasma courses as well as graduate students and researchers who need a quick reference calculation.
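Two of the quantities such a calculator returns, in SI units with temperature in electron-volts as the abstract specifies, are the electron plasma frequency and the Debye length. A sketch (not APOLLO's code):

```python
import math

# SI constants; temperatures are taken in electron-volts
E_CHARGE = 1.602176634e-19   # C
EPS0 = 8.8541878128e-12      # F/m
M_E = 9.1093837015e-31       # kg

def plasma_frequency(n_e):
    """Electron plasma (angular) frequency, rad/s, for density n_e in m^-3."""
    return math.sqrt(n_e * E_CHARGE**2 / (EPS0 * M_E))

def debye_length(n_e, t_e_ev):
    """Electron Debye length, m; t_e_ev is the electron temperature in eV.

    With T in eV, k_B*T = t_e_ev * e, so lambda_D = sqrt(eps0*T_eV/(e*n_e)).
    """
    return math.sqrt(EPS0 * t_e_ev / (E_CHARGE * n_e))
```

For n_e = 10¹⁸ m⁻³ and T_e = 1 eV these give roughly 5.6 × 10¹⁰ rad/s and 7.4 μm, matching the familiar formulary shortcuts ω_pe ≈ 5.64 × 10⁴ √(n_e[cm⁻³]) rad/s and λ_D ≈ 7430 √(T_eV/n_e[m⁻³]) m.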

  13. Effectiveness of a multi-channel volumetric air receiver for a solar power tower

    NASA Astrophysics Data System (ADS)

    Jung, Eui Guk; Boo, Joon Hong; Kang, Yong Heak; Kim, Nak Hoon

    2013-08-01

    In this study, the heat transfer performance of a multi-channel volumetric air receiver for a solar power tower was numerically analyzed. The governing equations, including the solar radiation heat flux, conduction, convection and radiation heat transfer for a single channel, were solved on the basis of valid related references and a methodology that can predict the temperature distribution of the receiver wall and the heat transfer fluid for specific dimensions and input conditions. Furthermore, a mathematical model of the effectiveness of the receiver was derived from an analysis of the temperature profiles of the wall and the heat transfer fluid. The receiver effectiveness as an appropriate criterion to assess economic feasibility regarding geometric size was investigated, as it would be applied to the design process of the receiver. The main parameters for the thermal performance simulations described in this paper are the air mass flow rate, receiver length and the influence of these parameters on the heat transfer performance from the viewpoint of receiver efficiency and effectiveness.

  14. Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    The MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines are described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.

  15. Developing a calibrated CONUS-wide watershed-scale simulation platform for quantifying the influence of different sources of uncertainty on streamflow forecast skill

    NASA Astrophysics Data System (ADS)

    Newman, A. J.; Sampson, K. M.; Wood, A. W.; Hopson, T. M.; Brekke, L. D.; Arnold, J.; Raff, D. A.; Clark, M. P.

    2013-12-01

    Skill in model-based hydrologic forecasting depends on the ability to estimate a watershed's initial moisture and energy conditions, to forecast future weather and climate inputs, and on the quality of the hydrologic model's representation of watershed processes. The impact of these factors on prediction skill varies regionally, seasonally, and by model. We are investigating these influences using a watershed simulation platform that spans the continental US (CONUS), encompassing a broad range of hydroclimatic variation, and that uses the current simulation models of National Weather Service streamflow forecasting operations. The first phase of this effort centered on the implementation and calibration of the SNOW-17 and Sacramento soil moisture accounting (SAC-SMA) based hydrologic modeling system for a range of watersheds. The base configuration includes 630 basins in the United States Geological Survey's Hydro-Climatic Data Network 2009 (HCDN-2009, Lins 2012) conterminous U.S. basin subset. Retrospective model forcings were derived from Daymet (http://daymet.ornl.gov/), and where available, a priori parameter estimates were based on or compared with the operational NWS model parameters. Model calibration was accomplished by several objective, automated strategies, including the shuffled complex evolution (SCE) optimization approach developed within the NWS in the early 1990s (Duan et al. 1993). This presentation describes outcomes from this effort, including insights about measuring simulation skill, and on relationships between simulation skill and model parameters, basin characteristics (climate, topography, vegetation, soils), and the quality of forcing inputs. References: Thornton, P.; Thornton, M.; Mayer, B.; Wilhelmi, N.; Wei, Y.; Devarakonda, R.; Cook, R. Daymet: Daily Surface Weather on a 1 km Grid for North America, 1980-2008; Oak Ridge National Laboratory Distributed Active Archive Center: Oak Ridge, TN, USA, 2012; Volume 10.

  16. Assessing the importance of rainfall uncertainty on hydrological models with different spatial and temporal scale

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2015-04-01

    Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand this input-output dynamic. Yet modellers often still try to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated from inaccurate rainfall inputs? How important, then, is the rainfall uncertainty for the model output relative to the importance of the model parameters? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters for the output of the hydrological model. In order to treat the regular model parameters and the input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To this end, we apply so-called rainfall multipliers to hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly,…), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of this variability can also differ between hydrological models with different spatial and temporal scales. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM).
The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.
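Treating rainfall uncertainty as one more parameter can be sketched with a toy lumped model and the pick-freeze estimator of first-order Sobol' indices. Everything below is invented for illustration (SWAT and NAM are far more complex, and production studies use dedicated tools):

```python
import random

random.seed(42)

def runoff_model(storage_coeff, loss_rate, rain_multiplier):
    """Toy lumped rainfall-runoff response (illustrative only)."""
    rain = [3.0, 8.0, 1.0, 0.0, 5.0]           # observed storm hyetograph
    s, q_total = 0.0, 0.0
    for r in rain:
        s += rain_multiplier * r - loss_rate   # uncertain rainfall scaling
        s = max(s, 0.0)
        q = storage_coeff * s                  # linear-reservoir outflow
        s -= q
        q_total += q
    return q_total

def sample():
    return [random.uniform(0.1, 0.5),   # storage coefficient
            random.uniform(0.0, 1.0),   # loss rate
            random.uniform(0.7, 1.3)]   # rainfall multiplier (input uncertainty)

# First-order Sobol' indices via a pick-freeze (Saltelli-type) estimator
n = 10000
A = [sample() for _ in range(n)]
B = [sample() for _ in range(n)]
fA = [runoff_model(*x) for x in A]
fB = [runoff_model(*x) for x in B]
mean = sum(fA) / n
var = sum((y - mean) ** 2 for y in fA) / n

s1 = []
for i in range(3):
    # B with its i-th column replaced from A: shares only factor i with A
    ABi = [b[:i] + [a[i]] + b[i + 1:] for a, b in zip(A, B)]
    fABi = [runoff_model(*x) for x in ABi]
    s1.append(sum(fa * (fab - fb)
                  for fa, fab, fb in zip(fA, fABi, fB)) / n / var)
```

The index for the rainfall multiplier can then be compared directly against those of the regular model parameters, which is exactly the comparison the abstract sets up.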

  17. Update on ɛK with lattice QCD inputs

    NASA Astrophysics Data System (ADS)

    Jang, Yong-Chull; Lee, Weonjong; Lee, Sunkyu; Leem, Jaehoon

    2018-03-01

    We report updated results for ɛK, the indirect CP violation parameter in neutral kaons, which is evaluated directly from the standard model with lattice QCD inputs. We use lattice QCD inputs to fix B̂K, |Vcb|, ξ0, ξ2, |Vus|, and mc(mc). Since Lattice 2016, the UTfit group has updated the Wolfenstein parameters in the angle-only-fit method, and the HFLAV group has also updated |Vcb|. Our results show that the evaluation of ɛK with exclusive |Vcb| (lattice QCD inputs) has 4.0σ tension with the experimental value, while that with inclusive |Vcb| (heavy quark expansion based on OPE and QCD sum rules) shows no tension.

  18. Analysis of the NAEG model of transuranic radionuclide transport and dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kercher, J.R.; Anspaugh, L.R.

    We analyze the model for estimating the dose from ²³⁹Pu developed for the Nevada Applied Ecology Group (NAEG) by using sensitivity analysis and uncertainty analysis. Sensitivity analysis results suggest that the air pathway is the critical pathway for the organs receiving the highest dose. Soil concentration and the factors controlling air concentration are the most important parameters. The only organ whose dose is sensitive to parameters in the ingestion pathway is the GI tract. The air pathway accounts for 100% of the dose to the lung, upper respiratory tract, and thoracic lymph nodes; the GI tract receives 95% of its dose via ingestion. Leafy vegetable ingestion accounts for 70% of the dose from the ingestion pathway regardless of organ, peeled vegetables 20%, accidental soil ingestion 5%, ingestion of beef liver 4%, and beef muscle 1%. Only a handful of model parameters control the dose for any one organ; the number of important parameters is usually less than 10. Uncertainty analysis indicates that choosing a uniform distribution for the input parameters produces a lognormal distribution of the dose. The ratio of the square root of the variance to the mean is three times greater for the doses than it is for the individual parameters. As found by the sensitivity analysis, the uncertainty analysis suggests that only a few parameters control the dose for each organ. All organs have similar distributions and variance-to-mean ratios except for the lymph nodes. 16 references, 9 figures, 13 tables.
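The reported shape of the dose distribution is what one expects when a dose is a product of several uncertain transfer factors: a product of uniformly distributed inputs is approximately lognormal. A quick numerical check (illustrative only, not the NAEG model; the six factors and their range are invented):

```python
import math
import random
import statistics

random.seed(3)

def skewness(xs):
    """Population skewness (third standardized moment)."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# Dose modelled as a product of six uncertain transfer factors,
# each uniform on [0.5, 1.5] (invented, purely for illustration)
doses = []
for _ in range(50000):
    d = 1.0
    for _ in range(6):
        d *= random.uniform(0.5, 1.5)
    doses.append(d)

log_doses = [math.log(d) for d in doses]
# The doses are strongly right-skewed, while the log-doses are nearly
# symmetric, i.e. the dose distribution is approximately lognormal.
```

This mirrors the abstract's observation: uniform inputs entering multiplicatively yield a lognormal-looking output whose relative spread exceeds that of any single input.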

  19. The Comprehensible Output Hypothesis and Self-directed Learning: A Learner's Perspective.

    ERIC Educational Resources Information Center

    Liming, Yu

    1990-01-01

    Discusses the significance to language acquisition of pushing for comprehensible output. Three issues are examined: (1) comprehensible output and negative input, (2) comprehensible and incomprehensible output, and (3) comprehensible output and comprehensible input. (28 references) (GLR)

  20. A fast, robust algorithm for power line interference cancellation in neural recording.

    PubMed

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (<100 ms) and substantial interference rejection (output SNR >30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. 
The proposed algorithm features a highly robust operation, fast adaptation to interference variations, significant SNR improvement, low computational complexity and memory requirement and straightforward parameter adjustment. These features render the algorithm suitable for wearable and implantable sensor applications, where reliable and real-time cancellation of the interference is desired.
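    The four-step procedure described above can be sketched in a much-simplified form. The sketch below is a hypothetical illustration, not the authors' implementation (available at the GitHub link above): it assumes the 50 Hz fundamental is already known, skipping the adaptive notch frequency tracker, and tracks each harmonic's amplitude and phase with an exponentially weighted recursive least squares (RLS) update on sine/cosine regressors before subtracting the estimated interference.

```python
import numpy as np

# Simplified harmonic-interference canceller: fixed, known fundamental;
# RLS tracking of in-phase/quadrature amplitudes of each harmonic.
fs = 1000.0                                # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)

neural = 0.5 * rng.standard_normal(t.size)           # stand-in "neural" signal
interference = 2.0 * np.sin(2 * np.pi * 50 * t + 0.7) \
             + 1.0 * np.sin(2 * np.pi * 100 * t - 0.3)
x = neural + interference                            # contaminated recording

f0, n_harm, lam = 50.0, 2, 0.995                     # fundamental, harmonics, forgetting factor
H = np.column_stack([f(2 * np.pi * f0 * k * t)
                     for k in range(1, n_harm + 1)
                     for f in (np.sin, np.cos)])     # sin/cos regressors per harmonic

w = np.zeros(H.shape[1])                             # harmonic coefficient estimates
P = np.eye(H.shape[1]) * 100.0                       # inverse correlation matrix
y = np.empty_like(x)
for n in range(x.size):
    h = H[n]
    g = P @ h / (lam + h @ P @ h)                    # RLS gain
    e = x[n] - h @ w                                 # cleaned sample (a priori error)
    w = w + g * e
    P = (P - np.outer(g, h @ P)) / lam
    y[n] = e

# After convergence the residual should track the neural signal, not the hum.
tail = slice(x.size // 2, None)
err_before = np.mean((x[tail] - neural[tail]) ** 2)
err_after = np.mean((y[tail] - neural[tail]) ** 2)
```

    In the full algorithm, the regressor frequencies would instead follow the output of the adaptive notch filter, which is what allows tracking of power line frequency drift.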

  1. A fast, robust algorithm for power line interference cancellation in neural recording

    NASA Astrophysics Data System (ADS)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    Objective. Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. Approach. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. Main results. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (<100 ms) and substantial interference rejection (output SNR >30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. Significance. 
The proposed algorithm features a highly robust operation, fast adaptation to interference variations, significant SNR improvement, low computational complexity and memory requirement and straightforward parameter adjustment. These features render the algorithm suitable for wearable and implantable sensor applications, where reliable and real-time cancellation of the interference is desired.

  2. Modern control concepts in hydrology. [parameter identification in adaptive stochastic control approach

    NASA Technical Reports Server (NTRS)

    Duong, N.; Winn, C. B.; Johnson, G. R.

    1975-01-01

    Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part of the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies: the first used numerical integration of the model equation along with a trial-and-error procedure, and the second used a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.
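    A minimal sketch of the second idea (nonlinear estimation of unknown parameters from noisy data), under strong simplifying assumptions: instead of the Prasad rainfall-runoff model, a toy linear reservoir dx/dt = -k·x + u is used, the unknown recession constant k is appended to the state vector, and an extended Kalman filter estimates state and parameter jointly. All numbers are invented for illustration.

```python
import numpy as np

# Joint state/parameter estimation with an EKF on the augmented state s = [x, k].
rng = np.random.default_rng(1)
dt, k_true, N = 0.1, 0.5, 400
u = 1.0 + 0.5 * rng.standard_normal(N)              # noisy rainfall input

# Simulate "true" runoff with process and measurement noise.
x = np.zeros(N)
for n in range(N - 1):
    x[n + 1] = x[n] + dt * (-k_true * x[n] + u[n]) + 0.01 * rng.standard_normal()
z = x + 0.05 * rng.standard_normal(N)               # observed runoff

s = np.array([0.0, 0.1])                            # poor initial guess for k
P = np.diag([1.0, 1.0])
Q = np.diag([1e-4, 1e-6])                           # k treated as nearly constant
R = 0.05 ** 2
for n in range(N - 1):
    xh, kh = s
    s_pred = np.array([xh + dt * (-kh * xh + u[n]), kh])
    F = np.array([[1 - dt * kh, -dt * xh],          # Jacobian of the transition
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q
    # Measurement update with H = [1, 0] (only x is observed).
    S = P[0, 0] + R
    K = P[:, 0] / S
    s = s_pred + K * (z[n + 1] - s_pred[0])
    P = P - np.outer(K, P[0, :])

k_est = s[1]                                        # converges toward k_true
```

    The sequential, adaptive character of the schemes in the paper corresponds to this per-sample update: the parameter estimate is refined as each new observation arrives, so time-dependent parameters can also be tracked by enlarging the process noise assigned to k.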

  3. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates to variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.

  4. A computer program for simulating geohydrologic systems in three dimensions

    USGS Publications Warehouse

    Posson, D.R.; Hearne, G.A.; Tracy, J.V.; Frenzel, P.F.

    1980-01-01

    This document is directed toward individuals who wish to use a computer program to simulate ground-water flow in three dimensions. The strongly implicit procedure (SIP) numerical method is used to solve the set of simultaneous equations. New data processing techniques and program input and output options are emphasized. The aquifer system to be modeled may be heterogeneous and anisotropic, and may include both artesian and water-table conditions. Systems which consist of well-defined alternating layers of highly permeable and poorly permeable material may be represented by a sequence of equations for two-dimensional flow in each of the highly permeable units. Boundaries where head or flux is user-specified may be irregularly shaped. The program also allows the user to represent streams as limited-source boundaries when the streamflow is small in relation to the hydraulic stress on the system. The data-processing techniques relating to 'cube' input and output, to swapping of layers, to restarting of simulation, to free-format NAMELIST input, to the details of each subroutine's logic, and to the overlay program structure are discussed. The program is capable of processing large models that might overflow computer memories with conventional programs. Detailed instructions for selecting program options, for initializing the data arrays, for defining 'cube' output lists and maps, and for plotting hydrographs of calculated and observed heads and/or drawdowns are provided. Output may be restricted to those nodes of particular interest, thereby reducing the volume of printout, which may be critical for modelers working at remote terminals. 'Cube' input commands allow the modeler to set aquifer parameters and initialize the model with very few input records.
Appendixes provide instructions to compile the program, definitions and cross-references for program variables, summary of the FLECS structured FORTRAN programming language, listings of the FLECS and FORTRAN source code, and samples of input and output for example simulations. (USGS)

  5. Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment

    DOE PAGES

    Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...

    2016-03-30

    Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on the model output parameters: the total potential power and the number of potential locations (stream-reaches). These parameters are quantified through Monte Carlo simulation (MCS) linked with a geospatial merit matrix-based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. The output parameters are more sensitive to hydraulic head in steep terrain than in flat and mild terrains; furthermore, they are more sensitive to mean annual streamflow in flat terrain.
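    The Monte Carlo propagation of input uncertainty can be illustrated with a minimal sketch. This is not the GMM-HRA model; it uses a hypothetical single-reach power relation P = ρgQHη with invented nominal values, perturbing head by ~20% and streamflow by ~16% as in the ranges reported above, and summarizing the resulting output spread.

```python
import numpy as np

# Propagate input uncertainty through a toy hydropower relation by sampling.
rng = np.random.default_rng(42)
n = 100_000
rho, g, eta = 1000.0, 9.81, 0.85         # water density, gravity, efficiency

H = rng.normal(10.0, 10.0 * 0.20, n)     # hydraulic head (m), ~20% uncertainty
Q = rng.normal(5.0, 5.0 * 0.16, n)       # mean annual streamflow (m^3/s), ~16%

P = rho * g * Q * H * eta / 1e6          # potential power, MW, per sample

P_mean = P.mean()                        # central estimate
P_cv = P.std() / P_mean                  # relative output uncertainty
```

    Because the two inputs enter multiplicatively and independently here, the output coefficient of variation is roughly the root-sum-square of the input ones (about 26%); the paper's finding that output uncertainty is only 10-20% of input uncertainty reflects the additional structure of the full GMM-HRA model, which this sketch does not capture.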

  6. Dynamic modal estimation using instrumental variables

    NASA Technical Reports Server (NTRS)

    Salzwedel, H.

    1980-01-01

    A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.

  7. Econometric analysis of fire suppression production functions for large wildland fires

    Treesearch

    Thomas P. Holmes; David E. Calkin

    2013-01-01

    In this paper, we use operational data collected for large wildland fires to estimate the parameters of economic production functions that relate the rate of fireline construction with the level of fire suppression inputs (handcrews, dozers, engines and helicopters). These parameter estimates are then used to evaluate whether the productivity of fire suppression inputs...

  8. A mathematical model for predicting fire spread in wildland fuels

    Treesearch

    Richard C. Rothermel

    1972-01-01

    A mathematical fire model for predicting rate of spread and intensity that is applicable to a wide range of wildland fuels and environment is presented. Methods of incorporating mixtures of fuel sizes are introduced by weighting input parameters by surface area. The input parameters do not require a prior knowledge of the burning characteristics of the fuel.
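    The surface-area weighting of mixed fuel-size inputs can be sketched as follows. The numbers are invented for illustration (they are not Rothermel's published fuel-model values): each size class contributes in proportion to its surface area, taken as the product of its surface-area-to-volume ratio and its loading.

```python
# Surface-area weighting of a fuel parameter (here, moisture content)
# across size classes; all values below are hypothetical.
sigma = [3500.0, 109.0, 30.0]      # surface-area-to-volume ratio per class (1/ft)
w     = [0.034, 0.023, 0.011]      # fuel loading per class (lb/ft^2)
moisture = [0.06, 0.08, 0.10]      # moisture content per class (fraction)

area = [s * wi for s, wi in zip(sigma, w)]      # relative surface area per class
total = sum(area)
f = [a / total for a in area]                   # surface-area weighting factors
m_weighted = sum(fi * mi for fi, mi in zip(f, moisture))
```

    Note how the fine (high-sigma) class dominates the weighted value even though its loading is comparable to the coarser classes; this is the sense in which fine fuels drive spread in such models.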

  9. The application of remote sensing to the development and formulation of hydrologic planning models

    NASA Technical Reports Server (NTRS)

    Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.

    1976-01-01

    A hydrologic planning model is developed based on remotely sensed inputs. Data from LANDSAT 1 are used to supply the model's quantitative parameters and coefficients. The use of LANDSAT data as an information input to all categories of hydrologic models requiring quantitative surface parameters for their effective functioning is also investigated.

  10. A data-input program (MFI2005) for the U.S. Geological Survey modular groundwater model (MODFLOW-2005) and parameter estimation program (UCODE_2005)

    USGS Publications Warehouse

    Harbaugh, Arien W.

    2011-01-01

    The MFI2005 data-input (entry) program was developed for use with the U.S. Geological Survey modular three-dimensional finite-difference groundwater model, MODFLOW-2005. MFI2005 runs on personal computers and is designed to be easy to use; data are entered interactively through a series of display screens. MFI2005 supports parameter estimation with the UCODE_2005 program. Data for MODPATH, a particle-tracking program for use with MODFLOW-2005, can also be entered using MFI2005. MFI2005 can be used in conjunction with other data-input programs so that the different parts of a model dataset can be entered using the most suitable program.

  11. Adaptive control of Parkinson's state based on a nonlinear computational model with unknown parameters.

    PubMed

    Su, Fei; Wang, Jiang; Deng, Bin; Wei, Xi-Le; Chen, Ying-Yuan; Liu, Chen; Li, Hui-Yan

    2015-02-01

    The objective here is to explore the use of an adaptive input-output feedback linearization method to achieve an improved deep brain stimulation (DBS) algorithm for closed-loop control of Parkinson's state. The control law is based on a highly nonlinear computational model of Parkinson's disease (PD) with unknown parameters. The restoration of thalamic relay reliability is formulated as the desired outcome of the adaptive control methodology, and the DBS waveform is the control input. The control input is adjusted in real time according to estimates of the unknown parameters as well as the feedback signal. Simulation results show that the proposed adaptive control algorithm succeeds in restoring the relay reliability of the thalamus, and at the same time achieves accurate estimation of the unknown parameters. Our findings point to the potential value of an adaptive control approach that could be used to regulate the DBS waveform for more effective treatment of PD.

  12. Theoretic aspects of the identification of the parameters in the optimal control model

    NASA Technical Reports Server (NTRS)

    Vanwijk, R. A.; Kok, J. J.

    1977-01-01

    The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.

  13. Estimating unknown input parameters when implementing the NGA ground-motion prediction equations in engineering practice

    USGS Publications Warehouse

    Kaklamanos, James; Baise, Laurie G.; Boore, David M.

    2011-01-01

    The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
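    One of the simpler geometric relations among the NGA distance measures can be written down directly. The sketch below is a hedged illustration for the special case of a vertical fault (dip = 90°), where rupture distance follows from Joyner-Boore distance and depth to the top of rupture as R_rup = sqrt(R_jb² + Z_tor²); the dipping-fault cases treated in the paper require additional geometry not reproduced here.

```python
import math

def rrup_vertical_fault(rjb_km: float, ztor_km: float) -> float:
    """Rupture distance (km) for a vertical fault, from Joyner-Boore
    distance and depth to top of rupture (both in km)."""
    return math.hypot(rjb_km, ztor_km)

# Example: a site 10 km (R_jb) from the surface projection of a rupture
# whose top is buried 3 km deep.
r = rrup_vertical_fault(10.0, 3.0)
```

    Relations of this kind let an analyst supply the distance measures a GMPE requires when only one of them is known, which is exactly the practical gap the paper's framework addresses.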

  14. Dual side control for inductive power transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hunter; Sealy, Kylee; Gilchrist, Aaron

    An apparatus for dual side control includes a measurement module that measures a voltage and a current of an IPT system. The voltage includes an output voltage and/or an input voltage, and the current includes an output current and/or an input current. The output voltage and output current are measured at an output of the IPT system, and the input voltage and input current are measured at an input of the IPT system. The apparatus includes a max efficiency module that determines a maximum efficiency for the IPT system; the max efficiency module uses parameters of the IPT system to iterate to a maximum efficiency. The apparatus includes an adjustment module that adjusts one or more parameters in the IPT system consistent with the maximum efficiency calculated by the max efficiency module.

  15. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling

    NASA Astrophysics Data System (ADS)

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V.; Rooney, William D.; Garzotto, Mark G.; Springer, Charles S.

    2016-08-01

    Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol, and the perfect scaling factor for reconstructing the ground-truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water-exchange-sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both insensitive to AIF scaling.
This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging biomarkers for cross-platform, multicenter applications. Data from our limited study cohort show that kio correlates with Gleason scores, suggesting that it may be a useful biomarker for prostate cancer disease progression monitoring.
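    The AIF-scaling behavior of Ktrans and ve versus kep can be made concrete with a small numerical sketch of the standard (FXL) Tofts model, C_t(t) = Ktrans ∫₀ᵗ C_p(u) exp(-kep(t-u)) du. Because C_t is linear in Ktrans·C_p, rescaling the AIF by a factor s while dividing Ktrans (and hence ve = Ktrans/kep) by s reproduces the identical tissue curve, so a fit cannot distinguish the two; kep is untouched. The AIF shape and rate constants below are invented toy values, not from the paper.

```python
import numpy as np

# Discrete Tofts-model tissue curve via convolution with an exponential kernel.
def tofts_ct(t, cp, ktrans, kep):
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: t.size] * dt

t = np.linspace(0, 5, 500)                       # time (minutes)
cp = 5.0 * t * np.exp(-2.0 * t)                  # toy AIF (gamma-variate-like)
ktrans, kep, s = 0.25, 1.2, 2.0                  # rates (1/min), AIF scale factor

ct_ref = tofts_ct(t, cp, ktrans, kep)
ct_scaled = tofts_ct(t, s * cp, ktrans / s, kep) # scaled AIF, rescaled Ktrans
mismatch = np.max(np.abs(ct_ref - ct_scaled))    # the two curves coincide
```

    This degeneracy is why an AIF amplitude error propagates directly into Ktrans and ve but leaves kep (their ratio) and, in the FXR model, kio as the AIF-scaling-insensitive parameters the paper recommends for cross-platform use.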

  16. Input design for identification of aircraft stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Gupta, N. K.; Hall, W. E., Jr.

    1975-01-01

    An approach for designing inputs to identify stability and control derivatives from flight test data is presented. This approach is based on finding inputs which provide the maximum possible accuracy of derivative estimates. Two techniques of input specification are implemented for this objective: a time domain technique and a frequency domain technique. The time domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency domain technique specifies the input frequency spectrum, and is best applied for tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These techniques are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. The constraints of aircraft response limits, such as on structural loads, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals which can provide more accurate parameter estimates than conventional inputs of the same total energy. Results obtained indicate that the approach has been brought to the point where it should be used on flight tests for further evaluation.

  17. Transport and fate of radionuclides in aquatic environments--the use of ecosystem modelling for exposure assessments of nuclear facilities.

    PubMed

    Kumblad, L; Kautsky, U; Naeslund, B

    2006-01-01

    In safety assessments of nuclear facilities, a wide range of radioactive isotopes and their potential hazard to a large assortment of organisms and ecosystem types over long time scales need to be considered. Models used for these purposes have typically employed approaches based on generic reference organisms, stylised environments and transfer functions for biological uptake based exclusively on bioconcentration factors (BCFs). These models are non-mechanistic and incorporate no understanding of uptake and transport processes in the environment, which is a severe limitation when assessing real ecosystems. In this paper, ecosystem models are suggested as a method to include site-specific data and to facilitate the modelling of dynamic systems. An aquatic ecosystem model for the environmental transport of radionuclides is presented and discussed. With this model, driven and constrained by site-specific carbon dynamics and three radionuclide-specific mechanisms: (i) radionuclide uptake by plants, (ii) excretion by animals, and (iii) adsorption to organic surfaces, it was possible to estimate the radionuclide concentrations in all components of the modelled ecosystem with only two radionuclide-specific input parameters (the BCF for plants and Kd). The importance of radionuclide-specific mechanisms for the exposure of organisms was examined, and probabilistic and sensitivity analyses were performed to assess the uncertainties related to ecosystem input parameters. Verification of the model suggests that it produces results analogous to empirically derived data for more than 20 different radionuclides.

  18. On the selection of user-defined parameters in data-driven stochastic subspace identification

    NASA Astrophysics Data System (ADS)

    Priori, C.; De Angelis, M.; Betti, R.

    2018-02-01

    The paper focuses on the time-domain, output-only technique called Data-Driven Stochastic Subspace Identification (DD-SSI); in order to identify modal models (frequencies, damping ratios and mode shapes), the role of its user-defined parameters is studied, and rules to determine their minimum values are proposed. The investigation is carried out using, first, the time histories of structural responses to stationary excitations, with a large number of samples, satisfying the hypothesis on the input imposed by DD-SSI; then, the case of non-stationary seismic excitations with a reduced number of samples is considered. Partitions of the data matrix different from the one proposed in the SSI literature are investigated, together with the influence of different choices of the weighting matrices. The study considers two applications: (1) data obtained from vibration tests on a scaled structure and (2) in-situ tests on a reinforced concrete building. For the former, the identification of a steel frame structure tested on a shaking table is performed using its responses, in terms of absolute accelerations, to a stationary (white-noise) base excitation and to non-stationary seismic excitations of low intensity; black-box and modal models are identified in both cases, and the results are compared with those from an input-output subspace technique. For the latter, the identification of a complex hospital building is conducted using data obtained from ambient vibration tests.

  19. Concentration distribution and assessment of several heavy metals in sediments of west-four Pearl River Estuary

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Cao, Zhimin; Lan, Dongzhao; Zheng, Zhichang; Li, Guihai

    2008-09-01

    Grain size parameters, trace metals (Co, Cu, Ni, Pb, Cr, Zn, Ba, Zr and Sr) and total organic matter (TOM) of 38 surficial sediments and a sediment core from the west-four Pearl River Estuary region were analyzed, focusing on the spatial distribution and transport of chemical elements in the surficial sediments. Multivariate statistics were used to analyze the interrelationships among the metal elements, TOM and the grain size parameters. The results demonstrate that terrigenous sediment carried by the rivers is the main source of the trace metal elements and TOM, and that the lithology of the parent material is the dominant factor controlling the trace metal composition of the surficial sediment. In addition, hydrodynamic conditions and landform are the dominant factors controlling the large-scale distribution, while anthropogenic input in the coastal area alters the regional distribution of the heavy metals Co, Cu, Ni, Pb, Cr and Zn. Enrichment factor (EF) analysis was used to differentiate between anthropogenic and naturally occurring metal sources and to assess the anthropogenic influence; the deeper-layer contents of the heavy metals were taken as background values, and Zr was chosen as the reference element for Co, Cu, Ni, Pb, Cr and Zn. The results indicate prevalent enrichment of Co, Cu, Ni, Pb and Cr, with the contamination by Pb most obvious; furthermore, sites with peculiarly high EF values for Zn and Pb probably indicate point-source input.
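    The enrichment-factor calculation used here follows the standard double normalization against a conservative reference element, EF = (M/Zr)_sample / (M/Zr)_background, with the deep-core layers supplying the background values. A minimal sketch, with invented concentrations (not the study's data):

```python
# Enrichment factor with Zr as the reference element; EF >> 1 suggests
# anthropogenic input on top of the natural (lithogenic) signal.
sample     = {"Pb": 55.0, "Zn": 120.0, "Zr": 250.0}   # mg/kg, surficial sediment
background = {"Pb": 20.0, "Zn": 80.0,  "Zr": 260.0}   # mg/kg, deep core layer

def enrichment_factor(metal: str) -> float:
    return (sample[metal] / sample["Zr"]) / (background[metal] / background["Zr"])

ef_pb = enrichment_factor("Pb")
ef_zn = enrichment_factor("Zn")
```

    Normalizing both sample and background by Zr removes grain-size and dilution effects, so only departures from the natural metal/Zr ratio register as enrichment.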

  20. Self-calibrating threshold detector

    NASA Technical Reports Server (NTRS)

    Barnes, J. R.; Huang, M. Y. (Inventor)

    1980-01-01

    A self-calibrating threshold detector comprises a single demodulating channel which includes a mixer having one input receiving the incoming signal and another input receiving a local replica code. During a short time interval, an incorrect local code is applied to the mixer to incorrectly demodulate the incoming signal and to provide a reference level that calibrates the noise propagating through the channel. A sample-and-hold circuit is coupled to the channel for storing a sample of the reference level. During a relatively long time interval, the correct replica code provides an output level which ranges between the reference level and a maximum level that represents incoming signal presence and synchronism with the replica code. A summer subtracts the stored reference sample from the output level to provide a resultant difference signal indicative of the acquisition of the expected signal.

  1. The quality estimation of exterior wall’s and window filling’s construction design

    NASA Astrophysics Data System (ADS)

    Saltykov, Ivan; Bovsunovskaya, Maria

    2017-10-01

    The article introduces the term "artificial envelope" in residential construction. The authors offer a complex multifactorial approach to estimating the design quality of external enclosing structures, based on the impact of several parameters: functional, operational, cost and environmental indices. A design quality index Q_k is introduced as a complex characteristic of these parameters; the mathematical relation of this index to the parameters serves as the target function for the design quality estimation. As an example, the article presents the search for the optimal wall and window designs in small, medium and large residential premises of economy-class buildings. Graphs of the individual parameters of the target function are presented for the three room sizes. The example yields window-opening dimensions that make the wall and window constructions properly satisfy the stated complex requirements. The authors compare the window-filling area recommended by the building standards with the area obtained by optimizing the design quality index. The multifactorial approach to searching for the optimal design described in this article can be applied to various structural elements of residential buildings, taking into account the climatic, social and economic features of the construction area.

  2. The Use of Artificial Neural Networks for Forecasting the Electric Demand of Stand-Alone Consumers

    NASA Astrophysics Data System (ADS)

    Ivanin, O. A.; Direktor, L. B.

    2018-05-01

    The problem of short-term forecasting of the electric power demand of stand-alone consumers (small inhabited localities) situated outside centralized power supply areas is considered. The basic approaches to modeling electric power demand, depending on the forecasting time frame and the problems set, as well as the specific features of such modeling, are described. The advantages and disadvantages of the methods used for short-term forecasting of electric demand are indicated, and the difficulties involved in solving the problem are outlined. The basic principles of arranging artificial neural networks are set forth, and it is shown that the proposed method is preferable when the input information necessary for prediction is lacking or incomplete. The selection of the parameters that should be included in the list of input data for modeling the electric power demand of residential areas using artificial neural networks is validated. The structure of a neural network is proposed for solving this modeling problem, and the specific features of generating the training dataset are outlined. The results of test modeling of daily electric demand curves for some settlements of Kamchatka and Yakutia based on known actual electric demand curves are provided, and the reliability of the test modeling is validated; the large deviation of the modeled curve from the reference curve obtained in one of the four reference calculations is explained. The input data and the predicted power demand curves for the rural settlement of Kuokuiskii Nasleg are provided. The power demand curves were modeled for four characteristic days of the year and can be used in the future for designing a power supply system for the settlement. To enhance the accuracy of the method, a series of measures based on the specific features of a neural network's functioning is proposed.

  3. Advanced Food Science and Nutrition Reference Book.

    ERIC Educational Resources Information Center

    Texas Tech Univ., Lubbock. Home Economics Curriculum Center.

    Developed with input from personnel in the industries, this reference book complements the curriculum guide for a laboratory course on the significance of nutrition in food science. The reference book is organized into 25 chapters, each beginning with essential elements and objectives. Within the text, italicized, bold-faced vocabulary terms are…

  4. An algebraic method for constructing stable and consistent autoregressive filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu

    2015-02-15

In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods; it takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
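
The two defining conditions can be made concrete in a few lines. The sketch below is illustrative, not the paper's algebraic construction: it checks the classical AR stability condition (all roots of the characteristic polynomial strictly inside the unit circle) for the AR(2) coefficients implied by an order-two Adams–Bashforth discretization of the linear test equation dx/dt = λx; the test equation and step sizes are assumptions for the example.

```python
import numpy as np

def ar_stable(coeffs):
    """Classical stability condition for an AR(p) model
    x_n = c1*x_{n-1} + ... + cp*x_{n-p} + noise: all roots of the
    characteristic polynomial z^p - c1*z^(p-1) - ... - cp must lie
    strictly inside the unit circle."""
    poly = np.concatenate(([1.0], -np.asarray(coeffs, dtype=float)))
    return bool(np.all(np.abs(np.roots(poly)) < 1.0))

def ab2_ar2_coeffs(lam, h):
    """AR(2) coefficients implied by the order-two Adams-Bashforth
    discretization of dx/dt = lam*x with step h (an illustrative link
    between the consistency constraints and AR coefficients)."""
    return [1.0 + 1.5 * h * lam, -0.5 * h * lam]

# A damped mode (lam < 0) with a small enough step satisfies the
# stability condition, while too large a step violates it.
print(ar_stable(ab2_ar2_coeffs(-1.0, 0.1)))   # small step: stable
print(ar_stable(ab2_ar2_coeffs(-1.0, 2.0)))   # large step: unstable
```

The existence of a step-size interval separating these two outcomes is exactly what the paper's method characterizes algebraically.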

  5. Measurement of myocardial blood flow by cardiovascular magnetic resonance perfusion: comparison of distributed parameter and Fermi models with single and dual bolus.

    PubMed

    Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik

    2015-02-17

Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of left ventricle signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels versus gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also performed analysis using both models of single bolus data obtained from five patients with coronary artery disease, and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). In these volunteers, no difference was observed in distributed parameter estimates of myocardial blood flow between single and dual bolus analysis.
In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.

  6. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the authors to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of the Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
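
Of the techniques listed, rank correlation is the easiest to sketch: sample the uncertain parameters, run the model, and rank parameters by the absolute Spearman correlation between each parameter and the prediction. The three-parameter toy model below is an invented stand-in for PCHEPM; the parameter names and ranges are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the
    rank-transformed samples (no ties expected for continuous draws)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical stand-in for a PCB fate model: the output is strongly
# driven by k1, weakly by k2 and k3.
n = 500
k1 = rng.uniform(0.5, 1.5, n)
k2 = rng.uniform(0.0, 1.0, n)
k3 = rng.uniform(0.0, 0.1, n)
conc = 10.0 * k1 + 0.5 * k2 + k3 + rng.normal(0, 0.2, n)

scores = {name: abs(spearman(p, conc))
          for name, p in [("k1", k1), ("k2", k2), ("k3", k3)]}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # dominant parameter first
```

Parameters that rank low on several such analyses are the ones the authors could safely exclude from probabilistic treatment.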

  7. 6 DOF synchronized control for spacecraft formation flying with input constraint and parameter uncertainties.

    PubMed

    Lv, Yueyong; Hu, Qinglei; Ma, Guangfu; Zhou, Jiakang

    2011-10-01

This paper treats the problem of synchronized control of spacecraft formation flying (SFF) in the presence of input constraints and parameter uncertainties. More specifically, a backstepping-based robust control is first developed for the full 6 DOF dynamic model of SFF with parameter uncertainties, in which the model consists of relative translation and attitude rotation. This controller is then redesigned to deal with the input constraint problem by incorporating a command filter, such that the generated control remains implementable even under physical or operating constraints on the control input. The convergence of the proposed control algorithms is proved by the Lyapunov stability theorem. Illustrative simulations of spacecraft formation flying are conducted to verify that, compared with conventional methods, the proposed approach enables the spacecraft to track the desired attitude and position trajectories in a synchronized fashion even in the presence of uncertainties, external disturbances and the control saturation constraint. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Performances estimation of a rotary traveling wave ultrasonic motor based on two-dimension analytical model.

    PubMed

    Ming, Y; Peiwen, Q

    2001-03-01

The understanding of ultrasonic motor performance as a function of input parameters, such as the voltage amplitude, driving frequency, and the preload on the rotor, is key to many applications and to the control of ultrasonic motors. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With a distributed spring-rigid body contact model between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. The performances of steady rotation speed and stall torque are then deduced. With the MATLAB computational language and an iteration algorithm, we estimate the performance of rotation speed and stall torque versus the input parameters. The corresponding experiments are completed with an optoelectronic tachometer and a stand weight. Both estimation and experiment results reveal the pattern of performance variation as a function of the input parameters.

  9. How can we tackle energy efficiency in IoT based smart buildings?

    PubMed

    Moreno, M Victoria; Úbeda, Benito; Skarmeta, Antonio F; Zamora, Miguel A

    2014-05-30

Nowadays, buildings are increasingly expected to meet higher and more complex performance requirements. Among these requirements, energy efficiency is recognized as an international goal to promote the energy sustainability of the planet. Different approaches have been adopted to address this goal, the most recent relating consumption patterns with human occupancy. In this work, we analyze the main parameters that should be considered in any building energy management system. The goal of this analysis is to help designers select the most relevant parameters for controlling the energy consumption of buildings according to their context, using them as input data of the management system. Following this approach, we select three reference smart buildings with different contexts, where our automation platform for energy monitoring is deployed. We carry out experiments in these buildings to demonstrate the influence of the parameters identified as relevant on the energy consumption of the buildings. Different control strategies to save electrical energy are then applied in two of these buildings. We describe the experiments performed and analyze the results. The first stages of this evaluation have already resulted in energy savings of about 23% in a real scenario.

  10. How can We Tackle Energy Efficiency in IoT Based Smart Buildings?

    PubMed Central

    Moreno, M. Victoria; Úbeda, Benito; Skarmeta, Antonio F.; Zamora, Miguel A.

    2014-01-01

Nowadays, buildings are increasingly expected to meet higher and more complex performance requirements. Among these requirements, energy efficiency is recognized as an international goal to promote the energy sustainability of the planet. Different approaches have been adopted to address this goal, the most recent relating consumption patterns with human occupancy. In this work, we analyze the main parameters that should be considered in any building energy management system. The goal of this analysis is to help designers select the most relevant parameters for controlling the energy consumption of buildings according to their context, using them as input data of the management system. Following this approach, we select three reference smart buildings with different contexts, where our automation platform for energy monitoring is deployed. We carry out experiments in these buildings to demonstrate the influence of the parameters identified as relevant on the energy consumption of the buildings. Different control strategies to save electrical energy are then applied in two of these buildings. We describe the experiments performed and analyze the results. The first stages of this evaluation have already resulted in energy savings of about 23% in a real scenario. PMID:24887040

  11. Multiple-try differential evolution adaptive Metropolis for efficient solution of highly parameterized models

    NASA Astrophysics Data System (ADS)

    Eric, L.; Vrugt, J. A.

    2010-12-01

Spatially distributed hydrologic models potentially contain hundreds of parameters that need to be derived by calibration against a historical record of input-output data. The quality of this calibration strongly determines the predictive capability of the model and thus its usefulness for science-based decision making and forecasting. Unfortunately, high-dimensional optimization problems are typically difficult to solve. Here we present our recent developments to the Differential Evolution Adaptive Metropolis (DREAM) algorithm (Vrugt et al., 2009) to warrant efficient solution of high-dimensional parameter estimation problems. The algorithm samples from an archive of past states (Ter Braak and Vrugt, 2008), and uses multiple-try Metropolis sampling (Liu et al., 2000) to decrease the required burn-in time for each individual chain and increase the efficiency of posterior sampling. This approach is hereafter referred to as MT-DREAM. We present results for two synthetic mathematical case studies and two real-world examples involving from 10 to 240 parameters. Results for those cases show that our multiple-try sampler, MT-DREAM, can consistently find better solutions than other Bayesian MCMC methods. Moreover, MT-DREAM is admirably suited to be implemented and run on a parallel machine and is therefore a powerful method for posterior inference.
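
MT-DREAM layers several devices (multiple chains, sampling from an archive of past states, multiple-try proposals) on top of the basic Metropolis accept/reject step. The sketch below shows only that single-chain building block on a toy standard-normal posterior; the target density, step size and chain length are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_post(theta):
    # Toy posterior: independent standard normals in two dimensions.
    return -0.5 * np.sum(theta ** 2)

def metropolis(n_iter, step=0.5, dim=2):
    """Plain random-walk Metropolis: the single-chain building block
    that DREAM-type samplers extend with chain populations, archives
    of past states, and multiple-try proposals."""
    chain = np.empty((n_iter, dim))
    theta = np.zeros(dim)
    lp = log_post(theta)
    for i in range(n_iter):
        prop = theta + step * rng.normal(size=dim)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis(20000)
# After burn-in the sample mean should be near zero and the marginal
# variance near one for the standard-normal target.
print(chain[5000:].mean(axis=0), chain[5000:].var(axis=0))
```

In high dimensions this simple random walk mixes very slowly, which is precisely the failure mode that the multiple-try and differential-evolution proposals of MT-DREAM are designed to overcome.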

  12. Fuzzy logic, artificial neural network and mathematical model for prediction of white mulberry drying kinetics

    NASA Astrophysics Data System (ADS)

    Jahedi Rad, Shahpour; Kaveh, Mohammad; Sharabiani, Vali Rasooli; Taghinezhad, Ebrahim

    2018-05-01

The thin-layer convective-infrared drying behavior of white mulberry was experimentally studied at infrared power levels of 500, 1000 and 1500 W, drying air temperatures of 40, 55 and 70 °C and inlet drying air speeds of 0.4, 1 and 1.6 m/s. The drying rate increased with infrared power level at a given air temperature and velocity, thereby decreasing the drying time. Five mathematical models describing thin-layer drying were fitted to the drying data. The Midilli et al. model satisfactorily described the convective-infrared drying of white mulberry fruit, with a correlation coefficient of R2 = 0.9986 and a root mean square error of RMSE = 0.04795. Artificial neural network (ANN) and fuzzy logic methods were also used to model the output parameter (moisture ratio, MR) in terms of the input parameters. Results showed that the output parameter was predicted more accurately by the fuzzy model than by the ANN and mathematical models: the fuzzy model achieved a higher R2 (0.9996 vs. 0.9990) and a lower RMSE (0.01095 vs. 0.01988) than the ANN model.
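
The Midilli et al. form, MR = a·exp(-k·t^n) + b·t, and the two goodness-of-fit metrics quoted above can be sketched as follows. The parameter values, time grid and noise level below are hypothetical stand-ins for illustration, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

def midilli(t, a, k, n, b):
    """Midilli et al. thin-layer drying model: MR = a*exp(-k*t**n) + b*t."""
    return a * np.exp(-k * t ** n) + b * t

# Hypothetical parameters and synthetic "observations" (the paper
# reports R2 = 0.9986 and RMSE = 0.04795 for its fitted model).
t = np.linspace(0.0, 120.0, 60)                  # drying time, min
mr_true = midilli(t, 1.0, 0.01, 1.2, -1e-4)      # model prediction
mr_obs = mr_true + rng.normal(0, 0.01, t.size)   # noisy observations

# Goodness-of-fit metrics as used in thin-layer drying studies.
resid = mr_obs - mr_true
rmse = float(np.sqrt(np.mean(resid ** 2)))
ss_res = float(np.sum(resid ** 2))
ss_tot = float(np.sum((mr_obs - mr_obs.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 4), round(rmse, 4))
```

In practice the four model constants would be obtained by nonlinear least squares against the measured moisture-ratio curve; the snippet only shows the model form and how R2 and RMSE are computed from residuals.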

  13. Desktop Application Program to Simulate Cargo-Air-Drop Tests

    NASA Technical Reports Server (NTRS)

    Cuthbert, Peter

    2009-01-01

    The DSS Application is a computer program comprising a Windows version of the UNIX-based Decelerator System Simulation (DSS) coupled with an Excel front end. The DSS is an executable code that simulates the dynamics of airdropped cargo from first motion in an aircraft through landing. The bare DSS is difficult to use; the front end makes it easy to use. All inputs to the DSS, control of execution of the DSS, and postprocessing and plotting of outputs are handled in the front end. The front end is graphics-intensive. The Excel software provides the graphical elements without need for additional programming. Categories of input parameters are divided into separate tabbed windows. Pop-up comments describe each parameter. An error-checking software component evaluates combinations of parameters and alerts the user if an error results. Case files can be created from inputs, making it possible to build cases from previous ones. Simulation output is plotted in 16 charts displayed on a separate worksheet, enabling plotting of multiple DSS cases with flight-test data. Variables assigned to each plot can be changed. Selected input parameters can be edited from the plot sheet for quick sensitivity studies.

  14. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, B.; Wood, R.T.

    1997-04-22

A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust the internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system. 1 fig.
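
The patent's pipeline (model, then model-generated training set, then a trained inverse map applied to measured features) can be sketched end-to-end. For brevity a linear least-squares map stands in for the neural network, and the forward model, parameter ranges and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical forward model standing in for the system simulation:
# maps physical-condition parameters (e.g. stiffness, damping) to
# measurable spectral features (resonance peak positions/heights).
def forward_model(params):
    A = np.array([[2.0, 0.5, 0.1],
                  [0.3, 1.5, 0.2],
                  [0.1, 0.4, 1.8],
                  [0.6, 0.2, 0.9]])
    return params @ A.T

# Step 1: build the training set from the model, not from measured data.
train_params = rng.uniform(0, 1, size=(200, 3))
train_feats = forward_model(train_params)

# Step 2: "train" the inverse mapping.  A linear least-squares fit is
# a minimal stand-in for adjusting the internal parameters of a
# neural network.
W, *_ = np.linalg.lstsq(train_feats, train_params, rcond=None)

# Step 3: monitor an "actual" system by inverting its measured
# spectral features back to physical-condition parameters.
true_condition = np.array([0.2, 0.7, 0.4])
measured = forward_model(true_condition) + rng.normal(0, 0.01, 4)
estimated = measured @ W
print(np.round(estimated, 2))
```

The key idea the sketch preserves is that the training pairs come entirely from the mathematical model, so the inverse map can be built before any data from the real system exist.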

  15. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, Brian; Wood, Richard T.

    1997-01-01

A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust the internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.

  16. Meter circuit for tuning RF amplifiers

    NASA Technical Reports Server (NTRS)

    Longthorne, J. E.

    1973-01-01

    Circuit computes and indicates efficiency of RF amplifier as inputs and other parameters are varied. Voltage drop across internal resistance of ammeter is amplified by operational amplifier and applied to one multiplier input. Other input is obtained through two resistors from positive terminal of power supply.

  17. ERGONOMICS ABSTRACTS 48347-48982.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    IN THIS COLLECTION OF ERGONOMICS ABSTRACTS AND ANNOTATIONS THE FOLLOWING AREAS OF CONCERN ARE REPRESENTED--GENERAL REFERENCES, METHODS, FACILITIES, AND EQUIPMENT RELATING TO ERGONOMICS, SYSTEMS OF MAN AND MACHINES, VISUAL, AUDITORY, AND OTHER SENSORY INPUTS AND PROCESSES (INCLUDING SPEECH AND INTELLIGIBILITY), INPUT CHANNELS, BODY MEASUREMENTS,…

  18. Ring rolling process simulation for microstructure optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the formed ring. One of the principal microstructural properties that most influences the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on the microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model of HRR has been developed in SFTC DEFORM V11, taking into account the microstructural development of the material used (the nickel superalloy Waspaloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed to find the combination of process parameters that minimizes the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, using the exact values of the output parameters at the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for any combination of the input parameters. An optimization procedure based on Genetic Algorithms has then been applied to find the minimum average grain size with respect to the input parameters.
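
The RSM-plus-optimizer workflow described above can be sketched with a quadratic response surface. The design points, the synthetic grain-size response, and the grid search (standing in for the Genetic Algorithm) are illustrative assumptions, not values from the paper's DEFORM simulations.

```python
import numpy as np

rng = np.random.default_rng(3)

def quad_features(x1, x2):
    """Full second-order RSM basis in two factors."""
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1 * x2, x1 ** 2, x2 ** 2])

# Hypothetical design points: mandrel feed rate [mm/s] and driver-roll
# angular velocity [rad/s]; grain sizes [um] are synthetic stand-ins
# for FE-simulation outputs at the control points.
feed = np.array([0.5, 0.5, 1.5, 1.5, 1.0, 1.0, 0.5, 1.5, 1.0])
omega = np.array([2.0, 6.0, 2.0, 6.0, 4.0, 4.0, 4.0, 4.0, 2.0])
grain = (20 - 4 * feed - 2 * omega + 2 * feed ** 2
         + 0.25 * omega ** 2 + 0.1 * feed * omega
         + rng.normal(0, 0.05, feed.size))

# Least-squares fit of the response surface to the control points.
beta, *_ = np.linalg.lstsq(quad_features(feed, omega), grain, rcond=None)

# Minimize over a dense grid of the design space (the paper uses a
# Genetic Algorithm; a grid search is the simplest stand-in for a
# smooth 2-D surface).
g1, g2 = np.meshgrid(np.linspace(0.5, 1.5, 101), np.linspace(2, 6, 101))
pred = quad_features(g1.ravel(), g2.ravel()) @ beta
best = np.argmin(pred)
print(g1.ravel()[best], g2.ravel()[best], round(float(pred[best]), 2))
```

Once the surface is fitted, every candidate parameter combination is evaluated on the cheap surrogate rather than by a new FE simulation, which is what makes the optimization tractable.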

  19. Next-Generation NATO Reference Mobility Model (NRMM) Development (Developpement de la nouvelle generation du modele de mobilite de reference de l'OTAN (NRMM))

    DTIC Science & Technology

    2018-01-01

    Contents excerpt: …Profile Database; Attachment 2: NRMM Data Input Requirements; Attachment 3: General Physics-Based Model Data Input Requirements; Figure E-11: Examples of Unique Surface Types; Figure E-12: Correlating Physical Testing with Simulation; Figure E-13: Simplified Tire…; Table 10-8: Scoring Values; Table 10-9: Accuracy - Physics-Based; Table 10-10: Accuracy - Validation Through Measurement; Table 10-11…

  20. Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.

    PubMed

    Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay

    2015-12-01

In this paper, we propose a new blind learning algorithm, namely the Benveniste-Goursat input-output decision (BG-IOD) algorithm, to enhance the convergence performance of neural-network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence property can be achieved: the output symbol error rate (SER) is always less than the input SER whenever the input SER is below a threshold. The BG soft-switching technique is then employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as the stochastic quadratic distance and dual-mode constant modulus algorithms, in terms of both convergence and SER performance for nonlinear equalization.

  1. COSP for Windows: Strategies for Rapid Analyses of Cyclic Oxidation Behavior

    NASA Technical Reports Server (NTRS)

    Smialek, James L.; Auping, Judith V.

    2002-01-01

COSP is a publicly available computer program that models the cyclic oxidation weight gain and spallation process. Inputs to the model include the selection of an oxidation growth law and a spalling geometry, plus oxide phase, growth rate, spall constant, and cycle duration parameters. Output includes weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. The present version is Windows based and can accordingly be operated conveniently while other applications remain open for importing experimental weight change data, storing model output data, or plotting model curves. Point-and-click operating features include multiple drop-down menus for input parameters, data importing, and quick, on-screen plots showing one selection of the six output parameters for up to 10 models. A run summary text lists various characteristic parameters that are helpful in describing cyclic behavior, such as the maximum weight change, the number of cycles to reach the maximum weight gain or zero weight change, the ratio of these, and the final rate of weight loss. The program includes save and print options as well as a help file. Families of model curves readily show the sensitivity to various input parameters. The cyclic behaviors of nickel aluminide (NiAl) and a complex superalloy are shown to be properly fitted by model curves. However, caution is always advised regarding the uniqueness claimed for any specific set of input parameters.
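
COSP's core loop (an oxide growth law plus a spalling rule applied each cycle) can be sketched as follows. The parabolic growth law, uniform-fraction spall rule and all constants below are illustrative assumptions, not COSP's actual model options, but they reproduce the characteristic behavior the summary describes: a weight-gain maximum followed by a terminal rate of weight loss.

```python
import numpy as np

def cosp_sketch(n_cycles, kp=0.01, q0=0.02, dt=1.0, sc=0.47):
    """Minimal cyclic-oxidation sketch in the spirit of COSP:
    parabolic scale growth during each hot cycle, then spallation of
    a fraction q0 of the retained oxide on cooldown.  sc is the mass
    fraction of oxygen in the oxide (alumina-like value).  Returns
    the specimen weight-change curve (oxygen uptake minus spalled
    oxide, per unit area)."""
    w_r = 0.0          # retained oxide mass per area
    w_spalled = 0.0    # cumulative spalled oxide
    w_o2 = 0.0         # cumulative oxygen uptake
    curve = []
    for _ in range(n_cycles):
        w_new = np.sqrt(w_r ** 2 + kp * dt)   # parabolic growth step
        w_o2 += sc * (w_new - w_r)            # oxygen gained this cycle
        spall = q0 * w_new                    # uniform-fraction spall
        w_r = w_new - spall
        w_spalled += spall
        curve.append(w_o2 - w_spalled)
    return np.array(curve)

curve = cosp_sketch(500)
# Early cycles gain weight; once spall losses outpace oxygen uptake
# the curve peaks and settles into a steady terminal loss rate.
peak = int(np.argmax(curve))
print(peak, round(float(curve[peak]), 3), round(float(curve[-1]), 3))
```

Characteristic parameters such as the maximum weight change and the number of cycles to reach it, which COSP reports in its run summary, fall straight out of such a curve.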

  2. Development of advanced techniques for rotorcraft state estimation and parameter identification

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.

    1980-01-01

An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm that estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model that adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of those estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples from both flight and simulated data.

  3. Emissions-critical charge cooling using an organic rankine cycle

    DOEpatents

    Ernst, Timothy C.; Nelson, Christopher R.

    2014-07-15

    The disclosure provides a system including a Rankine power cycle cooling subsystem providing emissions-critical charge cooling of an input charge flow. The system includes a boiler fluidly coupled to the input charge flow, an energy conversion device fluidly coupled to the boiler, a condenser fluidly coupled to the energy conversion device, a pump fluidly coupled to the condenser and the boiler, an adjuster that adjusts at least one parameter of the Rankine power cycle subsystem to change a temperature of the input charge exiting the boiler, and a sensor adapted to sense a temperature characteristic of the vaporized input charge. The system includes a controller that can determine a target temperature of the input charge sufficient to meet or exceed predetermined target emissions and cause the adjuster to adjust at least one parameter of the Rankine power cycle to achieve the predetermined target emissions.

  4. ZERO SUPPRESSION FOR RECORDERS

    DOEpatents

    Fort, W.G.S.

    1958-12-30

A zero-suppression circuit for self-balancing recorder instruments is presented. The essential elements of the circuit include a converter-amplifier having two inputs, one for a reference voltage and the other for the signal voltage under analysis, and a servomotor with two control windings, one coupled to the a-c output of the converter-amplifier and the other receiving a reference input. Each input circuit to the converter-amplifier has a variable potentiometer, and the sliders of the potentiometers are ganged together for movement by the servomotor. The particular novelty of the circuit resides in the selection of resistance values for the potentiometers and a resistor in series with the potentiometer of the signal circuit, to ensure the full value of signal voltage variation is impressed on a recorder mechanism driven by the servomotor.

  5. Master control data handling program uses automatic data input

    NASA Technical Reports Server (NTRS)

    Alliston, W.; Daniel, J.

    1967-01-01

    General purpose digital computer program is applicable for use with analysis programs that require basic data and calculated parameters as input. It is designed to automate input data preparation for flight control computer programs, but it is general enough to permit application in other areas.

  6. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    NASA Technical Reports Server (NTRS)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input-pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes the communication of the potential level of risk in using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
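
The conservative minimum-score rule, and a sensitivity-weighted alternative in the spirit of the proposal, can be sketched in a few lines. The input names, pedigree scores and sensitivity weights are invented for illustration, and the weighted formula is one plausible reading of "utilizing results parameter robustness", not the scheme actually proposed in the presentation.

```python
def input_pedigree_score(scores):
    """NASA-STD-7009 rule: the credibility score for input pedigree
    is the LOWEST score among all input data items (conservative)."""
    return min(scores.values())

def sensitivity_weighted_score(scores, sensitivities):
    """Hypothetical alternative: weight each input's pedigree score
    by how strongly the model output responds to that input, so a
    poorly pedigreed but inconsequential input does not dominate."""
    total = sum(sensitivities.values())
    return sum(scores[k] * sensitivities[k] for k in scores) / total

# Illustrative scores on a 0-4 scale with made-up input names.
scores = {"heart_rate_data": 4, "tissue_density": 3, "legacy_constant": 1}
sens = {"heart_rate_data": 0.7, "tissue_density": 0.25, "legacy_constant": 0.05}

print(input_pedigree_score(scores))                       # conservative rule
print(round(sensitivity_weighted_score(scores, sens), 2)) # weighted rule
```

The gap between the two numbers is exactly the "overly pessimistic communication" the presentation is concerned with: one low-quality but low-sensitivity input drags the standard score to its floor.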

  7. Sensitivity and uncertainty of input sensor accuracy for grass-based reference evapotranspiration

    USDA-ARS?s Scientific Manuscript database

    Quantification of evapotranspiration (ET) in agricultural environments is becoming of increasing importance throughout the world, thus understanding input variability of relevant sensors is of paramount importance as well. The Colorado Agricultural and Meteorological Network (CoAgMet) and the Florid...

  8. A Hybrid Semi-Digital Transimpedance Amplifier With Noise Cancellation Technique for Nanopore-Based DNA Sequencing.

    PubMed

    Hsu, Chung-Lun; Jiang, Haowei; Venkatesh, A G; Hall, Drew A

    2015-10-01

Over the past two decades, nanopores have been a promising technology for next generation deoxyribonucleic acid (DNA) sequencing. Here, we present a hybrid semi-digital transimpedance amplifier (HSD-TIA) to sense the minute current signatures introduced by single-stranded DNA (ssDNA) translocating through a nanopore, while discharging the baseline current using a semi-digital feedback loop. The amplifier achieves fast settling by adaptively tuning a DC compensation current when a step input is detected. A noise cancellation technique reduces the total input-referred current noise caused by the parasitic input capacitance. Measurement results show the performance of the amplifier with 31.6 MΩ mid-band gain, 950 kHz bandwidth, and 8.5 fA/√Hz input-referred current noise, a 2× noise reduction due to the noise cancellation technique. The settling response is demonstrated by observing the insertion of a protein nanopore in a lipid bilayer. Using the nanopore, the HSD-TIA was able to measure ssDNA translocation events.

  9. Input Uncertainty and its Implications on Parameter Assessment in Hydrologic and Hydroclimatic Modelling Studies

    NASA Astrophysics Data System (ADS)

    Chowdhury, S.; Sharma, A.

    2005-12-01

Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that simulate the observed input variable instead of the true measurement, resulting in a biased representation of the underlying system dynamics as well as an increase in both bias and predictive uncertainty in simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different from that in the past. Possible examples include situations where the accuracy of the catchment-averaged rainfall has increased substantially due to an increase in the rain-gauge density, or the accuracy of climatic observations (such as sea surface temperatures) has increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX) [Cook, 1994], operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts by generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs.
Once several such realisations have been drawn, one can formulate an empirical relationship between the parameter values and the level of additive noise present. SIMEX is based on the theory that the trend in the alternate parameters can be extrapolated back to the notional error-free zone. We illustrate the utility of SIMEX in a synthetic rainfall-runoff modelling scenario and in an application studying the dependence of uncertain distributed sea surface temperature anomalies on an indicator of the El Niño Southern Oscillation, the Southern Oscillation Index (SOI). The errors in rainfall data and their effect are explored using the Sacramento rainfall-runoff model. The rainfall uncertainty is assumed to be multiplicative and temporally invariant. The model used to relate the sea surface temperature anomalies (SSTA) to the SOI is assumed to be of a linear form. The nature of uncertainty in the SSTA is additive and varies with time. The SIMEX framework allows assessment of the relationship between the error-free inputs and the response. Cook, J.R. and Stefanski, L.A., Simulation-Extrapolation Estimation in Parametric Measurement Error Models, Journal of the American Statistical Association, 89 (428), 1314-1328, 1994.
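
The SIMEX procedure described above can be sketched in a few lines; the linear model, noise levels and sample size below are invented for illustration and are not the study's hydrological setup:

```python
import numpy as np

# Toy SIMEX illustration on a linear model with additive input error.
rng = np.random.default_rng(0)
n = 20000
x_true = rng.normal(0.0, 1.0, n)
sigma_u = 0.5                                  # known input-error std dev
x_obs = x_true + rng.normal(0.0, sigma_u, n)   # noisy input actually observed
y = 2.0 * x_true + rng.normal(0.0, 0.1, n)     # true slope is 2.0

# Step 1: re-estimate the parameter with extra white noise added in
# increasing multiples lambda of the known error variance
# (lambda = 0 is the naive fit on the observed input).
lambdas = [0.0, 0.5, 1.0, 1.5, 2.0]
slopes = []
for lam in lambdas:
    x_sim = x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u, n)
    slopes.append(np.polyfit(x_sim, y, 1)[0])

# Step 2: fit the slope-vs-lambda trend and extrapolate back to
# lambda = -1, the notional error-free point.
trend = np.polyfit(lambdas, slopes, 2)
beta_simex = np.polyval(trend, -1.0)
```

The naive estimate (lambda = 0) is attenuated toward zero by the input noise; the extrapolated estimate recovers most of that bias.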

  10. Sensitivity of drainage efficiency of cranberry fields to edaphic conditions

    NASA Astrophysics Data System (ADS)

Périard, Yann; Gumiere, Silvio José; Rousseau, Alain N.; Caron, Jean; Hallema, Dennis W.

    2014-05-01

Water management on a cranberry farm requires intelligent irrigation and drainage strategies to sustain strong productivity and minimize environmental impact. For example, to avoid propagation of disease and meet evapotranspiration demand, it is imperative to maintain optimal moisture conditions in the root zone, which depends on an efficient drainage system. However, several drainage problems have been identified in cranberry fields. Most of these drainage problems are due to the presence of a restrictive layer in the soil profile (Gumiere et al., 2014). The objective of this work is to evaluate the effects of a restrictive layer on drainage efficiency by means of a multi-local sensitivity analysis. We tested the sensitivity of the drainage efficiency to different input parameter sets of soil hydraulic properties, geometrical parameters and climatic conditions. The soil water flux dynamics for every input parameter set were simulated with the finite element model Hydrus 1D (Šimůnek et al., 2008). Multi-local sensitivity was calculated with the Gâteaux directional derivatives following the procedure described by Cheviron et al. (2010). Results indicate that drainage efficiency is more sensitive to soil hydraulic properties than to geometrical parameters and climatic conditions. Among the geometrical parameters, the depth of the restrictive layer is more influential than its thickness. The drainage efficiency was largely insensitive to the climatic conditions. Understanding the sensitivity of drainage efficiency to soil hydraulic properties and to geometrical and climatic conditions is essential for diagnosing drainage problems. However, it also becomes important to identify the mechanisms involved in the genesis of anthropogenic cranberry soils, in order to identify conditions that may lead to the formation of a restrictive layer. References: Cheviron, B., S.J. Gumiere, Y. Le Bissonnais, R. Moussa and D. Raclot. 2010. Sensitivity analysis of distributed erosion models: Framework.
Water Resources Research 46: W08508. doi:10.1029/2009WR007950. Gumiere, S.J., J. Lafond, D.W. Hallema, Y. Périard, J. Caron and J. Gallichand. 2014. Mapping soil hydraulic conductivity and matric potential for water management of cranberry: Characterization and spatial interpolation methods. Biosystems Engineering.
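
The multi-local sensitivity measure used in this record (a Gâteaux directional derivative) can be sketched with a finite difference; the surrogate "model" and nominal parameter values below are made up for illustration and are not Hydrus 1D:

```python
# Toy sketch of a Gateaux (directional) derivative as a local sensitivity
# measure, in the spirit of Cheviron et al. (2010).
def model(params):
    Ks, depth, rain = params            # conductivity, drain depth, rainfall
    return Ks * depth / (1.0 + rain)    # hypothetical efficiency surrogate

def gateaux(f, p, direction, eps=1e-6):
    """Finite-difference estimate of the directional derivative of f at p."""
    p_plus = [pi + eps * di for pi, di in zip(p, direction)]
    return (f(p_plus) - f(p)) / eps

p0 = [1.0, 2.0, 0.5]                            # nominal parameter set
s_Ks = gateaux(model, p0, [1.0, 0.0, 0.0])      # sensitivity to conductivity
s_depth = gateaux(model, p0, [0.0, 1.0, 0.0])   # sensitivity to depth
s_rain = gateaux(model, p0, [0.0, 0.0, 1.0])    # sensitivity to climate input
```

Comparing the magnitudes of such directional derivatives across parameter groups is what ranks hydraulic, geometrical and climatic influences in the study.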

  11. Program for User-Friendly Management of Input and Output Data Sets

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard

    2003-01-01

    A computer program manages large, hierarchical sets of input and output (I/O) parameters (typically, sequences of alphanumeric data) involved in computational simulations in a variety of technological disciplines. This program represents sets of parameters as structures coded in object-oriented but otherwise standard American National Standards Institute C language. Each structure contains a group of I/O parameters that make sense as a unit in the simulation program with which this program is used. The addition of options and/or elements to sets of parameters amounts to the addition of new elements to data structures. By association of child data generated in response to a particular user input, a hierarchical ordering of input parameters can be achieved. Associated with child data structures are the creation and description mechanisms within the parent data structures. Child data structures can spawn further child data structures. In this program, the creation and representation of a sequence of data structures is effected by one line of code that looks for children of a sequence of structures until there are no more children to be found. A linked list of structures is created dynamically and is completely represented in the data structures themselves. Such hierarchical data presentation can guide users through otherwise complex setup procedures and it can be integrated within a variety of graphical representations.

  12. Computing the structural influence matrix for biological systems.

    PubMed

    Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco

    2016-06-01

    We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
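
The idea of a structurally determinate influence can be illustrated on a toy two-species linear network (a made-up example, not one of the paper's models): sample feasible parameters and check whether the sign of the steady-state response ever flips:

```python
import numpy as np

# Toy structural-influence test: for dx/dt = A x + u e1 with random
# positive rate parameters, check the sign of the steady-state effect
# of a persistent input at x1 on the output x2.
rng = np.random.default_rng(1)
signs = set()
for _ in range(200):
    k1, k2, c = rng.uniform(0.1, 10.0, 3)       # random feasible parameters
    A = np.array([[-k1, 0.0], [c, -k2]])        # stable for any positive values
    x_star = np.linalg.solve(A, [-1.0, 0.0])    # steady state for unit input
    signs.add(int(np.sign(x_star[1])))          # sign of influence on x2

structurally_determinate = (len(signs) == 1)    # sign never flips
```

Here the influence is structurally positive (x2* = c/(k1·k2) > 0 for every feasible parameter choice), so the corresponding influence-matrix entry would carry a determinate "+" sign.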

  13. Optimum free energy in the reference functional approach for the integral equations theory

    NASA Astrophysics Data System (ADS)

    Ayadim, A.; Oettel, M.; Amokrane, S.

    2009-03-01

    We investigate the question of determining the bulk properties of liquids, required as input for practical applications of the density functional theory of inhomogeneous systems, using density functional theory itself. By considering the reference functional approach in the test particle limit, we derive an expression of the bulk free energy that is consistent with the closure of the Ornstein-Zernike equations in which the bridge functions are obtained from the reference system bridge functional. By examining the connection between the free energy functional and the formally exact bulk free energy, we obtain an improved expression of the corresponding non-local term in the standard reference hypernetted chain theory derived by Lado. In this way, we also clarify the meaning of the recently proposed criterion for determining the optimum hard-sphere diameter in the reference system. This leads to a theory in which the sole input is the reference system bridge functional both for the homogeneous system and the inhomogeneous one. The accuracy of this method is illustrated with the standard case of the Lennard-Jones fluid and with a Yukawa fluid with very short range attraction.

  14. Prediction of Welded Joint Strength in Plasma Arc Welding: A Comparative Study Using Back-Propagation and Radial Basis Neural Networks

    NASA Astrophysics Data System (ADS)

    Srinivas, Kadivendi; Vundavilli, Pandu R.; Manzoor Hussain, M.; Saiteja, M.

    2016-09-01

Welding input parameters such as current, gas flow rate and torch angle play a significant role in determination of qualitative mechanical properties of the weld joint. Traditionally, it is necessary to determine the weld input parameters for every new welded product to obtain a quality weld joint, which is time consuming. In the present work, the effect of plasma arc welding parameters on mild steel was studied using a neural network approach. To obtain a response equation that governs the input-output relationships, conventional regression analysis was also performed. The experimental data was constructed based on Taguchi design, and the training data required for the neural networks were randomly generated by varying the input variables within their respective ranges. The responses were calculated for each combination of input variables by using the response equations obtained through the conventional regression analysis. The performances of the Levenberg-Marquardt back-propagation neural network and the radial basis neural network (RBNN) were compared on various randomly generated test cases, which are different from the training cases. From the results, it is interesting to note that for these test cases RBNN analysis gave improved training results compared to the feed-forward back-propagation neural network analysis. Also, RBNN analysis showed a pattern of increasing performance as the data points moved away from the initial input values.

  15. Multi-response optimization of process parameters for GTAW process in dissimilar welding of Incoloy 800HT and P91 steel by using grey relational analysis

    NASA Astrophysics Data System (ADS)

    vellaichamy, Lakshmanan; Paulraj, Sathiya

    2018-02-01

Dissimilar welding of Incoloy 800HT and P91 steel was carried out using the gas tungsten arc welding (GTAW) process. These materials are used in nuclear power plant and aerospace applications because Incoloy 800HT possesses good corrosion and oxidation resistance while P91 possesses high-temperature strength and creep resistance. This work discusses multi-objective optimization by grey relational analysis (GRA) using a 9CrMoV-N filler material. The experiments were conducted on an L9 orthogonal array. The input parameters are current, voltage and welding speed; the output responses are tensile strength, hardness and toughness. GRA was used to optimize the input parameters against the multiple output variables. The optimal parameter combination was determined as A2B1C1, i.e., a welding current of 120 A, a voltage of 16 V and a welding speed of 0.94 mm/s. The outputs of the mechanical properties for the best and least grey relational grades were validated against the metallurgical characteristics.
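
The grey relational analysis step can be sketched as follows; the L9 response matrix below is hypothetical (the paper's measured tensile strength, hardness and toughness values are not reproduced here), and the distinguishing coefficient is set to the common value 0.5:

```python
import numpy as np

# Hypothetical responses for an L9 array: rows = runs, columns =
# (tensile strength MPa, hardness HV, toughness J). Illustrative only.
y = np.array([
    [610., 220., 80.], [640., 230., 95.], [600., 210., 70.],
    [655., 235., 88.], [620., 225., 84.], [635., 215., 90.],
    [615., 218., 76.], [648., 228., 92.], [605., 212., 72.],
])

# larger-the-better normalisation to [0, 1]
norm = (y - y.min(axis=0)) / (y.max(axis=0) - y.min(axis=0))

# grey relational coefficient with distinguishing coefficient zeta = 0.5
delta = 1.0 - norm                      # deviation from the ideal sequence
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

grade = grc.mean(axis=1)                # grey relational grade per run
best_run = int(grade.argmax())          # 0-based index of the optimal setting
```

The run with the highest grade identifies the optimal factor-level combination, which is then read back against the orthogonal array (A2B1C1 in the study).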

  16. Calibration of discrete element model parameters: soybeans

    NASA Astrophysics Data System (ADS)

    Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal

    2018-05-01

Discrete element method (DEM) simulations are broadly used to gain insight into the flow characteristics of granular materials in complex particulate systems. The DEM input parameters for a model are a critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine the DEM input parameters for the Hertz-Mindlin model using soybeans as the granular material. To achieve this aim, a widely accepted calibration approach with a standard box-type apparatus was used. Further, qualitative and quantitative findings such as particle profile, the height of kernels retained against the acrylic wall, and the angle of repose from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes the following: (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m^{-3}); and (b) interaction parameters, particle-particle: coefficient of restitution (0.17), coefficient of static friction (0.26), coefficient of rolling friction (0.08); particle-wall: coefficient of restitution (0.35), coefficient of static friction (0.30), coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.

  17. AIRCRAFT REACTOR CONTROL SYSTEM APPLICABLE TO TURBOJET AND TURBOPROP POWER PLANTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorker, G.E.

    1955-07-19

Control systems proposed for direct cycle nuclear powered aircraft commonly involve control of engine speed, nuclear energy input, and chemical energy input. A system in which these parameters are controlled by controlling the total energy input, the ratio of nuclear and chemical energy input, and the engine speed is proposed. The system is equally applicable to turbojet or turboprop applications. (auth)

  18. Evaluation of Advanced Stirling Convertor Net Heat Input Correlation Methods Using a Thermal Standard

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell; Schifer, Nicholas

    2011-01-01

Test hardware was used to validate net heat prediction models. The problem: net heat input cannot be measured directly during operation, yet it is a key parameter needed to predict convertor efficiency. Efficiency = electrical power output (measured) divided by net heat input (calculated). Efficiency is used to compare convertor designs and trade technology advantages for mission planning.
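
The efficiency relation in this record is a simple ratio; a worked example with hypothetical numbers (not measured convertor data):

```python
# Hypothetical values showing how convertor efficiency is computed from
# measured output power and the model-calculated net heat input.
electrical_power_out_w = 88.0    # measured electrical output, W
net_heat_input_w = 251.4         # net heat input from the prediction model, W
efficiency = electrical_power_out_w / net_heat_input_w
```

Because the denominator is calculated rather than measured, any error in the net heat model propagates directly into the reported efficiency.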

  19. Utilizing Mars Global Reference Atmospheric Model (Mars-GRAM 2005) to Evaluate Entry Probe Mission Sites

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.

    2008-01-01

The Mars Global Reference Atmospheric Model (Mars-GRAM 2005) is an engineering-level atmospheric model widely used for diverse mission applications. An overview is presented of Mars-GRAM 2005 and its new features. The "auxiliary profile" option is one new feature of Mars-GRAM 2005. This option uses an input file of temperature and density versus altitude to replace the mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. Any source of data or alternate model output can be used to generate an auxiliary profile. Auxiliary profiles for this study were produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5) model) and a global Thermal Emission Spectrometer (TES) database. The global TES database has been specifically generated for purposes of making Mars-GRAM auxiliary profiles. This database contains averages and standard deviations of temperature, density, and thermal wind components, averaged over 5-by-5 degree latitude-longitude bins and 15 degree Ls bins, for each of three Mars years of TES nadir data. The Mars Science Laboratory (MSL) sites are used as a sample of how Mars-GRAM could be a valuable tool for planning of future Mars entry probe missions. Results are presented using auxiliary profiles produced from the mesoscale model output and TES observed data for candidate MSL landing sites. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.

  20. Modeling and closed-loop control of hypnosis by means of bispectral index (BIS) with isoflurane.

    PubMed

    Gentilini, A; Rossoni-Gerosa, M; Frei, C W; Wymann, R; Morari, M; Zbinden, A M; Schnider, T W

    2001-08-01

    A model-based closed-loop control system is presented to regulate hypnosis with the volatile anesthetic isoflurane. Hypnosis is assessed by means of the bispectral index (BIS), a processed parameter derived from the electroencephalogram. Isoflurane is administered through a closed-circuit respiratory system. The model for control was identified on a population of 20 healthy volunteers. It consists of three parts: a model for the respiratory system, a pharmacokinetic model and a pharmacodynamic model to predict BIS at the effect compartment. A cascaded internal model controller is employed. The master controller compares the actual BIS and the reference value set by the anesthesiologist and provides expired isoflurane concentration references to the slave controller. The slave controller maneuvers the fresh gas anesthetic concentration entering the respiratory system. The controller is designed to adapt to different respiratory conditions. Anti-windup measures protect against performance degradation in the event of saturation of the input signal. Fault detection schemes in the controller cope with BIS and expired concentration measurement artifacts. The results of clinical studies on humans are presented.

  1. A High-Linearity Low-Noise Amplifier with Variable Bandwidth for Neural Recoding Systems

    NASA Astrophysics Data System (ADS)

    Yoshida, Takeshi; Sueishi, Katsuya; Iwata, Atsushi; Matsushita, Kojiro; Hirata, Masayuki; Suzuki, Takafumi

    2011-04-01

This paper describes a low-noise amplifier with multiple adjustable parameters for neural recording applications. An adjustable pseudo-resistor implemented with cascaded metal-oxide-semiconductor field-effect transistors (MOSFETs) is proposed to achieve low signal distortion and a wide variable bandwidth range. The amplifier has been implemented in a 0.18 µm standard complementary metal-oxide-semiconductor (CMOS) process and occupies 0.09 mm2 on chip. The amplifier achieved a selectable voltage gain of 28 and 40 dB, variable bandwidth from 0.04 to 2.6 Hz, total harmonic distortion (THD) of 0.2% with 200 mV output swing, input referred noise of 2.5 µVrms over 0.1-100 Hz and 18.7 µW power consumption at a supply voltage of 1.8 V.

  2. Fidelity deviation in quantum teleportation

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Ryu, Junghee; Kaszlikowski, Dagomir

    2018-04-01

We analyze the performance of quantum teleportation in terms of average fidelity and fidelity deviation. The average fidelity is defined as the average value of the fidelities over all possible input states, and the fidelity deviation is their standard deviation, which can be regarded as a measure of fluctuation, or universality. In the analysis, we find the condition to optimize both measures under a noisy quantum channel; here we consider the so-called Werner channel. To characterize our results, we introduce a 2D space defined by the aforementioned measures, in which the performance of the teleportation is represented as a point parameterized by the channel noise. Through further analysis, we specify some regions drawn for different channel conditions, establishing the connection to the dissimilar contributions of the entanglement to the teleportation and the Bell inequality violation.

  3. Flight Test Maneuvers for Efficient Aerodynamic Modeling

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2011-01-01

    Novel flight test maneuvers for efficient aerodynamic modeling were developed and demonstrated in flight. Orthogonal optimized multi-sine inputs were applied to aircraft control surfaces to excite aircraft dynamic response in all six degrees of freedom simultaneously while keeping the aircraft close to chosen reference flight conditions. Each maneuver was designed for a specific modeling task that cannot be adequately or efficiently accomplished using conventional flight test maneuvers. All of the new maneuvers were first described and explained, then demonstrated on a subscale jet transport aircraft in flight. Real-time and post-flight modeling results obtained using equation-error parameter estimation in the frequency domain were used to show the effectiveness and efficiency of the new maneuvers, as well as the quality of the aerodynamic models that can be identified from the resultant flight data.
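
The orthogonal multi-sine idea can be sketched as follows; the surface names, harmonic assignments, record length and Schroeder-type phase schedule are illustrative assumptions, not the actual flight-test design:

```python
import numpy as np

# Harmonics of a common base frequency are partitioned among control
# surfaces, making the inputs mutually orthogonal over the record length.
T, fs = 10.0, 100.0                 # record length (s) and sample rate (Hz)
n = int(T * fs)
t = np.arange(n) / fs
f0 = 1.0 / T                        # base frequency -> orthogonal harmonics

harmonics = {"elevator": [2, 5, 8], "aileron": [3, 6, 9], "rudder": [4, 7, 10]}

def multisine(ks):
    u = np.zeros_like(t)
    for j, k in enumerate(ks):
        phase = -np.pi * j * (j + 1) / len(ks)   # Schroeder-style phases
        u += np.cos(2.0 * np.pi * k * f0 * t + phase)
    return u

inputs = {name: multisine(ks) for name, ks in harmonics.items()}

# Disjoint harmonic sets make the signals orthogonal over one period.
cross = float(np.dot(inputs["elevator"], inputs["aileron"]))
```

Because each surface excites a disjoint set of harmonics, all axes can be excited simultaneously while their contributions remain separable in the frequency domain.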

  4. Effect of Heat Input on Geometry of Austenitic Stainless Steel Weld Bead on Low Carbon Steel

    NASA Astrophysics Data System (ADS)

    Saha, Manas Kumar; Hazra, Ritesh; Mondal, Ajit; Das, Santanu

    2018-05-01

Among different weld cladding processes, gas metal arc welding (GMAW) cladding has become a cost-effective, user-friendly, versatile method for protecting the surface of relatively lower-grade structural steels from corrosion and/or erosion wear by depositing high-grade stainless steels onto them. The quality of cladding largely depends upon the bead geometry of the weldment deposited. Weld bead geometry parameters, like bead width, reinforcement height, depth of penetration, and ratios like reinforcement form factor (RFF) and penetration shape factor (PSF), determine the quality of the weld bead geometry. Various process parameters of gas metal arc welding like heat input, current, voltage, arc travel speed, mode of metal transfer, etc. influence formation of bead geometry. In the current experimental investigation, austenitic stainless steel (316) weld beads are formed on low alloy structural steel (E350) by GMAW using 100% CO2 as the shielding gas. Different combinations of current, voltage and arc travel speed are chosen so that heat input increases from 0.35 to 0.75 kJ/mm. Nine weld beads are deposited, each replicated twice. The observations show that weld bead width increases linearly with increase in heat input, whereas reinforcement height and depth of penetration do not increase with increase in heat input. Regression analysis is done to establish the relationship between heat input and different geometrical parameters of the weld bead. The regression models developed agree well with the experimental data. Within the domain of the present experiment, it is observed that at higher heat input the weld bead gets wider with little change in penetration and reinforcement; therefore, higher heat input may be recommended for austenitic stainless steel cladding on low alloy steel.
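
The kind of linear regression reported above can be sketched with invented data (these bead-width values are illustrative, not the paper's measurements):

```python
import numpy as np

# Invented bead-width measurements versus heat input, showing the kind of
# linear fit used to relate heat input to bead geometry.
heat_input = np.array([0.35, 0.40, 0.45, 0.50, 0.55,
                       0.60, 0.65, 0.70, 0.75])          # kJ/mm
bead_width = np.array([6.1, 6.6, 7.0, 7.6, 8.1,
                       8.5, 9.0, 9.6, 10.1])             # mm

slope, intercept = np.polyfit(heat_input, bead_width, 1)
r = float(np.corrcoef(heat_input, bead_width)[0, 1])     # correlation
```

With these illustrative numbers the fit gives a slope near 10 mm per kJ/mm with r above 0.99, mirroring the reported linear increase of bead width with heat input.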

  5. Effects of uncertainties in hydrological modelling. A case study of a mountainous catchment in Southern Norway

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur

    2016-05-01

In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as a case study, and an elevation-distributed HBV model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for the interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer quality of precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about the streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and on the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improve. Less information in the precipitation input resulted in a shift in the water balance parameter Pcorr and a model producing smoother streamflow predictions, giving poorer NS and CRPS but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr, and increased the spread in evaluation scores by giving both better and worse scores.

  6. Ignoring correlation in uncertainty and sensitivity analysis in life cycle assessment: what is the risk?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Groen, E.A.; Heijungs, R. (Leiden University, Einsteinweg 2, Leiden 2333 CC)

Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict if including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only a small amount of data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
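
A minimal sketch of the analytical (first-order) approach, assuming a made-up two-input model y = a*x1 + b*x2; the gradient, standard deviations and correlation below are illustrative, not from an actual LCA inventory:

```python
import numpy as np

# First-order uncertainty propagation with and without input correlations.
a, b = 2.0, -1.0
g = np.array([a, b])                        # gradient of y w.r.t. the inputs

sd = np.array([0.3, 0.4])                   # input standard deviations
rho = 0.8                                   # correlation between x1 and x2
cov = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1] ** 2]])

var_with = float(g @ cov @ g)                  # correlations included
var_without = float(g @ np.diag(sd ** 2) @ g)  # off-diagonals ignored
```

Here ignoring the (positive) correlation overestimates the output variance (0.52 versus 0.136) because the opposite-sign sensitivities partly cancel; with same-sign sensitivities the diagonal-only estimate would underestimate instead.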

  7. Combining in silico evolution and nonlinear dimensionality reduction to redesign responses of signaling networks

    NASA Astrophysics Data System (ADS)

    Prescott, Aaron M.; Abel, Steven M.

    2016-12-01

    The rational design of network behavior is a central goal of synthetic biology. Here, we combine in silico evolution with nonlinear dimensionality reduction to redesign the responses of fixed-topology signaling networks and to characterize sets of kinetic parameters that underlie various input-output relations. We first consider the earliest part of the T cell receptor (TCR) signaling network and demonstrate that it can produce a variety of input-output relations (quantified as the level of TCR phosphorylation as a function of the characteristic TCR binding time). We utilize an evolutionary algorithm (EA) to identify sets of kinetic parameters that give rise to: (i) sigmoidal responses with the activation threshold varied over 6 orders of magnitude, (ii) a graded response, and (iii) an inverted response in which short TCR binding times lead to activation. We also consider a network with both positive and negative feedback and use the EA to evolve oscillatory responses with different periods in response to a change in input. For each targeted input-output relation, we conduct many independent runs of the EA and use nonlinear dimensionality reduction to embed the resulting data for each network in two dimensions. We then partition the results into groups and characterize constraints placed on the parameters by the different targeted response curves. Our approach provides a way (i) to guide the design of kinetic parameters of fixed-topology networks to generate novel input-output relations and (ii) to constrain ranges of biological parameters using experimental data. In the cases considered, the network topologies exhibit significant flexibility in generating alternative responses, with distinct patterns of kinetic rates emerging for different targeted responses.
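
A minimal evolutionary-algorithm sketch in the spirit of this record: tune two kinetic-style parameters of a generic Hill-type response toward a target sigmoid. The model, target and EA settings are illustrative assumptions, not the paper's TCR network or algorithm:

```python
import numpy as np

# (1 + lambda) elitist evolutionary loop on a generic Hill-type curve.
rng = np.random.default_rng(2)

def response(u, K, n):
    return u ** n / (K ** n + u ** n)

u_grid = np.linspace(0.01, 10.0, 200)
target = response(u_grid, 2.0, 4.0)          # desired input-output relation

def fitness(p):
    return -float(np.mean((response(u_grid, p[0], p[1]) - target) ** 2))

parent = np.array([0.5, 1.0])                # initial guess for (K, n)
for _ in range(500):
    # mutate, keep parameters positive, select the best of parent + children
    children = np.abs(parent + rng.normal(0.0, 0.1, (16, 2))) + 1e-3
    parent = max([parent, *children], key=fitness)

K_found, n_found = parent
```

Running many independent EA searches like this one, each from a different seed, yields the ensembles of parameter sets that the study then embeds in two dimensions.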

  8. Identification of modal parameters including unmeasured forces and transient effects

    NASA Astrophysics Data System (ADS)

    Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.

    2003-08-01

    In this paper, a frequency-domain method to estimate modal parameters from short data records with known (measured) input forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data) and the combination of both. Traditional experimental and operational modal analyses in the frequency domain start, respectively, from frequency response functions and spectral density functions. To estimate these functions accurately, sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known inputs. Instead of applying Hanning windows to these short data records, the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested and validated by Monte Carlo simulations and experiments. The presented method for processing short data sequences leads to unbiased estimates with a small variance in comparison to the more traditional approaches.
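As a toy illustration of fitting modal parameters directly to short output spectra rather than to averaged frequency response functions, the sketch below fits a single-degree-of-freedom transfer function to the Fourier spectrum of a short, noisy impulse response by grid search. All signals and values are invented for illustration; the paper's actual estimator is a dedicated frequency-domain identification method.

```python
import cmath
import math
import random

random.seed(0)

# Simulate a short record of a damped single-mode system (invented values).
fn, zeta, fs, N = 25.0, 0.02, 200.0, 256     # natural freq (Hz), damping, rate, samples
wn = 2 * math.pi * fn
wd = wn * math.sqrt(1 - zeta**2)
y = [math.exp(-zeta * wn * t) * math.sin(wd * t) + random.gauss(0, 0.01)
     for t in (i / fs for i in range(N))]

# DFT of the short record (naive DFT keeps the example dependency-free).
def dft_bin(sig, k):
    return sum(v * cmath.exp(-2j * math.pi * k * i / len(sig))
               for i, v in enumerate(sig))

bins = range(10, 60)                          # frequency band around the mode
Y = {k: dft_bin(y, k) for k in bins}

# Fit |H(f)| of a one-mode model to |Y| by brute-force grid search over (fn, zeta).
def model_mag(fk, fn_try, z_try):
    w, w0 = 2 * math.pi * fk, 2 * math.pi * fn_try
    return abs(1.0 / complex(w0**2 - w**2, 2 * z_try * w0 * w))

def cost(fn_try, z_try):
    # scale-invariant fit: normalise both magnitude curves to their peaks
    m = [model_mag(k * fs / N, fn_try, z_try) for k in bins]
    d = [abs(Y[k]) for k in bins]
    ms, ds = max(m), max(d)
    return sum((a / ms - b / ds)**2 for a, b in zip(m, d))

best = min(((f, z) for f in (20 + 0.25 * i for i in range(41))
            for z in (0.005 * j for j in range(1, 21))),
           key=lambda p: cost(*p))
print(best)   # estimated (natural frequency, damping ratio)
```

A grid search is obviously crude; it only illustrates that modal parameters are recoverable from the raw spectrum of a short record.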

  9. Modern control concepts in hydrology

    NASA Technical Reports Server (NTRS)

    Duong, N.; Johnson, G. R.; Winn, C. B.

    1974-01-01

    Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part of the inputs and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and conform with results from two previous studies, the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.
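The sequential, adaptive flavour of these schemes can be sketched with recursive least squares, the linear special case of the state-space estimators described above. The toy "hydrologic system" below (a linear reservoir with an unknown recession parameter, driven by noisy rainfall) is invented for illustration and is not the Prasad model.

```python
import random

random.seed(1)

# Hypothetical linear reservoir: runoff[t+1] = a * runoff[t] + rain[t] + noise.
# The unknown recession parameter a is estimated sequentially as data arrive.
a_true = 0.8
x = 1.0
a_hat, P = 0.0, 100.0        # initial estimate and its (large) variance
for t in range(500):
    u = random.random()                           # rainfall input
    x_next = a_true * x + u + random.gauss(0, 0.05)
    y = x_next - u                                # regress (x_next - u) on x
    K = P * x / (1.0 + x * P * x)                 # scalar RLS gain
    a_hat += K * (y - a_hat * x)                  # update estimate
    P = (1 - K * x) * P                           # update variance
    x = x_next
print(round(a_hat, 3))
```

Because the gain K shrinks as the variance P collapses, the estimator is naturally sequential; replacing the constant `a_true` with a slowly varying one (and adding forgetting) gives the time-dependent case.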

  10. Improved parameter extraction and classification for dynamic contrast enhanced MRI of prostate

    NASA Astrophysics Data System (ADS)

    Haq, Nandinee Fariah; Kozlowski, Piotr; Jones, Edward C.; Chang, Silvia D.; Goldenberg, S. Larry; Moradi, Mehdi

    2014-03-01

    Magnetic resonance imaging (MRI), particularly dynamic contrast enhanced (DCE) imaging, has shown great potential in prostate cancer diagnosis and prognosis. The time course of the DCE images provides measures of the contrast agent uptake kinetics. Also, using pharmacokinetic modelling, one can extract parameters from the DCE-MR images that characterize the tumor vascularization and can be used to detect cancer. A requirement for calculating the pharmacokinetic DCE parameters is estimating the Arterial Input Function (AIF). One needs an accurate segmentation of the cross section of the external femoral artery to obtain the AIF. In this work we report a semi-automatic method for segmentation of the cross section of the femoral artery, using the circular Hough transform, in the sequence of DCE images. We also report a machine-learning framework to combine pharmacokinetic parameters with the model-free contrast agent uptake kinetic parameters extracted from the DCE time course into a nine-dimensional feature vector. This combination of features is used with random forest and support vector machine classification for cancer detection. The MR data are obtained from patients prior to radical prostatectomy. After the surgery, whole-mount histopathology analysis is performed and registered to the DCE-MR images as the diagnostic reference. We show that the use of a combination of pharmacokinetic parameters and the model-free empirical parameters extracted from the time course of DCE results in improved cancer detection compared to the use of each group of features separately. We also validate the proposed method for calculation of AIF based on comparison with the manual method.
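The circular Hough transform step can be sketched on a synthetic image. This is a minimal fixed-radius accumulator over a toy "vessel wall" ring, not the authors' pipeline (real DCE data would use an edge detector and a radius range; `skimage.transform.hough_circle` is a production-quality equivalent).

```python
import numpy as np

# Synthetic 64x64 image with a bright ring (vessel wall) of radius 10
# centred at (row 30, col 25); values are illustrative only.
H, W, r = 64, 64, 10
cy, cx = 30, 25
yy, xx = np.mgrid[0:H, 0:W]
ring = np.abs(np.hypot(yy - cy, xx - cx) - r) < 0.7

# Circular Hough transform at a single known radius: every bright pixel
# votes for all candidate centres lying at distance r from it.
acc = np.zeros((H, W))
thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
for y, x in zip(*np.nonzero(ring)):
    ys = np.round(y - r * np.sin(thetas)).astype(int)
    xs = np.round(x - r * np.cos(thetas)).astype(int)
    ok = (ys >= 0) & (ys < H) & (xs >= 0) & (xs < W)
    acc[ys[ok], xs[ok]] += 1

best = np.unravel_index(np.argmax(acc), acc.shape)
print(best)   # estimated (row, col) of the artery centre, close to (30, 25)
```

Averaging the image intensity inside the detected circle over the DCE time series is then one simple way to read out an AIF.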

  11. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model.
We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform a similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models.
To accommodate the nonlinear input-to-output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affects the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters, whereas subspace selection identifies a linear combination of parameters that significantly impacts the model responses. We employ the active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
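The core loop that DRAM and DREAM elaborate on is plain random-walk Metropolis. The sketch below calibrates one parameter of a toy linear model against noisy observations and compares the posterior mean to the truth; the model, prior, and values are illustrative, not the heat or HIV models above.

```python
import math
import random

random.seed(2)

# Synthetic calibration data for a toy model y = theta * x with Gaussian noise.
xs = [float(i) for i in range(1, 11)]
theta_true, sigma = 2.5, 0.3
ys = [theta_true * x + random.gauss(0, sigma) for x in xs]

def log_post(theta):
    # flat prior + Gaussian likelihood (log, up to a constant)
    return -sum((y - theta * x)**2 for x, y in zip(xs, ys)) / (2 * sigma**2)

# Random-walk Metropolis: propose a local move, accept with the Metropolis rule.
theta, chain = 0.0, []
for i in range(20000):
    prop = theta + random.gauss(0, 0.05)
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

post_mean = sum(chain[5000:]) / len(chain[5000:])   # discard burn-in
print(round(post_mean, 2))
```

Verifying such a sampler against a direct numerical evaluation of Bayes' formula, as the dissertation does, amounts to comparing this chain's histogram to the gridded posterior density.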

  12. Experimental Validation of Strategy for the Inverse Estimation of Mechanical Properties and Coefficient of Friction in Flat Rolling

    NASA Astrophysics Data System (ADS)

    Yadav, Vinod; Singh, Arbind Kumar; Dixit, Uday Shanker

    2017-08-01

    Flat rolling is one of the most widely used metal forming processes. For proper control and optimization of the process, modelling of the process is essential. Modelling of the process requires input data about material properties and friction. In batch production mode of rolling with newer materials, it may be difficult to determine the input parameters offline. In view of this, in the present work, a methodology to determine these parameters online by the measurement of exit temperature and slip is verified experimentally. It is observed that the inverse prediction of input parameters can be done with reasonable accuracy. It was also observed experimentally that there is a correlation between micro-hardness and flow stress of the material; however, the correlation between surface roughness and reduction is not as obvious.

  13. Notes on stochastic (bio)-logic gates: computing with allosteric cooperativity

    PubMed Central

    Agliari, Elena; Altavilla, Matteo; Barra, Adriano; Dello Schiavo, Lorenzo; Katz, Evgeny

    2015-01-01

    Recent experimental breakthroughs have finally made it possible to implement in vitro reaction kinetics (so-called enzyme-based logic) that code for two-input logic gates and mimic the stochastic AND (and NAND) as well as the stochastic OR (and NOR). This accomplishment, together with the already-known single-input gates (performing as YES and NOT), provides a logic base and paves the way to the development of powerful biotechnological devices. However, as biochemical systems are always affected by the presence of noise (e.g. thermal), standard logic is not the correct theoretical reference framework; rather, we show that statistical mechanics can serve this purpose: here we formulate a complete statistical mechanical description of the Monod-Wyman-Changeux allosteric model for both single- and double-ligand systems, with the purpose of exploring their practical capabilities to express noisy logical operators and/or perform stochastic logical operations. Mixing statistical mechanics with logic, and testing the resulting findings quantitatively on the available biochemical data, we successfully revise the concept of cooperativity (and anti-cooperativity) for allosteric systems, with particular emphasis on its computational capabilities, the related ranges and scaling of the involved parameters, and its differences with classical cooperativity (and anti-cooperativity). PMID:25976626
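The single-ligand MWC saturation function and an effective Hill coefficient (the standard quantitative measure of the cooperativity being revised here) can be computed directly. The parameter values below (n binding sites, allosteric constant L, affinity ratio c) are illustrative, not fitted to the biochemical data the paper analyses.

```python
import math

def mwc_saturation(alpha, n=4, L=1000.0, c=0.01):
    # Fractional saturation Y of the Monod-Wyman-Changeux model.
    # alpha: reduced ligand activity; L = [T]/[R]; c = K_R / K_T.
    num = alpha * (1 + alpha)**(n - 1) + L * c * alpha * (1 + c * alpha)**(n - 1)
    den = (1 + alpha)**n + L * (1 + c * alpha)**n
    return num / den

def half_saturation(lo=1e-3, hi=1e3):
    # bisection (in log space) for the ligand activity where Y = 0.5
    for _ in range(80):
        mid = math.sqrt(lo * hi)
        if mwc_saturation(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

def hill_coefficient(alpha, d=1e-4):
    # local slope of log(Y / (1 - Y)) versus log(alpha); > 1 means cooperative
    def logit(a):
        y = mwc_saturation(a)
        return math.log(y / (1 - y))
    return (logit(alpha * (1 + d)) - logit(alpha)) / math.log(1 + d)

a_half = half_saturation()
print(round(hill_coefficient(a_half), 2))   # effective Hill coefficient
```

With L large and c small the T-to-R switch steepens the binding curve, so the effective Hill coefficient at half-saturation exceeds 1, the signature of (classical) cooperativity.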

  14. Neural feedback for instantaneous spatiotemporal modulation of afferent pathways in bi-directional brain-machine interfaces.

    PubMed

    Liu, Jianbo; Khalil, Hassan K; Oweiss, Karim G

    2011-10-01

    In bi-directional brain-machine interfaces (BMIs), precisely controlling the delivery of microstimulation, both in space and in time, is critical to continuously modulate the neural activity patterns that carry information about the state of the brain-actuated device to sensory areas in the brain. In this paper, we investigate the use of neural feedback to control the spatiotemporal firing patterns of neural ensembles in a model of the thalamocortical pathway. Control of pyramidal (PY) cells in the primary somatosensory cortex (S1) is achieved based on microstimulation of thalamic relay cells through multiple-input multiple-output (MIMO) feedback controllers. This closed loop feedback control mechanism is achieved by simultaneously varying the stimulation parameters across multiple stimulation electrodes in the thalamic circuit based on continuous monitoring of the difference between reference patterns and the evoked responses of the cortical PY cells. We demonstrate that it is feasible to achieve a desired level of performance by controlling the firing activity pattern of a few "key" neural elements in the network. Our results suggest that neural feedback could be an effective method to facilitate the delivery of information to the cortex to substitute lost sensory inputs in cortically controlled BMIs.

  15. Notes on stochastic (bio)-logic gates: computing with allosteric cooperativity.

    PubMed

    Agliari, Elena; Altavilla, Matteo; Barra, Adriano; Dello Schiavo, Lorenzo; Katz, Evgeny

    2015-05-15

    Recent experimental breakthroughs have finally made it possible to implement in vitro reaction kinetics (so-called enzyme-based logic) that code for two-input logic gates and mimic the stochastic AND (and NAND) as well as the stochastic OR (and NOR). This accomplishment, together with the already-known single-input gates (performing as YES and NOT), provides a logic base and paves the way to the development of powerful biotechnological devices. However, as biochemical systems are always affected by the presence of noise (e.g. thermal), standard logic is not the correct theoretical reference framework; rather, we show that statistical mechanics can serve this purpose: here we formulate a complete statistical mechanical description of the Monod-Wyman-Changeux allosteric model for both single- and double-ligand systems, with the purpose of exploring their practical capabilities to express noisy logical operators and/or perform stochastic logical operations. Mixing statistical mechanics with logic, and testing the resulting findings quantitatively on the available biochemical data, we successfully revise the concept of cooperativity (and anti-cooperativity) for allosteric systems, with particular emphasis on its computational capabilities, the related ranges and scaling of the involved parameters, and its differences with classical cooperativity (and anti-cooperativity).

  16. Notes on stochastic (bio)-logic gates: computing with allosteric cooperativity

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Altavilla, Matteo; Barra, Adriano; Dello Schiavo, Lorenzo; Katz, Evgeny

    2015-05-01

    Recent experimental breakthroughs have finally made it possible to implement in vitro reaction kinetics (so-called enzyme-based logic) that code for two-input logic gates and mimic the stochastic AND (and NAND) as well as the stochastic OR (and NOR). This accomplishment, together with the already-known single-input gates (performing as YES and NOT), provides a logic base and paves the way to the development of powerful biotechnological devices. However, as biochemical systems are always affected by the presence of noise (e.g. thermal), standard logic is not the correct theoretical reference framework; rather, we show that statistical mechanics can serve this purpose: here we formulate a complete statistical mechanical description of the Monod-Wyman-Changeux allosteric model for both single- and double-ligand systems, with the purpose of exploring their practical capabilities to express noisy logical operators and/or perform stochastic logical operations. Mixing statistical mechanics with logic, and testing the resulting findings quantitatively on the available biochemical data, we successfully revise the concept of cooperativity (and anti-cooperativity) for allosteric systems, with particular emphasis on its computational capabilities, the related ranges and scaling of the involved parameters, and its differences with classical cooperativity (and anti-cooperativity).

  17. Model-independent plot of dynamic PET data facilitates data interpretation and model selection.

    PubMed

    Munk, Ole Lajord

    2012-02-21

    When testing new PET radiotracers or new applications of existing tracers, the blood-tissue exchange and the metabolism need to be examined. However, conventional plots of measured time-activity curves from dynamic PET do not reveal the inherent kinetic information. A novel model-independent volume-influx plot (vi-plot) was developed and validated. The new vi-plot shows the time course of the instantaneous distribution volume and the instantaneous influx rate. The vi-plot visualises physiological information that facilitates model selection, and it reveals when a quasi-steady state is reached, which is a prerequisite for the use of the graphical analyses of Logan and Gjedde-Patlak. Both axes of the vi-plot have direct physiological interpretation, and the plot shows kinetic parameters in close agreement with estimates obtained by non-linear kinetic modelling. The vi-plot is equally useful for analyses of PET data based on a plasma input function or a reference-region input function. The vi-plot is a model-independent and informative plot for data exploration that facilitates the selection of an appropriate method for data analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Using global sensitivity analysis of demographic models for ecological impact assessment.

    PubMed

    Aiello-Lammens, Matthew E; Akçakaya, H Resit

    2017-02-01

    Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
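The GSA recipe in this abstract (vary all inputs simultaneously, then use regression to measure input-parameter importance) can be sketched on a toy demographic model. The model, parameter names, and ranges below are invented for illustration; they are not the Snowy Plover PVA.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sample all inputs of a toy population-growth model simultaneously.
n = 500
survival = rng.uniform(0.4, 0.6, n)    # adult survival (wide range)
fecundity = rng.uniform(0.8, 1.2, n)   # offspring per pair
habitat = rng.uniform(0.9, 1.0, n)     # habitat multiplier (narrow range)

# Hypothetical model outcome plus a little process noise.
growth = survival + 0.3 * fecundity * habitat + rng.normal(0, 0.01, n)

# Standardized regression coefficients rank input-parameter importance.
X = np.column_stack([survival, fecundity, habitat])
Xz = (X - X.mean(0)) / X.std(0)
yz = (growth - growth.mean()) / growth.std()
coef, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
ranking = np.argsort(-np.abs(coef))
print(ranking)   # index 0 (survival) should rank as most influential
```

Separating impact scenarios from parameter uncertainty, as the authors do, amounts to adding the scenario as another (categorical) regressor and comparing the variance each term explains.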

  19. Influence of tool geometry and processing parameters on welding defects and mechanical properties for friction stir welding of 6061 Aluminium alloy

    NASA Astrophysics Data System (ADS)

    Daneji, A.; Ali, M.; Pervaiz, S.

    2018-04-01

    Friction stir welding (FSW) is a form of solid state welding process for joining metals, alloys, and selective composites. Over the years, FSW development has provided an improved way of producing welded joints, and it has consequently been accepted in numerous industries such as aerospace, automotive, rail and marine. In FSW, the base metal properties control the material’s plastic flow under the influence of a rotating tool, whereas the process and tool parameters play a vital role in the quality of the weld. In the current investigation, an array of square butt joints of 6061 Aluminium alloy was welded under varying FSW process and tool geometry related parameters, after which the resulting welds were evaluated for the corresponding mechanical properties and welding defects. The study incorporates FSW process and tool parameters such as welding speed, pin height and pin thread pitch as input parameters, while the weld quality related defects and mechanical properties are treated as output parameters. The experimentation paves the way to investigate the correlation between the inputs and the outputs. The correlation between inputs and outputs was used as a tool to predict the optimized FSW process and tool parameters for a desired weld output of the base metals under investigation. The study also provides insight into the effect of the said parameters on welding defects such as wormholes.

  20. The effect of welding parameters on high-strength SMAW all-weld-metal. Part 1: AWS E11018-M

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vercesi, J.; Surian, E.

    Three AWS A5.5-81 all-weld-metal test assemblies were welded with an E11018-M electrode from a standard production batch, varying the welding parameters in such a way as to obtain three energy inputs: high heat input and high interpass temperature (hot), medium heat input and medium interpass temperature (medium), and low heat input and low interpass temperature (cold). Mechanical properties and metallographic studies were performed in the as-welded condition, and it was found that only the tensile properties obtained with the test specimen made with the intermediate energy input satisfied the AWS E11018-M requirements. With the cold specimen, the maximum yield strength was exceeded, and with the hot one, neither the yield nor the tensile minimum strengths were achieved. The elongation and the impact properties were high enough to fulfill the minimum requirements, but the best Charpy-V notch values were obtained with the intermediate energy input. Metallographic studies showed that as the energy input increased, the percentage of the columnar zones decreased, the grain size became larger, and in the as-welded zone there was a slight increase in both acicular ferrite and ferrite with second phase, with a consequent decrease of primary ferrite. These results showed that this type of alloy is very sensitive to the welding parameters and that very precise instructions must be given to secure the desired tensile properties in the all-weld-metal test specimens and under actual working conditions.

  1. Impact of reduced tillage and organic inputs on aggregate stability and earthworm community in a Breton context in France

    NASA Astrophysics Data System (ADS)

    Paillat, Louise; Menasseri, Safya; Busnot, Sylvain; Roucaute, Marc; Benard, Yannick; Morvan, Thierry; Pérès, Guénola

    2017-04-01

    Soil aggregate stability, which refers to the ability of soil aggregates to resist breakdown when disruptive forces (water, wind) are applied, is a good indicator of the sensitivity of soil to crusting and erosion. Among the soil parameters that affect soil stability, organic matter is one of the most important, functioning as a bonding agent between mineral soil particles, but soil organisms such as microorganisms and earthworms are also recognized as efficient agents. However, the relationship between earthworms, fungal hyphae and aggregation is still unclear. In order to assess the influence of these biological agents on aggregate dynamics, we combined a field study and a laboratory experiment. In a long-term experimental trial in Brittany (SOERE PRO-EFELE), we studied the effect of reduced tillage (vs. conventional tillage) combined with organic inputs (vs. mineral inputs) on the earthworm community and soil stability. Aggregate stability was measured at different perturbation intensities: fast wetting (FW), slow wetting (SW) and mechanical breakdown (MB). This study showed that after 4 years of experiments, reduced tillage and organic inputs enhanced aggregate stability. Earthworms modulated the aggregation process: endogeics reduced FW stability (mechanical binding by hyphae) and anecics increased SW stability (aggregate interparticular cohesion and hydrophobicity). Further detail was provided by the laboratory experiment, using microcosms, which compared casts of the endogeic Aporectodea c. caliginosa (NCCT) and the anecic Lumbricus terrestris (LT). The presumed fragmentation of hyphae by endogeics could not be confirmed in NCCT casts. Nevertheless, hyphae were more abundant, and C content and aggregate stability were higher, in LT casts, corroborating the positive contribution of anecics to aggregate stability.

  2. High-performance dc SQUIDs with submicrometer niobium Josephson junctions

    NASA Astrophysics Data System (ADS)

    de Waal, V. J.; Klapwijk, T. M.; van den Hamer, P.

    1983-11-01

    We report on the fabrication and performance of low-noise, all-niobium, thin-film planar dc SQUIDs with submicrometer Josephson junctions. The junctions are evaporated obliquely through a metal shadow evaporation mask, which is made using optical lithography with 0.5 µm tolerance. The Josephson junction barrier is formed by evaporating a thin silicon film followed by oxidation in a glow discharge. The junction parameters can be reproduced within a factor of two. Typical critical currents of the SQUIDs are about 3 µA and the resistances are about 100 Ω. With SQUIDs having an inductance of 1 nH the voltage modulation is at least 60 µV. An intrinsic energy resolution of 4×10-32 J/Hz has been reached. The SQUIDs are coupled to wire-wound input coils or to thin-film input coils. The thin-film input coil consists of a niobium spiral of 20 turns on a separate substrate. In both cases the coil is glued onto a 2-nH SQUID with a coupling efficiency of at least 0.5. Referred to the thin-film input coil, the best coupled energy resolution achieved is 1.2×10-30 J/Hz, measured in a flux-locked loop at frequencies above 10 Hz. As far as we know, this is the best figure achieved with an all-refractory-metal thin-film SQUID. The fabrication technique used is suited to making circuits with SQUID and pickup coil on the same substrate. We describe a compact, planar, first-order gradiometer integrated with a SQUID on a single substrate. The gradient noise of this device is 3×10-12 T m-1. The gradiometer has a size of 12 mm×17 mm, is simple to fabricate, and is suitable for biomedical applications.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamlet, Jason R.; Mayo, Jackson R.

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different from that of the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
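The voting scheme described above can be sketched with toy truth tables: three hypothetical approximate circuits, each deviating from the reference on a different input, and a majority voter that nonetheless recovers the reference output for every possible input.

```python
def reference(a, b):
    # reference circuit: XOR of two input bits
    return a ^ b

def approx(which):
    # each approximate circuit deviates from the reference on exactly one
    # (distinct) input combination
    def circuit(a, b):
        if (a, b) == [(0, 0), (0, 1), (1, 0)][which]:
            return 1 - reference(a, b)
        return reference(a, b)
    return circuit

circuits = [approx(i) for i in range(3)]

def voter(outputs):
    # majority value of the received output signals
    return 1 if sum(outputs) >= 2 else 0

# For every possible input, at most one circuit is wrong, so the majority
# always matches the reference circuit.
for a in (0, 1):
    for b in (0, 1):
        assert voter([c(a, b) for c in circuits]) == reference(a, b)
print("majority output matches the reference circuit on all inputs")
```

The design point is exactly this: individual approximate circuits may be cheaper or faster because they are allowed to be wrong, as long as their error sets are disjoint enough that the vote is always correct.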

  4. Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed

    NASA Astrophysics Data System (ADS)

    Arif, N.; Danoedoro, P.; Hartono

    2017-12-01

    Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents the actual reality. Erosion models are complex because of uncertain data with different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the value of each network input parameter, i.e. hidden layer, learning rate, momentum, and RMS. This study tested the capability of artificial neural networks in the prediction of erosion risk with various input parameters through multiple simulations to get good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on the accuracy compared to other parameters. A small number of iterations can produce good accuracy if the combination of other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, which occurred in the ANN 14 simulation with a combination of network input parameters of 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15000 iterations. The ANN training accuracy was not influenced by the number of channels, namely the input dataset (erosion factors), nor by the data dimensions; rather, it was determined by changes in the network parameters.

  5. Hearing aids and music.

    PubMed

    Chasin, Marshall; Russo, Frank A

    2004-01-01

    Historically, the primary concern for hearing aid design and fitting has been optimization for speech inputs. However, other types of inputs are increasingly being investigated, and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression issues (both compression ratio and knee-points), and the number of channels can all deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set other parameters such as noise reduction and feedback control mechanisms. Regardless of the existence of a "music program," unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions.

  6. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach

    PubMed Central

    Zeng, Xiaozheng; McGough, Robert J.

    2009-01-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three-dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two-dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
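    The core of the angular spectrum approach is an FFT-based transfer function applied to the sampled input plane. The following minimal NumPy sketch (our illustration, not the authors' code) propagates an input pressure plane to a parallel output plane; evanescent components are attenuated automatically by the complex square root.

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, wavelength, z):
    """Propagate a sampled 2-D pressure plane p0 a distance z (scalar, lossless)."""
    k = 2 * np.pi / wavelength
    ny, nx = p0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # evanescent -> imaginary
    H = np.exp(1j * kz * z)                               # spectral propagator
    return np.fft.ifft2(np.fft.fft2(p0) * H)

# A uniform (plane-wave) input acquires only the phase exp(i*k*z); with
# z = 10 wavelengths that phase is exp(i*20*pi) = 1.
wavelength = 0.5e-3
p0 = np.ones((32, 32), dtype=complex)
p1 = angular_spectrum_propagate(p0, dx=1e-3, wavelength=wavelength, z=10 * wavelength)
```

In practice the input plane would come from a nearfield computation over a grid slightly larger than the array aperture, as the abstract recommends.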

  7. Performance Optimizing Multi-Objective Adaptive Control with Time-Varying Model Reference Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2017-01-01

    This paper presents a new adaptive control approach that involves a performance optimization objective. The problem is cast as a multi-objective optimal control problem. The control synthesis involves the design of a performance optimizing controller from a subset of control inputs. The effect of the performance optimizing controller is to introduce an uncertainty into the system that can degrade tracking of the reference model. An adaptive controller from the remaining control inputs is designed to reduce the effect of the uncertainty while maintaining a notion of performance optimization in the adaptive control system.

  8. Achromatical Optical Correlator

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Liu, Hua-Kuang

    1989-01-01

    Signal-to-noise ratio exceeds that of monochromatic correlator. Achromatic optical correlator uses multiple-pinhole diffraction of dispersed white light to form superposed multiple correlations of input and reference images in output plane. Set of matched spatial filters made by multiple-exposure holographic process, each exposure using suitably-scaled input image and suitable angle of reference beam. Recording-aperture mask translated to appropriate horizontal position for each exposure. Noncoherent illumination suitable for applications involving recognition of color and determination of scale. When fully developed, achromatic correlators will be useful for recognition of patterns; for example, in industrial inspection and in searches for selected features in aerial photographs.

  9. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method for deriving a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations confirms the accuracy of the PDF method.
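    As a minimal illustration of the kind of time-correlated random input involved (our sketch with hypothetical parameters, not the paper's generator model), the Ornstein-Uhlenbeck process below produces exponentially correlated noise; a Monte Carlo ensemble of such paths is the kind of benchmark against which a PDF-method solution would be compared.

```python
import numpy as np

def ou_paths(theta=1.0, sigma=1.0, dt=0.01, steps=2000, n_paths=2000, seed=0):
    """Euler-Maruyama ensemble of Ornstein-Uhlenbeck ("colored noise") paths."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    for _ in range(steps):
        # dx = -theta*x dt + sigma dW : mean-reverting, exponentially correlated
        x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

x_end = ou_paths()
# Analytically, the stationary variance of this process is sigma**2/(2*theta) = 0.5,
# so the ensemble variance should settle near that value.
```

A PDF-method calculation would instead evolve the density of x deterministically, avoiding the path-by-path sampling shown here.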

  10. Explicit least squares system parameter identification for exact differential input/output models

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
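    The integrated equation-error idea can be illustrated on a first-order example (an assumed toy model, not one of the paper's two examples): for x' = -a·x + b·u, integrating from 0 to t gives x(t) - x(0) = -a·∫x dτ + b·∫u dτ, which is linear in (a, b) and requires no differentiation of the measured signals. Note that this simple sketch uses the measured x(0) directly, whereas the paper's formulation also obviates unknown initial and boundary conditions.

```python
import numpy as np

a_true, b_true = 2.0, 3.0
dt = 0.001
t = np.arange(0, 5, dt)
u = np.sin(t)

# Simulate the "measured" response of the true system with forward Euler.
x = np.empty_like(t)
x[0] = 1.0
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * (-a_true * x[k] + b_true * u[k])

# Integrated equation error: x(t) - x(0) = -a*Int(x) + b*Int(u), linear in (a, b).
Ix = np.cumsum(x) * dt
Iu = np.cumsum(u) * dt
A = np.column_stack([-Ix, Iu])
theta, *_ = np.linalg.lstsq(A, x - x[0], rcond=None)
a_est, b_est = theta
```

The same pattern extends to higher-order input/output differential operator models, with repeated integrals replacing the single integral.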

  11. Femtosecond soliton source with fast and broad spectral tunability.

    PubMed

    Masip, Martin E; Rieznik, A A; König, Pablo G; Grosz, Diego F; Bragas, Andrea V; Martinez, Oscar E

    2009-03-15

    We present a complete set of measurements and numerical simulations of a femtosecond soliton source with fast and broad spectral tunability and nearly constant pulse width and average power. Solitons generated in a photonic crystal fiber, at the low-power coupling regime, can be tuned in a broad range of wavelengths, from 850 to 1200 nm using the input power as the control parameter. These solitons keep almost constant time duration (approximately 40 fs) and spectral widths (approximately 20 nm) over the entire measured spectra regardless of input power. Our numerical simulations agree well with measurements and predict a wide working wavelength range and robustness to input parameters.

  12. Effect of Spatial Locality Prefetching on Structural Locality

    DTIC Science & Technology

    1991-12-01

    Pollution module calculates the SLC and CAM cache pollution percentages. And finally, the Generate Reference Frequency List module produces the output...3.2.5 Generate Reference Frequency List 3.2.6 Each program module in the structure chart is mapped into an Ada package. By performing this encapsulation...call routine to generate reference -- frequency list -- end if -- end loop -- close input, output, and reference files end Cache Simulator Figure 3.5

  13. Model-oriented review and multi-body simulation of the ossicular chain of the human middle ear.

    PubMed

    Volandri, G; Di Puccio, F; Forte, P; Manetti, S

    2012-11-01

    The ossicular chain of the human middle ear has a key role in sound conduction, since it transfers vibrations from the tympanic membrane to the cochlea, connecting the outer and inner parts of the hearing organ. This study first describes the main anatomical features of the middle ear, to introduce a detailed survey of its biomechanics focused on model development, with a collection of geometric, inertial, and mechanical/material parameters. The joint issues are discussed in particular from the perspective of developing a model of the middle ear that is both explanatory and predictive. This survey underlines the remarkable dispersion of the data, due in part to the lack of standardization of experimental techniques and conditions. Subsequently, a 3D multi-body model of the ossicular chain and other structures of the middle ear is described. This approach is justified because the ossicles have been shown to behave as rigid bodies in the human hearing range, and it was preferred to the more widely used finite element approach because it simplifies model development and improves joint modeling. The displacement of the umbo (a reference point on the tympanic membrane) in the 0.3-6 kHz frequency range was defined as the model input, and the stapes footplate displacement as the output. A parameter identification procedure was used to find parameter values that reproduce experimental and numerical reference curves taken from the literature. This simple model may represent a valid alternative to more complex models and may provide a useful tool to simulate pathological/post-surgical/post-traumatic conditions and to evaluate ossicular replacement prostheses.

  14. Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Zhangshuan; Bacon, Diana H.; Engel, David W.

    2014-08-01

    In this study, we apply an uncertainty quantification (UQ) framework to CO2 sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO2 into a shallow unconfined aquifer in an urban area; in another scenario, we study the effects of reservoir heterogeneity on CO2 migration. We combine various sampling approaches (quasi-Monte Carlo, probabilistic collocation, and adaptive sampling) in order to reduce the number of forward calculations while trying to fully explore the input parameter space and quantify the input uncertainty. The CO2 migration is simulated using the PNNL-developed simulator STOMP-CO2e (the water-salt-CO2 module). For computationally demanding simulations with 3D heterogeneity fields, we combined the framework with a scalable version module, eSTOMP, as the forward modeling simulator. We built response curves and response surfaces of model outputs with respect to input parameters, to look at the individual and combined effects, and to identify and rank the significance of the input parameters.
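    Quasi-Monte Carlo exploration of an input parameter space typically relies on low-discrepancy sequences. The sketch below (our illustration; the study does not specify its generator) implements a basic Halton sequence, whose points fill the unit hypercube far more evenly than pseudo-random draws.

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    f, result = 1.0, 0.0
    while i > 0:
        f /= base
        result += f * (i % base)  # next digit, reflected about the radix point
        i //= base
    return result

def halton(n, bases=(2, 3)):
    """First n points of a Halton sequence in len(bases) dimensions."""
    return [[radical_inverse(i, b) for b in bases] for i in range(1, n + 1)]

points = halton(4)
```

Each coordinate can then be rescaled from [0, 1) to the physical range of the corresponding uncertain input parameter before running the forward simulator.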

  15. Vastly accelerated linear least-squares fitting with numerical optimization for dual-input delay-compensated quantitative liver perfusion mapping.

    PubMed

    Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal

    2018-04-01

    To propose an efficient algorithm to perform dual input compartment modeling for generating perfusion maps in the liver. We implemented whole field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
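    The reported speedup comes largely from replacing per-voxel fits with one shared linear solve. The toy sketch below (a simplification with synthetic data; the paper's model is a delay-compensated dual-input compartment model) shows the pattern: when all voxels share the same design matrix, `np.linalg.lstsq` fits every voxel in a single multi-right-hand-side call.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_vox, n_par = 120, 5000, 3

A = rng.standard_normal((T, n_par))           # shared design matrix (from inputs)
P_true = rng.standard_normal((n_par, n_vox))  # per-voxel "kinetic" parameters
Y = A @ P_true + 0.01 * rng.standard_normal((T, n_vox))  # noisy voxel time courses

# One call fits all voxels simultaneously (the "whole-field" solve),
# instead of looping a nonlinear fit over each of the n_vox voxels.
P_est, *_ = np.linalg.lstsq(A, Y, rcond=None)
```

The per-voxel loop this replaces scales linearly with the voxel count, which is where the order-of-100 speedups in such whole-field formulations come from.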

  16. FAST: Fitting and Assessment of Synthetic Templates

    NASA Astrophysics Data System (ADS)

    Kriek, Mariska; van Dokkum, Pieter G.; Labbé, Ivo; Franx, Marijn; Illingworth, Garth D.; Marchesini, Danilo; Quadri, Ryan F.; Aird, James; Coil, Alison L.; Georgakakis, Antonis

    2018-03-01

    FAST (Fitting and Assessment of Synthetic Templates) fits stellar population synthesis templates to broadband photometry and/or spectra. FAST is compatible with the photometric redshift code EAzY (ascl:1010.052) when fitting broadband photometry; it uses the photometric redshifts derived by EAzY, and the input files (for example, the photometric catalog and master filter file) are the same. FAST fits spectra in combination with broadband photometric data points, or simultaneously fits two components, allowing for an AGN contribution in addition to the host galaxy light. Depending on the input parameters, FAST outputs the best-fit redshift, age, dust content, star formation timescale, metallicity, stellar mass, star formation rate (SFR), and their confidence intervals. Though some of FAST's functions overlap with those of HYPERZ (ascl:1108.010), it differs by fitting fluxes instead of magnitudes, allowing the user to completely define the grid of input stellar population parameters and to easily input photometric redshifts and their confidence intervals, and calculating calibrated confidence intervals for all parameters. Note that FAST is not a photometric redshift code, though it can be used as one.

  17. Reference intervals for 24 laboratory parameters determined in 24-hour urine collections.

    PubMed

    Curcio, Raffaele; Stettler, Helen; Suter, Paolo M; Aksözen, Jasmin Barman; Saleh, Lanja; Spanaus, Katharina; Bochud, Murielle; Minder, Elisabeth; von Eckardstein, Arnold

    2016-01-01

    Reference intervals for many laboratory parameters determined in 24-h urine collections are either not publicly available, based on small numbers, not sex-specific, or not from a representative sample. Osmolality and concentrations or enzymatic activities of sodium, potassium, chloride, glucose, creatinine, citrate, cortisol, pancreatic α-amylase, total protein, albumin, transferrin, immunoglobulin G, α1-microglobulin, α2-macroglobulin, as well as porphyrins and their precursors (δ-aminolevulinic acid and porphobilinogen), were determined in 241 24-h urine samples of a population-based cohort of asymptomatic adults (121 men and 120 women). For 16 of these 24 parameters, creatinine-normalized ratios were calculated based on 24-h urine creatinine. The reference intervals for these parameters were calculated according to the CLSI C28-A3 statistical guidelines. In contrast to most published reference intervals, which do not stratify by sex, reference intervals of 12 of the 24 laboratory parameters in 24-h urine collections, and of eight of the 16 parameters expressed as creatinine-normalized ratios, differed significantly between men and women. For six parameters calculated as 24-h urine excretion and four parameters calculated as creatinine-normalized ratios, no reference intervals had been published before. For some parameters we found significant and relevant deviations from previously reported reference intervals, most notably for 24-h urine cortisol in women. Ten 24-h urine parameters showed weak or moderate sex-specific correlations with age. By applying up-to-date analytical methods and clinical chemistry analyzers to 24-h urine collections from a large population-based cohort, we provide the most comprehensive set to date of sex-specific reference intervals calculated according to CLSI guidelines for parameters determined in 24-h urine collections.

  18. Simulations of Brady's-Type Fault Undergoing CO2 Push-Pull: Pressure-Transient and Sensitivity Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Yoojin; Doughty, Christine

    Input and output files used for fault characterization through numerical simulation using iTOUGH2. The synthetic data for the push period are generated by running a forward simulation (input parameters are provided in iTOUGH2 Brady GF6 Input Parameters.txt [InvExt6i.txt]). In general, the permeability of the fault gouge, damage zone, and matrix are assumed to be unknown. The input and output files are for the inversion scenario where only pressure transients are available at the monitoring well located 200 m above the injection well and only the fault gouge permeability is estimated. The input files are named InvExt6i, INPUT.tpl, FOFT.ins, CO2TAB, and the output files are InvExt6i.out, pest.fof, and pest.sav (names below are display names). The table graphic in the data files below summarizes the inversion results, and indicates the fault gouge permeability can be estimated even if imperfect guesses are used for matrix and damage zone permeabilities, and permeability anisotropy is not taken into account.

  19. Neural network-based distributed attitude coordination control for spacecraft formation flying with input saturation.

    PubMed

    Zou, An-Min; Kumar, Krishna Dev

    2012-07-01

    This brief considers the attitude coordination control problem for spacecraft formation flying when only a subset of the group members has access to the common reference attitude. A quaternion-based distributed attitude coordination control scheme is proposed with consideration of the input saturation and with the aid of the sliding-mode observer, separation principle theorem, Chebyshev neural networks, smooth projection algorithm, and robust control technique. Using graph theory and a Lyapunov-based approach, it is shown that the distributed controller can guarantee the attitude of all spacecraft to converge to a common time-varying reference attitude when the reference attitude is available only to a portion of the group of spacecraft. Numerical simulations are presented to demonstrate the performance of the proposed distributed controller.

  20. Program for creating an operating system generation cross reference index (SGINDEX)

    NASA Technical Reports Server (NTRS)

    Barth, C. W.

    1972-01-01

    Computer program to collect key data from Stage Two input of OS/360 system and to prepare formatted listing of index entries collected is discussed. Program eliminates manual paging through system output by providing comprehensive cross reference.

  1. A multi-source probabilistic hazard assessment of tephra dispersal in the Neapolitan area

    NASA Astrophysics Data System (ADS)

    Sandri, Laura; Costa, Antonio; Selva, Jacopo; Folch, Arnau; Macedonio, Giovanni; Tonini, Roberto

    2015-04-01

    In this study we present the results obtained from a long-term Probabilistic Hazard Assessment (PHA) of tephra dispersal in the Neapolitan area. The usual PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping eruption sizes and possible vent positions into a limited number of classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. The PHA then results from combining simulations covering different volcanological and meteorological conditions through weights associated with their specific probability of occurrence. However, volcanological parameters (i.e., erupted mass, eruption column height, eruption duration, bulk granulometry, fraction of aggregates) typically encompass a wide range of values. Because of such natural variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. In the present study, we use a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with a specific Probability Density Function, and meteorological and volcanological input values are chosen using a stratified sampling method. This procedure allows hazard to be quantified without relying on the definition of scenarios, thus avoiding potential biases introduced by selecting single representative scenarios. Embedding this procedure into the Bayesian Event Tree scheme enables quantification of the tephra fall PHA and its epistemic uncertainties. We have applied this scheme to analyze the long-term tephra fall PHA from Vesuvius and Campi Flegrei in a multi-source paradigm. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used for exploring different meteorological conditions.
The results obtained show that PHA accounting for the whole natural variability is broadly consistent with previous probability maps elaborated for Vesuvius and Campi Flegrei on the basis of single representative scenarios, but shows significant differences. In particular, the area characterized by a 300 kg/m2-load exceedance probability larger than 5%, accounting for the whole range of variability (that is, from small violent strombolian to plinian eruptions), is similar to that displayed in the maps based on the medium magnitude reference eruption, but of smaller extent. This is due to the relatively higher weight of the small magnitude eruptions considered in this study but neglected in the reference scenario maps. On the other hand, in our new maps the area characterized by a 300 kg/m2-load exceedance probability larger than 1% is much larger than that of the medium magnitude reference eruption, due to the contribution of plinian eruptions at lower probabilities, again neglected in the reference scenario maps.
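    The stratified sampling step can be sketched as follows (our minimal illustration; the parameter PDF here is an assumed exponential distribution, not one of the study's volcanological distributions): split [0, 1] into equal-probability strata, draw one uniform value per stratum, and map through the inverse CDF.

```python
import numpy as np

def stratified_sample(n_strata, inv_cdf, rng):
    """One draw per equal-probability stratum, mapped through the inverse CDF."""
    u = (np.arange(n_strata) + rng.random(n_strata)) / n_strata
    return inv_cdf(u)

rng = np.random.default_rng(0)
inv_cdf = lambda u: -np.log1p(-u)  # inverse CDF of an Exp(1) parameter (assumed)
samples = stratified_sample(1000, inv_cdf, rng)
```

Compared with plain Monte Carlo, this guarantees that every probability decile of the parameter distribution (including low-probability tails such as large eruptions) is represented in the simulation set.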

  2. Evaluation of trade influence on economic growth rate by computational intelligence approach

    NASA Astrophysics Data System (ADS)

    Sokolov-Mladenović, Svetlana; Milovančević, Milos; Mladenović, Igor

    2017-01-01

    In this study, the influence of trade parameters on economic growth forecasting accuracy was analyzed. A computational intelligence method was used for the analysis, since such methods can handle highly nonlinear data. It is known that economic growth can be modeled on the basis of different trade parameters. In this study five input parameters were considered: trade in services, exports of goods and services, imports of goods and services, trade, and merchandise trade. All these parameters were expressed as percentages of gross domestic product (GDP). The main goal was to select the parameters with the most impact on the economic growth percentage. GDP was used as the economic growth indicator. Results show that imports of goods and services have the highest influence on the economic growth forecasting accuracy.

  3. Application and optimization of input parameter spaces in mass flow modelling: a case study with r.randomwalk and r.ranger

    NASA Astrophysics Data System (ADS)

    Krenn, Julia; Zangerl, Christian; Mergili, Martin

    2017-04-01

    r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces represent a possibility to move away from discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automate the work flow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC).
This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions of a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist in (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) in applying the same strategy to the more complex, dynamic model r.avaflow.
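    The impact indicator index itself is straightforward to compute once the runs are done. The sketch below is purely illustrative (the "runout model" is a dummy one-liner, not r.randomwalk): each combination of the two friction parameters flags the pixels it impacts, and III at each pixel is the fraction of combinations predicting impact there.

```python
import itertools
import numpy as np

mu_values = [0.1, 0.2, 0.3]    # basal friction coefficient subrange (assumed)
md_values = [200.0, 400.0]     # mass-to-drag ratio subrange (assumed)
distance = np.arange(10.0)     # 1-D "pixels" along the flow path

impacts = []
for mu, md in itertools.product(mu_values, md_values):
    runout = md * 0.01 / mu    # dummy runout length, NOT a real flow model
    impacts.append(distance <= runout)

# III per pixel: fraction of parameter combinations predicting an impact.
iii = np.mean(impacts, axis=0)
```

An optimization such as r.ranger's then scores the resulting III pattern (e.g. via AUROC against an observed impact map) for each candidate subrange combination.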

  4. Performance of Solar Proxy Options of IRI-Plas Model for Equinox Seasons

    NASA Astrophysics Data System (ADS)

    Sezen, Umut; Gulyaeva, Tamara L.; Arikan, Feza

    2018-02-01

    International Reference Ionosphere (IRI) is the most acclaimed climatic model of the ionosphere. Since 2009, the range of the IRI model has been extended to the Global Positioning System (GPS) orbital height of 20,000 km in the plasmasphere. The new model, which is called IRI extended to Plasmasphere (IRI-Plas), can take as input not only the ionosonde foF2 and hmF2 but also the GPS total electron content (TEC). IRI-Plas has been provided at www.ionolab.org, where online computation of ionospheric parameters is accomplished through a user-friendly interface. The solar proxies that are available in IRI-Plas can be listed as sunspot number (SSN1), SSN2, F10.7, global electron content (GEC), TEC, IG, Mg II, Lyman-α, and GEC_RZ. In this study, ionosonde foF2 data are compared with IRI-Plas foF2 values with the International Radio Consultative Committee (CCIR) and International Union of Radio Science (URSI) model choices for each solar proxy, with or without the GPS-TEC input, for the equinox months of October 2011 and March 2015. It has been observed that the best fitting model choices in the Root Mean Square (RMS) and Normalized RMS (NRMS) sense are the Jet Propulsion Laboratory global ionospheric map TEC input with the Lyman-α solar proxy option for both months. The input of TEC definitely lowers the difference between the model and ionosonde foF2 values. The IG and Mg II solar proxies produce similar model foF2 values, and they usually are the second and third best fits to the ionosonde foF2 for the midlatitude ionosphere. In high-latitude regions, Jet Propulsion Laboratory global ionospheric map TEC inputs to IRI-Plas with the Lyman-α, GEC_RZ, and TEC solar proxies are the best choices. In the equatorial region, the best fitting solar proxies are IG, Lyman-α, and Mg II.

  5. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection

    PubMed Central

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-01-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABMs) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain an appropriate estimation of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method under the framework of history matching, is developed. A novel parameter estimation method that incorporates the experimental data for the simulator ABM during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model using the input and output data of the ABM, and plays the role of an emulator during history matching. Next, we reduce the input space of the parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimation of the model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy but also favorable computational efficiency. PMID:29194393

  6. Developing a Novel Parameter Estimation Method for Agent-Based Model in Immune System Simulation under the Framework of History Matching: A Case Study on Influenza A Virus Infection.

    PubMed

    Li, Tingting; Cheng, Zhengguo; Zhang, Le

    2017-12-01

    Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABMs) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain an appropriate estimation of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation, integrating the ABM and a regression method under the framework of history matching, is developed. A novel parameter estimation method that incorporates the experimental data for the simulator ABM during the procedure is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model using the input and output data of the ABM, and plays the role of an emulator during history matching. Next, we reduce the input space of the parameters by introducing an implausibility measure to discard implausible input values. Finally, the estimation of the model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy but also favorable computational efficiency.
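    The implausibility cut at the heart of history matching can be sketched as follows (an illustrative toy with a hypothetical linear emulator, not the paper's GAM emulator): a candidate input is retained only if the emulator's prediction lies within about three total standard deviations of the observation.

```python
import numpy as np

def non_implausible(x, emu_mean, emu_var, z_obs, obs_var, threshold=3.0):
    """Keep inputs with I(x) = |z_obs - E[f(x)]| / sqrt(obs_var + emu_var) <= 3."""
    impl = np.abs(z_obs - emu_mean) / np.sqrt(obs_var + emu_var)
    return x[impl <= threshold]

x = np.linspace(0, 10, 101)          # candidate parameter values
emu_mean = 2.0 * x                   # toy emulator prediction: E[f(x)] = 2x
emu_var = np.full_like(x, 0.1)       # emulator uncertainty (assumed)
kept = non_implausible(x, emu_mean, emu_var, z_obs=10.0, obs_var=0.15)
```

Subsequent optimization (PSO in the paper) then searches only within the retained, non-implausible region of the parameter space.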

  7. Relative Water Uptake as a Criterion for the Design of Trickle Irrigation Systems

    NASA Astrophysics Data System (ADS)

    Communar, G.; Friedman, S. P.

    2008-12-01

    Previously derived analytical solutions to the 2- and 3-dimensional water flow problems describing trickle irrigation are not widely used in practice because those formulations either ignore root water uptake or treat it as a known input. In this lecture we describe a new modeling approach and demonstrate its applicability for designing the geometry of trickle irrigation systems, namely the spacing between the emitters and drip lines. The major difference between our approach and previous modeling approaches is that we treat the root water uptake as the unknown solution of the problem and not as a known input. We postulate that the solution to the steady-state water flow problem with a root sink acting under constant, maximum suction defines an upper bound on the relative water uptake (water use efficiency) in actual transient situations, and we propose to use it as a design criterion. Following previous derivations of analytical solutions, we assume that the soil hydraulic conductivity increases exponentially with the matric head, which allows linearization of the Richards equation formulated in terms of the Kirchhoff matric flux potential. Since the transformed problem is linear, the relative water uptake for any given configuration of point or line sources and sinks can be calculated by superposition of the Green's functions of all relevant water sources and sinks. In addition to evaluating the relative water uptake, we also derived analytical expressions for the stream functions. The stream lines separating the water uptake zone from the percolating water provide insight into the dependence of the shape and extent of the actual rooting zone on the source-sink geometry and soil properties. 
    Just three system parameters, Gardner's (1958) alpha as a soil type quantifier and the depth and diameter of the pre-assumed active root zone, are sufficient to characterize the interplay between capillary and gravitational effects on water flow and the competition between the processes of root water uptake and percolation. To account also for evaporation from the soil surface, when significant, one more parameter is required, adopting the solution of Lomen and Warrick (1978).
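
    The linearization mentioned above rests on the Gardner (1958) exponential conductivity model, under which the Kirchhoff matric flux potential has a closed form. A minimal sketch (parameter values are hypothetical, for illustration only):

```python
import math

def gardner_K(h, Ks, alpha):
    """Gardner (1958) exponential conductivity K(h) = Ks * exp(alpha * h),
    with h <= 0 the matric head; alpha quantifies the soil's capillarity."""
    return Ks * math.exp(alpha * h)

def kirchhoff_potential(h, Ks, alpha):
    """Kirchhoff matric flux potential Phi = integral of K dh = K(h)/alpha,
    the change of variable that linearizes the steady Richards equation
    for this exponential K(h)."""
    return gardner_K(h, Ks, alpha) / alpha

# Hypothetical soil: Ks = 0.1 m/day, alpha = 2.0 1/m, matric head h = -1 m.
Phi = kirchhoff_potential(-1.0, Ks=0.1, alpha=2.0)
```

    Because the problem becomes linear in Phi, contributions of individual point or line sources and sinks can simply be summed, which is the superposition step described in the abstract.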

  8. Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.; Ratnayake, Nalin A.

    2011-01-01

    As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.
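
    One standard way to construct mutually orthogonal square wave inputs is through Walsh functions, i.e. the rows of a Sylvester-type Hadamard matrix. This is a generic sketch of the idea, not the flight-test implementation:

```python
def hadamard(n):
    """Sylvester construction: the rows of H are mutually orthogonal +/-1
    square-wave sequences (Walsh functions). n must be a power of two."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H = hadamard(4)

# Orthogonality check: the dot product of any two distinct rows is zero,
# so least-squares estimates of individual control-surface effectiveness
# excited by these sequences are de-correlated.
dots = [sum(a * b for a, b in zip(H[i], H[j]))
        for i in range(4) for j in range(4) if i != j]
```

    In practice each row would be stretched in time and scaled to a surface-deflection amplitude before being applied to its control effector.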

  9. Simulation of wind wave growth with reference source functions

    NASA Astrophysics Data System (ADS)

    Badulin, Sergei I.; Zakharov, Vladimir E.; Pushkarev, Andrei N.

    2013-04-01

    We present results of extensive simulations of wind wave growth with so-called reference source functions in the right-hand side of the Hasselmann equation, written as

    ∂N_k/∂t = S_nl + S_in + S_diss.    (1)

    First, we use Webb's algorithm [8] for calculating the exact nonlinear transfer function S_nl. Second, we consider a family of wind input functions in accordance with a recent consideration [9]:

    S_in = γ(k) N_k,  γ(k) = ε_0 ω_0 (ω/ω_0)^s f_in(θ).    (2)

    The function f_in(θ) describes the dependence on angle θ. The parameters in (2) are tunable and determine the magnitude (parameters ε_0, ω_0) and the wave growth rate exponent s [9]. The exponent s plays a key role in this study, being responsible for the reference scenarios of wave growth: s = 4/3 gives linear growth of wave momentum, s = 2 linear growth of wave energy, and s = 8/3 a constant rate of wave action growth. Note that these values are close to the ones of conventional parameterizations of wave growth rates (e.g. s = 1 for [7] and s = 2 for [5]). The dissipation function S_diss is chosen to provide the Phillips spectrum E(ω) ~ ω^(-5) in the high-frequency range [3] (the parameter ω_diss fixes a dissipation scale of wind waves):

    S_diss = C_diss μ_w^4 ω N(k) θ(ω − ω_diss).    (3)

    Here the frequency-dependent wave steepness μ_w^2 = E(ω,θ) ω^5 / g^2 makes this function heavily nonlinear and provides a remarkable property of stationary solutions at high frequencies: the dissipation coefficient C_diss has to take a certain value to reproduce the observed power-law tails close to the Phillips spectrum E(ω) ~ ω^(-5). Our recent estimates [3] give C_diss ≈ 2.0. The Hasselmann equation (1) with the new functions S_in, S_diss (2,3) has a family of self-similar solutions of the same form as previously studied models [1,3,9] and provides a solid basis for further theoretical and numerical study of wave evolution under the action of all the physical mechanisms: wind input, wave dissipation and nonlinear transfer.
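
    The role of the exponent s in the wind input term can be illustrated numerically. A minimal sketch of the growth-rate law γ(k) = ε_0 ω_0 (ω/ω_0)^s, with placeholder parameter values (the abstract does not give numbers):

```python
# Hypothetical parameter values, for illustration only.
EPS0, OMEGA0, S = 1e-5, 1.0, 2.0

def gamma(omega, eps0=EPS0, omega0=OMEGA0, s=S):
    """Wind input growth rate gamma(k) = eps0 * omega0 * (omega/omega0)**s."""
    return eps0 * omega0 * (omega / omega0) ** s

# The exponent s sets the reference growth scenario: doubling omega
# multiplies the rate by 2**s (here 4, for s = 2, linear energy growth).
ratio = gamma(2.0) / gamma(1.0)
```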
Simulations of duration- and fetch-limited wind wave growth have been carried out within the above model setup to check its conformity with theoretical predictions, previous simulations [2,6,9] and experimental parameterizations of wave spectra [1,4], and to specify the tunable parameters of terms (2,3). These simulations showed realistic spatio-temporal scales of wave evolution and spectral shaping close to conventional parameterizations [e.g. 4]. An additional important feature of the numerical solutions is a saturation of the frequency-dependent wave steepness μ_w in the short-wave (high-frequency) range. The work was supported by the Russian government contract No.11.934.31.0035, Russian Foundation for Basic Research grant 11-05-01114-a and ONR grant N00014-10-1-0991. References: [1] S. I. Badulin, A. V. Babanin, D. Resio, and V. Zakharov. Weakly turbulent laws of wind-wave growth. J. Fluid Mech., 591:339-378, 2007. [2] S. I. Badulin, A. N. Pushkarev, D. Resio, and V. E. Zakharov. Self-similarity of wind-driven seas. Nonl. Proc. Geophys., 12:891-946, 2005. [3] S. I. Badulin and V. E. Zakharov. New dissipation function for weakly turbulent wind-driven seas. ArXiv e-prints, (1212.0963), December 2012. [4] M. A. Donelan, J. Hamilton, and W. H. Hui. Directional spectra of wind-generated waves. Phil. Trans. Roy. Soc. Lond. A, 315:509-562, 1985. [5] M. A. Donelan and W. J. Pierson Jr. Radar scattering and equilibrium ranges in wind-generated waves with application to scatterometry. J. Geophys. Res., 92(C5):4971-5029, 1987. [6] E. Gagnaire-Renou, M. Benoit, and S. I. Badulin. On weakly turbulent scaling of wind sea in simulations of fetch-limited growth. J. Fluid Mech., 669:178-213, 2011. [7] R. L. Snyder, F. W. Dobson, J. A. Elliot, and R. B. Long. Array measurements of atmospheric pressure fluctuations above surface gravity waves. J. Fluid Mech., 102:1-59, 1981. [8] D. J. Webb. Non-linear transfers between sea waves. Deep Sea Res., 25:279-298, 1978. [9] V. E. Zakharov, D. Resio, and A. N.
Pushkarev. New wind input term consistent with experimental, theoretical and numerical considerations. ArXiv e-prints, (1212.1069), December 2012.

  10. HEAT INPUT AND POST WELD HEAT TREATMENT EFFECTS ON REDUCED-ACTIVATION FERRITIC/MARTENSITIC STEEL FRICTION STIR WELDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Wei; Chen, Gaoqiang; Chen, Jian

    Reduced-activation ferritic/martensitic (RAFM) steels are an important class of structural materials for fusion reactor internals developed in recent years because of their improved irradiation resistance. However, they can suffer from welding-induced property degradation. In this paper, a solid-phase joining technology, friction stir welding (FSW), was adopted to join the RAFM steel Eurofer 97, and different FSW parameters/heat inputs were chosen to produce welds. FSW response parameters, joint microstructures and microhardness were investigated to reveal relationships among welding heat input, weld structure characterization and mechanical properties. In general, FSW heat input results in high hardness inside the stir zone, mostly due to a martensitic transformation. It is possible to produce friction stir welds similar to, but not with exactly the same, base metal hardness when using low power input, because of other hardening mechanisms. Further, post weld heat treatment (PWHT) is a very effective way to reduce FSW stir zone hardness values.

  11. On the reliability of voltage and power as input parameters for the characterization of high power ultrasound applications

    NASA Astrophysics Data System (ADS)

    Haller, Julian; Wilkens, Volker

    2012-11-01

    For power levels up to 200 W and sonication times up to 60 s, the electrical power, the voltage and the electrical impedance (more exactly, the ratio of RMS voltage and RMS current) have been measured for a piezocomposite high intensity therapeutic ultrasound (HITU) transducer with an integrated matching network, two piezoceramic HITU transducers with external matching networks, and a passive dummy 50 Ω load. The electrical power and the voltage were measured during high power application with an inline power meter and an RMS voltage meter, respectively, and the complex electrical impedance was measured indirectly with a current probe, a 100:1 voltage probe and a digital scope. The results clearly show that the input RMS voltage and the input RMS power change unequally during the application. Hence, specifying only the electrical input power or only the voltage as the input parameter may not be sufficient for reliable characterization of ultrasound transducers for high power applications in some cases.
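
    Why voltage and power can drift apart follows from basic circuit theory: the real power delivered at fixed RMS voltage depends on the load impedance, which changes during sonication (e.g. as the transducer heats). A minimal sketch with hypothetical impedance values:

```python
def rms_power(v_rms, Z):
    """Real power delivered to a complex impedance Z by an RMS drive voltage:
    P = v_rms**2 * Re(Z) / |Z|**2. If Z drifts during the application,
    constant voltage no longer implies constant power (and vice versa)."""
    return v_rms ** 2 * Z.real / abs(Z) ** 2

# Hypothetical impedances of a transducer before and during heating.
Z_cold, Z_hot = complex(50, 0), complex(40, 15)
P_cold = rms_power(100.0, Z_cold)
P_hot = rms_power(100.0, Z_hot)
```

    The same 100 V RMS drive delivers different real power to the two impedance states, which is exactly why quoting only one of the two quantities can be ambiguous.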

  12. Thermophysical properties of hydrophobised lime plaster - Experimental analysis of moisture effect

    NASA Astrophysics Data System (ADS)

    Pavlíková, Milena; Pernicová, Radka; Pavlík, Zbyšek

    2016-07-01

    Lime plasters are the most popular finishing materials in the renewal of historical buildings and cultural monuments. Because of their limited durability, new materials and design solutions are investigated in order to improve plaster performance in harmful environmental conditions. For practical use, the plaster's mechanical resistance and compatibility with the substrate are the most decisive material parameters. However, the plaster's hygric and thermal parameters, which affect the overall hygrothermal function of the renovated structures, are also of particular importance. On this account, the effect of moisture content on the thermophysical properties of newly designed lime plasters containing a hydrophobic admixture is analysed in this paper. For comparative purposes, reference lime and cement-lime plasters are tested as well. Basic characterization of the tested materials is done using bulk density, matrix density, and porosity measurements. Thermal conductivity and volumetric heat capacity over a broad range of moisture content are experimentally assessed using a transient impulse method. The obtained data reveal a significant increase of both studied thermal parameters with increasing moisture content and give information on the plasters' behaviour in a highly humid environment and/or in the case of possible direct contact with liquid water. The assessed material parameters will be stored in a material database, where they can find use as input data for computational modelling of coupled heat and moisture transport in this type of porous building material.

  13. A water quality index model using stepwise regression and neural networks models for the Piabanha River basin in Rio de Janeiro, Brazil

    NASA Astrophysics Data System (ADS)

    Villas Boas, M. D.; Olivera, F.; Azevedo, J. S.

    2013-12-01

    The evaluation of water quality through 'indexes' is widely used in environmental sciences. There are a number of methods available for calculating water quality indexes (WQI), usually based on site-specific parameters. In Brazil, WQI were initially used in the 1970s and were adapted from the methodology developed in association with the National Science Foundation (Brown et al., 1970). Specifically, the WQI 'IQA/SCQA', developed by the Institute of Water Management of Minas Gerais (IGAM), is estimated based on nine parameters: Temperature Range, Biochemical Oxygen Demand, Fecal Coliforms, Nitrate, Phosphate, Turbidity, Dissolved Oxygen, pH and Electrical Conductivity. The goal of this study was to develop a model for calculating the IQA/SCQA for the Piabanha River basin in the State of Rio de Janeiro (Brazil), using only the parameters measurable by a Multiparameter Water Quality Sonde (MWQS) available in the study area. These parameters are: Dissolved Oxygen, pH and Electrical Conductivity. The use of this model will make it possible to extend the water quality monitoring network in the basin without requiring a significant increase in resources. The water quality measurement with a MWQS is less expensive than the laboratory analysis required for the other parameters. The water quality data used in the study were obtained by the Geological Survey of Brazil in partnership with other public institutions (i.e. universities and environmental institutes) as part of the project "Integrated Studies in Experimental and Representative Watersheds". Two models were developed to correlate the values of the three measured parameters and the IQA/SCQA values calculated based on all nine parameters. The results were evaluated according to the following validation statistics: coefficient of determination (R2), Root Mean Square Error (RMSE), Akaike information criterion (AIC) and Final Prediction Error (FPE).
The first model was a linear stepwise regression between three independent variables (input) and one dependent variable (output) to establish an equation relating input to output. This model produced the following statistics: R2 = 0.85, RMSE = 6.19, AIC = 0.65 and FPE = 1.93. The second model was a Feedforward Neural Network with one tan-sigmoid hidden layer (4 neurons) and one linear output layer. The neural network was trained with a backpropagation algorithm using the input as predictors and the output as target. The following statistics were found: R2 = 0.95, RMSE = 4.86, AIC = 0.33 and FPE = 1.39. The second model produced a better fit than the first one, having a greater R2 and smaller RMSE, AIC and FPE. The better performance of the second method can be attributed to the fact that water quality parameters often exhibit nonlinear behaviors, and neural networks are capable of representing nonlinear relationships efficiently, while the regression is limited to linear relationships. References: Brown, R.M., McLelland, N.I., Deininger, R.A., Tozer, R.G. 1970. A Water Quality Index - Do we dare? Water & Sewage Works, October: 339-343.
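
    The forward pass of the second model (3 inputs, 4 tan-sigmoid hidden neurons, 1 linear output) can be sketched in a few lines. The weights below are random placeholders; in the study they would come from backpropagation training against the nine-parameter index values:

```python
import math, random

random.seed(0)

def tanh_layer(x, W, b):
    """One tan-sigmoid layer: tanh of weighted sums plus biases."""
    return [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + bi)
            for w, bi in zip(W, b)]

# Network shape from the abstract: 3 inputs (DO, pH, conductivity),
# 4 hidden neurons, 1 linear output (the IQA/SCQA estimate).
W_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b_h = [random.uniform(-1, 1) for _ in range(4)]
w_o = [random.uniform(-1, 1) for _ in range(4)]
b_o = 0.0

def predict(x):
    h = tanh_layer(x, W_h, b_h)
    return sum(wi * hi for wi, hi in zip(w_o, h)) + b_o

# Hypothetical sonde reading: DO (mg/L), pH, conductivity (uS/cm).
y = predict([7.5, 6.8, 120.0])
```

    The tanh hidden layer is what lets this model capture the nonlinear behavior that the stepwise linear regression cannot.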

  14. Thermophysical properties of hydrophobised lime plasters - The influence of ageing

    NASA Astrophysics Data System (ADS)

    Pavlíková, Milena; Zemanová, Lucie; Pavlík, Zbyšek

    2017-07-01

    The building envelope is principally responsible for building energy losses. Lime plasters, as the most popular finishing materials of historical buildings and cultural monuments, influence the thermal behaviour of masonry as well as its construction material. On this account, the effect of ageing on the thermophysical properties of newly designed lime plasters containing a hydrophobic admixture is analysed in this paper. For comparative purposes, a reference lime plaster is tested. Ageing is accelerated with a controlled carbonation process to simulate the final plaster properties. Basic characterization of the tested materials is done using bulk density, matrix density, and porosity measurements. Thermal conductivity and volumetric heat capacity are experimentally assessed using a transient impulse method. The obtained data revealed significant changes of both studied thermal parameters depending on plaster composition and age. The assessed material parameters will be stored in a material database, where they will find use as input data for computational modelling of heat transport in this type of porous building material and for evaluation of energy-saving and sustainability issues.

  15. Inferring community properties of benthic macroinvertebrates in streams using Shannon index and exergy

    NASA Astrophysics Data System (ADS)

    Nguyen, Tuyen Van; Cho, Woon-Seok; Kim, Hungsoo; Jung, Il Hyo; Kim, YongKuk; Chon, Tae-Soo

    2014-03-01

    Defining ecological integrity based on community analysis has long been a critical issue in risk assessment for sustainable ecosystem management. In this work, two indices (i.e., Shannon index and exergy) were selected for the analysis of community properties of benthic macroinvertebrate communities in streams in Korea. For this purpose, the means and variances of both indices were analyzed. The results revealed an extra scope of structural and functional properties of communities in response to environmental variability and anthropogenic disturbance. The combination of these two indices (four parameters in total) was feasible for identifying disturbance agents (e.g., industrial pollution or organic pollution) and specifying the states of communities. The four aforementioned parameters (means and variances of Shannon index and exergy) were further used as input data in a self-organizing map for the characterization of water quality. Our results suggested that Shannon index and exergy in combination could be utilized as a suitable reference system and would be an efficient tool for assessing the health of aquatic ecosystems exposed to environmental disturbances.
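
    The Shannon index used above has a standard definition that is easy to state in code (the abundance numbers below are made up for illustration):

```python
import math

def shannon_index(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa proportions p_i."""
    total = sum(abundances)
    ps = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in ps)

# Even communities maximize H' (= ln S for S equally abundant taxa);
# dominance by a single taxon drives H' toward 0.
even = shannon_index([25, 25, 25, 25])
skewed = shannon_index([97, 1, 1, 1])
```

    Computing H' per sampling date and then taking its mean and variance over time yields two of the four parameters fed to the self-organizing map; the exergy-based pair is obtained analogously.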

  16. Comparison of subpixel image registration algorithms

    NASA Astrophysics Data System (ADS)

    Boye, R. R.; Nelson, C. L.

    2009-02-01

    Research into the use of multiframe superresolution has led to the development of algorithms for providing images with enhanced resolution using several lower resolution copies. An integral component of these algorithms is the determination of the registration of each of the low resolution images to a reference image. Without this information, no resolution enhancement can be attained. We have endeavored to find a suitable method for registering severely undersampled images by comparing several approaches. To test the algorithms, an ideal image is input to a simulated image formation program, creating several undersampled images with known geometric transformations. The registration algorithms are then applied to the set of low resolution images and the estimated registration parameters compared to the actual values. This investigation is limited to monochromatic images (extension to color images is not difficult) and only considers global geometric transformations. Each registration approach will be reviewed and evaluated with respect to the accuracy of the estimated registration parameters as well as the computational complexity required. In addition, the effects of image content, specifically spatial frequency content, as well as the immunity of the registration algorithms to noise will be discussed.
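
    As a one-dimensional illustration of subpixel registration, here is cross-correlation with three-point parabolic interpolation of the correlation peak. This is one common approach, not necessarily among the algorithms compared in the paper:

```python
import math

def subpixel_shift(a, b):
    """Estimate the shift of signal b relative to a: find the integer
    cross-correlation peak, then refine it with a parabolic fit through
    the peak and its two neighbours for a subpixel estimate."""
    n = len(a)
    lags = range(-(n - 1), n)
    corr = [sum(a[i] * b[i + L] for i in range(max(0, -L), min(n, n - L)))
            for L in lags]
    k = max(range(len(corr)), key=corr.__getitem__)
    shift = k - (n - 1)
    if 0 < k < len(corr) - 1:  # parabolic (three-point) peak interpolation
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2 * y1 + y2
        if denom != 0:
            shift += 0.5 * (y0 - y2) / denom
    return shift

# Synthetic test: b is a copy of a Gaussian pulse displaced by one sample.
a = [math.exp(-0.5 * ((i - 8) / 2.0) ** 2) for i in range(21)]
b = [math.exp(-0.5 * ((i - 9) / 2.0) ** 2) for i in range(21)]
shift = subpixel_shift(a, b)
```

    Two-dimensional registration of undersampled frames works the same way in principle, with a 2-D correlation surface and a 2-D peak fit.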

  17. Online Sequential Projection Vector Machine with Adaptive Data Mean Update

    PubMed Central

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm especially for high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM); it derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes the algorithm easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, the adaptive multihyperplane machine (AMM), the primal estimated subgradient solver (Pegasos), the online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM. PMID:27143958
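
    The "adaptive data mean update" in the title refers to maintaining the data mean incrementally as chunks arrive, so centering never requires revisiting earlier samples. A generic sketch of that bookkeeping (not the OSPVM code itself):

```python
def update_mean(mean, count, chunk):
    """Incrementally update the running mean when a new chunk arrives:
    new_mean = mean + (m / (count + m)) * (chunk_mean - mean),
    where m is the chunk size. Works for one-by-one or chunk-by-chunk mode."""
    m = len(chunk)
    new_count = count + m
    chunk_mean = sum(chunk) / m
    new_mean = mean + (m / new_count) * (chunk_mean - mean)
    return new_mean, new_count

mean, count = 0.0, 0
for chunk in ([1.0, 2.0, 3.0], [4.0], [5.0, 6.0]):
    mean, count = update_mean(mean, count, chunk)
```

    In OSPVM the same idea is applied per feature, keeping the centered representation consistent while projection vectors and network weights are updated online.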

  18. Online Sequential Projection Vector Machine with Adaptive Data Mean Update.

    PubMed

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

    We propose a simple online learning algorithm especially for high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM); it derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes the algorithm easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, the adaptive multihyperplane machine (AMM), the primal estimated subgradient solver (Pegasos), the online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM.

  19. Integrating satellite actual evapotranspiration patterns into distributed model parametrization and evaluation for a mesoscale catchment

    NASA Astrophysics Data System (ADS)

    Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.

    2016-12-01

    Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. To this end, it is important to identify the model parameters that can change spatial patterns before satellite-based hydrologic model calibration. Our study rests on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET). Second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected because it allows a change in the spatial distribution of key soil parameters through the calibration of pedo-transfer function parameters and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function that employs remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas the streamflow dynamics are sensitive to the pedo-transfer function parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow does not reduce the spatial errors in AET; it improves only the streamflow simulations. We will further examine the results of model calibration using only multiple spatial objective functions measuring the association between observed and simulated AET maps, and another case including spatial and streamflow metrics together.

  20. Differential Geometry Applied To Least-Square Error Surface Approximations

    NASA Astrophysics Data System (ADS)

    Bolle, Ruud M.; Sabbah, Daniel

    1987-08-01

    This paper focuses on the extraction of the parameters of individual surfaces from noisy depth maps. The basis for this is least-square error polynomial approximations to the range data and the curvature properties that can be computed from these approximations. The curvature properties are derived using the invariants of the Weingarten map evaluated at the origin of local coordinate systems centered at the range points. The Weingarten map is a well-known concept in differential geometry; a brief treatment of the differential geometry pertinent to surface curvature is given. We use these curvature properties to extract certain surface parameters from the approximations. Then we show that curvature properties alone are not enough to obtain all the parameters of the surfaces; higher order properties (information about change of curvature) are needed to obtain full parametric descriptions. This surface parameter estimation problem arises in the design of a vision system to recognize 3D objects whose surfaces are composed of planar patches and patches of quadrics of revolution. (Quadrics of revolution are quadrics that are surfaces of revolution.) A significant portion of man-made objects can be modeled using these surfaces. The actual process of recognition and parameter extraction is framed as a set of stacked parameter space transforms. The transforms are "stacked" in the sense that any one transform computes only a partial geometric description that forms the input to the next transform. Those interested in the organization and control of the recognition and parameter extraction process are referred to [Sabbah86]; this paper briefly touches upon the organization but concentrates mainly on the geometrical aspects of the parameter extraction.
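
    The invariants of the Weingarten map for a graph surface z = f(x, y) are the Gaussian curvature K (its determinant) and the mean curvature H (half its trace), both computable from the first and second partial derivatives of the fitted polynomial. A minimal sketch using the standard graph-surface formulas:

```python
def curvatures(fx, fy, fxx, fxy, fyy):
    """Gaussian (K) and mean (H) curvature of z = f(x, y) from its first
    and second partial derivatives, i.e. the invariants of the
    Weingarten map at the evaluation point."""
    g = 1.0 + fx * fx + fy * fy
    K = (fxx * fyy - fxy * fxy) / (g * g)
    H = ((1 + fy * fy) * fxx - 2 * fx * fy * fxy + (1 + fx * fx) * fyy) / (2 * g ** 1.5)
    return K, H

# Osculating paraboloid of a unit sphere at its pole (f = (x^2 + y^2)/2):
# fx = fy = 0, fxx = fyy = 1, fxy = 0, so K = 1/R^2 = 1 and H = 1/R = 1.
K, H = curvatures(0.0, 0.0, 1.0, 0.0, 1.0)
```

    In the paper's setting, the derivatives come from the coefficients of the least-square polynomial fit evaluated at the origin of the local frame.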

  1. Non-linear control of the output stage of a solar microinverter

    NASA Astrophysics Data System (ADS)

    Lopez-Santos, Oswaldo; Garcia, Germain; Martinez-Salamero, Luis; Avila-Martinez, Juan C.; Seguier, Lionel

    2017-01-01

    This paper presents a method to control the output stage of a two-stage solar microinverter injecting real power into the grid. The input stage of the microinverter extracts the maximum available power of a photovoltaic module, enforcing a power-source behavior in the DC-link that feeds the output stage. The work reported here is devoted to controlling a grid-connected power source inverter with a high power quality level at the grid side while ensuring the power balance of the microinverter by regulating the voltage of the DC-link. The proposed control is composed of a sinusoidal current reference generator and a cascade-type controller comprising a current tracking loop and a voltage regulation loop. The current reference is obtained using a synchronized generator based on a phase locked loop (PLL), which gives the shape, frequency and phase of the current signal. The amplitude of the reference is obtained from a simple controller regulating the DC-link voltage. Tracking of the current reference is accomplished by means of a first-order sliding mode control law. The solution takes advantage of the rapidity and inherent robustness of the sliding mode current controller, allowing a robust behavior in the regulation of the DC-link using a simple linear controller. Analytical expressions for the power quality indicators of the microinverter's output are derived in terms of the converter parameters. The theoretical approach is validated using simulation and experimental results.
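
    The reference-generation idea can be sketched in a few lines: the PLL supplies the grid phase, an outer DC-link voltage loop sets the amplitude, and their product is the sinusoidal current reference tracked by the sliding-mode inner loop. The gains, set-point, and the bare proportional voltage regulator below are hypothetical simplifications, not the paper's controller:

```python
import math

KP = 0.5          # hypothetical proportional gain of the DC-link voltage loop
V_DC_REF = 400.0  # hypothetical DC-link voltage set-point (V)

def current_reference(theta_pll, v_dc):
    """i_ref = amplitude(v_dc) * sin(theta_pll): surplus DC-link energy
    (v_dc above the set-point) raises the amplitude, exporting more power
    to the grid and pulling the DC-link back toward its reference."""
    amplitude = KP * (v_dc - V_DC_REF)
    return amplitude * math.sin(theta_pll)

# At the grid-voltage peak (theta = pi/2) with the DC-link 10 V high:
i_ref = current_reference(math.pi / 2, 410.0)
```

    Keeping the reference proportional to sin(theta_pll) is what makes the injected current track the grid phase, so real (unity-power-factor) power is exported.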

  2. Macroscopic singlet oxygen model incorporating photobleaching as an input parameter

    NASA Astrophysics Data System (ADS)

    Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.

    2015-03-01

    A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable of photobleaching was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the RIF model of murine fibrosarcoma, mice were treated with a linear source with fluence rates from 12 - 150 mW/cm and total fluences from 24 - 135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for the particular mouse experiment by comparing the experimental and model-calculated photobleaching ratio.

  3. Gaussian beam profile shaping apparatus, method therefor and evaluation thereof

    DOEpatents

    Dickey, Fred M.; Holswade, Scott C.; Romero, Louis A.

    1999-01-01

    A method and apparatus maps a Gaussian beam into a beam with a uniform irradiance profile by exploiting the Fourier transform properties of lenses. A phase element imparts a design phase onto an input beam and the output optical field from a lens is then the Fourier transform of the input beam and the phase function from the phase element. The phase element is selected in accordance with a dimensionless parameter which is dependent upon the radius of the incoming beam, the desired spot shape, the focal length of the lens and the wavelength of the input beam. This dimensionless parameter can also be used to evaluate the quality of a system. In order to control the radius of the incoming beam, optics such as a telescope can be employed. The size of the target spot and the focal length can be altered by exchanging the transform lens, but the dimensionless parameter will remain the same. The quality of the system, and hence the value of the dimensionless parameter, can be altered by exchanging the phase element. The dimensionless parameter provides design guidance, system evaluation, and indication as to how to improve a given system.
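
    For this class of Fourier-transform beam shaper, the dimensionless parameter is commonly quoted in the beam-shaping literature as beta = 2*sqrt(2)*pi*r0*y0 / (f*lambda). A small sketch under that assumption, with hypothetical optical values:

```python
import math

def beta(r0, y0, f, lam):
    """Dimensionless beam-shaping parameter (commonly quoted form):
    beta = 2*sqrt(2)*pi*r0*y0 / (f*lam), where r0 is the input Gaussian
    beam radius, y0 the target spot half-width, f the transform-lens
    focal length, and lam the wavelength."""
    return 2 * math.sqrt(2) * math.pi * r0 * y0 / (f * lam)

# Hypothetical example: 2 mm beam radius, 0.5 mm target half-width,
# 200 mm transform lens, 532 nm light.
b = beta(2e-3, 0.5e-3, 200e-3, 532e-9)
```

    As the patent text notes, swapping the transform lens changes y0 and f together so beta is unchanged, while enlarging the input beam with a telescope (larger r0) raises beta, which generally yields a flatter, sharper-edged profile.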

  4. Gaussian beam profile shaping apparatus, method therefore and evaluation thereof

    DOEpatents

    Dickey, F.M.; Holswade, S.C.; Romero, L.A.

    1999-01-26

    A method and apparatus maps a Gaussian beam into a beam with a uniform irradiance profile by exploiting the Fourier transform properties of lenses. A phase element imparts a design phase onto an input beam and the output optical field from a lens is then the Fourier transform of the input beam and the phase function from the phase element. The phase element is selected in accordance with a dimensionless parameter which is dependent upon the radius of the incoming beam, the desired spot shape, the focal length of the lens and the wavelength of the input beam. This dimensionless parameter can also be used to evaluate the quality of a system. In order to control the radius of the incoming beam, optics such as a telescope can be employed. The size of the target spot and the focal length can be altered by exchanging the transform lens, but the dimensionless parameter will remain the same. The quality of the system, and hence the value of the dimensionless parameter, can be altered by exchanging the phase element. The dimensionless parameter provides design guidance, system evaluation, and indication as to how to improve a given system. 27 figs.

  5. Optimization of Dimensional accuracy in plasma arc cutting process employing parametric modelling approach

    NASA Astrophysics Data System (ADS)

    Naik, Deepak kumar; Maity, K. P.

    2018-03-01

    Plasma arc cutting (PAC) is a high temperature thermal cutting process employed for cutting extremely high strength materials that are difficult to cut by any other manufacturing process. The process uses a highly energized plasma arc to cut any conducting material with better dimensional accuracy in less time. This research work presents the effect of the process parameters on the dimensional accuracy of the PAC process. The input process parameters selected were arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness was taken as the workpiece; stainless steel is a very extensively used material in the manufacturing industries. Linear dimensions were measured following Taguchi's L16 orthogonal array design approach, with three levels selected for each process parameter. In all experiments, a clockwise cut direction was followed. The measurement results were further analyzed: analysis of variance (ANOVA) and analysis of means (ANOM) were performed to evaluate the effect of each process parameter. The ANOVA analysis reveals the effect of each input process parameter on the linear dimension along the X axis, and the results give the optimal settings of the process parameter values for that dimension. The investigation clearly shows that a specific range of the input process parameters achieves improved machinability.
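
    The ANOM step amounts to averaging the measured response over the runs at each level of one factor; the spread between level means indicates that factor's effect. A sketch with made-up responses (not the study's measurements):

```python
# Level assignment of one factor (e.g. arc voltage) across 16 runs,
# and made-up illustrative dimensional-deviation responses per run.
levels = [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4]
response = [10.2, 10.8, 11.5, 12.1, 10.0, 11.0, 11.3, 12.4,
            10.1, 10.9, 11.6, 12.0, 10.3, 10.7, 11.4, 12.3]

def level_means(levels, response):
    """Analysis of means: average the response at each factor level."""
    out = {}
    for lv, r in zip(levels, response):
        out.setdefault(lv, []).append(r)
    return {lv: sum(rs) / len(rs) for lv, rs in out.items()}

means = level_means(levels, response)
best = min(means, key=means.get)  # smallest deviation taken as "best" here
```

    Repeating this for every factor, and comparing level-mean spreads against within-level scatter, is essentially what the ANOVA then formalizes.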

  6. Parameters Selection for Bivariate Multiscale Entropy Analysis of Postural Fluctuations in Fallers and Non-Fallers Older Adults.

    PubMed

    Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert

    2016-08-01

    Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments have provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," Bulletin of the Polish Academy of Sciences: Technical Sciences, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods depend on several input parameters, such as the embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physical Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. These two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of selecting appropriate input parameters.
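
    The multiscale part of such estimators rests on coarse-graining each channel before the entropy is computed; a minimal sketch of that step (the entropy computation itself is omitted, and the sample values are hypothetical):

```python
def coarse_grain(channels, tau):
    """Coarse-grain a multivariate series for multiscale entropy:
    average non-overlapping windows of length tau in each channel."""
    return [
        [sum(ch[i:i + tau]) / tau for i in range(0, len(ch) - tau + 1, tau)]
        for ch in channels
    ]

# Bivariate postural-sway example: hypothetical anterior-posterior (AP)
# and medio-lateral (ML) displacement samples.
ap = [1.0, 3.0, 2.0, 4.0, 6.0, 2.0]
ml = [0.0, 2.0, 4.0, 2.0, 1.0, 3.0]
scale2 = coarse_grain([ap, ml], tau=2)
```

    The entropy estimator is then applied to the coarse-grained series at each scale factor tau, producing the entropy-versus-scale curves compared between groups.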

  7. Closed-loop control of renal perfusion pressure in physiological experiments.

    PubMed

    Campos-Delgado, D U; Bonilla, I; Rodríguez-Martínez, M; Sánchez-Briones, M E; Ruiz-Hernández, E

    2013-07-01

    This paper presents the design, experimental modeling, and control of a pump-driven renal perfusion pressure (RPP)-regulatory system to implement precise and relatively fast RPP regulation in rats. The mechatronic system is a simple, low-cost, and reliable device to automate the RPP regulation process based on flow-mediated occlusion. Hence, the regulated signal is the RPP measured in the left femoral artery of the rat, and the manipulated variable is the voltage applied to a dc motor that controls the occlusion of the aorta. The control system is implemented on a PC through the LabView software and a data acquisition board (NI USB-6210). A simple first-order linear system is proposed to approximate the dynamics of the experiment. The parameters of the model are chosen to minimize the error between the predicted and experimental output, averaged over eight input/output datasets at different RPP operating conditions. A closed-loop servocontrol system based on a pole-placement PD controller plus dead-zone compensation was then designed. First, the feedback structure was validated in simulation by considering parameter uncertainty, and constant and time-varying references. Several experimental tests were also conducted to validate the closed-loop performance in real time for stepwise and fast-switching references, and the results show the effectiveness of the proposed automatic system in regulating the RPP in the rat in a precise, accurate (mean error less than 2 mmHg) and relatively fast manner (10-15 s response time).
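
    A minimal sketch of a discrete PD law with dead-zone compensation of the kind described; the gains, the offset value, and the compensation form (a constant offset added in the direction of actuation so the dc motor overcomes its dead zone) are illustrative assumptions, not the authors' implementation.

```python
def pd_deadzone_control(error, prev_error, dt, kp, kd, u_dz):
    """Discrete PD control with dead-zone compensation (a sketch):
    u = Kp*e + Kd*de/dt, then shifted by +/-u_dz so the actuator
    leaves its dead zone whenever a correction is requested."""
    u = kp * error + kd * (error - prev_error) / dt
    if u > 0:
        u += u_dz
    elif u < 0:
        u -= u_dz
    return u

# Example: RPP setpoint error of 5 mmHg, previous error 6 mmHg,
# 10 ms sample time (all values illustrative).
u = pd_deadzone_control(error=5.0, prev_error=6.0, dt=0.01,
                        kp=0.4, kd=0.01, u_dz=0.5)
```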

  8. RF digital-to-analog converter

    DOEpatents

    Conway, Patrick H.; Yu, David U. L.

    1995-01-01

    A digital-to-analog converter for producing an RF output signal proportional to a digital input word of N bits from an RF reference input, N being an integer greater than or equal to 2. The converter comprises a plurality of power splitters, power combiners, and a plurality of mixers or RF switches connected in a predetermined configuration.
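
    The abstract does not specify the weighting scheme. One common reading is binary weighting, in which each bit gates a path whose amplitude weight is a power of two before the combiner; the toy model below assumes exactly that and is not the patented circuit.

```python
def rf_dac_amplitude(bits):
    """Output amplitude relative to the RF reference for an N-bit word,
    assuming binary-weighted paths gated by RF switches and summed in a
    power combiner (an illustrative model, MSB first)."""
    n = len(bits)
    word = sum(b << i for i, b in enumerate(reversed(bits)))
    return word / (2 ** n - 1)   # normalized so all-ones gives 1.0

amp = rf_dac_amplitude([1, 0, 1])  # word 5 of a 3-bit converter
```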

  9. Testing an Instructional Model in a University Educational Setting from the Student's Perspective

    ERIC Educational Resources Information Center

    Betoret, Fernando Domenech

    2006-01-01

    We tested a theoretical model that hypothesized relationships between several variables from input, process and product in an educational setting, from the university student's perspective, using structural equation modeling. In order to carry out the analysis, we measured in sequential order the input (referring to students' personal…

  10. The Evolution of Computer Based Learning Software Design: Computer Assisted Teaching Unit Experience.

    ERIC Educational Resources Information Center

    Blandford, A. E.; Smith, P. R.

    1986-01-01

    Describes the style of design of computer simulations developed by Computer Assisted Teaching Unit at Queen Mary College with reference to user interface, input and initialization, input data vetting, effective display screen use, graphical results presentation, and need for hard copy. Procedures and problems relating to academic involvement are…

  11. Constant Switching Frequency DTC for Matrix Converter Fed Speed Sensorless Induction Motor Drive

    NASA Astrophysics Data System (ADS)

    Mir, Tabish Nazir; Singh, Bhim; Bhat, Abdul Hamid

    2018-05-01

    The paper presents a constant switching frequency scheme for speed sensorless Direct Torque Control (DTC) of a matrix converter fed induction motor drive. The use of a matrix converter facilitates improved power quality on the input as well as the motor side, along with input power factor control, besides eliminating the need for heavy passive elements. Moreover, DTC through Space Vector Modulation achieves fast control over the torque and flux of the motor, with the added benefit of constant switching frequency. A constant switching frequency aids in maintaining the desired power quality of the AC mains current even at low motor speeds, and simplifies the input filter design of the matrix converter, as compared to conventional hysteresis-based DTC. Further, the stator voltage is estimated from the sensed input voltage, and the stator (and rotor) flux is subsequently estimated. For speed sensorless operation, a Model Reference Adaptive System is used, which emulates the speed-dependent rotor flux equations of the induction motor. The error between the conventionally estimated rotor flux (reference model) and the rotor flux estimated through the adaptive observer is processed through a PI controller to generate the rotor speed estimate.
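
    A common MRAS formulation uses the cross product of the two rotor-flux estimates (alpha-beta components) as the speed-tuning error fed to the PI controller. The sketch below assumes that form and illustrative gains, since the abstract gives no equations.

```python
def mras_speed_step(psi_ref, psi_adj, integ, kp, ki, dt):
    """One step of an MRAS speed estimator (illustrative sketch): the
    speed-tuning error is the cross product of the reference-model and
    adaptive-model rotor flux vectors; a PI controller drives it to
    zero, and the PI output is the speed estimate."""
    e = psi_ref[0] * psi_adj[1] - psi_ref[1] * psi_adj[0]
    integ += e * dt                      # integral state of the PI
    omega_hat = kp * e + ki * integ
    return omega_hat, integ

# One update with illustrative flux estimates and gains.
omega, integ = mras_speed_step((1.0, 0.0), (0.99, 0.1), 0.0,
                               kp=10.0, ki=100.0, dt=0.001)
```

    In the drive, this update runs every control period, and the converged omega estimate replaces the mechanical speed sensor.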

  12. Approximate circuits for increased reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
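
    The voter's behavior can be sketched directly; the bit patterns below are hypothetical circuit outputs, chosen so each approximate circuit deviates somewhere but the majority always matches.

```python
def voter(outputs):
    # Bitwise majority vote across an odd number of circuit outputs.
    return [1 if sum(bits) > len(bits) // 2 else 0 for bits in zip(*outputs)]

# Three approximate circuits, each producing a 4-bit output; individual
# circuits may deviate from the reference on some bits, but the majority
# of the three reproduces the reference value.
a = [1, 0, 1, 1]
b = [1, 0, 0, 1]
c = [1, 1, 0, 1]
result = voter([a, b, c])
```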

  13. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    NASA Astrophysics Data System (ADS)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
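
    The idea of treating input-error statistics as extra uncertain parameters can be illustrated on a toy model (not SRH-1D): here a random-walk Metropolis sampler explores a model parameter jointly with the mean of a Gaussian input bias. All data values, priors, and step sizes are made up for the sketch.

```python
import math
import random

random.seed(0)

# Toy stand-in for the transport model: the prediction depends on a
# model parameter `a` and an input flowrate that may carry a systematic
# bias `mu`; the bias mean is sampled alongside `a`, mirroring the
# augmented parameter vector described above.
q_obs = [1.0, 2.0, 3.0]   # reported input flowrates (hypothetical)
y_obs = [2.1, 3.9, 6.2]   # observed outputs (hypothetical)

def log_post(a, mu, sigma=0.1):
    lp = -0.5 * (mu / 0.5) ** 2          # prior: input bias ~ N(0, 0.5)
    for q, y in zip(q_obs, y_obs):
        pred = a * (q + mu)              # input corrected by the bias mean
        lp += -0.5 * ((y - pred) / sigma) ** 2
    return lp

# Random-walk Metropolis over the augmented parameter vector (a, mu).
a, mu = 1.0, 0.0
samples = []
for _ in range(5000):
    a_p = a + random.gauss(0.0, 0.05)
    mu_p = mu + random.gauss(0.0, 0.05)
    if math.log(random.random()) < log_post(a_p, mu_p) - log_post(a, mu):
        a, mu = a_p, mu_p
    samples.append((a, mu))
a_mean = sum(s[0] for s in samples[1000:]) / len(samples[1000:])
```

    The posterior spread of `mu` then feeds directly into the credible intervals of the forecasts, which is the benefit the study evaluates.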

  14. Sensitivity and uncertainty in crop water footprint accounting: a case study for the Yellow River basin

    NASA Astrophysics Data System (ADS)

    Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.

    2014-06-01

    Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was carried out to analyse the sensitivity of the crop water footprint to fractional changes of seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky) and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables: PR, ET0, Kc, and crop calendar were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint is, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at the 95% confidence level). The effect of uncertainties in ET0 was dominant compared to that of PR.
The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±30% (at 95% confidence level).
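
    The one-at-a-time method can be sketched on a toy multiplicative stand-in for the water footprint model; the real model, variables, and values differ, and the names below are illustrative only.

```python
def oat_sensitivity(model, base, frac=0.1):
    """One-at-a-time sensitivity: perturb each input by +/-frac
    (relative) and report the resulting normalized output change,
    i.e. an elasticity-like index."""
    y0 = model(**base)
    sens = {}
    for name, val in base.items():
        up = dict(base, **{name: val * (1 + frac)})
        dn = dict(base, **{name: val * (1 - frac)})
        sens[name] = (model(**up) - model(**dn)) / (2 * frac * y0)
    return sens

# Toy water-footprint-like relation: WF ~ ET0 * Kc / yield.
def wf(et0, kc, y):
    return et0 * kc / y

s = oat_sensitivity(wf, {"et0": 5.0, "kc": 1.2, "y": 8.0})
```

    For a purely multiplicative model the elasticities of et0 and kc are exactly 1, while the yield term in the denominator gives a slightly larger magnitude, which is the kind of ranking the study's sensitivity tables report.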

  15. Mars Reconnaissance Orbiter Uplink Analysis Tool

    NASA Technical Reports Server (NTRS)

    Khanampompan, Teerapat; Gladden, Roy; Fisher, Forest; Hwang, Pauline

    2008-01-01

    This software analyzes Mars Reconnaissance Orbiter (MRO) orbital geometry with respect to Mars Exploration Rover (MER) contact windows, and is the first tool of its kind designed specifically to support MRO-MER interface coordination. Prior to this automated tool, this analysis was done manually with Excel and the UNIX command line. In total, the process would take approximately 30 minutes for each analysis. The current automated analysis takes less than 30 seconds. This tool resides on the flight machine and uses a PHP interface that does the entire analysis of the input files and takes into account one-way light time from another input file. Input files are copied over to the proper directories and are dynamically read into the tool's interface. The user can then choose the corresponding input files based on the time frame desired for analysis. After submission of the Web form, the tool merges the two files into a single, time-ordered listing of events for both spacecraft. The times are converted to the same reference time (Earth Transmit Time) by reading in a light time file and performing the calculations necessary to shift the time formats. The program also has the ability to vary the size of the keep-out window on the main page of the analysis tool by inputting a custom time for padding each MRO event time. The parameters on the form are read in and passed to the second page for analysis. Everything is fully coded in PHP and can be accessed by anyone with access to the machine via a Web page. This uplink tool will continue to be used for the duration of the MER mission's needs for X-band uplinks. Future missions can also use the tool to check overflight times as well as potential site observation times. Adaptation of the input files to the proper format, and the window keep-out times, would allow for other analyses. Any operations task that uses the idea of keep-out windows will have a use for this program.

  16. The effect of word prediction settings (frequency of use) on text input speed in persons with cervical spinal cord injury: a prospective study.

    PubMed

    Pouplin, Samuel; Roche, Nicolas; Antoine, Jean-Yves; Vaugier, Isabelle; Pottier, Sandra; Figere, Marjorie; Bensmail, Djamel

    2017-06-01

    To determine whether activation of the frequency-of-use and automatic-learning parameters of word prediction software has an impact on text input speed. Forty-five participants with cervical spinal cord injury between C4 and C8 (ASIA A or B) agreed to participate in this study. Participants were separated into two groups: a high-lesion group, for participants whose lesion level was at or above C5 (ASIA AIS A or B), and a low-lesion group, for participants whose lesion level was between C6 and C8 (ASIA AIS A or B). A single evaluation session was carried out for each participant. Text input speed was evaluated during three copying tasks: • without word prediction software (WITHOUT condition) • with automatic learning of words and frequency of use deactivated (NOT_ACTIV condition) • with automatic learning of words and frequency of use activated (ACTIV condition). Results: Text input speed was significantly higher in the WITHOUT condition than in the NOT_ACTIV (p < 0.001) or ACTIV (p = 0.02) conditions for participants with low lesions. Text input speed was significantly higher in the ACTIV condition than in the NOT_ACTIV (p = 0.002) or WITHOUT (p < 0.001) conditions for participants with high lesions. Use of word prediction software with frequency of use and automatic learning activated increased text input speed in participants with high-level tetraplegia. For participants with low-level tetraplegia, use of word prediction software with frequency of use and automatic learning activated only decreased the number of errors. Implications for rehabilitation: Access to technology can be difficult for persons with disabilities such as cervical spinal cord injury (SCI). Several methods, such as word prediction software, have been developed to increase text input speed. This study shows that a parameter of word prediction software (frequency of use) affected text input speed in persons with cervical SCI, and that the effect differed according to the level of the lesion.
• For persons with a high-level lesion, our results suggest that this parameter must be activated so that text input speed is increased. • For persons in the low-lesion group, this parameter must be activated so that the number of errors is decreased. • In all cases, activation of the frequency-of-use parameter is essential in order to improve the efficiency of the word prediction software. • Health professionals should use these results in their clinical practice for better results and therefore better patient satisfaction.

  17. Numerical Modeling of Medium Term Morphological Changes at Manavgat River Mouth Due to Combined Action of Waves and River Discharges

    NASA Astrophysics Data System (ADS)

    Demirci, E.; Baykal, C.; Guler, I.

    2016-12-01

    In this study, hydrodynamic conditions due to river discharge, wave action and sea level fluctuations within a seven-month period, and the morphological response of the Manavgat river mouth, are modeled with XBeach, a two-dimensional depth-averaged (2DH) numerical model developed to compute the natural coastal response during time-varying storm and hurricane conditions (Roelvink et al., 2010). The study area shows active nearshore morphology; thus, two jetties were constructed at the river mouth between the years 1996-2000. Recently, Demirci et al. (2016) studied the impacts of an excess river discharge and concurrent wave action and tidal fluctuations on the Manavgat river mouth morphology over a duration of 12 days (December 4th to 15th, 1998), while construction of the jetties was underway, and concluded that XBeach predicted the final morphology fairly well with a calibrated set of input parameters. Here, the river mouth is modeled at an earlier date, before the construction of the jetties (between August 1st, 1995 and March 8th, 1996), with a similar set of input parameters, to reveal the drastic morphologic change near the mouth due to high river discharge and severe storms occurring over a longer period of time. The wave climate is determined with the wave hindcasting model W61, developed by Middle East Technical University-OERC, using NCEP-CFSR wind data together with the sea level data. River discharge, wave and sea level data are introduced as inputs to the XBeach numerical model, and the final modeled morphological change is compared with the final bed level measurements. References: Demirci, E., Baykal, C., Guler, I., Ergin, A., & Sogut, E. (postponed). Numerical Modelling on Hydrodynamic Flow Conditions and Morphological Changes Using XBeach Near Manavgat River Mouth. Accepted as oral presentation at the 35th Int. Conf. on Coastal Eng., Istanbul, Turkey. 
Guler, I., Ergin, A., Yalçıner, A. C. (2003). Monitoring Sediment Transport Processes at Manavgat River Mouth, Antalya, Turkey. COPEDEC VI, 2003, Colombo, Sri Lanka. Roelvink, D., Reniers, A., van Dongeren, A., van Thiel de Vries, J., Lescinski, J. and McCall, R. (2010). XBeach Model Description and Manual. Unesco-IHE Institute for Water Education, Deltares and Delft Univ. of Technology. Report, June 21, 2010, version 6.

  18. Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.

    2013-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimate of the spread or uncertainty in CME arrival-time predictions due to uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house, research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real time. The observed CME arrival was within the range of ensemble arrival-time predictions for 5 out of the 12 ensemble runs containing hits. The average arrival-time prediction was computed for each of the twelve ensembles predicting hits; using the actual arrival times, an average absolute error of 8.20 hours was found across the twelve ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival-time predictions include the initial distribution of CME input parameters, particularly its mean and spread. 
When the observed arrival is not within the predicted range, the ensemble still allows prediction errors caused by the tested CME input parameters to be ruled out. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling setup was used to complete a parametric case study of the sensitivity of the CME arrival-time prediction to free parameters of the ambient solar wind model and the CME.
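
    The basic ensemble bookkeeping (range coverage and the absolute error of the mean prediction) can be sketched as follows, with hypothetical arrival times rather than the events in the study.

```python
def ensemble_arrival_stats(predicted_hours, observed_hours):
    """Summarize an ensemble of CME arrival-time predictions (hours from
    some common epoch): spread of the ensemble, mean prediction,
    absolute error of the mean, and whether the observed arrival falls
    inside the predicted range."""
    lo, hi = min(predicted_hours), max(predicted_hours)
    mean = sum(predicted_hours) / len(predicted_hours)
    return {
        "spread": hi - lo,
        "mean": mean,
        "abs_error": abs(mean - observed_hours),
        "hit_in_range": lo <= observed_hours <= hi,
    }

# Hypothetical 6-member ensemble; observed arrival at t = 52 h.
stats = ensemble_arrival_stats([44.0, 47.5, 50.0, 51.0, 54.5, 58.0], 52.0)
```

    Averaging `abs_error` over all hit events gives the kind of aggregate figure (8.20 hours in the study) used to compare against current forecasting errors.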

  19. Toward Scientific Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2007-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.

  20. Reservoir computing with a single time-delay autonomous Boolean node

    NASA Astrophysics Data System (ADS)

    Haynes, Nicholas D.; Soriano, Miguel C.; Rosin, David P.; Fischer, Ingo; Gauthier, Daniel J.

    2015-02-01

    We demonstrate reservoir computing with a physical system using a single autonomous Boolean logic element with time-delay feedback. The system generates a chaotic transient with a window of consistency lasting between 30 and 300 ns, which we show is sufficient for reservoir computing. We then characterize the dependence of computational performance on system parameters to find the best operating point of the reservoir. When the best parameters are chosen, the reservoir is able to classify short input patterns with performance that decreases over time. In particular, we show that four distinct input patterns can be classified for 70 ns, even though the inputs are only provided to the reservoir for 7.5 ns.
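
    The essential mechanism, a single Boolean node whose delayed feedback turns an input stream into a transient state sequence, can be sketched in software. The XOR node, delay length, and patterns below are illustrative, not the authors' hardware; the point is only that distinct input patterns drive distinct transients, which a linear readout could then separate.

```python
# Delay-line length of the feedback path (illustrative).
DELAY = 8

def run_reservoir(pattern, steps=32):
    """Single time-delay Boolean node: output is XOR of the value fed
    back from DELAY steps ago and the current input bit; the transient
    output sequence serves as the reservoir state for a readout."""
    state = [0] * DELAY              # delay-line contents
    feats = []
    for t in range(steps):
        u = pattern[t % len(pattern)]
        new = state[0] ^ u           # Boolean node: XOR(feedback, input)
        state = state[1:] + [new]    # shift the delay line
        feats.append(new)
    return feats

# Two distinct periodic 4-bit input patterns produce distinct transients.
feats_a = run_reservoir([1, 0, 1, 0])
feats_b = run_reservoir([1, 1, 0, 0])
```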

  1. Evaluation of Clear Sky Models for Satellite-Based Irradiance Estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Manajit; Gotseff, Peter

    2013-12-01

    This report describes an intercomparison of three popular broadband clear sky solar irradiance model results with measured data, as well as satellite-based model clear sky results compared to measured clear sky data. The authors conclude that one of the popular clear sky models (the Bird clear sky model developed by Richard Bird and Roland Hulstrom) could serve as a more accurate replacement for current satellite-model clear sky estimations. Additionally, the analysis of the model results with respect to model input parameters indicates that rather than climatological, annual, or monthly mean input data, higher-time-resolution input parameters improve the general clear sky model performance.

  2. Computer program for single input-output, single-loop feedback systems

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Additional work is reported on a completely automatic computer program for the design of single input/output, single-loop feedback systems with parameter uncertainty, to satisfy time-domain bounds on the system response to step commands and disturbances. The inputs to the program are the specified time-domain response bounds, the form of the constrained plant transfer function, and the ranges of the uncertain parameters of the plant. The program output consists of the transfer functions of the two free compensation networks, in the form of the coefficients of the numerator and denominator polynomials, and the data on the prescribed bounds and the extremes actually obtained for the system response to commands and disturbances.

  3. A Predictor Analysis Framework for Surface Radiation Budget Reprocessing Using Design of Experiments

    NASA Astrophysics Data System (ADS)

    Quigley, Patricia Allison

    Earth's Radiation Budget (ERB) is an accounting of all incoming energy from the sun and outgoing energy reflected and radiated to space by earth's surface and atmosphere. The National Aeronautics and Space Administration (NASA)/Global Energy and Water Cycle Experiment (GEWEX) Surface Radiation Budget (SRB) project produces and archives long-term datasets representative of this energy exchange system on a global scale. The data comprise the longwave and shortwave radiative components of the system and are algorithmically derived from satellite and atmospheric assimilation products, and acquired atmospheric data. They are stored as 3-hourly, daily, monthly/3-hourly, and monthly averages of 1° x 1° grid cells. Input parameters used by the algorithms are a key source of variability in the resulting output data sets. Sensitivity studies have been conducted to estimate the effects this variability has on the output data sets using linear techniques. This entails varying one input parameter at a time while keeping all others constant, or increasing all input parameters by equal random percentages, in effect changing input values for every cell for every three-hour period and for every day in each month. This equates to almost 11 million independent changes without ever taking into consideration the interactions or dependencies among the input parameters. A more comprehensive method is proposed here for evaluating the shortwave algorithm, to identify both the input parameters and the parameter interactions that most significantly affect the output data. This research utilized designed experiments that systematically and simultaneously varied all of the input parameters of the shortwave algorithm. A D-optimal design of experiments (DOE) was chosen to accommodate the 14 types of atmospheric properties computed by the algorithm and to reduce the number of trials required by a full factorial study from millions to 128. 
A modified version of the algorithm was made available for testing such that global calculations of the algorithm were tuned to accept information for a single temporal and spatial point and for one month of averaged data. The points were from each of four atmospherically distinct regions: the Amazon Rainforest, Sahara Desert, Indian Ocean and Mt. Everest. The same design was used for all of the regions. Least squares multiple regression analysis of the results of the modified algorithm identified those parameters and parameter interactions that most significantly affected the output products. It was found that cosine solar zenith angle was the strongest influence on the output data in all four regions. The interaction of cosine solar zenith angle and cloud fraction had the strongest influence on the output data in the Amazon, Sahara Desert and Mt. Everest regions, while the interaction of cloud fraction and cloudy shortwave radiance most significantly affected output data in the Indian Ocean region. Second-order response models were built using the resulting regression coefficients. A Monte Carlo simulation of each model extended the probability distribution beyond the initial design trials to quantify variability in the modeled output data.
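
    The final Monte Carlo step can be sketched with a two-factor second-order response model; the coefficients below are made up for illustration, not the fitted regression coefficients, and the two coded factors stand in for cosine solar zenith angle (x1) and cloud fraction (x2).

```python
import random

random.seed(1)

# Hypothetical second-order (quadratic) response model coefficients.
b0, b1, b2, b12, b11, b22 = 100.0, 25.0, -8.0, 5.0, -3.0, 1.5

def response(x1, x2):
    # Second-order response surface in two coded factors.
    return (b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2
            + b11 * x1 ** 2 + b22 * x2 ** 2)

# Monte Carlo: sample coded inputs uniformly in [-1, 1] and collect the
# modeled output distribution, extending beyond the 128 design trials.
ys = [response(random.uniform(-1, 1), random.uniform(-1, 1))
      for _ in range(10000)]
y_mean = sum(ys) / len(ys)
```

    The resulting sample of `ys` approximates the output distribution implied by the response surface, from which variability statistics can be read off.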

  4. A cross-linguistic investigation of the acquisition of the pragmatics of indefinite and definite reference in two-year-olds.

    PubMed

    Rozendaal, Margot Isabella; Baker, Anne Edith

    2008-11-01

    The acquisition of reference involves both morphosyntax and pragmatics. This study investigates whether Dutch, English and French two- to three-year-old children differentiate in their use of determiners between non-specific/specific reference, newness/givenness in discourse and mutual/no mutual knowledge between interlocutors. A brief analysis of the input shows a clear association between form and function, although there are some language differences in this respect. As soon as determiner use can be statistically analyzed, the children show a relatively adult-like pattern of association for the distinctions of non-specific/specific and newness/givenness. The distinction between mutual/no mutual knowledge appears later. Reference involving no mutual knowledge is scarcely evidenced in the input and barely used by the children at this age. The development of associations is clearly related to the rate of determiner development, the French being quickest, then the English, then the Dutch.

  5. Natural background levels and threshold values for groundwater in fluvial Pleistocene and Tertiary marine aquifers in Flanders, Belgium

    NASA Astrophysics Data System (ADS)

    Coetsiers, Marleen; Blaser, Petra; Martens, Kristine; Walraevens, Kristine

    2009-05-01

    Aquifers from the same typology can have strongly different groundwater chemistry. Deducing the groundwater quality of less well-characterized aquifers from well-documented aquifers belonging to the same typology should therefore be done with great caution, and can only be considered a preliminary approach. In the EU’s 6th FP BRIDGE project “Background cRiteria for the IDentification of Groundwater thrEsholds”, a methodology for the derivation of threshold values (TV) for groundwater bodies is proposed. This methodology is tested on four aquifers in Flanders of the sand and gravel typology. The methodology works well for all but the Ledo-Paniselian aquifer, where the subdivision into a fresh and a saline part is disproved, as a gradual natural transition from fresh to saline conditions is observed in the aquifer. The 90th percentile is proposed as the natural background level (NBL) for the unconfined Pleistocene deposits, ascribing the outliers to the possible influence of pollution. For the Tertiary aquifers, high values for different parameters have a natural origin, and the 97.7th percentile is preferred as the NBL. The methodology leads to high TVs for parameters presenting low NBLs when compared to the standard used as a reference; this would allow for substantial anthropogenic inputs of these parameters.
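
    The percentile-based NBL derivation can be sketched as follows; the concentration values are hypothetical, and the linear-interpolation convention is one common choice (the BRIDGE methodology may use a different one).

```python
def percentile(values, p):
    """Empirical percentile by linear interpolation between order
    statistics (one common convention)."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

# Hypothetical chloride concentrations (mg/L) from monitoring wells.
conc = [12.0, 15.0, 18.0, 20.0, 22.0, 25.0, 30.0, 35.0, 60.0, 150.0]
nbl_90 = percentile(conc, 90)      # 90th percentile, unconfined case
nbl_977 = percentile(conc, 97.7)   # 97.7th percentile, Tertiary case
```

    The choice between the two percentiles encodes the judgment above: treat the highest values as possible pollution (90th) or as natural (97.7th).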

  6. A Non-Invasive Assessment of Cardiopulmonary Hemodynamics with MRI in Pulmonary Hypertension

    PubMed Central

    Bane, Octavia; Shah, Sanjiv J.; Cuttica, Michael J.; Collins, Jeremy D.; Selvaraj, Senthil; Chatterjee, Neil R.; Guetter, Christoph; Carr, James C.; Carroll, Timothy J.

    2015-01-01

    Purpose We propose a method for non-invasive quantification of hemodynamic changes in the pulmonary arteries resulting from pulmonary hypertension (PH). Methods Using a two-element windkessel model, and input parameters derived from standard MRI evaluation of flow, cardiac function and valvular motion, we derive: pulmonary artery compliance (C), mean pulmonary artery pressure (mPAP), pulmonary vascular resistance (PVR), pulmonary capillary wedge pressure (PCWP), time-averaged intra-pulmonary pressure waveforms and pulmonary artery pressures (systolic (sPAP) and diastolic (dPAP)). MRI results were compared directly to reference standard values from right heart catheterization (RHC) obtained in a series of patients with suspected pulmonary hypertension (PH). Results In 7 patients with suspected PH undergoing RHC, MRI and echocardiography, there was no statistically significant difference (p<0.05) between parameters measured by MRI and RHC. Using standard clinical cutoffs to define PH (mPAP ≥ 25 mmHg), MRI was able to correctly identify all patients as having pulmonary hypertension, and to correctly distinguish between pulmonary arterial (mPAP≥ 25 mmHg, PCWP<15 mmHg) and venous hypertension (mPAP ≥ 25 mmHg, PCWP ≥ 15 mmHg) in 5 of 7 cases. Conclusions We have developed a mathematical model capable of quantifying physiological parameters that reflect the severity of PH. PMID:26283577
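    The derived quantities listed above follow from standard hemodynamic relations. The sketch below is an illustrative assumption, not the paper's actual two-element windkessel implementation: it computes compliance from stroke volume over pulse pressure, a common clinical approximation of mPAP, and PVR in Wood units. All input values are hypothetical.

```python
# Illustrative sketch (not the paper's model): hemodynamic relations
# consistent with a two-element windkessel view of the pulmonary circulation.
# All numeric inputs below are hypothetical example values.

def pulmonary_hemodynamics(stroke_volume_ml, sPAP, dPAP, cardiac_output_l_min, PCWP):
    """Return compliance, mPAP and PVR from MRI-measurable quantities."""
    # Total arterial compliance: stroke volume over pulse pressure (mL/mmHg)
    C = stroke_volume_ml / (sPAP - dPAP)
    # Common clinical approximation of mean pulmonary artery pressure (mmHg)
    mPAP = dPAP + (sPAP - dPAP) / 3.0
    # Pulmonary vascular resistance in Wood units (mmHg.min/L)
    PVR = (mPAP - PCWP) / cardiac_output_l_min
    return C, mPAP, PVR

C, mPAP, PVR = pulmonary_hemodynamics(
    stroke_volume_ml=80.0, sPAP=55.0, dPAP=25.0,
    cardiac_output_l_min=5.0, PCWP=12.0)
print(C, mPAP, PVR)   # 80/30 ≈ 2.67 mL/mmHg, mPAP = 35 mmHg, PVR = 4.6 WU
```

    With these example numbers, mPAP ≥ 25 mmHg and PCWP < 15 mmHg would classify the case as pulmonary arterial hypertension under the clinical cutoffs quoted in the abstract.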

  7. Non-invasive continuous blood pressure measurement based on mean impact value method, BP neural network, and genetic algorithm.

    PubMed

    Tan, Xia; Ji, Zhong; Zhang, Yadan

    2018-04-25

    Non-invasive continuous blood pressure monitoring can provide an important reference and guidance for doctors wishing to analyze the physiological and pathological status of patients and to prevent and diagnose cardiovascular diseases in the clinical setting. Therefore, it is very important to explore a more accurate method of non-invasive continuous blood pressure measurement. To address the shortcomings of existing blood pressure measurement models based on pulse wave transit time or pulse wave parameters, a new method of non-invasive continuous blood pressure measurement - the GA-MIV-BP neural network model - is presented. The mean impact value (MIV) method is used to select the factors that greatly influence blood pressure from the extracted pulse wave transit time and pulse wave parameters. These factors are used as inputs, and the actual blood pressure values as outputs, to train the BP neural network model. The individual parameters are then optimized using a genetic algorithm (GA) to establish the GA-MIV-BP neural network model. Bland-Altman consistency analysis indicated that the measured and predicted blood pressure values were consistent and interchangeable. Therefore, this algorithm is of great significance to promote the clinical application of a non-invasive continuous blood pressure monitoring method.
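    The mean-impact-value screening step described above can be sketched as follows. This is a hedged illustration of the general MIV idea, not the paper's implementation: each input is perturbed by ±10%, the trained model is re-evaluated, and the mean output difference ranks the inputs. A simple linear function stands in for the trained BP neural network, and the data are synthetic.

```python
import numpy as np

# Sketch of mean-impact-value (MIV) input screening. A linear function
# stands in for the trained BP network; features and data are synthetic.

rng = np.random.default_rng(0)
X = rng.uniform(0.5, 1.5, size=(200, 3))   # 3 candidate pulse-wave features

def model(X):
    # Stand-in "trained network": feature 0 matters most, feature 1 not at all
    return 4.0 * X[:, 0] + 0.0 * X[:, 1] + 1.0 * X[:, 2]

def mean_impact_values(model, X, delta=0.10):
    """Mean output change when each feature is perturbed by +/-delta."""
    mivs = []
    for j in range(X.shape[1]):
        up, down = X.copy(), X.copy()
        up[:, j] *= 1.0 + delta
        down[:, j] *= 1.0 - delta
        mivs.append(np.mean(model(up) - model(down)))
    return np.array(mivs)

miv = mean_impact_values(model, X)
ranking = np.argsort(-np.abs(miv))   # most influential feature first
print(miv, ranking)
```

    Features whose |MIV| falls below a chosen cutoff would be dropped before training the final network.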

  8. [Tips for taking history of pain].

    PubMed

    Noda, Kazutaka; Ikusaka, Masatomi

    2012-11-01

    Pain is physiologically classified as nociceptive pain, neuropathic pain, and psychogenic pain. Nociceptive pain is further divided into visceral pain, somatic pain, and referred pain. Visceral pain is dull, and it is difficult to locate its origin. Somatic pain is sharp, severe, and well localized. When visceral pain input is received, it affects the somatic nerves entering the same spinal segments, and referred pain is then felt in the skin and muscles that these nerves supply. Referred pain is thus felt in an area located at a distance from its cause. History taking is the most important factor in determining the cause of pain. Generally, all the necessary information regarding pain can be acquired if the pain-related history is obtained using the "OPQRST" mnemonic, that is, onset, provocation/palliative factors, quality, region/radiation/related symptoms, severity, and time characteristics.

  9. Teaching and Learning Activity Sequencing System using Distributed Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Matsui, Tatsunori; Ishikawa, Tomotake; Okamoto, Toshio

    The purpose of this study is the development of a system to support teachers in designing lesson plans, in particular lesson plans for the new subject "Information Study". We developed a system that generates teaching and learning activity sequences by interlinking lesson activities corresponding to various conditions according to the user's input. Because the user's input comprises multiple pieces of information, contradictions can arise that the system must resolve. This multiobjective optimization problem is solved by distributed genetic algorithms, in which fitness functions are defined with reference models of lessons, thinking styles and teaching styles. The results of various experiments verified the effectiveness and validity of the proposed methods and reference models; at the same time, they pointed out future work on the reference models and evaluation functions.

  10. Support vector machines-based modelling of seismic liquefaction potential

    NASA Astrophysics Data System (ADS)

    Pal, Mahesh

    2006-08-01

    This paper investigates the potential of a support vector machine (SVM)-based classification approach to assess liquefaction potential from actual standard penetration test (SPT) and cone penetration test (CPT) field data. SVMs are based on statistical learning theory and have been found to work well in comparison with neural networks in several other applications. Both CPT and SPT field data sets are used with SVMs to predict the occurrence and non-occurrence of liquefaction based on different input parameter combinations. With the SPT and CPT data sets, highest accuracies of 96% and 97%, respectively, were achieved. This suggests that SVMs can effectively model the complex relationship between different soil parameters and liquefaction potential. Several other combinations of input variables were used to assess the influence of different input parameters on liquefaction potential. The proposed approach suggests that neither the normalized cone resistance value is required with CPT data nor the calculation of the standardized SPT value with SPT data. Further, SVMs require few user-defined parameters and provide better performance than the neural network approach.
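    The classification idea behind the paper can be sketched with a minimal linear SVM trained by sub-gradient descent on the hinge loss (Pegasos-style). This is only an assumption-laden toy: the two synthetic features stand in for field measurements such as penetration resistance and cyclic stress ratio, and the labels (+1 liquefied / -1 not liquefied) are generated, not real SPT/CPT records.

```python
import numpy as np

# Minimal linear SVM via stochastic sub-gradient descent on the hinge loss
# (Pegasos-style). Toy, linearly separable stand-in data, not field records.

rng = np.random.default_rng(42)
n = 200
X = rng.normal(size=(n, 2))                      # two synthetic "soil" features
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0.0, 1.0, -1.0)

Xa = np.hstack([X, np.ones((n, 1))])             # constant column absorbs the bias
w, lam = np.zeros(3), 1e-3                       # weights and regularization
for t in range(1, 5001):
    i = rng.integers(n)                          # pick one training point
    eta = 1.0 / (lam * t)                        # decaying step size
    w *= 1.0 - eta * lam                         # shrink from the regularizer
    if y[i] * (Xa[i] @ w) < 1.0:                 # inside the margin: hinge step
        w += eta * y[i] * Xa[i]

accuracy = np.mean(np.sign(Xa @ w) == y)
print(f"training accuracy: {accuracy:.2f}")
```

    A production study would instead use a kernelized SVM with cross-validated hyperparameters, as implied by the paper's comparison against neural networks.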

  11. Explicitly integrating parameter, input, and structure uncertainties into Bayesian Neural Networks for probabilistic hydrologic forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong; Liang, Faming; Yu, Beibei

    2011-11-09

    Estimating the uncertainty of hydrologic forecasts is valuable to water resources management and other relevant decision-making processes. Recently, Bayesian Neural Networks (BNNs) have proved to be powerful tools for quantifying the uncertainty of streamflow forecasts. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons, and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structure. Critical evaluation of the posterior distributions of neural network weights, the number of effective connections, the rainfall multipliers, and the hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of different uncertainty sources, and including output error in the MCMC framework, are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.

  12. Generative Representations for Evolving Families of Designs

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2003-01-01

    Since typical evolutionary design systems encode only a single artifact with each individual, each time the objective changes a new set of individuals must be evolved. When this objective varies in a way that can be parameterized, a more general method is to use a representation in which a single individual encodes an entire class of artifacts. In addition to saving time by preventing the need for multiple evolutionary runs, the evolution of parameter-controlled designs can create families of artifacts with the same style and a reuse of parts between members of the family. In this paper an evolutionary design system is described which uses a generative representation to encode families of designs. Because a generative representation is an algorithmic encoding of a design, its input parameters are a way to control aspects of the design it generates. By evaluating individuals multiple times with different input parameters the evolutionary design system creates individuals in which the input parameter controls specific aspects of a design. This system is demonstrated on two design substrates: neural-networks which solve the 3/5/7-parity problem and three-dimensional tables of varying heights.

  13. Calibration of two complex ecosystem models with different likelihood functions

    NASA Astrophysics Data System (ADS)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they have become net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future under ongoing climate change, which would act as a positive feedback. The robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive for the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments, or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research a further-developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (degree of goodness of fit between simulated and measured data).
In our research, different likelihood function formulations were used in order to examine the effect of the model goodness metric on calibration. The different likelihoods are different functions of the RMSE (root mean squared error) weighted by measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, a sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step, only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). The cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
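    The family of RMSE-based likelihoods can be sketched as follows. The exact functional forms below are illustrative assumptions rather than the paper's definitions: a measurement-uncertainty-weighted RMSE is computed once, and the alternative likelihoods are different monotone transforms of it.

```python
import math

# Sketch of alternative goodness-of-fit likelihoods built from a
# measurement-uncertainty-weighted RMSE. The exact transforms are
# illustrative assumptions, not the paper's formulations.

def weighted_rmse(sim, obs, sigma):
    """RMSE of residuals, each scaled by its measurement uncertainty."""
    return math.sqrt(sum(((s - o) / u) ** 2
                         for s, o, u in zip(sim, obs, sigma)) / len(obs))

sim   = [2.1, 3.9, 6.2]   # simulated fluxes (synthetic)
obs   = [2.0, 4.0, 6.0]   # measured fluxes (synthetic)
sigma = [0.2, 0.2, 0.2]   # measurement uncertainty of each observation

r = weighted_rmse(sim, obs, sigma)
likelihoods = {
    "exponential": math.exp(-r),            # never reaches zero
    "linear":      max(0.0, 1.0 - r),       # clipped at zero
    "quadratic":   max(0.0, 1.0 - r * r),   # flatter near a perfect fit
}
print(r, likelihoods)
```

    In a Bayesian calibration, any of these would weight candidate parameter sets; the exponential form stays strictly positive, which may partly explain its robustness in the study.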

  14. Decentralized Estimation and Vision-based Guidance of Fast Autonomous Systems with Guaranteed Performance in Uncertain Environments

    DTIC Science & Technology

    2013-04-22

    Related publications: "…Following for Unmanned Aerial Vehicles Using L1 Adaptive Augmentation of Commercial Autopilots," Journal of Guidance, Control, and Dynamics (2010); Naira Hovakimyan, "L1 Adaptive Controller for MIMO System with Unmatched Uncertainties using Modified Piecewise Constant Adaptation Law," IEEE 51st …. Figure caption: this L1 adaptive control architecture uses data from the reference model.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pei, Zongrui; Stocks, George Malcolm

    The sensitivity in predicting the glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a limited, small set of parameters, all of which can be determined by convergence tests. Under special conditions the new formulation reduces to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, in which the sensitivity is greatly reduced or even removed.

  16. System and Method for Providing Model-Based Alerting of Spatial Disorientation to a Pilot

    NASA Technical Reports Server (NTRS)

    Johnson, Steve (Inventor); Conner, Kevin J (Inventor); Mathan, Santosh (Inventor)

    2015-01-01

    A system and method monitor aircraft state parameters, for example, aircraft movement and flight parameters, apply those inputs to a spatial disorientation model, and predict when a pilot may become spatially disoriented. Once the system predicts a potentially disoriented pilot, the sensitivity for alerting the pilot to conditions exceeding a threshold can be increased, allowing an earlier alert to mitigate the possibility of an incorrect control input.

  17. Particle parameter analyzing system. [x-y plotter circuits and display

    NASA Technical Reports Server (NTRS)

    Hansen, D. O.; Roy, N. L. (Inventor)

    1969-01-01

    An X-Y plotter circuit apparatus is described which displays an input pulse representing particle parameter information, which would ordinarily appear on the screen of an oscilloscope as a rectangular pulse, as a single dot positioned on the screen where the upper right-hand corner of the input pulse would have appeared. If another event occurs and it is desired to display it, the apparatus replaces the dot with a short horizontal line.

  18. LC-oscillator with automatic stabilized amplitude via bias current control. [power supply circuit for transducers

    NASA Technical Reports Server (NTRS)

    Hamlet, J. F. (Inventor)

    1974-01-01

    A stable excitation supply for measurement transducers is described. It consists of a single-transistor oscillator with a coil connected to the collector and a capacitor connected from the collector to the emitter. The output of the oscillator is rectified and the rectified signal acts as one input to a differential amplifier; the other input being a reference potential. The output of the amplifier is connected at a point between the emitter of the transistor and ground. When the rectified signal is greater than the reference signal, the differential amplifier produces a signal of polarity to reduce bias current and, consequently, amplification.

  19. Hearing Aids and Music

    PubMed Central

    Chasin, Marshall; Russo, Frank A.

    2004-01-01

    Historically, the primary concern for hearing aid design and fitting is optimization for speech inputs. However, increasingly other types of inputs are being investigated and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression issues—both compression ratio and knee-points—and number of channels all can deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set other parameters such as noise reduction and feedback control mechanisms. Regardless of the existence of a “music program,” unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions. PMID:15497032

  20. Optimization of input parameters of acoustic-transfection for the intracellular delivery of macromolecules using FRET-based biosensors

    NASA Astrophysics Data System (ADS)

    Yoon, Sangpil; Wang, Yingxiao; Shung, K. K.

    2016-03-01

    An acoustic-transfection technique has been developed for the first time by integrating a high-frequency ultrasonic transducer and a fluorescence microscope. High-frequency ultrasound with a center frequency over 150 MHz can focus the acoustic field into a confined area with a diameter of 10 μm or less. This focusing capability was used to perturb the lipid bilayer of the cell membrane to induce intracellular delivery of macromolecules. Single-cell-level imaging was performed to investigate the behavior of a targeted single cell after acoustic-transfection. A FRET-based Ca2+ biosensor was used to monitor the intracellular concentration of Ca2+ after acoustic-transfection, and the fluorescence intensity of propidium iodide (PI) was used to observe the influx of PI molecules. We varied peak-to-peak voltages and pulse duration to optimize the input parameters of the acoustic pulse. Input parameters that can induce strong perturbations of the cell membrane were found, and size-dependent intracellular delivery of macromolecules was explored. To increase the amount of delivered molecules, we applied several acoustic pulses, and the intensity of PI fluorescence increased stepwise. Finally, the optimized input parameters of the acoustic-transfection system were used to deliver the pMax-E2F1 plasmid, and GFP expression was confirmed in HeLa cells 24 hours after the intracellular delivery.

  1. Multichannel Phase and Power Detector

    NASA Technical Reports Server (NTRS)

    Li, Samuel; Lux, James; McMaster, Robert; Boas, Amy

    2006-01-01

    An electronic signal-processing system determines the phases of input signals arriving in multiple channels, relative to the phase of a reference signal with which the input signals are known to be coherent in both phase and frequency. The system also gives an estimate of the power levels of the input signals. A prototype of the system has four input channels that handle signals at a frequency of 9.5 MHz, but the basic principles of design and operation are extensible to other signal frequencies and greater numbers of channels. The prototype system consists mostly of three parts: An analog-to-digital-converter (ADC) board, which coherently digitizes the input signals in synchronism with the reference signal and performs some simple processing; A digital signal processor (DSP) in the form of a field-programmable gate array (FPGA) board, which performs most of the phase- and power-measurement computations on the digital samples generated by the ADC board; and A carrier board, which allows a personal computer to retrieve the phase and power data. The DSP contains four independent phase-only tracking loops, each of which tracks the phase of one of the preprocessed input signals relative to that of the reference signal (see figure). The phase values computed by these loops are averaged over intervals, the length of which is chosen to obtain output from the DSP at a desired rate. In addition, a simple sum of squares is computed for each channel as an estimate of the power of the signal in that channel. The relative phases and the power level estimates computed by the DSP could be used for diverse purposes in different settings. For example, if the input signals come from different elements of a phased-array antenna, the phases could be used as indications of the direction of arrival of a received signal and/or as feedback for electronic or mechanical beam steering. 
The power levels could be used as feedback for automatic gain control in preprocessing of incoming signals. For another example, the system could be used to measure the phases and power levels of outputs of multiple power amplifiers to enable adjustment of the amplifiers for optimal power combining.
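    The core DSP computation described above can be sketched numerically: correlate the coherently sampled input against quadrature copies of the reference to recover relative phase, and use a sum of squares as the power estimate. The frequencies, sample rate, and amplitudes below are illustrative assumptions, not the prototype's 9.5 MHz design.

```python
import numpy as np

# Sketch: relative phase via I/Q correlation against a coherent reference,
# plus a sum-of-squares power estimate. Parameters are illustrative.

fs, f0, n = 1.0e6, 9.5e3, 4096           # sample rate, tone frequency, samples
t = np.arange(n) / fs
ref_i = np.cos(2 * np.pi * f0 * t)       # in-phase reference
ref_q = np.sin(2 * np.pi * f0 * t)       # quadrature reference

true_phase = 0.7                         # radians: channel lag vs. reference
x = 2.0 * np.cos(2 * np.pi * f0 * t - true_phase)   # one input channel

i_corr = np.dot(x, ref_i)                # ~ n * A/2 * cos(phase) * 2
q_corr = np.dot(x, ref_q)                # ~ n * A/2 * sin(phase) * 2
phase = np.arctan2(q_corr, i_corr)       # relative phase estimate (rad)
power = np.sum(x * x) / n                # sum-of-squares estimate (~ A^2/2)
print(phase, power)
```

    In the hardware, an averaged version of this phase per channel would feed beam steering, and the power estimate would feed automatic gain control, as the record describes.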

  2. SRB Data and Information

    Atmospheric Science Data Center

    2017-01-13

    ... grid. Model inputs of cloud amounts and other atmospheric state parameters are also available in some of the data sets. Primary inputs to ... Analysis (SMOBA), an assimilation product from NOAA's Climate Prediction Center. SRB products are reformatted for the use of ...

  3. Development of an Expert Judgement Elicitation and Calibration Methodology for Risk Analysis in Conceptual Vehicle Design

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Keating, Charles; Conway, Bruce; Chytka, Trina

    2004-01-01

    A comprehensive expert-judgment elicitation methodology to quantify input parameter uncertainty and analysis tool uncertainty in a conceptual launch vehicle design analysis has been developed. The ten-phase methodology seeks to obtain expert judgment opinion for quantifying uncertainties as a probability distribution so that multidisciplinary risk analysis studies can be performed. The calibration and aggregation techniques presented as part of the methodology are aimed at improving individual expert estimates, and provide an approach to aggregate multiple expert judgments into a single probability distribution. The purpose of this report is to document the methodology development and its validation through application to a reference aerospace vehicle. A detailed summary of the application exercise, including calibration and aggregation results is presented. A discussion of possible future steps in this research area is given.

  4. Hand-Based Biometric Analysis

    NASA Technical Reports Server (NTRS)

    Bebis, George

    2013-01-01

    Hand-based biometric analysis systems and techniques provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.

  5. Flexible Method for Inter-object Communication in C++

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Gould, Jack J.

    1994-01-01

    A method has been developed for organizing and sharing large amounts of information between objects in C++ code. This method uses a set of object classes to define variables and group them into tables. The variable tables presented here provide a convenient way of defining and cataloging data, as well as a user-friendly input/output system, a standardized set of access functions, mechanisms for ensuring data integrity, methods for interprocessor data transfer, and an interpretive language for programming relationships between parameters. The object-oriented nature of these variable tables enables the use of multiple data types, each with unique attributes and behavior. Because each variable provides its own access methods, redundant table lookup functions can be bypassed, thus decreasing access times while maintaining data integrity. In addition, a method for automatic reference counting was developed to manage memory safely.

  6. Non-intrusive parameter identification procedure user's guide

    NASA Technical Reports Server (NTRS)

    Hanson, G. D.; Jewell, W. F.

    1983-01-01

    Written in standard FORTRAN, NAS is capable of identifying linear as well as nonlinear relations between input and output parameters; the only restriction is that the input/output relation be linear with respect to the unknown coefficients of the estimation equations. The output of the identification algorithm can be specified to be in either the time domain (i.e., the estimation equation coefficients) or in the frequency domain (i.e., a frequency response of the estimation equation). The frame length ("window") over which the identification procedure is to take place can be specified to be any portion of the input time history, thereby allowing the freedom to start and stop the identification procedure within a time history. There also is an option which allows a sliding window, which gives a moving average over the time history. The NAS software also includes the ability to identify several assumed solutions simultaneously for the same or different input data.
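    The key restriction stated above, linearity in the unknown coefficients rather than in the input/output relation itself, can be sketched in a few lines. The model form, window, and data below are illustrative assumptions standing in for NAS's actual estimation equations: a nonlinear relation y = a*u + b*u^3 + c is identified by ordinary least squares over a chosen sub-window of the time history.

```python
import numpy as np

# Identification that is linear in the unknown coefficients even though the
# input/output relation is nonlinear in u. Model form and window are
# illustrative, not NAS's actual estimation equations.

rng = np.random.default_rng(1)
u = rng.uniform(-2.0, 2.0, size=500)                  # input time history
y = 1.5 * u + 0.25 * u**3 - 0.5 + 0.01 * rng.normal(size=u.size)

window = slice(100, 400)                              # identify over a sub-window
A = np.column_stack([u[window], u[window]**3, np.ones(300)])
coeffs, *_ = np.linalg.lstsq(A, y[window], rcond=None)
print(coeffs)   # ~ [1.5, 0.25, -0.5]
```

    Sliding this window along the time history, as the record's moving-average option does, would yield time-varying coefficient estimates.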

  7. Sensitivity analysis of respiratory parameter uncertainties: impact of criterion function form and constraints.

    PubMed

    Lutchen, K R

    1990-08-01

    A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz. This reduces data acquisition requirements from a 16- to a 5.33- to 8-s breath holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
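    The linearized confidence-region approximation at the heart of this analysis can be sketched as follows: near the best fit, the weighted least-squares parameter covariance is approximated by (JᵀWJ)⁻¹, where J is the model Jacobian and W holds the inverse measurement variances. The two-parameter impedance-magnitude model below is an illustrative stand-in for the paper's four- and six-element respiratory models.

```python
import numpy as np

# Linearized parameter-uncertainty sketch: cov ≈ (J^T W J)^-1 with J the
# model Jacobian at the best fit and W the inverse measurement variances.
# The simple R-C impedance model is an illustrative stand-in.

def model(p, f):
    R, C = p
    # Magnitude of a series resistance-compliance impedance (illustrative)
    return np.sqrt(R**2 + 1.0 / (2.0 * np.pi * f * C) ** 2)

f = np.linspace(0.125, 4.0, 30)          # frequency points, Hz
p_hat = np.array([2.0, 0.05])            # assumed best-fit R and C
sigma = 0.05 * np.ones_like(f)           # measurement uncertainty at each f

# Central-difference Jacobian of the model at the best-fit point
eps = 1e-6
J = np.empty((f.size, p_hat.size))
for j in range(p_hat.size):
    dp = np.zeros_like(p_hat)
    dp[j] = eps * p_hat[j]
    J[:, j] = (model(p_hat + dp, f) - model(p_hat - dp, f)) / (2.0 * dp[j])

W = np.diag(1.0 / sigma**2)              # weights = inverse variances
cov = np.linalg.inv(J.T @ W @ J)         # linearized parameter covariance
std_err = np.sqrt(np.diag(cov))          # 1-sigma parameter uncertainties
print(std_err)
```

    Repeating this calculation while trimming the low-frequency end of `f` is exactly how one would test, as the paper does, whether a shorter breath-hold still gives acceptable parameter uncertainties.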

  8. Artificial neural network model for ozone concentration estimation and Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Gao, Meng; Yin, Liting; Ning, Jicai

    2018-07-01

    Air pollution in the urban atmosphere directly affects public health; it is therefore essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict the ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predicting capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday or regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmark ANN model with 9 meteorological and photochemical parameters as input variables, the predicting capability of the parsimonious ANN model was acceptable. Its predicting capability was also verified in terms of warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
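    The forward selection procedure used to pick the 7 input variables can be sketched generically: starting from an empty set, repeatedly add the candidate predictor that most improves a held-out score, stopping when no candidate helps. Plain linear regression stands in for the ANN here, and the data are synthetic; only predictors 0 and 3 actually drive the response.

```python
import numpy as np

# Generic forward-selection sketch. Linear regression stands in for the
# ANN; data are synthetic, with only features 0 and 3 informative.

rng = np.random.default_rng(5)
n, p = 300, 6
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=n)

def holdout_mse(cols):
    """MSE of a least-squares fit trained on one half, scored on the other."""
    A = np.column_stack([X[:, list(cols)], np.ones(n)])
    train, test = slice(0, n // 2), slice(n // 2, n)
    beta, *_ = np.linalg.lstsq(A[train], y[train], rcond=None)
    return float(np.mean((y[test] - A[test] @ beta) ** 2))

selected, remaining = [], set(range(p))
best = float(np.mean((y - y.mean()) ** 2))        # score of the empty model
while remaining:
    scores = {j: holdout_mse(selected + [j]) for j in remaining}
    j_best = min(scores, key=scores.get)
    if scores[j_best] >= best:                    # no candidate improves: stop
        break
    selected.append(j_best)
    remaining.remove(j_best)
    best = scores[j_best]
print(selected)
```

    In the study, the same loop would score candidate meteorological inputs by the ANN's validation performance rather than by a linear fit.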

  9. Eight-Channel Continuous Timer

    NASA Technical Reports Server (NTRS)

    Cole, Steven

    2004-01-01

    A custom laboratory electronic timer circuit measures the durations of successive cycles of nominally highly stable input clock signals in as many as eight channels, for the purpose of statistically quantifying the small instabilities of these signals. The measurement data generated by this timer are sent to a personal computer running software that integrates the measurements to form a phase residual for each channel and uses the phase residuals to compute Allan variances for each channel. (The Allan variance is a standard statistical measure of instability of a clock signal.) Like other laboratory clock-cycle-measuring circuits, this timer utilizes an externally generated reference clock signal having a known frequency (100 MHz) much higher than the frequencies of the input clock signals (between 100 and 120 Hz). It counts the number of reference-clock cycles that occur between successive rising edges of each input clock signal of interest, thereby affording a measurement of the input clock-signal period to within the duration (10 ns) of one reference clock cycle. Unlike typical prior laboratory clock-cycle-measuring circuits, this timer does not skip some cycles of the input clock signals. The non-cycle-skipping feature is an important advantage because in applications that involve integration of measurements over long times for characterizing nominally highly stable clock signals, skipping cycles can degrade accuracy. The timer includes a field-programmable gate array that functions as a 20-bit counter running at the reference clock rate of 100 MHz. The timer also includes eight 20-bit latching circuits - one for each channel - at the output terminals of the counter. Each transition of an input signal from low to high causes the corresponding latching circuit to latch the count at that instant. Each such transition also sets a status flip-flop circuit to indicate the presence of the latched count. 
A microcontroller reads the values of all eight status flip-flops and then reads the latched count for each channel for which the flip-flop indicates the presence of a count. Reading the count for each channel automatically causes the flip-flop of that channel to be reset. The microcontroller places the counts in time order, identifies the channel number for each count, and transmits these data to the personal computer.
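The integrate-then-Allan-variance step described above can be sketched as follows (a minimal illustration, not the NASA software: the sample periods, the 100 Hz nominal rate, and the function names are invented for the example):

```python
def allan_variance(phase, tau0):
    """Allan variance from equally spaced phase data x_k (seconds) at the
    basic sampling interval tau0, via the second difference:
    sigma_y^2(tau0) = <(x_{k+2} - 2 x_{k+1} + x_k)^2> / (2 tau0^2)."""
    d2 = [phase[k + 2] - 2 * phase[k + 1] + phase[k]
          for k in range(len(phase) - 2)]
    return sum(x * x for x in d2) / (2 * tau0 ** 2 * len(d2))

# Integrate measured cycle periods into phase residuals: subtract the
# nominal period so that only the accumulated instability remains.
nominal = 0.01            # 100 Hz input -> 10 ms nominal period
periods = [0.01, 0.010001, 0.009999, 0.01, 0.010002, 0.009998]
phase = [0.0]
for p in periods:
    phase.append(phase[-1] + (p - nominal))

avar = allan_variance(phase, nominal)
```

Because no cycles are skipped, every measured period contributes to the phase residual, which is what allows the long integrations the abstract mentions.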

  10. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    NASA Astrophysics Data System (ADS)

    Han, Feng; Zheng, Yi

    2018-06-01

Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
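The structure of such a formal likelihood can be sketched as follows (an illustrative simplification, not the BAIPU code: it keeps the lag-1 autocorrelation and heteroscedasticity described above but substitutes a Gaussian for the full SEP density, i.e. the SEP with zero skew/kurtosis parameters; all parameter values and data are invented):

```python
import math

def log_likelihood(obs, sim, phi, sigma0, sigma1):
    """Decorrelate raw residuals with an AR(1) filter, scale them by a
    heteroscedastic standard deviation sigma_t = sigma0 + sigma1*sim_t,
    and score the result under a standard normal density."""
    ll = 0.0
    prev = 0.0
    for y, s in zip(obs, sim):
        e = y - s                     # raw residual
        a = e - phi * prev            # remove lag-1 autocorrelation
        sigma = sigma0 + sigma1 * s   # error grows with the predicted value
        ll += -math.log(sigma) - 0.5 * (a / sigma) ** 2
        prev = e
    return ll

obs = [1.2, 1.9, 3.1, 4.2]
sim = [1.0, 2.0, 3.0, 4.0]
ll = log_likelihood(obs, sim, phi=0.3, sigma0=0.1, sigma1=0.05)
```

In an MCMC setting this function would be evaluated for each proposed parameter (and input) realization; a perfect simulation scores higher than an imperfect one.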

  11. Evaluation of FEM engineering parameters from insitu tests

    DOT National Transportation Integrated Search

    2001-12-01

    The study looked critically at in situ test methods (SPT, CPT, DMT, and PMT) as a means for developing finite element constitutive model input parameters. The first phase of the study examined in situ test derived parameters with laboratory triaxial te...

  12. Regionalization of post-processed ensemble runoff forecasts

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2016-05-01

    For many years, meteorological models have been run with perturbed initial conditions or parameters to produce ensemble forecasts that are used as a proxy of the uncertainty of the forecasts. However, the ensembles are usually both biased (the mean is systematically too high or too low, compared with the observed weather) and have dispersion errors (the ensemble variance indicates too low or too high confidence in the forecast, compared with the observed weather). The ensembles are therefore commonly post-processed to correct for these shortcomings. Here we look at one of these techniques, referred to as Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). Originally, the post-processing parameters were identified as a fixed set of parameters for a region. The application of our work is the European Flood Awareness System (http://www.efas.eu), where a distributed model is run with meteorological ensembles as input. We are therefore dealing with a considerably larger data set than previous analyses. We also want to regionalize the parameters for locations other than the calibration gauges. The post-processing parameters are therefore estimated for each calibration station, but with a spatial penalty for deviations from neighbouring stations, depending on the expected semivariance between the calibration catchment and these stations. The estimated post-processing parameters can then be regionalized to uncalibrated locations using top-kriging in the rtop package (Skøien et al., 2006, 2014). We will show results from cross-validation of the methodology, and although our interest is mainly in identifying exceedance probabilities for certain return levels, we will also show how the rtop package can be used for creating a set of post-processed ensembles through simulations.
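The EMOS idea can be sketched as follows (illustrative, not the EFAS implementation: following Gneiting et al. (2005), the post-processed forecast is a Gaussian whose mean and variance are affine in the ensemble mean and variance, but here the coefficients are chosen by hand rather than fitted by minimum-CRPS estimation):

```python
def ensemble_stats(ens):
    m = sum(ens) / len(ens)
    v = sum((x - m) ** 2 for x in ens) / len(ens)
    return m, v

def emos_predict(ens, a, b, c, d):
    """Bias correction (a, b act on the ensemble mean) plus dispersion
    correction (c, d act on the ensemble variance)."""
    m, v = ensemble_stats(ens)
    return a + b * m, c + d * v     # predictive mean and variance

# A biased, under-dispersive ensemble: truth is near 10.0, members sit at 12.
ens = [11.8, 12.0, 12.1, 12.2]
mean, var = emos_predict(ens, a=-2.0, b=1.0, c=0.5, d=1.0)
```

The additive variance term c is what widens an over-confident ensemble; in the regionalization described above it is coefficients like a, b, c, d that would be kriged to ungauged locations.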

  13. A robust momentum management and attitude control system for the space station

    NASA Technical Reports Server (NTRS)

    Speyer, J. L.; Rhee, Ihnseok

    1991-01-01

    A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.

  14. Enhancement of CFD validation exercise along the roof profile of a low-rise building

    NASA Astrophysics Data System (ADS)

    Deraman, S. N. C.; Majid, T. A.; Zaini, S. S.; Yahya, W. N. W.; Abdullah, J.; Ismail, M. A.

    2018-04-01

    The aim of this study is to enhance the validation of a CFD exercise along the roof profile of a low-rise building. An isolated gabled-roof house having a 26.6° roof pitch was simulated to obtain the pressure coefficients around the house. Validation of CFD analysis against experimental data requires many input parameters. This study performed CFD simulation based on the data from a previous study; where the input parameters were not clearly stated, new input parameters were established from the open literature. The numerical simulations were performed in FLUENT 14.0 by applying the Computational Fluid Dynamics (CFD) approach based on the steady RANS equations together with the RNG k-ɛ model. The CFD results were then analysed using quantitative tests (statistical analysis) and compared with the CFD results from the previous study. The statistical analysis results from the ANOVA test and error measures showed that the CFD results from the current study produced good agreement and exhibited the smallest error compared to the previous study. All the input data used in this study can be extended to other types of CFD simulation involving wind flow over an isolated single-storey house.

  15. About influence of input rate random part of nonstationary queue system on statistical estimates of its macroscopic indicators

    NASA Astrophysics Data System (ADS)

    Korelin, Ivan A.; Porshnev, Sergey V.

    2018-05-01

    A model of the non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ = λdet(t) + λrnd(t), where λdet(t) is a deterministic function of time and λrnd(t) is a random function. The parameters of the functions λdet(t) and λrnd(t) were identified on the basis of statistical information on visitor flows collected from various Russian football stadiums. Statistical modeling of the NQS is carried out and the average statistical dependences are obtained: the length of the queue of requests waiting for service, the average waiting time for service, and the number of visitors admitted to the stadium over time. It is shown that these dependences can be characterized by the following parameters: the number of visitors who have entered by the start of the match; the time required to serve all incoming visitors; the maximum value; and the argument at which the studied dependence reaches its maximum value. The dependences of these parameters on the energy ratio of the deterministic and random components of the input rate are investigated.
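A discrete-time simulation of such a system can be sketched as follows (illustrative only, not the authors' model: the Gaussian-shaped deterministic ramp, the uniform random rate component, and the service capacity are all invented):

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method for Poisson sampling; fine for the modest rates here."""
    limit, k, p = math.exp(-lam), 0, rng.random()
    while p > limit:
        k += 1
        p *= rng.random()
    return k

def simulate(steps=200, capacity=40, seed=7):
    """Queue with arrival rate lambda(t) = lambda_det(t) + lambda_rnd(t)."""
    rng = random.Random(seed)
    queue, served, max_queue = 0, 0, 0
    for t in range(steps):
        # deterministic bell-shaped ramp peaking mid-simulation
        lam_det = 50.0 * math.exp(-((t - steps / 2) ** 2)
                                  / (2 * (steps / 6) ** 2))
        lam = max(0.0, lam_det + rng.uniform(-5.0, 5.0))  # add random part
        queue += poisson(rng, lam)      # arrivals this step
        s = min(queue, capacity)        # gates admit up to `capacity` per step
        queue -= s
        served += s
        max_queue = max(max_queue, queue)
    return served, queue, max_queue

served, left, peak = simulate()
```

Averaging such runs over many seeds is what produces the "average statistical dependences" (queue length, waiting time, admitted visitors) the abstract describes.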

  16. On the fusion of tuning parameters of fuzzy rules and neural network

    NASA Astrophysics Data System (ADS)

    Mamuda, Mamman; Sathasivam, Saratha

    2017-08-01

    Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of many problems. Fuzzy logic offers a simple way to reach a definite conclusion from vague, ambiguous, imprecise, noisy, or missing input information. Conventional learning algorithms for tuning the parameters of fuzzy rules from training input-output data usually end in a weak firing state, which weakens the fuzzy rule and makes it unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm for tuning the parameters of the fuzzy rules together with a radial basis function neural network (RBFNN), trained on input-output data using the gradient descent method. The new learning algorithm addresses the problem of weak firing that arises with the conventional method. We illustrate the efficiency of the new learning algorithm by means of numerical examples, simulated in MATLAB R2014(a). The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.
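A generic gradient-descent fit of radial basis functions to input-output data can be sketched as follows (a hypothetical illustration, not the authors' algorithm: only the output weights are trained, the centres and widths are fixed, and the training data are invented):

```python
import math

def rbf(x, c, w=0.5):
    """Gaussian radial basis function centred at c with width w."""
    return math.exp(-((x - c) / w) ** 2)

centres = [0.0, 0.5, 1.0]
weights = [0.0, 0.0, 0.0]

def predict(x):
    return sum(wi * rbf(x, ci) for wi, ci in zip(weights, centres))

# toy input-output pairs describing a tent-shaped target function
data = [(0.0, 0.0), (0.25, 0.5), (0.5, 1.0), (0.75, 0.5), (1.0, 0.0)]
lr = 0.2
for _ in range(500):
    for x, y in data:
        err = predict(x) - y
        for i, ci in enumerate(centres):
            weights[i] -= lr * err * rbf(x, ci)  # dJ/dw_i for J = 0.5*err^2

mse = sum((predict(x) - y) ** 2 for x, y in data) / len(data)
```

Tuning fuzzy membership parameters by gradient descent follows the same pattern, with the rule firing strengths playing the role of the basis activations.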

  17. Estimation of the longitudinal and lateral-directional aerodynamic parameters from flight data for the NASA F/A-18 HARV

    NASA Technical Reports Server (NTRS)

    Napolitano, Marcello R.

    1996-01-01

    This progress report presents the results of an investigation focused on parameter identification for the NASA F/A-18 HARV. This aircraft was used in the high alpha research program at the NASA Dryden Flight Research Center. In this study the longitudinal and lateral-directional stability derivatives are estimated from flight data using the Maximum Likelihood method coupled with a Newton-Raphson minimization technique. The objective is to estimate an aerodynamic model describing the aircraft dynamics over a range of angle of attack from 5 deg to 60 deg. The mathematical model is built using the traditional static and dynamic derivative buildup. Flight data used in this analysis were from a variety of maneuvers. The longitudinal maneuvers included large amplitude multiple doublets, optimal inputs, frequency sweeps, and pilot pitch stick inputs. The lateral-directional maneuvers consisted of large amplitude multiple doublets, optimal inputs and pilot stick and rudder inputs. The parameter estimation code pEst, developed at NASA Dryden, was used in this investigation. Results of the estimation process from alpha = 5 deg to alpha = 60 deg are presented and discussed.
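The estimation idea, Maximum Likelihood coupled with Newton-Raphson minimization, can be sketched on a toy one-parameter model (not the pEst code: the model y = theta*u and all data are invented, and for Gaussian measurement noise the negative log-likelihood reduces to a least-squares cost):

```python
def newton_raphson_fit(u, y, theta0=0.0, iters=10):
    """Fit one 'derivative' theta in y = theta*u by Newton-Raphson on
    J(theta) = 0.5 * sum (y_i - theta*u_i)^2."""
    theta = theta0
    for _ in range(iters):
        grad = sum(-(yi - theta * ui) * ui for ui, yi in zip(u, y))
        hess = sum(ui * ui for ui in u)
        theta -= grad / hess    # Newton step (exact here: J is quadratic)
    return theta

u = [1.0, 2.0, 3.0, 4.0]        # input time history (e.g. control doublet)
y = [2.1, 3.9, 6.2, 7.8]        # noisy response, roughly y = 2*u
theta_hat = newton_raphson_fit(u, y)
```

In the real problem theta is a vector of stability and control derivatives, the model is the aircraft equations of motion, and the gradient and Hessian come from output sensitivities, but the iteration has this same shape.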

  18. Effect of input signal and filter parameters on patterning effect in a semiconductor optical amplifier

    NASA Astrophysics Data System (ADS)

    Hussain, Kamal; Pratap Singh, Satya; Kumar Datta, Prasanta

    2013-11-01

    A numerical investigation is presented to show the dependence of the patterning effect (PE) of an amplified signal in a bulk semiconductor optical amplifier (SOA) and an optical-bandpass-filter-based amplifier on various input signal and filter parameters, considering both the cases of including and excluding intraband effects in the SOA model. The simulation shows that the variation of PE with input energy has a characteristic nature which is similar for both cases. However, the variation of PE with pulse width is quite different for the two cases, PE being independent of the pulse width when intraband effects are neglected in the model. We find a simple relationship between the PE and the signal pulse width. Using a simple treatment, we study the effect of amplified spontaneous emission (ASE) on PE and find that the ASE has almost no effect on the PE in the range of energy considered here. The optimum filter parameters are determined to obtain an acceptable extinction ratio greater than 10 dB and a PE less than 1 dB for the amplified signal over a wide range of input signal energy and bit rate.

  19. Robust momentum management and attitude control system for the Space Station

    NASA Technical Reports Server (NTRS)

    Rhee, Ihnseok; Speyer, Jason L.

    1992-01-01

    A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.

  20. Adenosine 2A receptor occupancy by tozadenant and preladenant in rhesus monkeys.

    PubMed

    Barret, Olivier; Hannestad, Jonas; Alagille, David; Vala, Christine; Tavares, Adriana; Papin, Caroline; Morley, Thomas; Fowles, Krista; Lee, Hsiaoju; Seibyl, John; Tytgat, Dominique; Laruelle, Marc; Tamagnan, Gilles

    2014-10-01

    Motor symptoms in Parkinson disease (PD) are caused by a loss of dopamine input from the substantia nigra to the striatum. Blockade of adenosine 2A (A(2A)) receptors facilitates dopamine D(2) receptor function. In phase 2 clinical trials, A(2A) antagonists (istradefylline, preladenant, and tozadenant) improved motor function in PD. We developed a new A(2A) PET radiotracer, (18)F-MNI-444, and used it to investigate the relationship between plasma levels and A(2A) occupancy by preladenant and tozadenant in nonhuman primates (NHP). A series of 20 PET experiments was conducted in 5 adult rhesus macaques. PET data were analyzed with both plasma-input (Logan graphical analysis) and reference-region-based (simplified reference tissue model and noninvasive Logan graphical analysis) methods. Whole-body PET images were acquired for radiation dosimetry estimates. Human pharmacokinetic parameters for tozadenant and preladenant were used to predict A(2A) occupancy in humans, based on median effective concentration (EC(50)) values estimated from the NHP PET measurements. (18)F-MNI-444 regional uptake was consistent with A(2A) receptor distribution in the brain. Selectivity was demonstrated by dose-dependent blocking by tozadenant and preladenant. The specific-to-nonspecific ratio was superior to that of other A(2A) PET radiotracers. Pharmacokinetic modeling predicted that tozadenant and preladenant may have different profiles of A(2A) receptor occupancy in humans. (18)F-MNI-444 appears to be a better PET radiotracer for A(2A) imaging than currently available radiotracers. Assuming that EC(50) in humans is similar to that in NHP, it appears that tozadenant will provide a more sustained A(2A) receptor occupancy than preladenant in humans at clinically tested doses. © 2014 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  1. Gas Gun Model and Comparison to Experimental Performance of Pipe Guns Operating with Light Propellant Gases and Large Cryogenic Pellets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J. R.; Carmichael, J. R.; Gebhart, T. E.

    Injection of multiple large (~10 to 30 mm diameter) shattered pellets into ITER plasmas is presently part of the scheme planned to mitigate the deleterious effects of disruptions on the vessel components. To help in the design and optimize performance of the pellet injectors for this application, a model referred to as “the gas gun simulator” has been developed and benchmarked against experimental data. The computer code simulator is a Java program that models the gas-dynamics characteristics of a single-stage gas gun. Following a stepwise approach, the code utilizes a variety of input parameters to incrementally simulate and analyze the dynamics of the gun as the projectile is launched down the barrel. Using input data, the model can calculate gun performance based on physical characteristics, such as propellant-gas and fast-valve properties, barrel geometry, and pellet mass. Although the model is fundamentally generic, the present version is configured to accommodate cryogenic pellets composed of H2, D2, Ne, Ar, and mixtures of them and light propellant gases (H2, D2, and He). The pellets are solidified in situ in pipe guns that consist of stainless steel tubes and fast-acting valves that provide the propellant gas for pellet acceleration (to speeds ~200 to 700 m/s). The pellet speed is the key parameter in determining the response time of a shattered pellet system to a plasma disruption event. The calculated speeds from the code simulations of experiments were typically in excellent agreement with the measured values. With the gas gun simulator validated for many test shots and over a wide range of physical and operating parameters, it is a valuable tool for optimization of the injector design, including the fast valve design (orifice size and volume) for any operating pressure (~40 bar expected for the ITER application) and barrel length for any pellet size (mass, diameter, and length). Key design parameters and proposed values for the pellet injectors for the ITER disruption mitigation systems are discussed.
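The stepwise gun calculation can be sketched as follows (a greatly simplified illustration, not the Java simulator: an ideal propellant gas expands adiabatically behind the pellet, friction and valve dynamics are ignored, and all numbers are merely plausible for the ~40 bar, ~200 to 700 m/s regime quoted above):

```python
import math

def muzzle_speed(p0=40e5, v0=1e-4, gamma=1.4, area=3e-4,
                 mass=2e-3, barrel=1.5, dx=1e-3):
    """March the pellet down the barrel in steps dx; the breech pressure
    follows p * V^gamma = const as the gas volume behind the pellet grows."""
    x, v = 0.0, 0.0
    while x < barrel:
        vol = v0 + area * x
        p = p0 * (v0 / vol) ** gamma      # adiabatic expansion
        a = p * area / mass               # F = p * A
        v = math.sqrt(v * v + 2 * a * dx) # kinematics over one step
        x += dx
    return v

speed = muzzle_speed()   # m/s at the muzzle
```

A real simulator adds the valve orifice flow, gas-column inertia, and pellet friction, which is why the expansion-only estimate is an upper bound on speed.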

  2. Gas Gun Model and Comparison to Experimental Performance of Pipe Guns Operating with Light Propellant Gases and Large Cryogenic Pellets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combs, S. K.; Reed, J. R.; Lyttle, M. S.

    2016-01-01

    Injection of multiple large (~10 to 30 mm diameter) shattered pellets into ITER plasmas is presently part of the scheme planned to mitigate the deleterious effects of disruptions on the vessel components. To help in the design and optimize performance of the pellet injectors for this application, a model referred to as “the gas gun simulator” has been developed and benchmarked against experimental data. The computer code simulator is a Java program that models the gas-dynamics characteristics of a single-stage gas gun. Following a stepwise approach, the code utilizes a variety of input parameters to incrementally simulate and analyze the dynamics of the gun as the projectile is launched down the barrel. Using input data, the model can calculate gun performance based on physical characteristics, such as propellant-gas and fast-valve properties, barrel geometry, and pellet mass. Although the model is fundamentally generic, the present version is configured to accommodate cryogenic pellets composed of H2, D2, Ne, Ar, and mixtures of them and light propellant gases (H2, D2, and He). The pellets are solidified in situ in pipe guns that consist of stainless steel tubes and fast-acting valves that provide the propellant gas for pellet acceleration (to speeds ~200 to 700 m/s). The pellet speed is the key parameter in determining the response time of a shattered pellet system to a plasma disruption event. The calculated speeds from the code simulations of experiments were typically in excellent agreement with the measured values. With the gas gun simulator validated for many test shots and over a wide range of physical and operating parameters, it is a valuable tool for optimization of the injector design, including the fast valve design (orifice size and volume) for any operating pressure (~40 bar expected for the ITER application) and barrel length for any pellet size (mass, diameter, and length). Key design parameters and proposed values for the pellet injectors for the ITER disruption mitigation systems are discussed.

  3. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling.

    PubMed

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S

    2016-08-01

    Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (K(trans)) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive.
This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging biomarkers for cross-platform, multicenter applications. Data from our limited study cohort show that kio correlates with Gleason scores, suggesting that it may be a useful biomarker for prostate cancer disease progression monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.
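The scaling behaviour of K(trans) versus kep can be checked numerically with the standard Tofts convolution (an illustrative sketch with invented parameter values, not the study's analysis code): Ct(t) = Ktrans * integral of Cp(tau) * exp(-kep*(t - tau)) dtau, so an AIF scaled by a factor s is exactly compensated by Ktrans/s while kep is untouched.

```python
import math

def tofts_ct(cp, ktrans, kep, dt):
    """Discrete Tofts convolution of an AIF cp with an exponential kernel."""
    ct = []
    for i in range(len(cp)):
        s = sum(cp[j] * math.exp(-kep * (i - j) * dt) for j in range(i + 1))
        ct.append(ktrans * s * dt)
    return ct

dt = 0.1
cp = [math.exp(-0.5 * t * dt) * t * dt for t in range(50)]  # toy AIF shape
ct1 = tofts_ct(cp, ktrans=0.25, kep=0.6, dt=dt)

cp_scaled = [2.0 * c for c in cp]                           # AIF doubled...
ct2 = tofts_ct(cp_scaled, ktrans=0.125, kep=0.6, dt=dt)     # ...Ktrans halved
```

The two tissue curves are identical, which is the mechanism behind the abstract's finding that Ktrans absorbs AIF scaling errors while kep does not.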

  4. Adaptive envelope protection methods for aircraft

    NASA Astrophysics Data System (ADS)

    Unnikrishnan, Suraj

    Carefree handling refers to the ability of a pilot to operate an aircraft without the need to continuously monitor aircraft operating limits. At the heart of all carefree handling or maneuvering systems, also referred to as envelope protection systems, are algorithms and methods for predicting future limit violations. Recently, the envelope protection methods that have gained the most acceptance translate limit proximity information into its equivalent in the control channel. Existing envelope protection algorithms either use a very small prediction horizon or are static methods with no capability to adapt to changes in system configuration. Adaptive approaches that maximize the prediction horizon, such as dynamic trim, are only applicable to steady-state-response-critical limit parameters. In this thesis, a new adaptive envelope protection method is developed that is applicable to both steady-state- and transient-response-critical limit parameters. The approach is based upon devising the most aggressive optimal control profile to the limit boundary and using it to compute control limits. Pilot-in-the-loop evaluations of the proposed approach are conducted at the Georgia Tech Carefree Maneuver lab for transient longitudinal hub moment limit protection. Carefree maneuvering is the dual of carefree handling in the realm of autonomous Uninhabited Aerial Vehicles (UAVs). Designing a flight control system to fully and effectively utilize the operational flight envelope is very difficult. With the increasing role of and demands for extreme maneuverability, there is a need to develop envelope protection methods for autonomous UAVs. In this thesis, a full-authority automatic envelope protection method is proposed for limit protection in UAVs. The approach uses an adaptive estimate of the limit parameter dynamics and finite-time-horizon predictions to detect impending limit boundary violations.
Limit violations are prevented by treating the limit boundary as an obstacle and by correcting nominal control/command inputs to track a limit parameter safe-response profile near the limit boundary. The method is evaluated using software-in-the-loop and flight evaluations on the Georgia Tech unmanned rotorcraft platform---GTMax. The thesis also develops and evaluates an extension for calculating control margins based on restricting limit parameter response aggressiveness near the limit boundary.

  5. Sediment residence times constrained by uranium-series isotopes: A critical appraisal of the comminution approach

    NASA Astrophysics Data System (ADS)

    Handley, Heather K.; Turner, Simon; Afonso, Juan C.; Dosseto, Anthony; Cohen, Tim

    2013-02-01

    Quantifying the rates of landscape evolution in response to climate change is inhibited by the difficulty of dating the formation of continental detrital sediments. We present uranium isotope data for Cooper Creek palaeochannel sediments from the Lake Eyre Basin in semi-arid South Australia in order to attempt to determine the formation ages and hence residence times of the sediments. To calculate the amount of recoil loss of 234U, a key input parameter used in the comminution approach, we use two suggested methods (weighted geometric and surface area measurement with an incorporated fractal correction) and typical assumed input parameter values found in the literature. The calculated recoil loss factors and comminution ages are highly dependent on the method of recoil loss factor determination used and the chosen assumptions. To appraise the ramifications of the assumptions inherent in the comminution age approach and determine individual and combined comminution age uncertainties associated with each variable, Monte Carlo simulations were conducted for a synthetic sediment sample. Using a reasonable associated uncertainty for each input factor and including variations in the source rock and measured (234U/238U) ratios, the total combined uncertainty on comminution age in our simulation (for both methods of recoil loss factor estimation) can amount to ±220-280 ka. The modelling shows that small changes in assumed input values translate into large effects on absolute comminution age. To improve the accuracy of the technique and provide meaningful absolute comminution ages, much tighter constraints are required on the assumptions for input factors such as the fraction of α-recoil lost 234Th and the initial (234U/238U) ratio of the source material.
In order to be able to directly compare calculated comminution ages produced by different research groups, the standardisation of pre-treatment procedures, recoil loss factor estimation and assumed input parameter values is required. We suggest a set of input parameter values for such a purpose. Additional considerations for calculating comminution ages of sediments deposited within large, semi-arid drainage basins are discussed.
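The Monte Carlo propagation described above can be sketched as follows (illustrative values, not the authors' data: the comminution age is obtained by inverting the standard ingrowth relation A(t) = (1 - f) + (A0 - (1 - f)) * exp(-lambda234 * t), where A is the (234U/238U) activity ratio and f is the alpha-recoil loss factor, and each input is perturbed with an assumed uncertainty):

```python
import math
import random

LAMBDA_234 = 2.826e-6   # decay constant of 234U, 1/yr

def comminution_age(a_meas, a0, f):
    """Invert the 234U/238U ingrowth equation for the time since comminution."""
    return -math.log((a_meas - (1 - f)) / (a0 - (1 - f))) / LAMBDA_234

def monte_carlo(n=20000, seed=3):
    rng = random.Random(seed)
    ages = []
    for _ in range(n):
        a_meas = rng.gauss(0.95, 0.005)  # measured activity ratio +/- 1 sigma
        a0 = rng.gauss(1.00, 0.005)      # assumed source-rock ratio
        f = rng.gauss(0.10, 0.02)        # recoil loss factor, 20% uncertainty
        try:
            ages.append(comminution_age(a_meas, a0, f))
        except ValueError:               # unphysical draw, skip it
            pass
    m = sum(ages) / len(ages)
    sd = math.sqrt(sum((a - m) ** 2 for a in ages) / len(ages))
    return m, sd

mean_age, sd_age = monte_carlo()
```

Even with these modest input uncertainties the spread of ages is a large fraction of the age itself, which is the behaviour the abstract reports.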

  6. Automated forward mechanical modeling of wrinkle ridges on Mars

    NASA Astrophysics Data System (ADS)

    Nahm, Amanda; Peterson, Samuel

    2016-04-01

    One of the main goals of the InSight mission to Mars is to understand the internal structure of Mars [1], in part through passive seismology. Understanding the shallow surface structure of the landing site is critical to the robust interpretation of recorded seismic signals. Faults, such as the wrinkle ridges abundant in the proposed landing site in Elysium Planitia, can be used to determine the subsurface structure of the regions they deform. Here, we test a new automated method for modeling of the topography of a wrinkle ridge (WR) in Elysium Planitia, allowing for faster and more robust determination of subsurface fault geometry for interpretation of the local subsurface structure. We perform forward mechanical modeling of fault-related topography [e.g., 2, 3], utilizing the modeling program Coulomb [4, 5] to model surface displacements induced by blind thrust faulting. Fault lengths are difficult to determine for WR; we initially assume a fault length of 30 km, but also test the effects of different fault lengths on model results. At present, we model the wrinkle ridge as a single blind thrust fault with a constant fault dip, though WR are likely to have more complicated fault geometry [e.g., 6-8]. Typically, the modeling is performed using the Coulomb GUI. This approach can be time-consuming, requiring user inputs to change model parameters and to calculate the associated displacements for each model, which limits the number of models and parameter space that can be tested. To reduce active user computation time, we have developed a method in which the Coulomb GUI is bypassed. The general modeling procedure remains unchanged, and a set of input files is generated before modeling with ranges of pre-defined parameter values. The displacement calculations are divided into two suites.
For Suite 1, a total of 3770 input files were generated in which the fault displacement (D), dip angle (δ), depth to upper fault tip (t), and depth to lower fault tip (B) were varied. A second set of input files was created (Suite 2) after the best-fit model from Suite 1 was determined, in which fault parameters were varied with a smaller range and incremental changes, resulting in a total of 28,080 input files. RMS values were calculated for each Coulomb model. RMS values for Suite 1 models were calculated over the entire profile and for a restricted x range; the latter shows a reduced RMS misfit by 1.2 m. The minimum RMS value for Suite 2 models decreases again by 0.2 m, resulting in an overall reduction of the RMS value of ~1.4 m (18%). Models with different fault lengths (15, 30, and 60 km) are visually indistinguishable. Values for δ, t, B, and RMS misfit are either the same or very similar for each best fit model. These results indicate that the subsurface structure can be reliably determined from forward mechanical modeling even with uncertainty in fault length. Future work will test this method with the more realistic WR fault geometry. References: [1] Banerdt et al. (2013), 44th LPSC, #1915. [2] Cohen (1999), Adv. Geophys., 41, 133-231. [3] Schultz and Lin (2001), JGR, 106, 16549-16566. [4] Lin and Stein (2004), JGR, 109, B02303, doi:10.1029/2003JB002607. [5] Toda et al. (2005), JGR, 103, 24543-24565. [6] Okubo and Schultz (2004), GSAB, 116, 597-605. [7] Watters (2004), Icarus, 171, 284-294. [8] Schultz (2000), JGR, 105, 12035-12052.
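The batch parameter sweep can be sketched as follows (schematic only: the elastic dislocation calculation performed by Coulomb is replaced here by an invented Gaussian-ridge stand-in, so only the generate-inputs / run / rank-by-RMS workflow is illustrated):

```python
import math

def forward(x, disp, dip_deg, depth):
    """Toy stand-in for a forward model: ridge amplitude grows with slip
    and dip; ridge width grows with the depth of the upper fault tip."""
    amp = disp * math.sin(math.radians(dip_deg))
    return [amp * math.exp(-(xi / (2.0 * depth)) ** 2) for xi in x]

def rms(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)) / len(a))

x = [i * 0.5 - 10.0 for i in range(41)]          # profile positions, km
observed = forward(x, disp=120.0, dip_deg=30.0, depth=2.0)  # synthetic "data"

# sweep pre-defined ranges of fault displacement D, dip, and tip depth t,
# keeping the combination with the lowest RMS misfit to the profile
best = None
for disp in (80.0, 100.0, 120.0):
    for dip in (20.0, 30.0, 40.0):
        for depth in (1.0, 2.0, 3.0):
            misfit = rms(forward(x, disp, dip, depth), observed)
            if best is None or misfit < best[0]:
                best = (misfit, disp, dip, depth)
```

In the real workflow each (D, δ, t, B) combination corresponds to one generated Coulomb input file, and the two-suite refinement simply reruns this loop with a narrower, finer grid around the Suite 1 minimum.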

  7. RF digital-to-analog converter

    DOEpatents

    Conway, P.H.; Yu, D.U.L.

    1995-02-28

    A digital-to-analog converter is disclosed for producing an RF output signal proportional to a digital input word of N bits from an RF reference input, N being an integer greater than or equal to 2. The converter comprises a plurality of power splitters, power combiners, and a plurality of mixers or RF switches connected in a predetermined configuration. 18 figs.
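
    The patent realizes the conversion in RF hardware (splitters, combiners, mixers or switches). Purely as a numeric idealization of the claimed behavior, output amplitude proportional to the N-bit input word scaled by the RF reference, one might write:

    ```python
    def rf_dac_output(word, n_bits, ref_amplitude=1.0):
        """Ideal transfer function of an N-bit DAC: output amplitude
        proportional to the digital input word, scaled by the RF
        reference amplitude.  (Numeric idealization only; the patent
        implements this with analog RF components.)"""
        if not (0 <= word < 2 ** n_bits):
            raise ValueError("word out of range for n_bits")
        return ref_amplitude * word / (2 ** n_bits - 1)
    ```

    For example, a 4-bit converter maps word 0 to zero output and word 15 to the full reference amplitude.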

  8. 75 FR 72956 - Approval and Promulgation of Air Quality Implementation Plans; Indiana; Clean Air Interstate Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-29

    ... ``biomass'' was added so that cogeneration units could exclude biomass energy input in efficiency... the cogeneration unit definition to exclude energy input from biomass. At 326 IAC 24-1-2 (8), 326 IAC... ``Biomass'' in Reference to ``Cogeneration Unit'' H. The State's Complete CAIR Regulations I. NO X Reduction...

  9. The Processing Behaviours of Adult Second Language Learners and Their Relationship to Second Language Proficiency.

    ERIC Educational Resources Information Center

    Mangubhai, Francis

    1991-01-01

    Investigated the behaviors for processing language input demonstrated by five adults beginning to learn Hindi as a second language through the Total Physical Response method. The study suggests that, when adult learners are provided with comprehensive input, they engage in a variety of behaviors to extract meaning from it. (73 references) (GLR)

  10. Slowed Speech Input Has a Differential Impact on On-Line and Off-Line Processing in Children's Comprehension of Pronouns

    ERIC Educational Resources Information Center

    Love, Tracy; Walenski, Matthew; Swinney, David

    2009-01-01

    The central question underlying this study revolves around how children process co-reference relationships--such as those evidenced by pronouns ("him") and reflexives ("himself")--and how a slowed rate of speech input may critically affect this process. Previous studies of child language processing have demonstrated that typical language…

  11. Testing the robustness of management decisions to uncertainty: Everglades restoration scenarios.

    PubMed

    Fuller, Michael M; Gross, Louis J; Duke-Sylvester, Scott M; Palmer, Mark

    2008-04-01

    To effectively manage large natural reserves, resource managers must prepare for future contingencies while balancing the often conflicting priorities of different stakeholders. To deal with these issues, managers routinely employ models to project the response of ecosystems to different scenarios that represent alternative management plans or environmental forecasts. Scenario analysis is often used to rank such alternatives to aid the decision making process. However, model projections are subject to uncertainty in assumptions about model structure, parameter values, environmental inputs, and subcomponent interactions. We introduce an approach for testing the robustness of model-based management decisions to the uncertainty inherent in complex ecological models and their inputs. We use relative assessment to quantify the relative impacts of uncertainty on scenario ranking. To illustrate our approach we consider uncertainty in parameter values and uncertainty in input data, with specific examples drawn from the Florida Everglades restoration project. Our examples focus on two alternative 30-year hydrologic management plans that were ranked according to their overall impacts on wildlife habitat potential. We tested the assumption that varying the parameter settings and inputs of habitat index models does not change the rank order of the hydrologic plans. We compared the average projected index of habitat potential for four endemic species and two wading-bird guilds to rank the plans, accounting for variations in parameter settings and water level inputs associated with hypothetical future climates. Indices of habitat potential were based on projections from spatially explicit models that are closely tied to hydrology. For the American alligator, the rank order of the hydrologic plans was unaffected by substantial variation in model parameters. 
By contrast, simulated major shifts in water levels led to reversals in the ranks of the hydrologic plans in 24.1-30.6% of the projections for the wading bird guilds and several individual species. By exposing the differential effects of uncertainty, relative assessment can help resource managers assess the robustness of scenario choice in model-based policy decisions.
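
    The rank-reversal test at the core of relative assessment can be sketched as follows. This is a drastic simplification for illustration: uncertainty is reduced to additive Gaussian noise on hypothetical nominal habitat-index scores, whereas the study propagates parameter and water-level variations through spatially explicit models.

    ```python
    import random

    def rank_reversal_fraction(index_a, index_b, n_trials=10000, noise=0.0, rng=None):
        """Fraction of trials in which plan B outranks plan A once input
        uncertainty (modeled here as additive Gaussian noise) is applied.
        index_a, index_b: nominal habitat-index scores, index_a > index_b."""
        rng = rng or random.Random(0)  # fixed seed for reproducibility
        reversals = 0
        for _ in range(n_trials):
            a = index_a + rng.gauss(0, noise)
            b = index_b + rng.gauss(0, noise)
            if b > a:
                reversals += 1
        return reversals / n_trials
    ```

    With zero noise the nominal ranking always holds; as the noise grows relative to the score difference, the reversal fraction climbs toward 0.5, signaling that the scenario choice is not robust to that source of uncertainty.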

  12. Acceptable Tolerances for Matching Icing Similarity Parameters in Scaling Applications

    NASA Technical Reports Server (NTRS)

    Anderson, David N.

    2003-01-01

    This paper reviews past work and presents new data to evaluate how changes in similarity parameters affect ice shapes and how closely scale values of the parameters should match reference values. Experimental ice shapes presented are from tests by various researchers in the NASA Glenn Icing Research Tunnel. The parameters reviewed are the modified inertia parameter (which determines the stagnation collection efficiency), accumulation parameter, freezing fraction, Reynolds number, and Weber number. It was demonstrated that a good match of scale and reference ice shapes could sometimes be achieved even when values of the modified inertia parameter did not match precisely. Consequently, there can be some flexibility in setting scale droplet size, which is the test condition determined from the modified inertia parameter. A recommended guideline is that the modified inertia parameter be chosen so that the scale stagnation collection efficiency is within 10 percent of the reference value. The scale accumulation parameter and freezing fraction should also be within 10 percent of their reference values. The Weber number based on droplet size and water properties appears to be a more important scaling parameter than one based on model size and air properties. Scale values of both the Reynolds and Weber numbers need to be in the range of 60 to 160 percent of the corresponding reference values. The effects of variations in other similarity parameters have yet to be established.
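
    The recommended tolerances can be encoded as a simple checklist; the parameter names and dictionary layout below are assumptions for illustration, not the paper's notation.

    ```python
    def within_tolerance(scale, reference, low_frac, high_frac):
        """True if scale/reference falls inside [low_frac, high_frac]."""
        return low_frac <= scale / reference <= high_frac

    # Tolerances suggested by the paper: within 10% of reference for the
    # stagnation collection efficiency, accumulation parameter, and
    # freezing fraction; 60-160% of reference for Reynolds and Weber numbers.
    TOLERANCES = {
        "collection_efficiency": (0.90, 1.10),
        "accumulation_parameter": (0.90, 1.10),
        "freezing_fraction": (0.90, 1.10),
        "reynolds_number": (0.60, 1.60),
        "weber_number": (0.60, 1.60),
    }

    def check_scaling(scale_params, reference_params):
        """Return the parameters whose scale values fall outside the
        recommended match to the reference values."""
        return [name for name, (lo, hi) in TOLERANCES.items()
                if name in scale_params
                and not within_tolerance(scale_params[name], reference_params[name], lo, hi)]
    ```

    A scale test condition passes when `check_scaling` returns an empty list.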

  13. Guidance for Selecting Input Parameters in Modeling the Environmental Fate and Transport of Pesticides

    EPA Pesticide Factsheets

    Guidance to select and prepare input values for OPP's aquatic exposure models. Intended to improve the consistency in modeling the fate of pesticides in the environment and quality of OPP's aquatic risk assessments.

  14. Optimization of process parameters in drilling of fibre hybrid composite using Taguchi and grey relational analysis

    NASA Astrophysics Data System (ADS)

    Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.

    2017-03-01

    Nowadays quality plays a vital role in all products. Hence, developments in manufacturing processes focus on fabricating composites with high dimensional accuracy at low manufacturing cost. In this work, an investigation of machining parameters was performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by varying three machining input parameters: drill bit diameter, spindle speed, and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters, and analysis of variance is used to find the significance of individual parameters. Simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
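
    The grey relational analysis step (normalize each response, compute grey relational coefficients against the ideal sequence, average into a grade per run) can be sketched as below. The response data are hypothetical; the paper's actual L16 measurements are not reproduced here.

    ```python
    def normalize(values, larger_is_better):
        """Map responses to [0, 1]; assumes the values are not all equal."""
        lo, hi = min(values), max(values)
        if larger_is_better:
            return [(v - lo) / (hi - lo) for v in values]
        return [(hi - v) / (hi - lo) for v in values]

    def grey_relational_coeffs(normalized, zeta=0.5):
        """Grey relational coefficients against the ideal sequence (all 1s);
        zeta is the conventional distinguishing coefficient."""
        deltas = [1.0 - v for v in normalized]
        dmin, dmax = min(deltas), max(deltas)
        return [(dmin + zeta * dmax) / (d + zeta * dmax) for d in deltas]

    def grey_relational_grade(responses):
        """responses: list of (values_per_run, larger_is_better) pairs.
        Returns one grade per experimental run (higher = better compromise)."""
        coeff_sets = [grey_relational_coeffs(normalize(v, big)) for v, big in responses]
        n_runs = len(responses[0][0])
        return [sum(c[i] for c in coeff_sets) / len(coeff_sets) for i in range(n_runs)]

    # Hypothetical runs: material removal rate (larger is better) and
    # surface roughness Ra (smaller is better).
    mrr = [120.0, 150.0, 100.0, 140.0]
    ra  = [2.2, 1.9, 1.5, 2.0]
    grades = grey_relational_grade([(mrr, True), (ra, False)])
    best_run = grades.index(max(grades))
    ```

    Ranking runs by grade converts the two-response problem into a single-objective one, which is the point of the grey relational step.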

  15. Engine control techniques to account for fuel effects

    DOEpatents

    Kumar, Shankar; Frazier, Timothy R.; Stanton, Donald W.; Xu, Yi; Bunting, Bruce G.; Wolf, Leslie R.

    2014-08-26

    A technique for engine control to account for fuel effects including providing an internal combustion engine and a controller to regulate operation thereof, the engine being operable to combust a fuel to produce an exhaust gas; establishing a plurality of fuel property inputs; establishing a plurality of engine performance inputs; generating engine control information as a function of the fuel property inputs and the engine performance inputs; and accessing the engine control information with the controller to regulate at least one engine operating parameter.
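
    As a hypothetical sketch of the claimed pipeline (fuel-property and engine-performance inputs combined into one operating parameter), one might imagine a rule like the following. The function name, inputs, and the linear correction are invented for illustration and are not from the patent.

    ```python
    def engine_control_setpoint(fuel_props, perf_inputs):
        """Toy illustration: combine a fuel-property input (cetane number)
        with an engine-performance input (a base injection timing) into
        one regulated operating parameter, an injection-timing value in
        crank-angle degrees.  Hypothetical rule, not the patented method."""
        base_timing = perf_inputs["base_timing_deg"]
        # Advance timing for low-cetane (slow-igniting) fuel: assumed gain.
        cetane_correction = 0.1 * (50.0 - fuel_props["cetane_number"])
        return base_timing + cetane_correction
    ```

    The controller would evaluate such a map each cycle and apply the result to the regulated parameter.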

  16. Automated Structural Optimization System (ASTROS). Volume 1. Theoretical Manual

    DTIC Science & Technology

    1988-12-01

    corresponding frequency list are given by Equation C-9. The second set of parameters is the frequency list used in solving Equation C-3 to obtain the response... vector {u(ω)}. This frequency list is: ω = 2πf₀, 2πf₁, 2πf₂, ..., 2πfₙ (C-20). The frequency lists ω̂ and ω are not necessarily equal. While setting... alternative methods are used to input the frequency list ω. For the first method, the frequency list ω is input via two parameters: Δf (C-21

  17. Flight test maneuvers for closed loop lateral-directional modeling of the F-18 High Alpha Research Vehicle (HARV) using forebody strakes

    NASA Technical Reports Server (NTRS)

    Morelli, E. A.

    1996-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for lateral linear model parameter estimation at 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Strake (S) model and Strake/Thrust Vectoring (STV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specification of the time/amplitude points defining each input are included, along with plots of the input time histories.
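
    The OBES inputs are specified as time/amplitude points defining piecewise-constant (square-wave) commands. A sketch of sampling such an input is below; the breakpoints form a hypothetical doublet, not one of the actual maneuver tables.

    ```python
    def square_wave_history(breakpoints, dt, duration):
        """Sample a piecewise-constant input defined by sorted
        (time, amplitude) breakpoints; each amplitude is held until the
        next breakpoint time.  Stands in for the time/amplitude tables
        in the maneuver specifications."""
        samples = []
        n = int(round(duration / dt))
        for i in range(n + 1):
            t = i * dt
            amp = 0.0
            for bt, ba in breakpoints:
                if t >= bt:
                    amp = ba     # hold the most recent breakpoint amplitude
                else:
                    break
            samples.append(amp)
        return samples

    # Hypothetical doublet: +1 deg for 2 s, -1 deg for 2 s, then neutral.
    hist = square_wave_history([(0.0, 1.0), (2.0, -1.0), (4.0, 0.0)], dt=0.5, duration=6.0)
    ```

    Sampled this way, the command history can be fed directly to a simulation or logged against the flight data for parameter estimation.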

  18. The Application of a Statistical Analysis Software Package to Explosive Testing

    DTIC Science & Technology

    1993-12-01

    deviation not corrected for test interval. M refers to equation 2; s refers to equation 3; G refers to section 2.1... Appendix I: Program Structured Diagrams 37; Appendix II: Bruceton Reference Graphs 39; Appendix III: Input and Output Data File Format 44; Appendix IV... directly from Graph II, which has been digitised and incorporated into the program. If M falls below 0.3, the curve that is closest to diff(eq. 3a) is

  19. Decentralized model reference adaptive control of large flexible structures

    NASA Technical Reports Server (NTRS)

    Lee, Fu-Ming; Fong, I-Kong; Lin, Yu-Hwan

    1988-01-01

    A decentralized model reference adaptive control (DMRAC) method is developed for large flexible structures (LFS). The development follows that of a centralized model reference adaptive control method for LFS that has been shown to be feasible. The proposed method is illustrated using a simply supported beam with collocated actuators and sensors. Results show that DMRAC can achieve either output regulation or output tracking with adequate convergence, provided the reference model inputs and their time derivatives are integrable, bounded, and approach zero as t approaches infinity.
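
    As a toy illustration of the model-reference-adaptive idea only, here is a scalar MIT-rule example: the plant is driven to follow a reference model by adapting a feedforward gain. This is not the paper's decentralized formulation for flexible structures; all gains and dynamics are invented.

    ```python
    def mrac_simulate(a=-1.0, b=1.0, am=-2.0, bm=2.0, gamma=2.0, dt=0.001, steps=20000):
        """Scalar MRAC sketch (MIT rule):
        plant   x' = a*x + b*u,   reference model  xm' = am*xm + bm*r,
        control u = theta*r,      adaptation       theta' = -gamma*e*xm*r,
        with tracking error e = x - xm and constant reference r = 1.
        Forward-Euler integration; hypothetical scalar system only."""
        x = xm = theta = 0.0
        r = 1.0
        for _ in range(steps):
            e = x - xm
            u = theta * r
            dx = a * x + b * u
            dxm = am * xm + bm * r
            dtheta = -gamma * e * xm * r   # MIT-rule gradient update
            x += dt * dx
            xm += dt * dxm
            theta += dt * dtheta
        return x, xm, theta
    ```

    With these values the plant and model share unit DC gain, so the adapted gain settles near 1 and the tracking error decays; the decentralized version in the paper applies analogous update laws locally at each actuator/sensor pair.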

  20. Air Land Sea Bulletin. Issue No. 2010-2, May 2010

    DTIC Science & Technology

    2010-05-01

    progresses, flight leads should reference each J3.5 via bullseye and/or TN (i.e., TN 12345 would be passed as “JACKAL 12345”) to convey the picture to... call to reference data link display; may be followed by amplifying info. JACKAL: Surveillance NPG of Link 16/TADIL J. Reference surveillance track... numbers with the term “JACKAL <TN>”; normally used in reference to land tracks (3.5). COPY: Directive call to input a hooked symbol on the TAD into the
