Analysis of the Impact of Realistic Wind Size Parameter on the Delft3D Model
NASA Astrophysics Data System (ADS)
Washington, M. H.; Kumar, S.
2017-12-01
The wind size parameter, which is the distance from the center of the storm to the location of the maximum winds, is currently a constant in the Delft3D model. As a result, the Delft3D model's predictions of water levels during a storm surge are inaccurate compared to the observed data. To address this issue, an algorithm to calculate a realistic wind size parameter for a given hurricane was designed and implemented using the observed water-level data for Hurricane Matthew. A performance evaluation experiment was conducted to compare the accuracy of the model's water-level predictions using the realistic wind size input parameter against the default constant wind size parameter for Hurricane Matthew, with water-level data observed from October 4th, 2016 to October 9th, 2016 by the National Oceanic and Atmospheric Administration (NOAA) as a baseline. The experimental results demonstrate that the Delft3D water-level output for the realistic wind size parameter matches the NOAA reference water-level data more accurately than the output for the default constant wind size parameter.
A statistical survey of heat input parameters into the cusp thermosphere
NASA Astrophysics Data System (ADS)
Moen, J. I.; Skjaeveland, A.; Carlson, H. C.
2017-12-01
Based on three winters of observational data, we present those ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction, in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements, up to doublings. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wiper scans of 1000 x 500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to input data driving thermosphere models, enabling removal of previous twofold drag errors.
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
Realistic Subsurface Anomaly Discrimination Using Electromagnetic Induction and an SVM Classifier
2010-01-01
proposed by Pasion and Oldenburg [25]: Q(t) = k t^(-β) e^(-γt). (10) Various combinations of these fitting parameters can be used as inputs to a classifier... Pasion-Oldenburg parameters k, β, and γ for each anomaly by a direct nonlinear least-squares fit of (10) and by linear (pseudo)inversion of its... combinations of the Pasion-Oldenburg parameters. Combining k and γ yields results similar to those of k and R, as Figure 7 and Table 2 show. Figure 8 and...
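As a concrete illustration of how the Pasion-Oldenburg parameters can be recovered from a measured decay curve, the sketch below fits Q(t) = k t^(-β) e^(-γt) by direct nonlinear least squares. The synthetic data, noise level, and starting values are illustrative assumptions, not values from the report.

```python
import numpy as np
from scipy.optimize import curve_fit

def pasion_oldenburg(t, k, beta, gamma):
    """Pasion-Oldenburg EMI decay model Q(t) = k * t**(-beta) * exp(-gamma*t)."""
    return k * t**(-beta) * np.exp(-gamma * t)

# Synthetic decay data (illustrative parameter values, not from the report)
rng = np.random.default_rng(0)
t = np.linspace(0.1e-3, 25e-3, 40)                 # time gates in seconds
q_true = pasion_oldenburg(t, k=2.0, beta=0.8, gamma=150.0)
q_meas = q_true * (1.0 + 0.02 * rng.standard_normal(t.size))   # 2% noise

# Direct nonlinear least-squares fit of equation (10)
p0 = (1.0, 0.5, 100.0)                             # initial guess (k, beta, gamma)
(k_fit, beta_fit, gamma_fit), _ = curve_fit(pasion_oldenburg, t, q_meas, p0=p0)
print(f"k = {k_fit:.3f}, beta = {beta_fit:.3f}, gamma = {gamma_fit:.1f}")
```

The fitted triple (k, β, γ), or pairwise combinations of it, then serves as the feature vector fed to the SVM classifier.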
Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.
Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew
2014-12-26
Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.
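For context, the contrast at issue can be written compactly. A sketch in the common Borukhov-Andelman lattice-gas form of the size modification, for a symmetric z:z electrolyte with bulk concentration c_b and ion size a; the abstract does not specify which SMPBE formulation was evaluated:

```latex
% Classical PBE for a symmetric z:z electrolyte:
\nabla \cdot \left[ \varepsilon(\mathbf{r}) \nabla \psi \right]
  = 2 z e\, c_b \sinh\!\left( \frac{z e \psi}{k_B T} \right)

% Size-modified PBE: the Boltzmann factor is damped by a steric
% packing fraction \phi_0 = 2 a^3 c_b, capping the ion densities
\nabla \cdot \left[ \varepsilon(\mathbf{r}) \nabla \psi \right]
  = \frac{2 z e\, c_b \sinh\!\left( \frac{z e \psi}{k_B T} \right)}
         {1 - \phi_0 + \phi_0 \cosh\!\left( \frac{z e \psi}{k_B T} \right)}
```

As a → 0, φ₀ → 0 and the SMPBE reduces to the PBE; the denominator prevents counterion concentrations from exceeding close packing near charged lipid head groups, the same saturation effect a Stern layer mimics.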
Evanescent-wave bonding between optical waveguides.
Povinelli, Michelle L; Loncar, Marko; Ibanescu, Mihai; Smythe, Elizabeth J; Johnson, Steven G; Capasso, Federico; Joannopoulos, John D
2005-11-15
Forces arising from overlap between the guided waves of parallel, microphotonic waveguides are calculated. Both attractive and repulsive forces, determined by the choice of relative input phase, are found. Using realistic parameters for a silicon-on-insulator material system, we estimate that the forces are large enough to cause observable displacements. Our results illustrate the potential for a broader class of optically tunable microphotonic devices and microstructured artificial materials.
Digital Simulation Of Precise Sensor Degradations Including Non-Linearities And Shift Variance
NASA Astrophysics Data System (ADS)
Kornfeld, Gertrude H.
1987-09-01
Realistic atmospheric and Forward Looking Infrared Radiometer (FLIR) degradations were digitally simulated. Inputs to the routine are environmental observables and the FLIR specifications. It was possible to achieve realism in the thermal domain within acceptable computer time and random access memory (RAM) requirements because a shift-variant recursive convolution algorithm that accurately describes thermal properties was devised, and because each picture element (pixel) carries radiative temperature, a materials parameter, and range and altitude information. The computer generation steps start with the image synthesis of an undegraded scene. Atmospheric and sensor degradation follow. The final result is a realistic representation of an image seen on the display of a specific FLIR.
Cellular and Network Mechanisms Underlying Information Processing in a Simple Sensory System
NASA Technical Reports Server (NTRS)
Jacobs, Gwen; Henze, Chris; Biegel, Bryan (Technical Monitor)
2002-01-01
Realistic, biophysically based compartmental models were constructed of several primary sensory interneurons in the cricket cercal sensory system. A dynamic atlas of the afferent input to these cells was used to set spatio-temporal parameters for the simulated stimulus-dependent synaptic inputs. We examined the roles of dendritic morphology, passive membrane properties, and active conductances in the frequency tuning of the neurons. The sensitivity of narrow-band low-pass interneurons could be explained entirely by the electrotonic structure of the dendritic arbors and the dynamic sensitivity of the spike initiation zone (SIZ). The dynamic characteristics of interneurons with higher frequency sensitivity required models with voltage-dependent dendritic conductances.
Comment on "Symmetry and structure of quantized vortices in superfluid ³He-B"
NASA Astrophysics Data System (ADS)
Sauls, J. A.; Serene, J. W.
1985-10-01
Recent theoretical attempts to explain the observed vortex-core phase transition in superfluid ³He-B yield conflicting results. Variational calculations by Fetter and Theodorakis, based on realistic strong-coupling parameters, yield a phase transition in the Ginzburg-Landau region that is in qualitative agreement with the observed phase diagram. Numerically precise calculations by Salomaa and Volovik (SV), based on the Brinkman-Serene-Anderson (BSA) parameters, do not yield a phase transition between axially symmetric vortices. The ambiguity of these results is in part due to the large differences between the β parameters, which are inputs to the vortex free-energy functional. We comment on the relative merits of the β parameters based on recent improvements in the quasiparticle scattering amplitude and the BSA parameters used by SV.
Confirmation of saturation equilibrium conditions in crater populations
NASA Technical Reports Server (NTRS)
Hartmann, William K.; Gaskell, Robert W.
1993-01-01
We have continued work on realistic numerical models of cratered surfaces, as first reported at last year's LPSC. We confirm the saturation equilibrium level with a new, independent test. One of us has developed a realistic computer simulation of a cratered surface. The model starts with a smooth surface or fractal topography, and adds primary craters according to the cumulative power law with exponent -1.83, as observed on lunar maria and Martian plains. Each crater has an ejecta blanket whose volume equals that of the crater, feathering out to a distance of 4 crater radii. We use the model to test the levels of saturation equilibrium reached in naturally occurring systems, by increasing crater density and observing its dependence on various parameters. In particular, we have tested whether these artificial systems reach the level found by Hartmann on heavily cratered planetary surfaces, hypothesized to be the natural saturation equilibrium level. This year's work gives the first results for a crater population that includes secondaries. Our model 'Gaskell-4' (September, 1992) includes primaries as described above, but also a secondary population, defined by exponent -4. We allowed the largest secondary from each primary to be 0.10 times the size of the primary. These parameters will be varied to test their effects in future models. The model gives realistic images of a cratered surface, although it appears richer in secondaries than real surfaces are. Running the model toward saturation gives interesting results for the diameter distribution. Our most heavily cratered surface had the input number of primary craters reach about 0.65 times the hypothesized saturation equilibrium level, while the input number rises to more than 100 times that level for secondaries below 1.4 km in size.
NASA Astrophysics Data System (ADS)
Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.
2009-12-01
We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level density formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from ⁵¹V to ²³⁹Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.
On the usage of ultrasound computational models for decision making under ambiguity
NASA Astrophysics Data System (ADS)
Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron
2018-04-01
Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantify the differences using maximum amplitude and power spectrum density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.
Building Better Planet Populations for EXOSIMS
NASA Astrophysics Data System (ADS)
Garrett, Daniel; Savransky, Dmitry
2018-01-01
The Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS) software package simulates ensembles of space-based direct imaging surveys to provide a variety of science and engineering yield distributions for proposed mission designs. These mission simulations rely heavily on assumed distributions of planetary population parameters including semi-major axis, planetary radius, eccentricity, albedo, and orbital orientation to provide heuristics for target selection and to simulate planetary systems for detection and characterization. The distributions are encoded in PlanetPopulation modules within EXOSIMS which are selected by the user in the input JSON script when a simulation is run. The earliest PlanetPopulation modules available in EXOSIMS are based on planet population models in which the planetary parameters are treated as independent of one another. While independent parameters allow quick computation of heuristics and quick sampling of simulated planetary systems, results from planet-finding surveys have shown that many parameters (e.g., semi-major axis/orbital period and planetary radius) are not independent. We present new PlanetPopulation modules for EXOSIMS which are built on models derived from planet-finding survey results, in which semi-major axis and planetary radius are not independent, and which provide methods for sampling their joint distribution. These new modules enhance the ability of EXOSIMS to simulate realistic planetary systems and give more realistic science yield distributions.
Padhi, Radhakant; Bhardhwaj, Jayender R
2009-06-01
An adaptive drug delivery design is presented in this paper using neural networks for effective treatment of infectious diseases. The generic mathematical model used describes the coupled evolution of the concentrations of pathogens, plasma cells, and antibodies, and a numerical value that indicates the relative condition of a damaged organ due to the disease, under the influence of external drugs. From a system-theoretic point of view, the external drugs can be interpreted as control inputs, which can be designed based on control-theoretic concepts. In this study, assuming a set of nominal parameters in the mathematical model, first a nonlinear controller (drug administration) is designed based on the principle of dynamic inversion. This nominal drug administration plan was found to be effective in curing "nominal model patients" (patients whose immunological dynamics conform exactly to the mathematical model used for the control design). However, it was found to be ineffective in curing "realistic model patients" (patients whose immunological dynamics may have off-nominal parameter values and possibly unwanted inputs) in general. Hence, to make the drug delivery dosage design more effective for realistic model patients, a model-following adaptive control design is carried out next with the help of neural networks that are trained online. Simulation studies indicate that the adaptive controller proposed in this paper holds promise in killing the invading pathogens and healing the damaged organ even in the presence of parameter uncertainties and continued pathogen attack. Note that the computational requirements for computing the control are very minimal and all associated computations (including the training of neural networks) can be carried out online. However, it is assumed that the required diagnosis process can be carried out at a sufficiently fast rate so that all the states are available for control computation.
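The dynamic inversion step of the nominal design can be stated compactly. A minimal sketch, assuming a control-affine form of the immunological model; the paper's exact state equations and gain choices are not given in the abstract:

```latex
% Control-affine disease dynamics: x collects the pathogen, plasma-cell,
% and antibody concentrations and the organ damage; u is the drug input
\dot{x} = f(x) + g(x)\, u
% Dynamic inversion: impose stable first-order error dynamics
% \dot{e} + K e = 0 on the tracking error e = x - x_d, then solve for u
u = \left[ g(x) \right]^{-1} \left( \dot{x}_d - f(x) - K \, (x - x_d) \right)
```

The adaptive element then uses the online-trained neural network to estimate the mismatch between the nominal f(x) and the realistic patient's dynamics, and corrects u accordingly.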
El-Houjeiri, Hassan M; Brandt, Adam R; Duffy, James E
2013-06-04
Existing transportation fuel cycle emissions models are either general and calculate nonspecific values of greenhouse gas (GHG) emissions from crude oil production, or are not available for public review and auditing. We have developed the Oil Production Greenhouse Gas Emissions Estimator (OPGEE) to provide open-source, transparent, rigorous GHG assessments for use in scientific assessment, regulatory processes, and analysis of GHG mitigation options by producers. OPGEE uses petroleum engineering fundamentals to model emissions from oil and gas production operations. We introduce OPGEE and explain the methods and assumptions used in its construction. We run OPGEE on a small set of fictional oil fields and explore model sensitivity to selected input parameters. Results show that upstream emissions from petroleum production operations can vary from 3 gCO2/MJ to over 30 gCO2/MJ using realistic ranges of input parameters. Significant drivers of emissions variation are steam injection rates, water handling requirements, and rates of flaring of associated gas.
Electronic Structure and Transport in Magnetic Multilayers
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2008-02-18
ORNL assisted Seagate Recording Heads Operations in the development of CIP spin valves for application as read sensors in hard disk drives. Personnel at ORNL were W. H. Butler and Xiaoguang Zhang. Dr. Olle Heinonen from Seagate RHO also participated. ORNL provided codes and materials parameters that were used by Seagate to model current-in-plane giant magnetoresistance (CIP GMR) in their heads. The objectives were to: (1) develop a linearized Boltzmann transport code for describing CIP GMR based on realistic models of the band structure and interfaces in materials in CIP spin valves in disk drive heads; (2) calculate the materials parameters needed as inputs to the Boltzmann code; and (3) transfer the technology to Seagate Recording Heads.
NASA Astrophysics Data System (ADS)
Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Franzen, P.; Fantz, U.; Minea, T.
2014-02-01
Decreasing the co-extracted electron current while simultaneously keeping the negative ion (NI) current sufficiently high is a crucial issue in the development of the plasma source system for the ITER Neutral Beam Injector. To support finding the best extraction conditions, the 3D Particle-in-Cell Monte Carlo Collision electrostatic code ONIX (Orsay Negative Ion eXtraction) has been developed. Close collaboration with experiments and other numerical models allows performing realistic simulations with relevant input parameters: plasma properties, geometry of the extraction aperture, full 3D magnetic field map, etc. For the first time, ONIX has been benchmarked against the commercial positive ion tracing code KOBRA3D. A very good agreement in terms of the meniscus position and depth has been found. Simulations of NI extraction with different e/NI ratios in the bulk plasma show the high relevance of direct extraction of surface-produced NIs for obtaining extracted NI currents comparable to the experimental results from the BATMAN testbed.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.
2016-12-01
Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(x_t | x_{t-1}). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.
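To make the data-driven step concrete, the sketch below estimates an error distribution from a training period with a kernel density estimator. It is a minimal stand-in, assuming a Gaussian-kernel KDE and a scalar state; the paper's actual conditioning on hidden states and its non-linear optimization step are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Training-period pairs: model forecasts vs. observations (illustrative data)
rng = np.random.default_rng(1)
forecasts = rng.gamma(shape=2.0, scale=5.0, size=500)    # e.g., streamflow forecasts
errors = 0.1 * forecasts * rng.standard_normal(500)      # heteroscedastic errors
observations = forecasts + errors

# Non-parametric estimate of the forecast-error density (no Gaussian assumption)
error_kde = gaussian_kde(observations - forecasts)

# Sample the estimated error distribution, e.g. to perturb a DA ensemble
ensemble = 20.0 + error_kde.resample(100).ravel()
print(f"ensemble mean = {ensemble.mean():.2f}, std = {ensemble.std():.2f}")
```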
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
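The LN cascade itself is compact enough to state in a few lines. A minimal sketch, with an assumed exponential filter and softplus non-linearity standing in for the analytically derived forms in the paper:

```python
import numpy as np

def ln_cascade(stimulus, dt, tau=0.02, gain=40.0, threshold=1.0):
    """Linear-nonlinear cascade: convolve the input with a temporal filter,
    then apply a static non-linearity to obtain a firing rate (Hz).
    The exponential filter and softplus non-linearity are illustrative
    choices, not the parameter-free forms derived in the paper."""
    t = np.arange(0, 5 * tau, dt)
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()                       # unit-area linear filter
    filtered = np.convolve(stimulus, kernel)[:len(stimulus)]
    return gain * np.log1p(np.exp(filtered - threshold))   # softplus rate

dt = 1e-3
stim = np.sin(2 * np.pi * 5 * np.arange(0, 1, dt)) + 0.5   # 5 Hz test input
rate = ln_cascade(stim, dt)
print(f"peak rate ~ {rate.max():.1f} Hz")
```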
Monte Carlo generators for studies of the 3D structure of the nucleon
Avakian, Harut; D'Alesio, U.; Murgia, F.
2015-01-23
In this study, extraction of transverse momentum and space distributions of partons from measurements of spin and azimuthal asymmetries requires development of a self-consistent analysis framework, accounting for evolution effects and allowing control of systematic uncertainties due to variations of input parameters and models. Development of realistic Monte Carlo generators, accounting for TMD evolution effects, spin-orbit correlations, and quark-gluon correlations, will be crucial for future studies of quark-gluon dynamics in general and the 3D structure of the nucleon in particular.
Soliton propagation in tapered silicon core fibers.
Peacock, Anna C
2010-11-01
Numerical simulations are used to investigate soliton-like propagation in tapered silicon core optical fibers. The simulations are based on a realistic tapered structure with nanoscale core dimensions and a decreasing anomalous dispersion profile to compensate for the effects of linear and nonlinear loss. An intensity misfit parameter is used to establish the optimum taper dimensions that preserve the pulse shape while reducing temporal broadening. Soliton formation from Gaussian input pulses is also observed, providing further evidence of the potential for tapered silicon fibers to find use in a range of signal processing applications.
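Simulations of this kind typically integrate a nonlinear Schrödinger equation with a z-dependent dispersion profile using the split-step Fourier method. A minimal sketch under assumed, illustrative parameter values; the paper's actual taper profile, loss terms (e.g., two-photon absorption), and pulse parameters are not given in the abstract:

```python
import numpy as np

# Time grid and propagation steps (illustrative values)
n_t, t_span = 2**12, 10e-12
t = np.linspace(-t_span / 2, t_span / 2, n_t, endpoint=False)
dt = t[1] - t[0]
omega = 2 * np.pi * np.fft.fftfreq(n_t, dt)

length, n_z = 0.02, 2000                 # 2 cm taper in 10 um steps
dz = length / n_z
gamma_nl = 100.0                         # nonlinear parameter, 1/(W m) (assumed)
alpha = 100.0                            # linear loss, 1/m (assumed)

def beta2(z):
    """Anomalous GVD decreasing in magnitude along the taper (assumed profile)."""
    return -1.0e-24 * (1.0 - 0.5 * z / length)   # s^2/m

# Fundamental-soliton input: N^2 = gamma*P0*T0^2/|beta2| = 1 at the input
p0, t0 = 1.0, 100e-15
a = np.sqrt(p0) / np.cosh(t / t0)

for i in range(n_z):
    z = i * dz
    # Linear half-step (dispersion + loss) applied in the frequency domain
    half = np.exp((1j * beta2(z) / 2 * omega**2 - alpha / 2) * dz / 2)
    a = np.fft.ifft(np.fft.fft(a) * half)
    # Full nonlinear step (self-phase modulation) in the time domain
    a *= np.exp(1j * gamma_nl * np.abs(a)**2 * dz)
    # Second linear half-step completes the symmetric split
    a = np.fft.ifft(np.fft.fft(a) * half)

print(f"output peak power: {np.abs(a).max()**2:.3f} W")
```

The decreasing |beta2(z)| mimics the taper's role of rebalancing dispersion against the nonlinearity as loss drains the pulse energy.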
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Maoyi; Hou, Zhangshuan; Leung, Lai-Yung R.
2013-12-01
With the emergence of earth system models as important tools for understanding and predicting climate change and implications to mitigation and adaptation, it has become increasingly important to assess the fidelity of the land component within earth system models to capture realistic hydrological processes and their response to the changing climate and quantify the associated uncertainties. This study investigates the sensitivity of runoff simulations to major hydrologic parameters in version 4 of the Community Land Model (CLM4) by integrating CLM4 with a stochastic exploratory sensitivity analysis framework at 20 selected watersheds from the Model Parameter Estimation Experiment (MOPEX) spanning a wide range of climate and site conditions. We found that for runoff simulations, the most significant parameters are those related to the subsurface runoff parameterizations. Soil texture related parameters and surface runoff parameters are of secondary significance. Moreover, climate and soil conditions play important roles in the parameter sensitivity. In general, site conditions within water-limited hydrologic regimes and with finer soil texture result in stronger sensitivity of output variables, such as runoff and its surface and subsurface components, to the input parameters in CLM4. This study demonstrated the feasibility of parameter inversion for CLM4 using streamflow observations to improve runoff simulations. By ranking the significance of the input parameters, we showed that the parameter set dimensionality could be reduced for CLM4 parameter calibration under different hydrologic and climatic regimes so that the inverse problem is less ill-posed.
Economic design of control charts considering process shift distributions
NASA Astrophysics Data System (ADS)
Vommi, Vijayababu; Kasarapu, Rukmini V.
2014-09-01
Process shift is an important input parameter in the economic design of control charts. Earlier control chart designs assumed that a given assignable cause produces a constant shift in the process mean. This assumption has been criticized by many researchers, since it may not be realistic for an assignable cause to produce the same shift every time it occurs. To overcome this difficulty, in the present work, a distribution for the shift parameter has been considered instead of a single value for a given assignable cause. Duncan's economic design model for the X-bar chart has been extended to incorporate the distribution for the process shift parameter. It is proposed to minimize the total expected loss-cost to obtain the control chart parameters. Further, three types of process shift distributions, namely positively skewed, uniform, and negatively skewed, are considered, and situations where it is appropriate to use the suggested methodology are recommended.
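The core change, replacing a fixed shift with a distribution, amounts to minimizing an expected loss-cost. A minimal sketch using numerical quadrature and a deliberately simplified stand-in for Duncan's hourly loss-cost function; the paper's full cost model has many more terms, and the distribution below is an assumed example of a positively skewed shift:

```python
import numpy as np
from scipy import integrate, optimize, stats

def loss_cost(delta, n, k, h):
    """Simplified stand-in for Duncan's hourly loss-cost at shift size delta:
    per-hour sampling cost plus a penalty growing with the chart's run length
    at that shift. Illustrative only, not Duncan's full model."""
    n, h = max(n, 1.0), max(h, 0.05)        # guard against optimizer excursions
    power = (1.0 - stats.norm.cdf(k - delta * np.sqrt(n))
             + stats.norm.cdf(-k - delta * np.sqrt(n)))
    arl_out = 1.0 / max(power, 1e-9)        # samples until the shift is detected
    sampling = (0.5 + 0.1 * n) / h          # fixed + per-unit sampling cost
    penalty = 100.0 * arl_out * h           # cost of running shifted until detection
    return sampling + penalty

def expected_loss_cost(params, shift_dist):
    """Expected loss-cost: integrate the loss-cost over the shift distribution."""
    n, k, h = params
    val, _ = integrate.quad(
        lambda d: loss_cost(d, n, k, h) * shift_dist.pdf(d), 0.01, 8.0)
    return val

shift_dist = stats.gamma(a=2.0, scale=0.75)   # positively skewed shifts (assumed)
# Sample size n is relaxed to a continuous variable for this sketch
res = optimize.minimize(expected_loss_cost, x0=[5.0, 3.0, 1.0],
                        args=(shift_dist,), method="Nelder-Mead")
n_opt, k_opt, h_opt = res.x
print(f"n ~ {round(n_opt)}, k ~ {k_opt:.2f}, h ~ {h_opt:.2f} hours")
```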
Aumentado-Armstrong, Tristan; Metzen, Michael G; Sproule, Michael K J; Chacron, Maurice J
2015-10-01
Neurons that respond selectively but in an invariant manner to a given feature of natural stimuli have been observed across species and systems. Such responses emerge in higher brain areas, thereby suggesting that they occur by integrating afferent input. However, the mechanisms by which such integration occurs are poorly understood. Here we show that midbrain electrosensory neurons can respond selectively and in an invariant manner to heterogeneity in behaviorally relevant stimulus waveforms. Such invariant responses were not seen in hindbrain electrosensory neurons providing afferent input to these midbrain neurons, suggesting that response invariance results from nonlinear integration of such input. To test this hypothesis, we built a model based on the Hodgkin-Huxley formalism that received realistic afferent input. We found that multiple combinations of parameter values could give rise to invariant responses matching those seen experimentally. Our model thus shows that there are multiple solutions towards achieving invariant responses and reveals how subthreshold membrane conductances help promote robust and invariant firing in response to heterogeneous stimulus waveforms associated with behaviorally relevant stimuli. We discuss the implications of our findings for the electrosensory and other systems.
Application of the Probabilistic Dynamic Synthesis Method to the Analysis of a Realistic Structure
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis (PDS) method is a new technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. A previous work verified the feasibility of the PDS method on a simple seven-degree-of-freedom spring-mass system. In this paper, extensive issues involved in applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Optimal continuous variable quantum teleportation protocol for realistic settings
NASA Astrophysics Data System (ADS)
Luiz, F. S.; Rigolin, Gustavo
2015-03-01
We show the optimal setup that allows Alice to teleport coherent states |α⟩ to Bob with the greatest fidelity (efficiency) when one takes into account two realistic assumptions. The first is the fact that in any actual implementation of the continuous variable teleportation protocol (CVTP), Alice and Bob necessarily share non-maximally entangled states (two-mode finitely squeezed states). The second assumes that Alice's pool of possible coherent states to be teleported to Bob does not cover the whole complex plane (|α| < ∞). The optimal strategy is achieved by tuning three parameters in the original CVTP, namely, Alice's beam splitter transmittance and Bob's displacements in position and momentum implemented on the teleported state. These slight changes to the protocol are currently easy to implement and, as we show, give considerable gains in performance for a variety of possible pools of input states.
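For reference, in the standard (untuned) protocol with a two-mode squeezed resource of squeezing parameter r and unit gain, the teleportation fidelity for coherent states is the well-known benchmark below; the optimal protocol in the abstract improves on it by tuning the beam-splitter transmittance and Bob's displacement gains.

```latex
% Unit-gain CV teleportation of a coherent state using a two-mode
% squeezed vacuum resource with squeezing parameter r:
F = \frac{1}{1 + e^{-2r}}
% r -> infinity recovers F = 1; r = 0 (no entanglement) gives the
% classical benchmark F = 1/2.
```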
Image-Based Reverse Engineering and Visual Prototyping of Woven Cloth.
Schroder, Kai; Zinke, Arno; Klein, Reinhard
2015-02-01
Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how best to represent and capture cloth models, specifically when considering computer-aided design of cloth. Previous methods produce highly realistic images; however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation, and a weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.
Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis (PDS) method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven-degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved in applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Realistic Covariance Prediction for the Earth Science Constellation
NASA Technical Reports Server (NTRS)
Duncan, Matthew; Long, Anne
2006-01-01
Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
NASA Astrophysics Data System (ADS)
Dethlefsen, Frank; Tilmann Pfeiffer, Wolf; Schäfer, Dirk
2016-04-01
Numerical simulations of hydraulic, thermal, geomechanical, or geochemical (THMC) processes in the subsurface have been conducted for decades. Often, such simulations are commenced by applying a parameter set that is as realistic as possible. Then, a base scenario is calibrated on field observations. Finally, scenario simulations can be performed, for instance to forecast the system behavior after varying input data. In the context of subsurface energy and mass storage, however, such model calibrations based on field data are often not available, as these storage operations have not yet been carried out. Consequently, the numerical models rely solely on the parameter set initially selected, and uncertainties resulting from a lack of parameter values or process understanding may not be perceivable, let alone quantifiable. Therefore, conducting THMC simulations in the context of energy and mass storage deserves a particular review of the model parameterization and its input data, and such a review so far hardly exists to the required extent. Variability, or aleatory uncertainty, exists for geoscientific parameter values in general, and parameters for which numerous data points are available, such as aquifer permeabilities, may be described statistically, thereby exhibiting statistical uncertainty. In this case, sensitivity analyses can be conducted to quantify the uncertainty in the simulation resulting from varying such a parameter. For other parameters, the lack of data quantity and quality implies that the ongoing processes change fundamentally when such a parameter value is varied in numerical scenario simulations. As an example of such a scenario uncertainty, varying the capillary entry pressure, one of the multiphase flow parameters, can either allow or completely inhibit the penetration of an aquitard by gas. As a last example, the uncertainty of cap-rock fault permeabilities, and consequently of potential leakage rates of stored gases into shallow compartments, is regarded as recognized ignorance by the authors of this study, as no realistic approach exists to determine this parameter and values are best guesses only. In addition to these aleatory uncertainties, an equivalent classification is possible for rating epistemic uncertainties, which describe the degree of understanding of processes such as the geochemical and hydraulic effects following potential gas intrusions from deeper reservoirs into shallow aquifers. As an outcome of this grouping of uncertainties, prediction errors of scenario simulations can be calculated by sensitivity analyses if the uncertainties are identified as statistical. However, if scenario uncertainties exist, or if recognized ignorance has to be attested to a parameter or a process in question, the outcomes of the simulations depend mainly on the decisions of the modeler in choosing parameter values or in interpreting the occurring processes. In that case, the informative value of numerical simulations is limited by ambiguous simulation results, which cannot be refined without improving the geoscientific database through laboratory or field studies on a longer-term basis, so that the effects of subsurface use may be predicted realistically. This discussion, supplemented by a compilation of available geoscientific data for parameterizing such simulations, will be presented in this study.
Simplifying BRDF input data for optical signature modeling
NASA Astrophysics Data System (ADS)
Hallberg, Tomas; Pohl, Anna; Fagerström, Jan
2017-05-01
Scene simulations of optical signature properties using signature codes normally require input of various parameterized measurement data of surfaces and coatings in order to achieve realistic scene object features. Some of the most important parameters are used in the model of the Bidirectional Reflectance Distribution Function (BRDF) and are normally determined by surface reflectance and scattering measurements. Measurements of the spectral Directional Hemispherical Reflectance (DHR) at various incident angles can normally be performed in most spectroscopy labs, while measuring the BRDF is more complicated and may not be possible at all in many optical labs. We present a method to derive the necessary BRDF data for modeling software directly from DHR measurements, using the Sandford-Robertson BRDF model. The accuracy of the method is tested by modeling a test surface and comparing results obtained using estimated versus measured BRDF data as input to the model. These results show that using this method gives no significant loss in modeling accuracy.
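The underlying relation is that the DHR is the cosine-weighted hemispherical integral of the BRDF. The sketch below fits a BRDF's parameters to DHR measurements through that integral; a simple diffuse-plus-Gaussian-lobe model is used as a placeholder, since the Sandford-Robertson parameterization itself is more involved, and all measurement values are illustrative.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import least_squares

def brdf(theta_i, theta_r, rho_d, rho_s, width):
    """Placeholder BRDF: Lambertian term plus a Gaussian lobe around the
    mirror direction. NOT the Sandford-Robertson form; illustrative only."""
    return rho_d / np.pi + rho_s * np.exp(-((theta_r - theta_i) / width) ** 2)

def dhr(theta_i, params, n=256):
    """DHR(theta_i): cosine-weighted hemispherical integral of the BRDF,
    reduced to the polar integral under an assumed azimuthal symmetry."""
    theta_r = np.linspace(0.0, np.pi / 2, n)
    f = brdf(theta_i, theta_r, *params)
    return 2 * np.pi * trapezoid(f * np.cos(theta_r) * np.sin(theta_r), theta_r)

# DHR measured at several incidence angles (illustrative values)
angles = np.radians([10.0, 30.0, 50.0, 70.0])
dhr_meas = np.array([0.32, 0.33, 0.38, 0.49])

residual = lambda p: [dhr(a, p) - m for a, m in zip(angles, dhr_meas)]
fit = least_squares(residual, x0=[0.25, 0.5, 0.3],
                    bounds=([0.0, 0.0, 0.01], [1.0, 5.0, 2.0]))
print("rho_d, rho_s, width =", np.round(fit.x, 3))
```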
A gridded global description of the ionosphere and thermosphere for 1996 - 2000
NASA Astrophysics Data System (ADS)
Ridley, A.; Kihn, E.; Kroehl, H.
The modeling and simulation community has asked for a realistic representation of the near-Earth space environment covering a significant number of years to be used in scientific and engineering applications. The data, data management systems, assimilation techniques, physical models, and computer resources are now available to construct a realistic description of the ionosphere and thermosphere over a 5-year period. DMSP and NOAA POES satellite data and solar emissions were used to compute Hall and Pedersen conductances in the ionosphere. Interplanetary magnetic field measurements on the ACE satellite define average electrostatic potential patterns over the northern and southern Polar Regions. These conductances, electric field patterns, and ground-based magnetometer data were input to the Assimilative Mapping of Ionospheric Electrodynamics model to compute the distribution of electric fields and currents in the ionosphere. The Global Thermosphere Ionosphere Model (GITM) used the ionospheric electrodynamic parameters to compute the distribution of particles and fields in the ionosphere and thermosphere. GITM uses a general circulation approach to solve the fundamental equations. Model results offer a unique opportunity to assess the relative importance of different forcing terms under a variety of conditions as well as the accuracies of different estimates of ionospheric electrodynamic parameters.
NASA Technical Reports Server (NTRS)
Norbury, John W.
1992-01-01
Single nucleon removal in relativistic and intermediate energy nucleus-nucleus collisions is studied using a generalization of Weizsacker-Williams theory that treats each electromagnetic multipole separately. Calculations are presented for electric dipole and quadrupole excitations and incorporate a realistic minimum impact parameter, Coulomb recoil corrections, and the uncertainties in the input photonuclear data. Discrepancies are discussed. The maximum quadrupole effect to be observed in future experiments is estimated, and the charge dependence of the electromagnetic cross sections is analyzed down to energies as low as 100 MeV/nucleon.
The effect of a realistic thermal diffusivity on numerical model of a subducting slab
NASA Astrophysics Data System (ADS)
Maierova, P.; Steinle-Neumann, G.; Cadek, O.
2010-12-01
A number of numerical studies of subducting slabs assume simplified (constant or only depth-dependent) models of thermal conductivity. The available mineral physics data indicate, however, that thermal diffusivity is strongly temperature- and pressure-dependent and may also vary among different mantle materials. In the present study, we examine the influence of realistic thermal properties of mantle materials on the thermal state of the upper mantle and the dynamics of subducting slabs. On the basis of data published in the mineral physics literature, we compile analytical relationships that approximate the pressure and temperature dependence of thermal diffusivity for major mineral phases of the mantle (olivine, wadsleyite, ringwoodite, garnet, clinopyroxenes, stishovite and perovskite). We propose a simplified composition of the mineral assemblages predominating in the subducting slab and the surrounding mantle (pyrolite, mid-ocean ridge basalt, harzburgite) and estimate their thermal diffusivity using the Hashin-Shtrikman bounds. The resulting complex formula for the diffusivity of each aggregate is then approximated by a simpler analytical relationship that is used in our numerical model as an input parameter. For the numerical modeling we use the Elmer software (open source finite element software for multiphysical problems, see http://www.csc.fi/english/pages/elmer). We set up a 2D Cartesian thermo-mechanical steady-state model of a subducting slab. The model is partly kinematic, as the flow is driven by a boundary condition on velocity that is prescribed on the top of the subducting lithospheric plate. The rheology of the material is non-linear and is coupled with the thermal equation. Using the realistic relationships for the thermal diffusivity of mantle materials, we compute the thermal and flow fields for different input velocities and ages of the subducting plate, and we compare the results against models assuming a constant thermal diffusivity. The importance of a realistic description of thermal properties in models of subducted slabs is discussed.
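The Hashin-Shtrikman step can be illustrated directly. A minimal sketch for a two-phase aggregate, using the standard 3D bounds for an isotropic transport property (conductivity or diffusivity); the phase values and fractions below are placeholders, not the compiled mantle data:

```python
import numpy as np

def hashin_shtrikman_bounds(k1, f1, k2, f2):
    """Hashin-Shtrikman bounds on the effective isotropic transport
    coefficient of a two-phase composite (3D form). Requires k1 <= k2;
    f1 + f2 = 1 are the volume fractions of the phases."""
    lower = k1 + f2 / (1.0 / (k2 - k1) + f1 / (3.0 * k1))
    upper = k2 + f1 / (1.0 / (k1 - k2) + f2 / (3.0 * k2))
    return lower, upper

# Placeholder diffusivities (mm^2/s) for two phases of an assemblage
k_garnet, k_olivine = 0.8, 1.2          # illustrative values only
f_olivine = 0.6
lo, hi = hashin_shtrikman_bounds(k_garnet, 1 - f_olivine, k_olivine, f_olivine)
estimate = 0.5 * (lo + hi)              # e.g., midpoint as the model input value
print(f"HS bounds: [{lo:.3f}, {hi:.3f}], midpoint {estimate:.3f} mm^2/s")
```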
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error combined with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
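The dimensionality-reduction step lends itself to a compact sketch. The hedged Python example below uses PyWavelets to decompose a rainfall series and keep only low-order approximation coefficients, which an MCMC sampler such as DREAM(ZS) could then perturb; the wavelet family, decomposition level, and input file are assumptions for illustration, not the study's exact choices.

```python
# Reduce a rainfall series to low-order DWT approximation coefficients and
# map a coefficient vector back to a rainfall series. Wavelet and level are
# assumptions; "rainfall.txt" is a hypothetical input file.
import numpy as np
import pywt

rain = np.loadtxt("rainfall.txt")
coeffs = pywt.wavedec(rain, "db4", level=4)     # [cA4, cD4, cD3, cD2, cD1]
cA4 = coeffs[0]                                 # low-dimensional unknowns

# A sampler such as DREAM(ZS) would propose new values of cA4; the inverse
# transform turns each proposal back into a full rainfall series:
proposal = [cA4] + [np.zeros_like(c) for c in coeffs[1:]]
rain_est = np.clip(pywt.waverec(proposal, "db4"), 0.0, None)  # no negative rain
print("%d time steps represented by %d coefficients" % (rain.size, cA4.size))
```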
Parametric Studies for Scenario Earthquakes: Site Effects and Differential Motion
NASA Astrophysics Data System (ADS)
Panza, G. F.; Romanelli, F.
2001-12-01
In the presence of strong lateral heterogeneities, the generation of local surface waves and local resonance can give rise to a complicated pattern in the spatial ground-shaking scenario. For any object of the built environment with dimensions greater than the characteristic length of the ground motion, different parts of its foundations can experience severe non-synchronous seismic input. In order to perform an accurate estimate of the site effects, and of differential motion, in realistic geometries, it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models, allows the construction of damage scenarios that are out of reach of stochastic models. Synthetic signals, to be used as seismic input in a subsequent engineering analysis, e.g. for the design of earthquake-resistant structures or for the estimation of differential motion, can be produced at a very low cost/benefit ratio. We illustrate the work done in the framework of a large international cooperation following the guidelines of the UNESCO IUGS IGCP Project 414 "Realistic Modeling of Seismic Input for Megacities and Large Urban Areas" and show the very recent numerical experiments carried out within the EC project "Advanced methods for assessing the seismic vulnerability of existing motorway bridges" (VAB) to assess the importance of non-synchronous seismic excitation of long structures. http://www.ictp.trieste.it/www_users/sand/projects.html
Identifying Crucial Parameter Correlations Maintaining Bursting Activity
Doloc-Mihu, Anca; Calabrese, Ronald L.
2014-01-01
Recent experimental and computational studies suggest that linearly correlated sets of parameters (intrinsic and synaptic properties of neurons) allow central pattern-generating networks to produce and maintain their rhythmic activity regardless of changing internal and external conditions. To determine the role of correlated conductances in the robust maintenance of functional bursting activity, we used our existing database of half-center oscillator (HCO) model instances of the leech heartbeat CPG. From the database, we identified functional activity groups of burster (isolated neuron) and half-center oscillator model instances and realistic subgroups of each that showed burst characteristics (principally period and spike frequency) similar to the animal. To find linear correlations among the conductance parameters maintaining functional leech bursting activity, we applied Principal Component Analysis (PCA) to each of these four groups. PCA identified a set of three maximal conductances (leak current, Leak; a persistent K+ current, K2; and a persistent Na+ current, P) that correlate linearly for the two groups of burster instances but not for the HCO groups. Visualizations of HCO instances in a reduced space suggested that there might be non-linear relationships between these parameters for these instances. Experimental studies have shown that period is a key attribute influenced by modulatory inputs and temperature variations in heart interneurons. Thus, we explored the sensitivity of period to changes in the maximal conductances of Leak, K2, and P, and we found that for our realistic bursters the effect of these parameters on period could not be assessed, because bursting activity was not maintained when they were varied individually. PMID:24945358
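A minimal sketch of the PCA step, assuming the database can be exported as a matrix with one row per model instance and one column per maximal conductance (file name and column layout are hypothetical):

```python
# PCA on maximal conductances of model instances to expose linear parameter
# correlations; file name and column layout are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# rows = model instances, columns = maximal conductances (gLeak, gK2, gP, ...)
G = np.loadtxt("burster_conductances.txt")
Z = StandardScaler().fit_transform(G)            # unit variance per conductance

pca = PCA().fit(Z)
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))
# A dominant first component with large loadings on gLeak, gK2 and gP would
# reflect the linear correlation reported for the burster groups.
print("PC1 loadings:", np.round(pca.components_[0], 3))
```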
Balancing the stochastic description of uncertainties as a function of hydrologic model complexity
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.
2016-12-01
Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first-order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties can be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and on that of the probabilistic error description. The latter can include output uncertainty only, if the model is computationally expensive, or, with simpler models, it can account separately for different sources of error, such as those in the inputs and in the structure of the model.
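As a sketch of the first-order autoregressive output-error description mentioned above (generic notation, not the authors' exact formulation; the stationary initial-state term is omitted for brevity):

```python
# Log-likelihood of an AR(1) output-error model for residuals r_t = obs - sim.
import numpy as np

def ar1_loglik(residuals, phi, sigma):
    """phi: lag-1 autocorrelation; sigma: innovation standard deviation."""
    r = np.asarray(residuals)
    eta = r[1:] - phi * r[:-1]          # innovations eta_t = r_t - phi*r_{t-1}
    n = eta.size
    return (-0.5 * n * np.log(2.0 * np.pi * sigma**2)
            - 0.5 * np.sum(eta**2) / sigma**2)

# In a Bayesian setup, phi and sigma are inferred jointly with the
# hydrological model parameters (e.g., inside an MCMC sampler).
print(ar1_loglik(np.random.randn(100), phi=0.8, sigma=1.0))
```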
A mathematical method for quantifying in vivo mechanical behaviour of heel pad under dynamic load.
Naemi, Roozbeh; Chatzistergos, Panagiotis E; Chockalingam, Nachiappan
2016-03-01
Mechanical behaviour of the heel pad, as a shock-attenuating interface during a foot strike, determines the loading on the musculoskeletal system during walking. Mathematical models that describe the force-deformation relationship of the heel pad structure can determine the mechanical behaviour of the heel pad under load. Hence, the purpose of this study was to propose a method of quantifying the heel pad stress-strain relationship using force-deformation data from an indentation test. The energy input and energy returned densities were calculated by numerically integrating the area below the stress-strain curve during loading and unloading, respectively. Elastic energy and energy absorbed densities were calculated as the sum of and the difference between the energy input and energy returned densities, respectively. By fitting the energy function, derived from a nonlinear viscoelastic model, to the energy density-strain data, the elastic and viscous model parameters were quantified. The viscous and elastic exponent model parameters were significantly correlated with maximum strain, indicating the need to perform indentation tests at realistic maximum strains relevant to walking. The proposed method was shown to differentiate between the elastic and viscous components of the heel pad response to loading and to allow quantification of the corresponding stress-strain model parameters.
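The energy-density calculation reduces to numerical integration of the loading and unloading branches. A minimal Python sketch, assuming digitized stress-strain data in hypothetical text files:

```python
# Energy densities from a loading/unloading stress-strain loop by numerical
# integration; the two-column text files are hypothetical digitized data.
import numpy as np

strain_l, stress_l = np.loadtxt("loading.txt", unpack=True)
strain_u, stress_u = np.loadtxt("unloading.txt", unpack=True)

e_input = np.trapz(stress_l, strain_l)       # area under the loading curve
e_returned = np.trapz(stress_u, strain_u)    # area under the unloading curve
e_absorbed = e_input - e_returned            # hysteresis (viscous) loss
print("input %.4g, returned %.4g, absorbed %.4g (J/m^3 in SI units)"
      % (e_input, e_returned, e_absorbed))
```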
Multiscale stochastic simulations for tensile testing of nanotube-based macroscopic cables.
Pugno, Nicola M; Bosia, Federico; Carpinteri, Alberto
2008-08-01
Thousands of multiscale stochastic simulations are carried out in order to perform the first in-silico tensile tests of carbon nanotube (CNT)-based macroscopic cables with varying length. The longest treated cable is the space-elevator megacable, but more realistic shorter cables are also considered in this bottom-up investigation. Different sizes, shapes, and concentrations of defects are simulated, resulting in cable macrostrengths not larger than approximately 10 GPa, which is much smaller than the theoretical nanotube strength (approximately 100 GPa). No best-fit parameters are present in the multiscale simulations: the input at level 1 is directly estimated from nanotensile tests of CNTs, whereas its output is taken as the input for level 2, and so on up to level 5, corresponding to the megacable. Thus, five hierarchical levels are used to span lengths from that of a single nanotube (approximately 100 nm) to that of the space-elevator megacable (approximately 100 Mm).
Developing a realistic-prototyping road user cost evaluation tool for FDOT.
DOT National Transportation Integrated Search
2008-12-31
The objective of this project is to develop a realistic-prototyping RUC (Road User Cost) calculation tool that is user-friendly and utilizes a limited number of data inputs that are easy to use. The tool can help engineers to estimate RUC on specif...
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan
2016-09-01
Land surface models incorporate a large number of process descriptions, containing a multitude of parameters. These parameters are typically read from tabulated input files. Some of these parameters, however, are fixed numbers in the computer code, which hinders model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, which includes 42 out of the 71 standard parameters and 75 out of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its applicable standard parameters (i.e., Sobol' indexes above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance. A calibration of Noah-MP against either of these fluxes should therefore give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters. Calibrating, for example, only soil parameters hence limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
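A hedged sketch of the Sobol' analysis workflow using the SALib package (the package choice, parameter names, bounds, and the stand-in model are illustrative assumptions, not the study's setup):

```python
# Sobol' sensitivity analysis with the SALib package; parameter names,
# bounds (as multiplicative factors) and the stand-in model are placeholders.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["surface_resistance", "snow_albedo_decay", "soil_b_exponent"],
    "bounds": [[0.5, 1.5]] * 3,
}

X = saltelli.sample(problem, 1024)          # N*(2D+2) parameter sets

def run_model(theta):                       # stand-in for a Noah-MP run
    return theta[0] ** 2 + 0.5 * theta[1] + 0.1 * theta[0] * theta[2]

Y = np.array([run_model(x) for x in X])
Si = sobol.analyze(problem, Y)              # first- and total-order indices
print(dict(zip(problem["names"], np.round(Si["ST"], 3))))
```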
Stimulus dependence of local field potential spectra: experiment versus theory.
Barbieri, Francesca; Mazzoni, Alberto; Logothetis, Nikos K; Panzeri, Stefano; Brunel, Nicolas
2014-10-29
The local field potential (LFP) captures different neural processes, including integrative synaptic dynamics that cannot be observed by measuring only the spiking activity of small populations. Therefore, investigating how LFP power is modulated by external stimuli can offer important insights into sensory neural representations. However, gaining such insight requires developing data-driven computational models that can identify and disambiguate the neural contributions to the LFP. Here, we investigated how networks of excitatory and inhibitory integrate-and-fire neurons responding to time-dependent inputs can be used to interpret sensory modulations of LFP spectra. We computed analytically from such models the LFP spectra and the information that they convey about input and used these analytical expressions to fit the model to LFPs recorded in V1 of anesthetized macaques (Macaca mulatta) during the presentation of color movies. Our expressions explain 60%-98% of the variance of the LFP spectrum shape and its dependency upon movie scenes, and we achieved this with realistic values for the best-fit parameters. In particular, synaptic best-fit parameters were compatible with experimental measurements, and the predictions of firing rates, based only on the fit of LFP data, correlated with the multiunit spike rate recorded from the same location. Moreover, the parameters characterizing the input to the network across different movie scenes correlated with cross-scene changes of several image features. Our findings suggest that analytical descriptions of spiking neuron networks may become a crucial tool for the interpretation of field recordings. Copyright © 2014 the authors.
Nucleon decay in non-minimal supersymmetric SO(10)
NASA Astrophysics Data System (ADS)
Macpherson, Alick L.
1996-02-01
Evaluation of nucleon decay modes and branching ratios in a non-minimal supersymmetric SO(10) grand unified theory is presented. The non-minimal GUT considered is the supersymmetrised version of the 'realistic' SO(10) model originally proposed by Harvey, Reiss and Ramond, which is realistic in that it gives acceptable charged fermion and neutrino masses within the context of a phenomenological fit to the low-energy standard model inputs. Despite a complicated Higgs sector, the SO(10) 10 Higgs superfield mass insertion is found to be the sole contribution to the tree-level F-term governing nucleon decay. The resulting dimension-5 operators that mediate nucleon decay give branching ratio predictions parameterised by a single parameter, the ratio of the Yukawa couplings of the 10 to the fermion generations. For parameter values corresponding to a lack of dominance of the third family self-coupling, the dominant nucleon decay modes are p → K⁺ + ν̄μ and n → K⁰ + ν̄μ, as expected. Further, the charged muon decay modes are enhanced by two orders of magnitude over the standard minimal SUSY SU(5) predictions, thus predicting a distinct spectrum of 'visible' modes. These charged muon decay modes, along with p → π⁺ + ν̄μ and n → π⁰ + ν̄μ, which are moderately enhanced over the SUSY SU(5) prediction, suggest a distinguishing fingerprint of this particular GUT model, and if nucleon decay is observed at Super-KAMIOKANDE the predicted branching ratio spectrum can be used to determine the validity of this 'realistic' SO(10) SUSY GUT model.
Kinetic analysis of single molecule FRET transitions without trajectories
NASA Astrophysics Data System (ADS)
Schrangl, Lukas; Göhring, Janett; Schütz, Gerhard J.
2018-03-01
Single molecule Förster resonance energy transfer (smFRET) is a popular tool to study biological systems that undergo topological transitions on the nanometer scale. smFRET experiments typically require recording of long smFRET trajectories and subsequent statistical analysis to extract parameters such as the states' lifetimes. Alternatively, analysis of probability distributions exploits the shapes of smFRET distributions at well-chosen exposure times and hence works without the acquisition of time traces. Here, we describe a variant that utilizes statistical tests to compare experimental datasets with Monte Carlo simulations. For a given model, parameters are varied to cover the full realistic parameter space. As output, the method yields p-values which quantify the likelihood for each parameter setting to be consistent with the experimental data. The method provides suitable results even if the actual lifetimes differ by an order of magnitude. We also demonstrate the robustness of the method to inaccurately determined input parameters. As proof of concept, the new method was applied to the determination of transition rate constants for Holliday junctions.
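A minimal sketch of the trajectory-free idea: simulate efficiency histograms over a grid of candidate lifetimes and score each setting against the data with a statistical test. The two-state kinetics, noise model, test choice (two-sample KS), and file name below are all illustrative assumptions.

```python
# Scan a grid of candidate state lifetimes, simulate efficiency histograms,
# and score each setting with a two-sample KS test; the two-state kinetics
# and noise model are toys, and "fret_efficiencies.txt" is hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def simulate_fret(tau_low, tau_high, t_exp=0.01, n=5000):
    # fraction of the exposure spent in the high-FRET state (toy kinetics)
    f = rng.beta(t_exp / tau_high + 1.0, t_exp / tau_low + 1.0, size=n)
    e = 0.2 * (1.0 - f) + 0.8 * f               # time-averaged efficiency
    return e + rng.normal(0.0, 0.05, size=n)    # shot-noise broadening

data = np.loadtxt("fret_efficiencies.txt")
taus = np.logspace(-3, -1, 15)                  # candidate lifetimes (s)
pvals = np.array([[ks_2samp(data, simulate_fret(tl, th)).pvalue
                   for th in taus] for tl in taus])
i, j = np.unravel_index(pvals.argmax(), pvals.shape)
print("most consistent (tau_low, tau_high):", taus[i], taus[j])
```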
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may, however, have process descriptions that contain fixed, hard-coded numbers in the computer code which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. Calibrating Noah-MP against either latent heat observations or river runoff data should therefore give comparable results. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating only a subset of the parameters, for example only the soil parameters, thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
NASA Astrophysics Data System (ADS)
Bosca, Ryan J.; Jackson, Edward F.
2016-01-01
Assessing and mitigating the various sources of bias and variance associated with image quantification algorithms is essential to the use of such algorithms in clinical research and practice. Assessment is usually accomplished with grid-based digital reference objects (DRO) or, more recently, digital anthropomorphic phantoms based on normal human anatomy. Publicly available digital anthropomorphic phantoms can provide a basis for generating realistic model-based DROs that incorporate the heterogeneity commonly found in pathology. Using a publicly available vascular input function (VIF) and a digital anthropomorphic phantom of a normal human brain, a methodology was developed to generate a DRO based on the general kinetic model (GKM) that represented realistic and heterogeneously enhancing pathology. GKM parameters were estimated from a deidentified clinical dynamic contrast-enhanced (DCE) MRI exam. This clinical imaging volume was co-registered with a discrete tissue model, and model parameters estimated from clinical images were used to synthesize a DCE-MRI exam that consisted of normal brain tissues and a heterogeneously enhancing brain tumor. An example application, spatial smoothing, illustrates how such DROs can support the assessment of quantitative imaging algorithms: a voxel-wise Bland-Altman analysis demonstrated negligible differences between the parameters estimated with and without spatial smoothing (using a small-radius Gaussian kernel). In this work, we report an extensible methodology for generating model-based anthropomorphic DROs containing normal and pathological tissue that can be used to assess quantitative imaging algorithms.
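For reference, a voxel's synthetic concentration curve under the general kinetic (Tofts) model can be sketched as a convolution of the VIF with an exponential kernel; the biexponential toy VIF and rate constants below are placeholders, not the phantom's values:

```python
# Synthetic tissue concentration from the general kinetic (Tofts) model:
# C_t(t) = Ktrans * integral_0^t Cp(u) exp(-kep (t - u)) du, per voxel.
# The biexponential toy VIF and the rate constants are placeholders.
import numpy as np

t = np.arange(0.0, 300.0, 1.0)                      # time (s)
cp = 5.0 * (np.exp(-0.01 * t) - np.exp(-0.1 * t))   # toy plasma VIF (mM)

def tofts(ktrans, kep, t, cp):
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: t.size] * dt

# per-second rates; voxel-wise maps from a clinical exam would replace these
ct = tofts(ktrans=0.05 / 60.0, kep=0.5 / 60.0, t=t, cp=cp)
print("peak tissue concentration: %.4f mM" % ct.max())
```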
Emulation for probabilistic weather forecasting
NASA Astrophysics Data System (ADS)
Cornford, Dan; Barillec, Remi
2010-05-01
Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the 'ensemble runs' which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space, rather it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data assimilation like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
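A compact sketch of the emulator workflow, with a toy function standing in for the expensive simulator and scikit-learn's Gaussian process regressor standing in for the emulator (design size, kernel, and input distribution are illustrative assumptions):

```python
# Gaussian-process emulator trained on sparse simulator runs, then used for
# cheap Monte Carlo propagation; the "simulator" is a toy function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def simulator(x):                       # expensive model, evaluated sparsely
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

X_train = rng.uniform(-1.0, 1.0, size=(40, 2))   # crude space-filling design
y_train = simulator(X_train)

gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.3, 0.3]),
                              normalize_y=True).fit(X_train, y_train)

X_mc = rng.normal(0.0, 0.3, size=(10000, 2))     # samples of the input pdf
y_mc, y_sd = gp.predict(X_mc, return_std=True)   # forecast + emulator error
print("forecast mean %.3f, spread %.3f" % (y_mc.mean(), y_mc.std()))
```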
Gsflow-py: An integrated hydrologic model development tool
NASA Astrophysics Data System (ADS)
Gardner, M.; Niswonger, R. G.; Morton, C.; Henson, W.; Huntington, J. L.
2017-12-01
Integrated hydrologic modeling encompasses a vast number of processes and specifications, variable in time and space, and development of model datasets can be arduous. Model input construction techniques have not been formalized or made easily reproducible. Creating the input files for integrated hydrologic models (IHMs) requires complex GIS processing of raster and vector datasets from various sources. Developing stream network topology that is consistent with the model-resolution digital elevation model is important for robust simulation of surface water and groundwater exchanges. Distribution of meteorological parameters over the model domain is difficult in complex terrain at the model resolution scale, but is necessary to drive realistic simulations. Historically, development of input data for IHMs has required extensive GIS and computer programming expertise, which has restricted the use of IHMs to research groups with available financial, human, and technical resources. Here we present a series of Python scripts that provide a formalized technique for the parameterization and development of integrated hydrologic model inputs for GSFLOW. With some modifications, this process could be applied to any regular-grid hydrologic model. This Python toolkit automates many of the necessary and laborious processes of parameterization, including stream network development and cascade routing, land coverages, and meteorological distribution over the model domain.
NASA Astrophysics Data System (ADS)
Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato
2017-12-01
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. If accurate volcanic flow parameters are known, application of this technique could be broadly applied to enable near real-time calculation of eruption mass flow rates and total masses. These critical input parameters for volcanic eruption modeling and monitoring are not currently available.
Application and Evaluation of MODIS LAI, FPAR, and Albedo ...
MODIS vegetation and albedo products provide a more realistic representation of surface conditions for input to the WRF/CMAQ modeling system. However, the initial evaluation of ingesting MODIS data into the system showed mixed results, with increased bias and error for 2-m temperature and reduced bias and error for 2-m mixing ratio. Recently, the WRF/CMAQ land surface and boundary layer processes have been updated. In this study, MODIS vegetation and albedo data are input to the updated WRF/CMAQ meteorology and air quality simulations for 2006 over a North American (NA) 12-km domain. The evaluation of the simulation results shows that the updated WRF/CMAQ system improves 2-m temperature estimates over the pre-update base modeling system estimates. The MODIS vegetation input produces a realistic spring green-up that progresses through time from south to north. Overall, MODIS input reduces 2-m mixing ratio bias during the growing season. The NA west shows larger positive O3 bias during the growing season because of reduced gas-phase deposition resulting from lower O3 deposition velocities driven by reduced vegetation cover. The O3 bias increase associated with the realistic vegetation representation indicates that further improvement may be needed in the WRF/CMAQ system. The National Exposure Research Laboratory's Atmospheric Modeling Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's rese...
Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine
2016-01-01
A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in identifying the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼ 82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of +/- 20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of the APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. PMID:26799483
A Radiosity Approach to Realistic Image Synthesis
1992-12-01
AFIT/GCE/ENG/92D-09. A Radiosity Approach to Realistic Image Synthesis. Thesis, Richard L. Remington, Captain, USAF. Approved for public release; distribution unlimited. Only scanned front matter survives, along with an acknowledgment fragment: "...assistance in creating the input geometry file for the AWACS aircraft interior. Without his assistance, a good model for the diffuse radiosity implementation..."
User's Guide for Monthly Vector Wind Profile Model
NASA Technical Reports Server (NTRS)
Adelfang, S. I.
1999-01-01
The background, theoretical concepts, and methodology for construction of vector wind profiles based on a statistical model are presented. The derived monthly vector wind profiles are to be applied by the launch vehicle design community for establishing realistic estimates of critical vehicle design parameter dispersions related to wind profile dispersions. During initial studies a number of months are used to establish the model profiles that produce the largest monthly dispersions of ascent vehicle aerodynamic load indicators. The largest monthly dispersions for wind, which occur during the winter high-wind months, are used for establishing the design reference dispersions for the aerodynamic load indicators. This document includes a description of the computational process for the vector wind model including specification of input data, parameter settings, and output data formats. Sample output data listings are provided to aid the user in the verification of test output.
Tunneling of heat: Beyond linear response regime
NASA Astrophysics Data System (ADS)
Walczak, Kamil; Saroka, David
2018-02-01
We examine nanoscale processes of heat (energy) transfer as carried by electrons tunneling via potential barriers and molecular interconnects between two heat reservoirs (thermal baths). For that purpose, we use Landauer-type formulas to calculate the thermal conductance and the quadratic correction to the heat flux flowing via quantum systems. As an input, we implement analytical expressions for transmission functions related to simple potential barriers and atomic bridges. Our results are discussed with respect to the energy of tunneling electrons, temperature, the presence of resonant states, and specific parameters characterizing potential barriers as well as heat carriers. The simplicity of the semi-analytical models developed by us allows one to fit experimental data and extract crucial information about the values of the model parameters. Further investigations are expected for more realistic transmission functions, while time-dependent aspects of nanoscale heat transfer may be addressed by using the concept of wave packets scattered on potential barriers and point-like defects within regular (periodic) nanostructures.
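A hedged numerical sketch of a Landauer-type calculation: electronic thermal conductance from the integral κ(T) ≈ (2/h) ∫ T(E) (E−μ)² (−∂f/∂E) dE / T, with a smooth toy transmission function standing in for the paper's barrier models (barrier height, width, and units are illustrative):

```python
# Electronic thermal conductance from a Landauer-type integral with a smooth
# toy transmission function; barrier height/width and units are illustrative.
import numpy as np

kB = 8.617e-5                      # Boltzmann constant (eV/K)
h = 4.136e-15                      # Planck constant (eV*s)

def transmission(E, V0=0.3, width=0.02):
    # transmission rises smoothly near the barrier top V0 (eV)
    return 1.0 / (1.0 + np.exp(-(E - V0) / width))

def kappa(T, mu=0.0):
    E = np.linspace(mu - 1.0, mu + 1.0, 20001)
    x = (E - mu) / (kB * T)
    neg_dfdE = np.exp(x) / (kB * T * (1.0 + np.exp(x)) ** 2)   # -df/dE
    integrand = transmission(E) * (E - mu) ** 2 * neg_dfdE
    return (2.0 / h) * np.trapz(integrand, E) / T              # eV/(s*K)

for T in (100.0, 300.0, 500.0):
    print("T = %.0f K: kappa = %.3e eV/(s K)" % (T, kappa(T)))
```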
Advances in land modeling of KIAPS based on the Noah Land Surface Model
NASA Astrophysics Data System (ADS)
Koo, Myung-Seo; Baek, Sunghye; Seol, Kyung-Hee; Cho, Kyoungmi
2017-08-01
As of 2013, the Noah Land Surface Model (LSM) version 2.7.1 was implemented in a new global model being developed at the Korea Institute of Atmospheric Prediction Systems (KIAPS). This land surface scheme is further refined in two aspects: by adding new physical processes and by updating surface input parameters. Thus, the treatment of glacier land, sea ice, and snow cover is addressed more realistically. Inconsistencies in the amount of absorbed solar flux at ground level between the land surface and radiative processes are rectified. In addition, new parameters are made available by using 1-km land cover data, which had usually not been possible at a global scale. Land surface albedo/emissivity climatology is newly created using Moderate-Resolution Imaging Spectroradiometer (MODIS) satellite-based data and adjusted parameterization. These updates have been applied to the KIAPS-developed model and generally provide a positive impact on near-surface weather forecasting.
Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.
Huson, Daniel H; Linz, Simone
2018-01-01
A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.
A polynomial chaos approach to the analysis of vehicle dynamics under uncertainty
NASA Astrophysics Data System (ADS)
Kewlani, Gaurav; Crawford, Justin; Iagnemma, Karl
2012-05-01
The ability of ground vehicles to quickly and accurately analyse their dynamic response to a given input is critical to their safety and efficient autonomous operation. In field conditions, significant uncertainty is associated with terrain and/or vehicle parameter estimates, and this uncertainty must be considered in the analysis of vehicle motion dynamics. Here, polynomial chaos approaches that explicitly consider parametric uncertainty during modelling of vehicle dynamics are presented. They are shown to be computationally more efficient than the standard Monte Carlo scheme, and experimental results, compared with simulations performed in ANVEL (a vehicle simulator), indicate that the method can be utilised for efficient and accurate prediction of vehicle motion in realistic scenarios.
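A minimal one-dimensional polynomial chaos sketch, fit by regression on probabilists' Hermite polynomials; the response function is a toy stand-in for a vehicle dynamics model with a single Gaussian-uncertain parameter:

```python
# 1-D polynomial chaos expansion on probabilists' Hermite polynomials, fit
# by least-squares regression; the response function is a toy stand-in.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(2)

def response(xi):                        # e.g., peak vertical acceleration
    return np.tanh(1.5 * xi) + 0.1 * xi ** 2

order = 5
xi_train = rng.standard_normal(200)      # samples of the standard-normal germ
A = hermevander(xi_train, order)         # He_0..He_order at the samples
coef, *_ = np.linalg.lstsq(A, response(xi_train), rcond=None)

# PCE moments are closed-form: mean = c_0, var = sum_{k>=1} c_k^2 * k!
norms = np.array([math.factorial(k) for k in range(order + 1)], dtype=float)
pce_std = np.sqrt(np.sum(coef[1:] ** 2 * norms[1:]))

xi_mc = rng.standard_normal(100000)      # brute-force Monte Carlo check
print("PCE mean %.4f std %.4f" % (coef[0], pce_std))
print("MC  mean %.4f std %.4f" % (response(xi_mc).mean(), response(xi_mc).std()))
```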
Critical Seismic Vector Random Excitations for Multiply Supported Structures
NASA Astrophysics Data System (ADS)
Sarkar, A.; Manohar, C. S.
1998-05-01
A method for determining critical power spectral density matrix models for earthquake excitations which maximize the steady-state response variance of linear, multiply supported, extended structures, and which also satisfy constraints on input variance, zero crossing rates, frequency content and transmission time lag, has been developed. The optimization problem is shown to be non-linear in nature, and solutions are obtained by using an iterative technique based on the linear programming method. A constraint on entropy rate, as a measure of the uncertainty that can be expected in realistic earthquake ground motions, is proposed, which makes the critical excitations more realistic. Two special cases are also considered. Firstly, when knowledge of the autospectral densities is available, the critical response is shown to be produced by fully coherent excitations which are neither in-phase nor out-of-phase. The critical phase between the excitation components depends on the structural parameters but is independent of the autospectral densities of the excitations. Secondly, when knowledge of the autospectral densities and the phase spectrum of the excitations is available, the critical response is shown to be produced by a system-dependent coherence function representing neither fully coherent nor fully incoherent ground motions. The applications of these special cases are discussed in the context of land-based extended structures and secondary systems such as nuclear piping assemblies. Illustrative examples of critical inputs and responses for an SDOF system and a long-span suspended cable are presented, demonstrating the various features of the approach.
Predicting the vibroacoustic response of satellite equipment panels.
Conlon, S C; Hambric, S A
2003-03-01
Modern satellites are constructed of large, lightweight equipment panels that are strongly excited by acoustic pressures during launch. During design, performing vibroacoustic analyses to evaluate and ensure the integrity of the complex electronics mounted on the panels is critical. In this study, the attached equipment is addressed explicitly, and the way its properties affect the panel responses is characterized. FEA and BEA methods are used to derive realistic parameters to input to an SEA hybrid model of a panel with multiple attachments. Specifically, conductance/modal density and radiation efficiency for nonhomogeneous panel structures with and without mass loading are computed. The validity of using the spatially averaged conductance of panels with irregular features for deriving the structure modal density is demonstrated. Maidanik's proposed method of modifying the traditional SEA input power is implemented, illustrating the importance of accounting for system internal couplings when calculating the external input power. The predictions using the SEA hybrid model agree with the measured data trends, and are found to be most sensitive to the assumed dynamic mass ratio (attachments/structure) and the attachment internal loss factor. Additional experimental and analytical investigations are recommended to better characterize dynamic masses, modal densities and loss factors.
The ITER ICRF Antenna Design with TOPICA
NASA Astrophysics Data System (ADS)
Milanesio, Daniele; Maggiora, Riccardo; Meneghini, Orso; Vecchi, Giuseppe
2007-11-01
The TOPICA (Torino Polytechnic Ion Cyclotron Antenna) code is an innovative tool for the 3D/1D simulation of Ion Cyclotron Radio Frequency (ICRF) antennas, i.e. accounting for antennas in a realistic 3D geometry and with an accurate 1D plasma model [1]. The TOPICA code has been deeply parallelized and has already been proved to be a reliable tool for antenna design and performance prediction. A detailed analysis of the 24-strap ITER ICRF antenna geometry has been carried out, underlining the strong dependence and asymmetries of the antenna input parameters due to the ITER plasma response. We optimized the antenna array geometry dimensions to maximize loading, lower mutual couplings and mitigate sheath effects. The calculated antenna input impedance matrices are TOPICA results of paramount importance for the tuning and matching system design. Electric field distributions have also been calculated, and they are used as the main input for the power flux estimation tool. The designed optimized antenna is capable of coupling 20 MW of power to the plasma in the 40 -- 55 MHz frequency range with a maximum voltage of 45 kV in the feeding coaxial cables. [1] V. Lancellotti et al., Nuclear Fusion, 46 (2006) S476-S499
A portfolio-based approach to optimize proof-of-concept clinical trials.
Mallinckrodt, Craig; Molenberghs, Geert; Persinger, Charles; Ruberg, Stephen; Sashegyi, Andreas; Lindborg, Stacy
2012-01-01
Improving proof-of-concept (PoC) studies is a primary lever for improving drug development. Since drug development is often done by institutions that work on multiple drugs simultaneously, the present work focused on optimum choices for rates of false positive (α) and false negative (β) results across a portfolio of PoC studies. Simple examples and a newly derived equation provided conceptual understanding of basic principles regarding optimum choices of α and β in PoC trials. In examples that incorporated realistic development costs and constraints, the levels of α and β that maximized the number of approved drugs and portfolio value varied by scenario. Optimum choices were sensitive to the probability the drug was effective and to the proportion of total investment cost prior to establishing PoC. Results of the present investigation agree with previous research in that it is important to assess optimum levels of α and β. However, the present work also highlighted the need to consider cost structure using realistic input parameters relevant to the question of interest.
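To make the trade-off concrete, here is a deliberately toy portfolio calculation in Python; the cost structure, prior probability of efficacy, and value figures are invented placeholders, and the sample-size scaling is a textbook normal approximation, not the authors' derived equation:

```python
# Toy portfolio value as a function of PoC error rates; every cost and
# probability below is an invented placeholder, and the sample-size scaling
# is a textbook normal approximation, not the authors' derived equation.
import numpy as np
from scipy.stats import norm

def portfolio_value(alpha, power, p_eff=0.2, budget=1000.0,
                    cost_pre=5.0, cost_post=50.0, value=500.0):
    n_scale = (norm.ppf(1.0 - alpha) + norm.ppf(power)) ** 2  # ~ sample size
    cost_poc = cost_pre * n_scale / 10.0                      # PoC study cost
    p_go = p_eff * power + (1.0 - p_eff) * alpha              # "success" rate
    n_assets = budget / (cost_poc + p_go * cost_post)         # PoCs affordable
    approvals = n_assets * p_eff * power                      # true positives
    return approvals * value - budget

grid = [(a, b) for a in (0.05, 0.10, 0.20) for b in (0.7, 0.8, 0.9)]
best = max(grid, key=lambda ab: portfolio_value(*ab))
print("best (alpha, power) on this toy grid:", best)
```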
Perfusion kinetics in human brain tumor with DCE-MRI derived model and CFD analysis.
Bhandari, A; Bansal, A; Singh, A; Sinha, N
2017-07-05
Cancer is one of the leading causes of death all over the world. Among the strategies that are used for cancer treatment, the effectiveness of chemotherapy is often hindered by factors such as irregular and non-uniform uptake of drugs inside tumor. Thus, accurate prediction of drug transport and deposition inside tumor is crucial for increasing the effectiveness of chemotherapeutic treatment. In this study, a computational model of human brain tumor is developed that incorporates dynamic contrast enhanced-magnetic resonance imaging (DCE-MRI) data into a voxelized porous media model. The model takes into account realistic transport and perfusion kinetics parameters together with realistic heterogeneous tumor vasculature and accurate arterial input function (AIF), which makes it patient specific. The computational results for interstitial fluid pressure (IFP), interstitial fluid velocity (IFV) and tracer concentration show good agreement with the experimental results. The computational model can be extended further for predicting the deposition of chemotherapeutic drugs in tumor environment as well as selection of the best chemotherapeutic drug for a specific patient. Copyright © 2017 Elsevier Ltd. All rights reserved.
Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Rehman, Naveed Ur; Siddiqui, Mubashir Ali
2017-03-01
In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
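A short sketch of the sampling-plus-regression construction using SciPy's Latin hypercube sampler; the parameter names, ranges, stand-in simulator, and quadratic surface are illustrative assumptions rather than the paper's thermodynamic model:

```python
# Latin hypercube sampling of model inputs plus a quadratic response-surface
# fit; parameter names, ranges, and the stand-in simulator are illustrative.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)
lo = [500.0, 0.001, 280.0]     # solar flux (W/m^2), merit-like factor, T_amb (K)
hi = [1000.0, 0.004, 320.0]
X = qmc.scale(sampler.random(n=200), lo, hi)

def simulate_power(x):          # stand-in for the thermodynamic model
    flux, merit, t_amb = x
    return 1e-3 * flux * merit * (1.0 - t_amb / 400.0)

y = np.array([simulate_power(x) for x in X])

A = np.column_stack([np.ones(len(X)), X, X**2])     # quadratic surface
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1.0 - np.sum((A @ coef - y) ** 2) / np.sum((y - y.mean()) ** 2)
print("R^2 of the fitted response surface: %.4f" % r2)
```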
Four-dimensional guidance algorithms for aircraft in an air traffic control environment
NASA Technical Reports Server (NTRS)
Pecsvaradi, T.
1975-01-01
Theoretical development and computer implementation of three guidance algorithms are presented. From a small set of input parameters the algorithms generate the ground track, altitude profile, and speed profile required to implement an experimental 4-D guidance system. Given a sequence of waypoints that define a nominal flight path, the first algorithm generates a realistic, flyable ground track consisting of a sequence of straight line segments and circular arcs. Each circular turn is constrained by the minimum turning radius of the aircraft. The ground track and the specified waypoint altitudes are used as inputs to the second algorithm which generates the altitude profile. The altitude profile consists of piecewise constant flight path angle segments, each segment lying within specified upper and lower bounds. The third algorithm generates a feasible speed profile subject to constraints on the rate of change in speed, permissible speed ranges, and effects of wind. Flight path parameters are then combined into a chronological sequence to form the 4-D guidance vectors. These vectors can be used to drive the autopilot/autothrottle of the aircraft so that a 4-D flight path could be tracked completely automatically; or these vectors may be used to drive the flight director and other cockpit displays, thereby enabling the pilot to track a 4-D flight path manually.
Modeling bed load transport and step-pool morphology with a reduced-complexity approach
NASA Astrophysics Data System (ADS)
Saletti, Matteo; Molnar, Peter; Hassan, Marwan A.; Burlando, Paolo
2016-04-01
Steep mountain channels are complex fluvial systems, where classical methods developed for lowland streams fail to capture the dynamics of sediment transport and bed morphology. Estimates of sediment transport based on average conditions carry more than one order of magnitude of uncertainty because of the wide grain-size distribution of the bed material, the small relative submergence of coarse grains, the episodic character of sediment supply, and the complex boundary conditions. Most notably, bed load transport is modulated by the structure of the bed, where grains are imbricated in steps and similar bedforms and are therefore much more stable than predicted. In this work we propose a new model based on a reduced-complexity (RC) approach focused on the reproduction of step-pool morphology. In our 2-D cellular-automaton model, entrainment, transport and deposition of particles are handled via intuitive rules based on physical principles. A parsimonious set of parameters allows control of the behavior of the system, and the basic processes can be treated deterministically or stochastically. The probability of entrainment of grains (and, as a consequence, particle travel distances and resting times) is a function of flow conditions and bed topography. Sediment input is fed at the upper boundary of the channel at a constant or variable rate. Our model yields realistic results in terms of longitudinal bed profiles and sediment transport trends. Phases of aggradation and degradation can be observed in the channel even under constant input, and the memory of the morphology can be quantified with long-range persistence indicators. Sediment yield at the channel outlet shows intermittency, as observed in natural streams. Steps are self-formed in the channel and their stability is tested against the model parameters. Our results show the potential of RC models as complementary tools to more sophisticated models: they provide a realistic description of complex morphological systems and help to better identify the key physical principles that rule their dynamics.
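In the spirit of the reduced-complexity approach described above, the following toy one-dimensional cellular-automaton sketch moves grains downstream with a slope-dependent entrainment probability under constant upstream feed; every rule and constant is an invented placeholder, not the authors' calibrated model:

```python
# Toy 1-D reduced-complexity bed: slope-dependent entrainment, fixed hop
# length, constant upstream feed. All rules and constants are placeholders.
import numpy as np

rng = np.random.default_rng(3)
n, steps, hop, dz = 200, 20000, 3, 0.01
bed = np.linspace(20.0, 0.0, n)                  # initial uniform slope

for _ in range(steps):
    bed[0] += dz                                 # constant sediment feed
    drop = np.maximum(bed[:-1] - bed[1:], 0.0)   # local downstream drop
    p_entrain = 0.05 * np.tanh(drop)             # flatter bed = more stable
    src = np.where(rng.random(n - 1) < p_entrain)[0]
    bed[src] -= dz                               # entrain one "grain layer"
    np.add.at(bed, np.minimum(src + hop, n - 1), dz)   # deposit downstream

print("final mean slope: %.4f" % ((bed[0] - bed[-1]) / (n - 1)))
```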
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Lal Shrestha, Durga; Solomatine, Dimitri
2010-05-01
This research presents an extension of the UNEEC (Uncertainty Estimation based on Local Errors and Clustering; Shrestha and Solomatine, 2006, 2008; Solomatine and Shrestha, 2009) method in the direction of explicit inclusion of parameter uncertainty. The UNEEC method assumes that there is an optimal model and that the residuals of the model can be used to assess the uncertainty of the model prediction. It is assumed that all sources of uncertainty, including input, parameter and model structure uncertainty, are explicitly manifested in the model residuals. In this research, these assumptions are relaxed, and the UNEEC method is extended to consider parameter uncertainty as well (abbreviated as UNEEC-P). In UNEEC-P, we first use Monte Carlo (MC) sampling in parameter space to generate N model realizations (each of which is a time series), estimate the prediction quantiles based on the empirical distribution functions of the model residuals considering all the residual realizations, and only then apply the standard UNEEC method, which encapsulates the uncertainty of a hydrologic model (expressed by quantiles of the error distribution) in a machine learning model (e.g., an ANN). UNEEC-P is applied first to a linear regression model of synthetic data, and then to a real case study of forecasting inflow to Lake Lugano in northern Italy. The inflow forecasting model is a stochastic heteroscedastic model (Pianosi and Soncini-Sessa, 2009). The preliminary results show that the UNEEC-P method produces wider uncertainty bounds, which is consistent with the fact that the method also considers parameter uncertainty of the optimal model. In the future, the UNEEC method will be further extended to consider input and structure uncertainty, which will provide a more realistic estimation of model predictions.
Pirozzi, Enrica
2018-04-01
High variability in the neuronal response to stimulations and the adaptation phenomenon cannot be explained by the standard stochastic leaky integrate-and-fire model. The main reason is that the uncorrelated inputs involved in the model are not realistic. There exists some form of dependency between the inputs, which can be interpreted as memory effects. In order to include these physiological features in the standard model, we reconsider it with time-dependent coefficients and correlated inputs. Because the resulting model is mathematically hard to treat, we perform simulations of it for a wide investigation of its output. A Gauss-Markov process is constructed for approximating its non-Markovian dynamics. The first passage time probability density of such a process can be numerically evaluated, and it can be used to fit the histograms of simulated firing times. Some estimates of the moments of firing times are also provided. The effect of the correlation time of the inputs on firing densities and on firing rates is shown. An exponential probability density of the first firing time is estimated for low values of input current and high values of correlation time. For comparison, a simulation-based investigation is also carried out for a fractional stochastic model that preserves the memory of the time evolution of the neuronal membrane potential. In this case, the memory parameter that affects the firing activity is the fractional derivative order. In both models an adaptation level of spike frequency is attained, even if along different modalities. Comparisons and discussion of the obtained results are provided.
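A simulation of the correlated-input variant is straightforward to sketch: drive a leaky integrate-and-fire unit with an Ornstein-Uhlenbeck current whose correlation time tau_c plays the role of the memory parameter (all numerical values below are illustrative, not those of the paper):

import numpy as np

rng = np.random.default_rng(2)

def lif_correlated(T=5.0, dt=1e-4, tau_m=0.02, tau_c=0.05,
                   mu=1.2, sigma=0.5, v_th=1.0, v_reset=0.0):
    # Leaky integrate-and-fire membrane driven by an Ornstein-Uhlenbeck
    # input current; tau_c sets the correlation (memory) of the input.
    v, i_in, spikes = v_reset, mu, []
    for k in range(int(T / dt)):
        i_in += dt / tau_c * (mu - i_in) \
                + sigma * np.sqrt(2.0 * dt / tau_c) * rng.normal()
        v += dt / tau_m * (-v + i_in)
        if v >= v_th:                       # threshold crossing: spike, reset
            spikes.append(k * dt)
            v = v_reset
    return np.array(spikes)

isis = np.diff(lif_correlated())            # interspike intervals to histogram

Sweeping tau_c and histogramming the interspike intervals reproduces the kind of correlation-time dependence of the firing density that the study investigates.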
Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities
NASA Astrophysics Data System (ADS)
Baylin-Stern, Adam C.
This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage, and to move from higher- to lower-emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use these estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters of transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work in the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.
CHARMM-GUI Membrane Builder toward realistic biological membrane simulations.
Wu, Emilia L; Cheng, Xi; Jo, Sunhwan; Rui, Huan; Song, Kevin C; Dávila-Contreras, Eder M; Qi, Yifei; Lee, Jumin; Monje-Galvan, Viviana; Venable, Richard M; Klauda, Jeffery B; Im, Wonpil
2014-10-15
CHARMM-GUI Membrane Builder, http://www.charmm-gui.org/input/membrane, is a web-based user interface designed to interactively build all-atom protein/membrane or membrane-only systems for molecular dynamics simulations through an automated optimized process. In this work, we describe the new features and major improvements in Membrane Builder that allow users to robustly build realistic biological membrane systems, including (1) addition of new lipid types, such as phosphoinositides, cardiolipin (CL), sphingolipids, bacterial lipids, and ergosterol, yielding more than 180 lipid types, (2) enhanced building procedure for lipid packing around protein, (3) reliable algorithm to detect lipid tail penetration to ring structures and protein surface, (4) distance-based algorithm for faster initial ion displacement, (5) CHARMM inputs for P21 image transformation, and (6) NAMD equilibration and production inputs. The robustness of these new features is illustrated by building and simulating a membrane model of the polar and septal regions of E. coli membrane, which contains five lipid types: CL lipids with two types of acyl chains and phosphatidylethanolamine lipids with three types of acyl chains. It is our hope that CHARMM-GUI Membrane Builder becomes a useful tool for simulation studies to better understand the structure and dynamics of proteins and lipids in realistic biological membrane environments. Copyright © 2014 Wiley Periodicals, Inc.
Parameter Balancing in Kinetic Models of Cell Metabolism†
2010-01-01
Kinetic modeling of metabolic pathways has become a major field of systems biology. It combines structural information about metabolic pathways with quantitative enzymatic rate laws. Some of the kinetic constants needed for a model can be collected from the ever-growing literature and public web resources, but they are often incomplete, incompatible, or simply not available. We address this lack of information by parameter balancing, a method to complete given sets of kinetic constants. Based on Bayesian parameter estimation, it exploits the thermodynamic dependencies among different biochemical quantities to infer realistic model parameters from available kinetic data. Our algorithm accounts for varying measurement conditions in the input data (pH value and temperature). It can process kinetic constants and state-dependent quantities such as metabolite concentrations or chemical potentials, and it uses prior distributions and data augmentation to keep the estimated quantities within plausible ranges. An online service and free software for parameter balancing with models provided in SBML format (Systems Biology Markup Language) are accessible at www.semanticsbml.org. We demonstrate its practical use with a small model of the phosphofructokinase reaction and discuss its possible applications and limitations. In the future, parameter balancing could become an important routine step in the kinetic modeling of large metabolic networks. PMID:21038890
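The core Bayesian update behind such balancing can be sketched as a Gaussian posterior under linear dependencies y = Q x between measured quantities and base quantities (a minimal illustration only; the published method works on log-scale quantities with curated priors and data augmentation):

import numpy as np

def balance(Q, y, sigma_y, mu0, sigma0):
    # Posterior mean/covariance of base quantities x given a Gaussian prior
    # (mu0, sigma0) and noisy measurements y = Q x + eps, eps ~ N(0, sigma_y).
    P0 = np.diag(1.0 / np.asarray(sigma0, float) ** 2)    # prior precision
    Py = np.diag(1.0 / np.asarray(sigma_y, float) ** 2)   # data precision
    cov = np.linalg.inv(P0 + Q.T @ Py @ Q)
    mean = cov @ (P0 @ np.asarray(mu0, float) + Q.T @ Py @ np.asarray(y, float))
    return mean, cov

# Hypothetical example: a measured equilibrium constant (log scale) constrains
# the difference of two formation energies; priors keep both plausible.
Q = np.array([[1.0, -1.0]])
mean, cov = balance(Q, y=[2.3], sigma_y=[0.5], mu0=[0.0, 0.0], sigma0=[1.0, 1.0])

The dependency matrix Q is what lets one incompatible or missing constant be filled in consistently from the quantities that were measured.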
Satellite estimation of surface spectral ultraviolet irradiance using OMI data in East Asia
NASA Astrophysics Data System (ADS)
Lee, H.; Kim, J.; Jeong, U.
2017-12-01
Because of its strong influence on human health and ecosystems, continuous monitoring of surface ultraviolet (UV) irradiance is important. The amount of UVA (320-400 nm) and UVB (290-320 nm) radiation at the Earth's surface depends on the extent of Rayleigh scattering by atmospheric gas molecules, absorption by ozone, scattering by clouds, and both absorption and scattering by airborne aerosols. Careful treatment of these factors is therefore essential to any procedure for estimating UV irradiance. The UV index (UVI) is a simple parameter expressing the strength of surface UV irradiance and has been widely used for UV monitoring. In this study, we estimate surface UV irradiance over East Asia using realistic inputs based on OMI total ozone and reflectivity, and validate the estimates against UV irradiance from the World Ozone and Ultraviolet Radiation Data Centre (WOUDC). We also develop our own retrieval algorithm for better estimation of surface irradiance. We use the Vector Linearized Discrete Ordinate Radiative Transfer (VLIDORT) model version 2.6 for the UV irradiance calculation. The inputs to the VLIDORT radiative transfer calculations are the total ozone column (TOMS V7 climatology), the surface albedo (Herman and Celarier, 1997) and the cloud optical depth. Based on these, the UV irradiance is calculated with a look-up table (LUT) approach. To correct for absorbing aerosols, the algorithm adds climatological aerosol information (Arola et al., 2009). In further work, we will perform a comprehensive uncertainty analysis based on the LUT and all input parameters.
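The look-up-table step can be sketched as multilinear interpolation over precomputed radiative transfer results (the axes, grid ranges and random placeholder values below are assumptions; a real table would be filled by offline VLIDORT runs):

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical LUT axes: total ozone column (DU), cloud optical depth,
# and solar zenith angle (deg); values stand in for VLIDORT output.
ozone = np.linspace(200.0, 500.0, 16)
cod = np.linspace(0.0, 50.0, 11)
sza = np.linspace(0.0, 80.0, 17)
lut = np.random.default_rng(3).random((16, 11, 17))

interp = RegularGridInterpolator((ozone, cod, sza), lut)   # linear by default

# Surface UV estimate for one satellite pixel (illustrative inputs).
uv = interp([[325.0, 4.2, 38.0]])[0]

The appeal of the LUT approach is that the expensive radiative transfer runs are done once offline, leaving only a cheap interpolation per pixel at retrieval time.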
Separating figure from ground with a parallel network.
Kienker, P K; Sejnowski, T J; Hinton, G E; Schumacher, L E
1986-01-01
The differentiation of figure from ground plays an important role in the perceptual organization of visual stimuli. The rapidity with which we can discriminate the inside from the outside of a figure suggests that at least this step in the process may be performed in visual cortex by a large number of neurons in several different areas working together in parallel. We have attempted to simulate this collective computation by designing a network of simple processing units that receives two types of information: bottom-up input from the image containing the outlines of a figure, which may be incomplete, and a top-down attentional input that biases one part of the image to be the inside of the figure. No presegmentation of the image was assumed. Two methods for performing the computation were explored: gradient descent, which seeks locally optimal states, and simulated annealing, which attempts to find globally optimal states by introducing noise into the computation. For complete outlines, gradient descent was faster, but the range of input parameters leading to successful performance was very narrow. In contrast, simulated annealing was more robust: it worked over a wider range of attention parameters and a wider range of outlines, including incomplete ones. Our network model is too simplified to serve as a model of human performance, but it does demonstrate that one global property of outlines can be computed through local interactions in a parallel network. Some features of the model, such as the role of noise in escaping from nonglobal optima, may generalize to more realistic models.
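The annealing computation can be illustrated with a toy Metropolis version of such a network: binary figure/ground units coupled to like-labeled neighbors, couplings cut along the outline, and a top-down attention field biasing one region (this is a schematic in the spirit of the abstract, not a reimplementation of the published architecture):

import numpy as np

rng = np.random.default_rng(4)

def anneal(outline, attn, n_sweeps=200, T0=10.0):
    # Metropolis annealing of +/-1 figure/ground units on a grid. Neighbor
    # couplings pull units toward like labels but are cut on outline cells;
    # `attn` is a top-down bias marking the attended (inside) region.
    ny, nx = outline.shape
    s = rng.choice([-1, 1], size=(ny, nx))
    for sweep in range(n_sweeps):
        T = T0 * (1.0 - sweep / n_sweeps) + 0.1        # cooling schedule
        for _ in range(ny * nx):
            i, j = rng.integers(ny), rng.integers(nx)
            h = attn[i, j]
            if not outline[i, j]:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    k, l = i + di, j + dj
                    if 0 <= k < ny and 0 <= l < nx:
                        h += s[k, l]
            dE = 2.0 * s[i, j] * h                     # energy cost of a flip
            if dE < 0.0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]
    return s                                           # +1 figure, -1 ground

outline = np.zeros((20, 20), bool)
outline[5:15, 5] = outline[5:15, 14] = True
outline[5, 5:15] = outline[14, 5:15] = True            # a square outline
attn = np.zeros((20, 20)); attn[10, 10] = 2.0          # attend inside
labels = anneal(outline, attn)

Removing segments of the outline, or lowering T0 to approximate pure gradient descent, gives a feel for the robustness contrast the abstract reports between annealing and descent.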
Morphological Constraints on Cerebellar Granule Cell Combinatorial Diversity.
Gilmer, Jesse I; Person, Abigail L
2017-12-13
Combinatorial expansion by the cerebellar granule cell layer (GCL) is fundamental to theories of cerebellar contributions to motor control and learning. Granule cells (GrCs) sample approximately four mossy fiber inputs and are thought to form a combinatorial code useful for pattern separation and learning. We constructed a spatially realistic model of the cerebellar GCL and examined how GCL architecture contributes to GrC combinatorial diversity. We found that GrC combinatorial diversity saturates quickly as mossy fiber input diversity increases, and that this saturation is in part a consequence of short dendrites, which limit access to diverse inputs and favor dense sampling of local inputs. This local sampling also produced GrCs that were combinatorially redundant, even when input diversity was extremely high. In addition, we found that mossy fiber clustering, which is a common anatomical pattern, also led to increased redundancy of GrC input combinations. We related this redundancy to hypothesized roles of temporal expansion of GrC information encoding in service of learned timing, and we show that GCL architecture produces GrC populations that support both temporal and combinatorial expansion. Finally, we used novel anatomical measurements from mice of either sex to inform modeling of sparse and filopodia-bearing mossy fibers, finding that these circuit features uniquely contribute to enhancing GrC diversification and redundancy. Our results complement information theoretic studies of granule layer structure and provide insight into the contributions of granule layer anatomical features to afferent mixing. SIGNIFICANCE STATEMENT Cerebellar granule cells are among the simplest neurons, with tiny somata and, on average, just four dendrites. These characteristics, along with their dense organization, inspired influential theoretical work on the granule cell layer as a combinatorial expander, where each granule cell represents a unique combination of inputs. Despite the centrality of these theories to cerebellar physiology, the degree of expansion supported by anatomically realistic patterns of inputs is unknown. Using modeling and anatomy, we show that realistic input patterns constrain combinatorial diversity by producing redundant combinations, which nevertheless could support temporal diversification of like combinations, suitable for learned timing. Our study suggests a neural substrate for producing high levels of both combinatorial and temporal diversity in the granule cell layer. Copyright © 2017 the authors 0270-6474/17/3712153-14$15.00/0.
NASA Astrophysics Data System (ADS)
Campbell, Lucy J.; Shepherd, Theodore G.
2005-12-01
This study examines the effect of combining equatorial planetary wave drag and gravity wave drag in a one-dimensional zonal mean model of the quasi-biennial oscillation (QBO). Several different combinations of planetary wave and gravity wave drag schemes are considered in the investigations, with the aim being to assess which aspects of the different schemes affect the nature of the modeled QBO. Results show that it is possible to generate a realistic-looking QBO with various combinations of drag from the two types of waves, but there are some constraints on the wave input spectra and amplitudes. For example, if the phase speeds of the gravity waves in the input spectrum are large relative to those of the equatorial planetary waves, critical level absorption of the equatorial planetary waves may occur. The resulting mean-wind oscillation, in that case, is driven almost exclusively by the gravity wave drag, with only a small contribution from the planetary waves at low levels. With an appropriate choice of wave input parameters, it is possible to obtain a QBO with a realistic period and to which both types of waves contribute. This is the regime in which the terrestrial QBO appears to reside. There may also be constraints on the initial strength of the wind shear, and these are similar to the constraints that apply when gravity wave drag is used without any planetary wave drag. In recent years, it has been observed that, in order to simulate the QBO accurately, general circulation models require parameterized gravity wave drag, in addition to the drag from resolved planetary-scale waves, and that even if the planetary wave amplitudes are incorrect, the gravity wave drag can be adjusted to compensate. This study provides a basis for knowing that such a compensation is possible.
NASA Astrophysics Data System (ADS)
Troselj, Josko; Sayama, Takahiro; Varlamov, Sergey M.; Sasaki, Toshiharu; Racault, Marie-Fanny; Takara, Kaoru; Miyazawa, Yasumasa; Kuroki, Ryusuke; Yamagata, Toshio; Yamashiki, Yosuke
2017-12-01
This study demonstrates the importance of accurate extreme discharge input in combined hydrological and oceanographic modeling by examining two extreme typhoon events. We investigated the effects of extreme freshwater outflow events from river mouths on the sea surface salinity (SSS) distribution in the coastal zone of north-eastern Japan. Previous studies have used observed discharge at the river mouth, as well as seasonally averaged inter-annual, annual, monthly or daily simulated data. Here, we reproduced the hourly peak discharge during two typhoon events for a targeted set of nine rivers and compared their impact on SSS in the coastal zone based on observed, climatological and simulated freshwater outflows, in conjunction with verification of the results using satellite remote-sensing data. We created a set of hourly simulated freshwater outflow data from nine first-class Japanese river basins flowing to the western Pacific Ocean for the two targeted typhoon events (Chataan and Roke) and used it with the integrated hydrological (CDRMV3.1.1) and oceanographic (JCOPE-T) models to compare the case using climatological mean monthly discharges as freshwater input from rivers with the case using our hydrological model simulated discharges. By using the CDRMV model optimized with the SCE-UA method, we successfully reproduced hindcasts for peak discharges of extreme typhoon events at the river mouths and could consider multiple river basin locations. Modeled SSS results were verified by comparison with the Chlorophyll-a distribution observed by satellite remote sensing. The projection of SSS in the coastal zone became more realistic than without including extreme freshwater outflow. These results suggest that our hydrological models, with optimized model parameters calibrated to the Typhoon Roke and Chataan cases, can be successfully used to predict runoff from other extreme precipitation events with similar physical characteristics. Proper simulation of extreme typhoon events provides more realistic coastal SSS and may allow a different scenario analysis with various precipitation inputs for developing a nowcasting analysis in the future.
Parameter Extraction Method for the Electrical Model of a Silicon Photomultiplier
NASA Astrophysics Data System (ADS)
Licciulli, Francesco; Marzocca, Cristoforo
2016-10-01
The availability of an effective electrical model, able to accurately reproduce the signals generated by a Silicon Photo-Multiplier coupled to the front-end electronics, is mandatory when the performance of a detection system based on this kind of detector has to be evaluated by means of reliable simulations. We propose a complete extraction procedure able to provide the whole set of parameters involved in a well-known model of the detector, which includes the substrate ohmic resistance. The technique achieves a very good quality of fit between simulation results provided by the model and experimental data, thanks to accurate discrimination between the quenching and substrate resistances, which results in a realistic set of extracted parameters. The extraction procedure has been applied to a commercial device considering a wide range of conditions in terms of input resistance of the front-end electronics and interconnection parasitics. In all the considered situations, very good correspondence has been found between simulations and measurements, especially for the leading edge of the current pulses generated by the detector, which strongly affects the timing performance of the detection system, thus confirming the effectiveness of the model and the associated parameter extraction technique.
Surgical stent planning: simulation parameter study for models based on DICOM standards.
Scherer, S; Treichel, T; Ritter, N; Triebel, G; Drossel, W G; Burgert, O
2011-05-01
Endovascular Aneurysm Repair (EVAR) can be facilitated by a realistic simulation model of stent-vessel interaction, so its numerical feasibility and integrability in the clinical environment were evaluated. The finite element method was used to determine the necessary simulation parameters for stent-vessel interaction in EVAR. Input variables and result data of the simulation model were examined for their standardization using DICOM supplements. The study identified four essential parameters for the stent-vessel simulation: blood pressure, intima constitution, plaque occurrence and the material properties of vessel and plaque. Output quantities such as the radial force of the stent and the contact pressure between stent and vessel can help the surgeon to evaluate implant fixation and sealing. The model geometry can be saved with DICOM "Surface Segmentation" objects and the upcoming "Implant Templates" supplement. Simulation results can be stored using the "Structured Report". A standards-based general simulation model for optimizing stent-graft selection may be feasible. At present, there are limitations due to the specification of individual vessel material parameters and the simulation of the proximal fixation of stent-grafts with hooks.
NASA Technical Reports Server (NTRS)
Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.
2016-01-01
The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. The FEM study found that intraocular pressure and intracranial pressure had dominant impact on the peak strains in the ONH and retro-laminar optic nerve, respectively; optic nerve and lamina cribrosa stiffness were also important. This investigation illustrates the ability of LHSPRCC to identify the most influential physiological parameters, which must therefore be well-characterized to produce the most accurate numerical results.
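The LHSPRCC machinery itself is compact; a sketch with a toy three-parameter model (hypothetical parameter ranges) follows:

import numpy as np
from scipy.stats import qmc, rankdata

def prcc(X, y):
    # Partial rank correlation: rank-transform, regress the other inputs out
    # of both the target column and the output, then correlate the residuals.
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    out = []
    for j in range(R.shape[1]):
        A = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
        rx = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        res = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        out.append(np.corrcoef(rx, res)[0, 1])
    return np.array(out)

sampler = qmc.LatinHypercube(d=3, seed=5)              # efficient space filling
X = qmc.scale(sampler.random(n=200), [0.5, 1.0, 10.0], [1.5, 5.0, 30.0])
y = X[:, 0] ** 2 + 0.1 * X[:, 1] + np.random.default_rng(5).normal(0, 0.1, 200)
sensitivities = prcc(X, y)                             # parameter 1 dominates

The rank transform is what lets PRCC handle the monotone but nonlinear input-output relations typical of physiological models, while the regression step adjusts each coefficient for the linear effects of all other inputs.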
NASA Astrophysics Data System (ADS)
Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu
2008-09-01
Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding oceanic response to climate change, elucidation of the interplay between plankton dynamics and atmospheric CO2 levels, and examination of alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, discrepancy between model and natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions, then water quality data are used to update the distributions, and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six state variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment; the Gulf of Gera, Island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations; a convenient way to obtain representative samples of parameter values. The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and also allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.
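The MCMC step such calibrations rely on can be illustrated with a generic random-walk Metropolis sampler (a toy one-parameter posterior stands in here for the six-state-variable plankton model):

import numpy as np

rng = np.random.default_rng(6)

def metropolis(log_post, theta0, step, n_iter=20_000):
    # Random-walk Metropolis: propose a Gaussian perturbation, accept with
    # probability min(1, posterior ratio); the chain samples the posterior.
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for k in range(n_iter):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[k] = theta
    return chain

# Toy posterior: Gaussian likelihood around observations, standard normal prior.
obs = rng.normal(1.0, 0.2, size=50)
log_post = lambda th: -0.5 * np.sum((obs - th[0]) ** 2) / 0.04 - 0.5 * th @ th
chain = metropolis(log_post, [0.0], step=0.05)

The chain itself is what delivers the two benefits the abstract highlights: marginal histograms give posterior parameter estimates, and scatter between columns quantifies the dependence structure among parameters.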
Dynamic sensitivity analysis of biological systems
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2008-01-01
Background: A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical task. In many practical applications, e.g., fed-batch fermentation systems, the admissible system input (corresponding to independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimension introduced by the time-dependent input. Classical dynamic sensitivity analysis does not take this case into account for the dynamic log gains. Results: We present an algorithm with adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used in the computations of the time profile and dynamic sensitivities with moderate accuracy, even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm and from the direct method with a Rosenbrock stiff integrator based on the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input. Conclusion: By combining the accuracy we demonstrate with the efficiency of being a decoupled direct method, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. PMID:19091016
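The simultaneous solution of model and sensitivity equations that the direct methods build on can be shown in a few lines (a toy one-parameter decay system with a known analytic sensitivity; the paper's adaptive step-size control and log-gain extensions are not reproduced):

import numpy as np
from scipy.integrate import solve_ivp

# Toy system dy/dt = -k*y with sensitivity s = dy/dk, which obeys
# ds/dt = -k*s - y; model and sensitivity are integrated together.
def rhs(t, z, k):
    y, s = z
    return [-k * y, -k * s - y]

k = 0.8
sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], args=(k,), rtol=1e-8, atol=1e-10)
y, s = sol.y

# Check against the analytic sensitivity dy/dk = -t * exp(-k*t).
assert np.allclose(s, -sol.t * np.exp(-k * sol.t), atol=1e-6)

The augmented system doubles the state dimension per parameter, which is why step-size strategies and decoupling matter once realistic models with many parameters and stiff kinetics are involved.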
Quantification of uncertainties in the tsunami hazard for Cascadia using statistical emulation
NASA Astrophysics Data System (ADS)
Guillas, S.; Day, S. J.; Joakim, B.
2016-12-01
We present new high-resolution simulations of tsunami wave propagation and coastal inundation for the Cascadia region in the Pacific Northwest. The coseismic representation in this analysis is novel, and more realistic than in previous studies, as we jointly parametrize multiple aspects of the seabed deformation. Due to the large computational cost of such simulators, statistical emulation is required in order to carry out uncertainty quantification tasks, as emulators efficiently approximate simulators. The emulator replaces the tsunami model VOLNA by a fast surrogate, so we are able to efficiently propagate uncertainties from the source characteristics to wave heights, in order to probabilistically assess tsunami hazard for Cascadia. We employ a new method for the design of the computer experiments in order to reduce the number of runs while maintaining good approximation properties of the emulator. Out of the initial nine parameters, mostly describing the geometry and time variation of the seabed deformation, we drop two parameters since these turn out not to influence the resulting tsunami waves at the coast. We model the impact of another parameter linearly, as its influence on the wave heights is identified as linear. We combine this screening approach with the sequential design algorithm MICE (Mutual Information for Computer Experiments), which adaptively selects the input values at which to run the computer simulator in order to maximize the expected information gain (mutual information) over the input space. As a result, the emulation is made possible and accurate. Starting from distributions of the source parameters that encapsulate geophysical knowledge of the possible source characteristics, we derive distributions of the tsunami wave heights along the coastline.
Realistic continuous-variable quantum teleportation with non-Gaussian resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dell'Anno, F.; De Siena, S.; CNR-INFM Coherentia, Napoli, Italy, and CNISM and INFN Sezione di Napoli, Gruppo Collegato di Salerno, Baronissi, SA
2010-01-15
We present a comprehensive investigation of nonideal continuous-variable quantum teleportation implemented with entangled non-Gaussian resources. We discuss in a unified framework the main decoherence mechanisms, including imperfect Bell measurements and propagation of optical fields in lossy fibers, applying the formalism of the characteristic function. By exploiting appropriate displacement strategies, we compute analytically the success probability of teleportation for input coherent states and two classes of non-Gaussian entangled resources: two-mode squeezed Bell-like states (that include as particular cases photon-added and photon-subtracted de-Gaussified states), and two-mode squeezed catlike states. We discuss the optimization procedure on the free parameters of the non-Gaussian resources at fixed values of the squeezing and of the experimental quantities determining the inefficiencies of the nonideal protocol. It is found that non-Gaussian resources enhance significantly the efficiency of teleportation and are more robust against decoherence than the corresponding Gaussian ones. Partial information on the alphabet of input states allows further significant improvement in the performance of the nonideal teleportation protocol.
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately for wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, the LWP input showed the least streamflow error in the Alapaha basin and the CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
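The two-step scheme can be sketched with synthetic station data (hypothetical predictors stand in for coordinates, elevation, etc.; the occurrence step uses logistic regression and the amount step a separate wet-day regression, as the abstract describes):

import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(7)

# Synthetic "station" predictors and a synthetic truth (assumed structure).
X = rng.normal(size=(500, 3))
wet = (X[:, 0] + rng.normal(0.0, 0.5, 500)) > 0.0      # true occurrence
amount = np.where(wet, np.exp(0.8 * X[:, 1] + rng.normal(0.0, 0.3, 500)), 0.0)

occ = LogisticRegression().fit(X, wet)                 # step 1: occurrence
amt = LinearRegression().fit(X[wet], np.log(amount[wet]))  # step 2: wet amounts

X_new = rng.normal(size=(10, 3))
p_wet = occ.predict_proba(X_new)[:, 1]
est = np.where(p_wet > 0.5, np.exp(amt.predict(X_new)), 0.0)

Fitting the amount model only on wet days is the key design choice: it stops the many zero observations from dragging the regression toward unrealistically smooth, low precipitation fields.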
Carey, Ryan M.; Sherwood, William Erik; Shipley, Michael T.; Borisyuk, Alla
2015-01-01
Olfaction in mammals is a dynamic process driven by the inhalation of air through the nasal cavity. Inhalation determines the temporal structure of sensory neuron responses and shapes the neural dynamics underlying central olfactory processing. Inhalation-linked bursts of activity among olfactory bulb (OB) output neurons [mitral/tufted cells (MCs)] are temporally transformed relative to those of sensory neurons. We investigated how OB circuits shape inhalation-driven dynamics in MCs using a modeling approach that was highly constrained by experimental results. First, we constructed models of canonical OB circuits that included mono- and disynaptic feedforward excitation, recurrent inhibition and feedforward inhibition of the MC. We then used experimental data to drive inputs to the models and to tune parameters; inputs were derived from sensory neuron responses during natural odorant sampling (sniffing) in awake rats, and model output was compared with recordings of MC responses to odorants sampled with the same sniff waveforms. This approach allowed us to identify OB circuit features underlying the temporal transformation of sensory inputs into inhalation-linked patterns of MC spike output. We found that realistic input-output transformations can be achieved independently by multiple circuits, including feedforward inhibition with slow onset and decay kinetics and parallel feedforward MC excitation mediated by external tufted cells. We also found that recurrent and feedforward inhibition had differential impacts on MC firing rates and on inhalation-linked response dynamics. These results highlight the importance of investigating neural circuits in a naturalistic context and provide a framework for further explorations of signal processing by OB networks. PMID:25717156
NASA Astrophysics Data System (ADS)
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.
2017-02-01
This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or way-out initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
Post-processing of multi-model ensemble river discharge forecasts using censored EMOS
NASA Astrophysics Data System (ADS)
Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian
2014-05-01
When forecasting water levels and river discharge, ensemble weather forecasts are used as meteorological input to hydrologic process models. As hydrologic models are imperfect and the input ensembles tend to be biased and underdispersed, the output ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, statistical post-processing is required in order to achieve calibrated and sharp predictions. Standard post-processing methods such as Ensemble Model Output Statistics (EMOS) that have their origins in meteorological forecasting are now increasingly being used in hydrologic applications. Here we consider two sub-catchments of River Rhine, for which the forecasting system of the Federal Institute of Hydrology (BfG) uses runoff data that are censored below predefined thresholds. To address this methodological challenge, we develop a censored EMOS method that is tailored to such data. The censored EMOS forecast distribution can be understood as a mixture of a point mass at the censoring threshold and a continuous part based on a truncated normal distribution. Parameter estimates of the censored EMOS model are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over the training dataset. Model fitting on Box-Cox transformed data allows us to take account of the positive skewness of river discharge distributions. In order to achieve realistic forecast scenarios over an entire range of lead-times, there is a need for multivariate extensions. To this end, we smooth the marginal parameter estimates over lead-times. In order to obtain realistic scenarios of discharge evolution over time, the marginal distributions have to be linked with each other. To this end, the multivariate dependence structure can either be adopted from the raw ensemble like in Ensemble Copula Coupling (ECC), or be estimated from observations in a training period. The censored EMOS model has been applied to multi-model ensemble forecasts issued on a daily basis over a period of three years. For the two catchments considered, this resulted in well calibrated and sharp forecast distributions over all lead-times from 1 to 114 h. Training observations tended to be better indicators for the dependence structure than the raw ensemble.
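The censored predictive distribution itself is easy to write down: all normal mass below the censoring threshold collapses onto a point mass at the threshold (a sketch only; the paper estimates the location and spread from the ensemble by CRPS minimization and works on Box-Cox transformed data, both omitted here):

import numpy as np
from scipy.stats import norm

def censored_cdf(y, mu, sigma, c):
    # CDF of a normal censored below c: zero below the threshold, then a
    # jump of size norm.cdf(c, mu, sigma) at y = c (the point mass).
    return np.where(np.asarray(y, float) < c, 0.0, norm.cdf(y, mu, sigma))

def censored_quantile(q, mu, sigma, c):
    # Quantile function: returns c while q still lies inside the point mass.
    return np.maximum(c, norm.ppf(q, mu, sigma))

# Example: 10/50/90% runoff quantiles with a censoring threshold at zero.
print(censored_quantile(np.array([0.1, 0.5, 0.9]), mu=1.0, sigma=2.0, c=0.0))

This mixture form is exactly what lets the post-processing remain well defined when the verifying observations themselves are only reported as "below threshold".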
NASA Astrophysics Data System (ADS)
Godsey, S. E.; Kirchner, J. W.
2008-12-01
The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths, and convolve these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
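The synthetic-experiment machinery is sketched below for an exponential travel time distribution: convolve a synthetic tracer input with the known distribution, then recover the mean residence time from the spectral power ratio, which for this distribution is 1/(1 + (2*pi*f*tau)^2) (the record length, sampling interval and noise-free setting are simplifying assumptions):

import numpy as np

rng = np.random.default_rng(8)

n, dt = 4096, 1.0                          # e.g., daily sampling (assumed)
tau = 30.0                                 # true mean residence time
t = np.arange(n) * dt
h = np.exp(-t / tau) / tau                 # exponential travel time pdf

c_in = rng.normal(0.0, 1.0, n)             # synthetic precipitation tracer
c_out = np.convolve(c_in, h)[:n] * dt      # stream tracer: damped by mixing

f = np.fft.rfftfreq(n, dt)[1:]
ratio = (np.abs(np.fft.rfft(c_out)) ** 2 / np.abs(np.fft.rfft(c_in)) ** 2)[1:]

band = f < 0.02                            # fit the low-frequency roll-off
taus = np.sqrt(np.maximum(1.0 / ratio[band] - 1.0, 0.0)) / (2 * np.pi * f[band])
tau_est = np.median(taus)                  # recovers ~30 in the noise-free case

Corrupting c_in and c_out with measurement error, shortening n, or coarsening dt before the estimation step reproduces the sensitivity experiments the abstract describes, for the frequency-domain method; the time-domain analogue instead fits the convolution directly.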
NASA Technical Reports Server (NTRS)
Brentner, K. S.
1986-01-01
A computer program has been developed at the Langley Research Center to predict the discrete frequency noise of conventional and advanced helicopter rotors. The program, called WOPWOP, uses the most advanced subsonic formulation of Farassat that is less sensitive to errors and is valid for nearly all helicopter rotor geometries and flight conditions. A brief derivation of the acoustic formulation is presented along with a discussion of the numerical implementation of the formulation. The computer program uses realistic helicopter blade motion and aerodynamic loadings, input by the user, for noise calculation in the time domain. A detailed definition of all the input variables, default values, and output data is included. A comparison with experimental data shows good agreement between prediction and experiment; however, accurate aerodynamic loading is needed.
Multi-ray medical ultrasound simulation without explicit speckle modelling.
Tuzer, Mert; Yazıcı, Abdulkadir; Türkay, Rüştü; Boyman, Michael; Acar, Burak
2018-05-04
We develop a medical ultrasound (US) simulation method that uses T1-weighted magnetic resonance images (MRI) as input and offers a compromise between low-cost ray-based and high-cost realistic wave-based simulations. The proposed method uses a novel multi-ray image formation approach with a virtual phased-array transducer probe. A domain model is built from the input MR images. Multiple virtual acoustic rays emerge from each element of the linear transducer array. Reflected and transmitted acoustic energy at discrete points along each ray is computed independently. Simulated US images are computed by fusion of the reflected energy along multiple rays from multiple transducers, while phase delays due to differences in distances to transducers are taken into account. A preliminary implementation using GPUs is presented. Preliminary results show that the multi-ray approach is capable of automatically generating viewpoint-dependent realistic US images with an inherent Rician-distributed speckle pattern. The proposed simulator can reproduce shadowing artefacts and demonstrates frequency dependence apt for practical training purposes. We also present preliminary results towards the utilization of the method for real-time simulations. The proposed method offers a low-cost near-real-time wave-like simulation of realistic US images from input MR data. It can further be improved to cover pathological findings using an improved domain model, without any algorithmic updates. Such a domain model would require lesion segmentation or manual embedding of virtual pathologies for training purposes.
Schweighofer, N; Spoelstra, J; Arbib, M A; Kawato, M
1998-01-01
The cerebellum is essential for the control of multijoint movements; when the cerebellum is lesioned, the performance error is more than the summed errors produced by single joints. In the companion paper (Schweighofer et al., 1998), a functional anatomical model for visually guided arm movement was proposed. The model comprised a basic feedforward/feedback controller with realistic transmission delays and was connected to a two-link, six-muscle, planar arm. In the present study, we examined the role of the cerebellum in reaching movements by embedding a novel, detailed cerebellar neural network in this functional control model. We could derive realistic cerebellar inputs and the role of the cerebellum in learning to control the arm was assessed. This cerebellar network learned the part of the inverse dynamics of the arm not provided by the basic feedforward/feedback controller. Despite realistically low inferior olive firing rates and noisy mossy fibre inputs, the model could reduce the error between intended and planned movements. The responses of the different cell groups were comparable to those of biological cell groups. In particular, the modelled Purkinje cells exhibited directional tuning after learning and the parallel fibres, due to their length, provide Purkinje cells with the input required for this coordination task. The inferior olive responses contained two different components; the earlier response, locked to movement onset, was always present and the later response disappeared after learning. These results support the theory that the cerebellum is involved in motor learning.
Sustainability of transport structures - some aspects of the nonlinear reliability assessment
NASA Astrophysics Data System (ADS)
Pukl, Radomír; Sajdlová, Tereza; Strauss, Alfred; Lehký, David; Novák, Drahomír
2017-09-01
Efficient techniques for nonlinear numerical analysis of concrete structures and advanced stochastic simulation methods have been combined in order to offer an advanced tool for realistic assessment of the behaviour, failure and safety of transport structures. The utilized approach is based on randomization of the nonlinear finite element analysis of the structural models. Degradation aspects such as carbonation of concrete can be accounted for in order to predict the durability of the investigated structure and its sustainability. Results can serve as a rational basis for performance and sustainability assessment based on advanced nonlinear computer analysis of structures of transport infrastructure such as bridges or tunnels. In the stochastic simulation, the input material parameters obtained from material tests, including their randomness and uncertainty, are represented as random variables or fields. Appropriate identification of material parameters is crucial for the virtual failure modelling of structures and structural elements. An inverse analysis approach using artificial neural networks and virtual stochastic simulations is applied to determine the fracture-mechanical parameters of the structural material and its numerical model. Structural response, reliability and sustainability have been investigated on different types of transport structures made from various materials using the above-mentioned methodology and tools.
Consistency relations for sharp features in the primordial spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mooij, Sander; Palma, Gonzalo A.; Panotopoulos, Grigoris
We study the generation of sharp features in the primordial spectra within the framework of effective field theory of inflation, wherein curvature perturbations are the consequence of the dynamics of a single scalar degree of freedom. We identify two sources in the generation of features: rapid variations of the sound speed c_s (at which curvature fluctuations propagate) and rapid variations of the expansion rate H during inflation. With this in mind, we propose a non-trivial relation linking these two quantities that allows us to study the generation of sharp features in realistic scenarios where features are the result of the simultaneous occurrence of these two sources. This relation depends on a single parameter with a value determined by the particular model (and its numerical input) responsible for the rapidly varying background. As a consequence, we find a one-parameter consistency relation between the shape and size of features in the bispectrum and features in the power spectrum. To substantiate this result, we discuss several examples of models for which this one-parameter relation (between c_s and H) holds, including models in which features in the spectra are both sudden and resonant.
NASA Astrophysics Data System (ADS)
Maltz, Jonathan S.
2000-11-01
We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×10^5 total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
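The parameter-redundancy argument can be made concrete: an SVD of the family of exponential modes over the anticipated rate range yields a handful of orthogonal temporal functions, which are then convolved with a measured input function to form the reduced basis (the frame times, rate range, and gamma-variate input below are illustrative assumptions):

import numpy as np

t = np.linspace(0.0, 60.0, 128)                  # frame times (min), assumed
lams = np.logspace(-3, 0, 200)                   # anticipated rate range
E = np.exp(-np.outer(lams, t))                   # family of exponential modes

U, s, Vt = np.linalg.svd(E, full_matrices=False)
k = int(np.sum(s / s[0] > 1e-3))                 # a few functions suffice
basis = Vt[:k]                                   # orthogonal temporal basis

Cp = t * np.exp(-t / 8.0)                        # toy measured input function
dt = t[1] - t[0]
conv = np.array([np.convolve(b, Cp)[: t.size] for b in basis]) * dt

# Fit a simulated tissue activity curve with the Moore-Penrose pseudoinverse.
tac = 0.2 * np.convolve(np.exp(-0.05 * t), Cp)[: t.size] * dt
coeffs = np.linalg.pinv(conv.T) @ tac            # coefficients of the new basis

The rapid decay of the singular values s is the redundancy the abstract exploits: hundreds of candidate decay rates collapse onto a basis of only a few orthogonal functions.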
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
RECONSTRUCTING THE SOLAR WIND FROM ITS EARLY HISTORY TO CURRENT EPOCH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Airapetian, Vladimir S.; Usmanov, Arcadi V., E-mail: vladimir.airapetian@nasa.gov, E-mail: avusmanov@gmail.com
Stellar winds from active solar-type stars can play a crucial role in removal of stellar angular momentum and erosion of planetary atmospheres. However, major wind properties except for mass-loss rates cannot be directly derived from observations. We employed a three-dimensional magnetohydrodynamic Alfvén wave driven solar wind model, ALF3D, to reconstruct the solar wind parameters including the mass-loss rate, terminal velocity, and wind temperature at 0.7, 2, and 4.65 Gyr. Our model treats the wind thermal electrons, protons, and pickup protons as separate fluids and incorporates turbulence transport, eddy viscosity, turbulent resistivity, and turbulent heating to properly describe proton and electron temperatures of the solar wind. To study the evolution of the solar wind, we specified three input model parameters, the plasma density, Alfvén wave amplitude, and the strength of the dipole magnetic field at the wind base, for each of three solar wind evolution models that are consistent with observational constraints. Our model results show that at 0.7 Gyr, in the Sun's early history, the solar wind at 1 AU was twice as fast, ∼50 times denser, and 2 times hotter than today. The theoretical calculations of mass-loss rate appear to be in agreement with the empirically derived values for stars of various ages. These results can provide realistic constraints for wind dynamic pressures on magnetospheres of (exo)planets around the young Sun and other active stars, which is crucial for realistic assessment of the Joule heating of their ionospheres and the corresponding effects of atmospheric erosion.
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
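For context, the conventional 3-2-1-1 maneuver referred to above is an alternating-sign multistep input whose pulse durations follow a 3:2:1:1 ratio. A minimal sketch of generating such a signal (amplitude, base pulse width, and sample time are illustrative values, not those flown on the HARV):

```python
# Generate a conventional 3-2-1-1 multistep input: alternating-sign pulses
# with durations in the ratio 3:2:1:1.
import numpy as np

def multistep_3211(amplitude=1.0, pulse_width=0.5, dt=0.01):
    """Return the sampled 3-2-1-1 control deflection time history."""
    segments = []
    sign = 1.0
    for units in (3, 2, 1, 1):
        n = int(round(units * pulse_width / dt))
        segments.append(sign * amplitude * np.ones(n))
        sign = -sign                      # alternate pulse sign
    return np.concatenate(segments)

u = multistep_3211(amplitude=2.0, pulse_width=0.5, dt=0.01)
t = np.arange(u.size) * 0.01              # matching time vector [s]
```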
Identification of Low Order Equivalent System Models From Flight Test Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
Identification of low order equivalent system dynamic models from flight test data was studied. Inputs were pilot control deflections, and outputs were aircraft responses, so the models characterized the total aircraft response including bare airframe and flight control system. Theoretical investigations were conducted and related to results found in the literature. Low order equivalent system modeling techniques using output error and equation error parameter estimation in the frequency domain were developed and validated on simulation data. It was found that some common difficulties encountered in identifying closed loop low order equivalent system models from flight test data could be overcome using the developed techniques. Implications for data requirements and experiment design were discussed. The developed methods were demonstrated using realistic simulation cases, then applied to closed loop flight test data from the NASA F-18 High Alpha Research Vehicle.
Extending simulation modeling to activity-based costing for clinical procedures.
Glick, N D; Blackmore, C C; Zelman, W N
2000-04-01
A simulation model was developed to measure costs in an Emergency Department setting for patients presenting with possible cervical-spine injury who needed radiological imaging. Simulation, a tool widely used to account for process variability but typically focused on utilization and throughput analysis, is being introduced here as a realistic means to perform an activity-based-costing (ABC) analysis, because traditional ABC methods have difficulty coping with process variation in healthcare. Though the study model has a very specific application, it can be generalized to other settings simply by changing the input parameters. In essence, simulation was found to be an accurate and viable means to conduct an ABC analysis; in fact, the output provides more complete information than could be achieved through other conventional analyses, which gives management more leverage with which to negotiate contractual reimbursements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez, R. B.; Carroll, R. M.; Sisman, O.
1971-02-01
A method to measure the thermal diffusivity of reactor fuels during irradiation is developed, based on a time-dependent heat diffusion equation. With this technique the temperature is measured at only one point in the fuel specimen. This method has the advantage that it is not necessary to know the heat generation (a difficult evaluation during irradiation). The theory includes realistic boundary conditions, applicable to actual experimental systems. The parameters are the time constants associated with the first two time modes in the temperature-vs-time curve resulting from a step change in heat input to the specimen. With the time constants and the necessary material properties and dimensions of the specimen and specimen holder, the thermal diffusivity of the specimen can be calculated.
Estimation of electric fields and current from ground-based magnetometer data
NASA Technical Reports Server (NTRS)
Kamide, Y.; Richmond, A. D.
1984-01-01
Recent advances in numerical algorithms for estimating ionospheric electric fields and currents from ground-based magnetometer data are reviewed and evaluated. Tests of the adequacy of one such algorithm in reproducing large-scale patterns of electrodynamic parameters in the high-latitude ionosphere have yielded generally positive results, at least for some simple cases. Some encouraging advances in producing realistic conductivity models, which are a critical input, are pointed out. When the algorithms are applied to extensive data sets, such as the ones from meridian chain magnetometer networks during the IMS, together with refined conductivity models, unique information on instantaneous electric field and current patterns can be obtained. Examples of electric potentials, ionospheric currents, field-aligned currents, and Joule heating distributions derived from ground magnetic data are presented. Possible directions for future improvements are also pointed out.
Mars topographic clouds: MAVEN/IUVS observations and LMD MGCM predictions
NASA Astrophysics Data System (ADS)
Schneider, Nicholas M.; Connour, Kyle; Forget, Francois; Deighan, Justin; Jain, Sonal; Vals, Margaux; Wolff, Michael J.; Chaffin, Michael S.; Crismani, Matteo; Stewart, A. Ian F.; McClintock, William E.; Holsclaw, Greg; Lefevre, Franck; Montmessin, Franck; Stiepen, Arnaud; Stevens, Michael H.; Evans, J. Scott; Yelle, Roger; Lo, Daniel; Clarke, John T.; Jakosky, Bruce
2017-10-01
The Imaging Ultraviolet Spectrograph (IUVS) instrument on the Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft takes mid-UV spectral images of the Martian atmosphere. From these apoapse disk images, information about clouds and aerosols can be retrieved; these images comprise the only MAVEN observations of topographic clouds and cloud morphologies. Measuring local time variability of large-scale recurring cloud features is made possible with MAVEN's ~4.5-hour elliptical orbit, something not possible with sun-synchronous orbits. We have run the LMD MGCM (Mars global circulation model) at 1° x 1° resolution to simulate water ice cloud formation with inputs consistent with observing parameters and Mars seasons. Topographic clouds are observed to form daily during the late mornings of northern hemisphere spring, and this phenomenon recurs until late summer (Ls = 160°), after which topographic clouds wane in thickness. By northern fall, most topographic clouds cease to form except over Arsia Mons and Pavonis Mons, where clouds can still be observed. Our data show moderate cloud formation over these regions as late as Ls = 220°, something difficult for the model to replicate. Previous studies have shown that models have trouble simulating equatorial cloud thickness in combination with a realistic amount of water vapor and not-too-thick polar water ice clouds, implying aspects of the water cycle are not fully understood. We present data/model comparisons as well as further refinements on parameter inputs based on IUVS observations.
Waller, Niels G; Feuerstahler, Leah
2017-01-01
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler Item Response Theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in mirt (Chalmers, 2012).
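For readers unfamiliar with the 4PM, its item response function extends the three-parameter logistic model with an upper asymptote d (slipping) in addition to the lower asymptote c (guessing). A short sketch of the standard form (parameter values are illustrative):

```python
# Four-parameter logistic item response function underlying the 4PM:
# discrimination a, difficulty b, lower asymptote c, upper asymptote d.
import numpy as np

def p_4pm(theta, a, b, c, d):
    """Probability of a keyed response at latent trait level theta."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-4, 4, 9)
print(p_4pm(theta, a=1.5, b=0.0, c=0.15, d=0.95))
```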
[Finite Element Modelling of the Eye for the Investigation of Accommodation].
Martin, H; Stachs, O; Guthoff, R; Grabow, N
2016-12-01
Background: Accommodation research increasingly uses engineering methods. This article presents the use of the finite element method in accommodation research. Material and Methods: Geometry, material data and boundary conditions are prerequisites for the application of the finite element method. Published data on geometry and materials are reviewed. It is shown how boundary conditions are important and how they influence the results. Results: Two dimensional and three dimensional models of the anterior chamber of the eye are presented. With simple two dimensional models, it is shown that realistic results for the accommodation amplitude can always be achieved. More complex three dimensional models of the accommodation mechanism - including the ciliary muscle - require further investigations of the material data and of the morphology of the ciliary muscle, if they are to achieve realistic results for accommodation. Discussion and Conclusion: The efficiency and the limitations of the finite element method are especially clear for accommodation. Application of the method requires extensive preparation, including acquisition of geometric and material data and experimental validation. However, a validated model can be used as a basis for parametric studies, by systematically varying material data and geometric dimensions. This allows systematic investigation of how essential input parameters influence the results.
Validation of a Thermo-Ablative Model of Elastomeric Internal Insulation Materials
NASA Technical Reports Server (NTRS)
Martin, Heath T.
2017-01-01
In thermo-ablative material modeling, as in many fields of analysis, the quality of the existing models significantly exceeds that of the experimental data required for their validation. In an effort to narrow this gap, a laboratory-scale internal insulation test bed was developed that exposes insulation samples to realistic solid rocket motor (SRM) internal environments while being instrumented to record real-time rates of both model inputs (i.e., chamber pressure, total surface heat flux, and radiative heat flux) as well as model outputs (i.e., material decomposition depths (MDDs) and in-depth material temperatures). In this work, the measured SRM internal environment parameters were used in conjunction with equilibrium thermochemistry codes as inputs to one-dimensional thermo-ablative models of the PBINBR and CFEPDM insulation samples used in the lab-scale test firings. The computed MDD histories were then compared with those deduced from real-time X-ray radiography of the insulation samples, and the calculated in-depth temperatures were compared with those measured by embedded thermocouples. The results of this exercise emphasize the challenges of modeling and testing elastomeric materials in SRM environments while illuminating the path forward to improved fidelity.
Evolution of optimal Hill coefficients in nonlinear public goods games.
Archetti, Marco; Scheuring, István
2016-10-07
In evolutionary game theory, the effect of public goods like diffusible molecules has been modelled using linear, concave, sigmoid and step functions. The observation that biological systems often have sigmoid input-output functions, as described by the Hill equation, suggests that a sigmoid function is more realistic. The Michaelis-Menten model of enzyme kinetics, however, predicts a concave function, and while mechanistic explanations of sigmoid kinetics exist, we lack an adaptive explanation: what is the evolutionary advantage of a sigmoid benefit function? We analyse public goods games in which the shape of the benefit function can evolve, in order to determine the optimal and evolutionarily stable Hill coefficients. We find that, while the dynamics depends on whether output is controlled at the level of the individual or the population, intermediate or high Hill coefficients often evolve, leading to sigmoid input-output functions that for some parameters are so steep as to resemble a step function (an on-off switch). Our results suggest that, even when the shape of the benefit function is unknown, biological public goods should be modelled using a sigmoid or step function rather than a linear or concave function.
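A worked form of the benefit function discussed above, assuming the standard Hill equation with half-saturation constant K: n = 1 recovers a concave Michaelis-Menten curve, while large n approaches a step function. The sampled values are illustrative:

```python
# Hill-type benefit function of public good level x with Hill coefficient n.
import numpy as np

def hill_benefit(x, n, K=0.5):
    """Normalized sigmoid benefit: x^n / (K^n + x^n)."""
    return x**n / (K**n + x**n)

x = np.linspace(0, 1, 6)
for n in (1, 2, 8):                       # concave, mildly sigmoid, near-step
    print(n, np.round(hill_benefit(x, n), 3))
```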
An Adaptive Resonance Theory account of the implicit learning of orthographic word forms.
Glotin, H; Warnier, P; Dandurand, F; Dufau, S; Lété, B; Touzet, C; Ziegler, J C; Grainger, J
2010-01-01
An Adaptive Resonance Theory (ART) network was trained to identify unique orthographic word forms. Each word input to the model was represented as an unordered set of ordered letter pairs (open bigrams) that implement a flexible prelexical orthographic code. The network learned to map this prelexical orthographic code onto unique word representations (orthographic word forms). The network was trained on a realistic corpus of reading textbooks used in French primary schools. The amount of training was strictly identical to children's exposure to reading material from grade 1 to grade 5. Network performance was examined at each grade level. Adjustment of the learning and vigilance parameters of the network allowed us to reproduce the developmental growth of word identification performance seen in children. The network exhibited a word frequency effect and was found to be sensitive to the order of presentation of word inputs, particularly with low frequency words. These words were better learned with a randomized presentation order compared with the order of presentation in the school books. These results open up interesting perspectives for the application of ART networks in the study of the dynamics of learning to read.
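A minimal sketch of the open-bigram code described above, assuming a maximum letter separation (the abstract does not state the gap limit used):

```python
# Represent a word as an unordered set of ordered letter pairs (open bigrams).
# The max_gap limit on letter separation is an assumption for illustration.
from itertools import combinations

def open_bigrams(word, max_gap=2):
    """Ordered letter pairs (i before j) separated by at most max_gap letters."""
    return {word[i] + word[j]
            for i, j in combinations(range(len(word)), 2)
            if j - i <= max_gap + 1}

print(sorted(open_bigrams("table")))
# -> ['ab', 'ae', 'al', 'be', 'bl', 'le', 'ta', 'tb', 'tl']
```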
Experimental and modeling uncertainties in the validation of lower hybrid current drive
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poli, F. M.; Bonoli, P. T.; Chilenski, M.
Our work discusses sources of uncertainty in the validation of lower hybrid wave current drive simulations against experiments, by evolving self-consistently the magnetic equilibrium and the heating and current drive profiles, calculated with a combined toroidal ray tracing code and 3D Fokker–Planck solver. The simulations indicate a complex interplay of elements, where uncertainties in the input plasma parameters, in the models and in the transport solver combine and compensate each other, at times. It is concluded that ray-tracing calculations should include a realistic representation of the density and temperature in the region between the confined plasma and the wall, which is especially important in regimes where the LH waves are weakly damped and undergo multiple reflections from the plasma boundary. Uncertainties introduced in the processing of diagnostic data as well as uncertainties introduced by model approximations are assessed. We show that, by comparing the evolution of the plasma parameters in self-consistent simulations with available data, inconsistencies can be identified and limitations in the models or in the experimental data assessed.
Entanglement of remote material qubits through nonexciting interaction with single photons
NASA Astrophysics Data System (ADS)
Li, Gang; Zhang, Pengfei; Zhang, Tiancai
2018-05-01
We propose a scheme to entangle multiple material qubits through interaction with single photons via nonexciting processes in strongly coupled systems. The basic idea relies on the material-state-dependent reflection and transmission of the input photons. The material qubits in several systems can thus be entangled when one photon interacts with each system in cascade and the photon paths are mixed by the photon detection. Because the nonexciting interaction leaves the state of the material qubit unchanged, it ensures the possibility of purifying entangled states by using more photons under realistic, imperfect parameters, and it guarantees that the scheme can be scaled up directly to entangle more qubits. A detailed analysis of the fidelity and success probability of the scheme in the framework of a strongly coupled system based on an optical Fabry-Pérot cavity is presented. It is shown that a two-qubit entangled state with fidelity above 0.99 is achievable with only two photons using currently feasible experimental parameters. Our scheme can also be directly implemented on other strongly coupled systems.
Shape of the human nasal cavity promotes retronasal smell
NASA Astrophysics Data System (ADS)
Trastour, Sophie; Melchionna, Simone; Mishra, Shruti; Zwicker, David; Lieberman, Daniel E.; Kaxiras, Efthimios; Brenner, Michael P.
2015-11-01
Humans are exceptionally good at perceiving the flavor of food. Flavor includes sensory input from taste receptors but is dominated by olfactory (smell) receptors. To smell food while eating, odors must be transported to the nasal cavity during exhalation. Olfactory performance of this retronasal route depends, among other factors, on the position of the olfactory receptors and the shape of the nasal cavity. One biological hypothesis is that the derived configuration of the human nasal cavity has resulted in a greater capacity for retronasal smell, hence enhanced flavor perception. We here study the air flow and resulting odor deposition as a function of the nasal geometry and the parameters of exhalation. We perform computational fluid dynamics simulations in realistic geometries obtained from CT scans of humans. Using the resulting flow fields, we then study the deposition of tracer particles in the nasal cavity. Additionally, we derive scaling laws for the odor deposition rate as a function of flow parameters and geometry using boundary layer theory. These results allow us to assess which changes in the evolution of the human nose led to significant improvements of retronasal smell.
ERIC Educational Resources Information Center
Zacharos, Konstantinos; Koustourakis, Gerassimos
2011-01-01
The reference contexts that accompany the "realistic" problems chosen for teaching mathematical concepts in the first school grades play a major educational role. However, choosing "realistic" problems in teaching is a complex process that must take into account various pedagogical, sociological and psychological parameters.…
Fang, Yibin; Yu, Ying; Cheng, Jiyong; Wang, Shengzhang; Wang, Kuizhong; Liu, Jian-Min; Huang, Qinghai
2013-01-01
Adjusting hemodynamics via flow diverter (FD) implantation is emerging as a novel method of treating cerebral aneurysms. However, most previous FD-related hemodynamic studies were based on virtual FD deployment, which may produce different hemodynamic outcomes than realistic (in vivo) FD deployment. We compared hemodynamics between virtual FD and realistic FD deployments in rabbit aneurysm models using computational fluid dynamics (CFD) simulations. FDs were implanted for aneurysms in 14 rabbits. Vascular models based on rabbit-specific angiograms were reconstructed for CFD studies. Real FD configurations were reconstructed based on micro-CT scans after sacrifice, while virtual FD configurations were constructed with SolidWorks software. Hemodynamic parameters before and after FD deployment were analyzed. According to the metal coverage (MC) of implanted FDs calculated based on micro-CT reconstruction, 14 rabbits were divided into two groups (A, MC >35%; B, MC <35%). Normalized mean wall shear stress (WSS), relative residence time (RRT), inflow velocity, and inflow volume in Group A differed significantly (P<0.05) between realistic and virtual FD deployment, but pressure did not (P>0.05). The normalized mean WSS in Group A after realistic FD implantation was significantly lower than that of Group B. All parameters in Group B exhibited no significant difference between realistic and virtual FDs. This study confirmed MC-correlated differences in hemodynamic parameters between realistic and virtual FD deployment. PMID:23823503
Yılmaz, Bülent; Çiftçi, Emre
2013-06-01
Extracorporeal Shock Wave Lithotripsy (ESWL) is based on disintegration of the kidney stone by delivering high-energy shock waves that are created outside the body and transmitted through the skin and body tissues. Nowadays high-energy shock waves are also used in orthopedic operations and are being investigated for the treatment of myocardial infarction and cancer. Because of these new application areas, novel lithotriptor designs are needed for different kinds of treatment strategies. In this study our aim was to develop a versatile computer simulation environment that gives device designers working on various medical applications of the shock wave principle substantial flexibility while testing the effects of new parameters such as reflector size, material properties of the medium, water temperature, and different clinical scenarios. For this purpose, we created a finite-difference time-domain (FDTD)-based computational model in which most of the physical system parameters were defined as an input and/or as a variable in the simulations. We constructed a realistic computational model of a commercial electrohydraulic lithotriptor and optimized our simulation program using the results that were obtained by the manufacturer in an experimental setup. We then compared the simulation results with the results from an experimental setup in which the oxygen level in water was varied. Finally, we studied the effects of changing input parameters such as ellipsoid size and material, temperature change in the wave propagation media, and shock wave source point misalignment. The simulation results were consistent with the experimental results and the expected effects of variation in physical parameters of the system. The results of this study encourage further investigation and provide adequate evidence that the numerical modeling of a shock wave therapy system is feasible and can provide a practical means to test novel ideas in new device design procedures.
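As an illustration of the numerical core of such FDTD acoustic models, here is a minimal 1-D linear-acoustics leapfrog scheme on a staggered grid. It is a toy sketch, not the authors' 2-D/3-D electrohydraulic lithotriptor model; the grid, source pulse, and water properties are assumed values:

```python
# Minimal 1-D linear-acoustics FDTD: pressure p on integer grid points,
# particle velocity v on half points, updated leapfrog in time.
import numpy as np

c, rho = 1500.0, 1000.0          # speed of sound [m/s], density [kg/m^3] (water)
nx, dx = 400, 1e-4               # grid size and spacing [m]
dt = 0.9 * dx / c                # CFL-stable time step [s]
p = np.zeros(nx)
v = np.zeros(nx + 1)

for step in range(300):
    # drive with a short Gaussian pressure pulse at the left boundary
    p[0] = np.exp(-((step * dt - 5e-7) / 2e-7) ** 2)
    v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])      # momentum equation
    p -= dt * rho * c**2 / dx * (v[1:] - v[:-1])       # continuity equation

print(float(p.max()))
```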
Improved meteorology from an updated WRF/CMAQ modeling ...
Realistic vegetation characteristics and phenology from the Moderate Resolution Imaging Spectroradiometer (MODIS) products improve the simulation for the meteorology and air quality modeling system WRF/CMAQ (Weather Research and Forecasting model and Community Multiscale Air Quality model) that employs the Pleim-Xiu land surface model (PX LSM). Recently, PX LSM WRF/CMAQ has been updated in vegetation, soil, and boundary layer processes resulting in improved 2 m temperature (T) and mixing ratio (Q), 10 m wind speed, and surface ozone simulations across the domain compared to the previous version for a period around August 2006. Yearlong meteorology simulations with the updated system demonstrate that MODIS input helps reduce bias of the 2 m Q estimation during the growing season from April to September. Improvements follow the green-up in the southeast from April and move toward the west and north through August. From October to March, MODIS input does not have much influence on the system because vegetation is not as active. The greatest effects of MODIS input include more accurate phenology, better representation of leaf area index (LAI) for various forest ecosystems and agricultural areas, and realistically sparse vegetation coverage in the western drylands. Despite the improved meteorology, MODIS input causes higher bias for the surface O3 simulation in April, August, and October in areas where MODIS LAI is much less than the base LAI. Thus, improvement
Comparing Realistic Subthalamic Nucleus Neuron Models
NASA Astrophysics Data System (ADS)
Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.
2011-06-01
The mechanism of action of clinically effective electrical high frequency stimulation is still under debate. However, recent evidence points at the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman and exhibits a wide variety of different firing patterns: silent, low spiking, moderate spiking and intense spiking activity. We observed that most of the cells in our network turn to silent mode when we increase the GABA-A input conductance above the threshold of 3.75 mS/cm². On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with the modified model at different conductance levels, we apply four different (dis)similarity measures between them. We observe that Mahalanobis distance, Victor-Purpura metric, and Interspike Interval distribution are sensitive to different firing regimes, whereas Mutual Information seems undiscriminative for these functional changes.
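As an example of the kind of spike-train comparison involved, a simple distance between interspike-interval (ISI) distributions can be computed as the L1 difference between normalized ISI histograms. This is a generic sketch; the paper's exact ISI measure and binning may differ:

```python
# L1 distance between normalized ISI histograms of two spike trains [ms].
# Bin edges are an assumed discretization choice.
import numpy as np

def isi_distance(spikes_a, spikes_b, bins=np.linspace(0, 100, 21)):
    """Approximate integral of |difference| between ISI densities."""
    ha, _ = np.histogram(np.diff(spikes_a), bins=bins, density=True)
    hb, _ = np.histogram(np.diff(spikes_b), bins=bins, density=True)
    return np.abs(ha - hb).sum() * (bins[1] - bins[0])

rng = np.random.default_rng(7)
train_a = np.cumsum(rng.exponential(20.0, 50))   # Poisson-like spiking
train_b = np.cumsum(rng.exponential(35.0, 50))   # slower firing
print(isi_distance(train_a, train_b))
```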
NASA Astrophysics Data System (ADS)
Qian, Guian; Lei, Wei-Sheng; Niffenegger, M.; González-Albuixech, V. F.
2018-04-01
This work investigates the effect of temperature on the model parameters in local approaches (LAs) to cleavage fracture. According to a recently developed LA model, the physical consensus that plastic deformation is a prerequisite to cleavage fracture requires any LA model to treat the initial yielding of a volume element as its threshold stress state for cleavage fracture, in addition to the conventional practice of confining the fracture process zone within the plastic deformation zone. The physical consistency of the new LA model with the basic LA methodology, and the differences between the new LA model and other existing models, are interpreted. This new LA model is then adopted to investigate the temperature dependence of LA model parameters using circumferentially notched round tensile specimens. With published strength data as input, finite element (FE) calculations are conducted for elastic-perfectly plastic deformation and for realistic elastic-plastic hardening, respectively, to provide stress distributions for model calibration. The calibration results in temperature-independent model parameters. This leads to the establishment of a 'master curve' that synchronises the correlation between the nominal strength and the corresponding cleavage fracture probability at different temperatures. This 'master curve' behaviour is verified by strength data from three different steels, providing a new path to calculate cleavage fracture probability with significantly reduced FE effort.
Modeling scintillator and WLS fiber signals for fast Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Sánchez, F. A.; Medina-Tanco, G.
2010-08-01
In this work we present a fast, robust and flexible procedure to simulate electronic signals of scintillator units: plastic scintillator material embedded with a wavelength shifter optical fiber coupled to a photo-multiplier tube which, in turn, is plugged to a front-end electronic board. The simple rationale behind the simulation chain makes it possible to adapt the procedure to a broad range of detectors based on that kind of units. We show that, in order to produce realistic results, the simulation parameters can be properly calibrated against laboratory measurements and used thereafter as input of the simulations. Simulated signals of atmospheric background cosmic ray muons are presented and their main features analyzed and validated using actual measured data. Conversely, for any given practical application, the present simulation scheme can be used to find an adequate combination of photo-multiplier tube and optical fiber at the prototyping stage.
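A minimal sketch of the kind of simulation chain described: photoelectron counts drawn from a Poisson distribution, arrival times from the fiber decay, and a convolution with a single-photoelectron pulse shape plus electronic noise. All constants stand in for the laboratory-calibrated inputs:

```python
# Toy scintillator-unit signal: Poisson photoelectron statistics, exponential
# WLS-fiber arrival times, two-exponential PMT pulse shape, Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1.0, 256                       # ns per bin, trace length
mean_pe, tau_fiber = 20.0, 7.0         # mean photoelectrons, fiber decay [ns]

def spe_pulse(t, t0=0.0, rise=2.0, fall=8.0):
    """Simple two-exponential single-photoelectron pulse shape."""
    s = np.clip(t - t0, 0, None)
    return np.exp(-s / fall) - np.exp(-s / rise)

t = np.arange(n) * dt
n_pe = rng.poisson(mean_pe)                       # photoelectron count
arrivals = rng.exponential(tau_fiber, n_pe)       # arrival times [ns]
signal = sum(spe_pulse(t, t0) for t0 in arrivals)
signal = signal + rng.normal(0.0, 0.02, n)        # front-end electronic noise
```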
Potential release of fibers from burning carbon composites. [aircraft fires
NASA Technical Reports Server (NTRS)
Bell, V. L.
1980-01-01
A comprehensive experimental carbon fiber source program was conducted to determine the potential for the release of conductive carbon fibers from burning composites. Laboratory testing determined the relative importance of several parameters influencing the amounts of single fibers released, while large-scale aviation jet fuel pool fires provided realistic confirmation of the laboratory data. The dimensions and size distributions of fire-released carbon fibers were determined, not only for those of concern in an electrical sense, but also for those of potential interest from a health and environmental standpoint. Fire plume and chemistry studies were performed with large pool fires to provide an experimental input into an analytical modelling of simulated aircraft crash fires. A study of a high voltage spark system resulted in a promising device for the detection, counting, and sizing of electrically conductive fibers, for both active and passive modes of operation.
Continuous-Variable Instantaneous Quantum Computing is Hard to Sample.
Douce, T; Markham, D; Kashefi, E; Diamanti, E; Coudreau, T; Milman, P; van Loock, P; Ferrini, G
2017-02-17
Instantaneous quantum computing is a subuniversal quantum complexity class, whose circuits have proven to be hard to simulate classically in the discrete-variable realm. We extend this proof to the continuous-variable (CV) domain by using squeezed states and homodyne detection, and by exploring the properties of postselected circuits. In order to treat postselection in CVs, we consider finitely resolved homodyne detectors, corresponding to a realistic scheme based on discrete probability distributions of the measurement outcomes. The unavoidable errors stemming from the use of finitely squeezed states are suppressed through a qubit-into-oscillator Gottesman-Kitaev-Preskill encoding of quantum information, which was previously shown to enable fault-tolerant CV quantum computation. Finally, we show that, in order to render postselected computational classes in CVs meaningful, a logarithmic scaling of the squeezing parameter with the circuit size is necessary, translating into a polynomial scaling of the input energy.
Vortices in high-performance high-temperature superconductors
Kwok, Wai-Kwong; Welp, Ulrich; Glatz, Andreas; ...
2016-09-21
The behavior of vortex matter in high-temperature superconductors (HTS) controls the entire electromagnetic response of the material, including its current carrying capacity. In this paper, we review the basic concepts of vortex pinning and its application to a complex mixed pinning landscape to enhance the critical current and to reduce its anisotropy. We focus on recent scientific advances that have resulted in large enhancements of the in-field critical current in state-of-the-art second generation (2G) YBCO coated conductors and on the prospect of an isotropic, high-critical current superconductor in the iron-based superconductors. Finally, we discuss an emerging new paradigm of critical current by design: a drive to achieve a quantitative correlation between the observed critical current density and mesoscale mixed pinning landscapes by using realistic input parameters in an innovative and powerful large-scale time dependent Ginzburg–Landau approach to simulating vortex dynamics.
ConfocalGN: A minimalistic confocal image generator
NASA Astrophysics Data System (ADS)
Dmitrieff, Serge; Nédélec, François
Validating image analysis pipelines and training machine-learning segmentation algorithms require images with known features. Synthetic images can be used for this purpose, with the advantage that large reference sets can be produced easily. It is however essential to obtain images that are as realistic as possible in terms of noise and resolution, which is challenging in the field of microscopy. We describe ConfocalGN, a user-friendly software that can generate synthetic microscopy stacks from a ground truth (i.e. the observed object) specified as a 3D bitmap or a list of fluorophore coordinates. This software can analyze a real microscope image stack to set the noise parameters and directly generate new images of the object with noise characteristics similar to that of the sample image. With a minimal input from the user and a modular architecture, ConfocalGN is easily integrated with existing image analysis solutions.
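A minimal sketch of what such a generator does, assuming a Gaussian PSF and Poisson-plus-Gaussian noise (ConfocalGN itself estimates the noise parameters from a sample image rather than taking the values assumed here):

```python
# Synthetic microscopy image: blur a ground-truth fluorophore map with a PSF,
# then add signal-dependent Poisson noise and Gaussian readout noise.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
truth = np.zeros((64, 64))
truth[20, 30] = truth[40, 15] = 1000.0            # two point-like fluorophores

ideal = gaussian_filter(truth, sigma=(2.0, 2.0))  # Gaussian PSF (assumed width)
noisy = rng.poisson(ideal + 5.0) + rng.normal(0.0, 2.0, ideal.shape)
```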
A framework for risk assessment and decision-making strategies in dangerous good transportation.
Fabiano, B; Currò, F; Palazzi, E; Pastorino, R
2002-07-01
This paper addresses the risk from dangerous goods transport by road and strategies for selecting loads/routes, by developing an original site-oriented framework of general applicability at the local level. A realistic evaluation of the accident frequency must take into account inherent factors on one side (e.g. tunnels, rail bridges, bend radii, slope, characteristics of neighborhood, etc.) and factors correlated to the traffic conditions on the other (e.g. number of dangerous goods trucks, etc.). Field data were collected on the selected highway, by systematic investigation, providing input data for a database reporting tendencies and intrinsic parameter/site-oriented statistics. The developed technique was applied to a pilot area, considering both the individual risk and societal risk and making reference to flammable and explosive scenarios. In this way, a risk assessment sensitive to route features and the exposed population is proposed, so that the overall uncertainties in risk analysis can be lowered.
A Robust Molecular Network Motif for Period-Doubling Devices.
Cuba Samaniego, Christian; Franco, Elisa
2018-01-19
Life is sustained by a variety of cyclic processes such as cell division, muscle contraction, and neuron firing. The periodic signals powering these processes often direct a variety of other downstream systems, which operate at different time scales and must have the capacity to divide or multiply the period of the master clock. Period modulation is also an important challenge in synthetic molecular systems, where slow and fast components may have to be coordinated simultaneously by a single oscillator whose frequency is often difficult to tune. Circuits that can multiply the period of a clock signal (frequency dividers), such as binary counters and flip-flops, are commonly encountered in electronic systems, but design principles to obtain similar devices in biological systems are still unclear. We take inspiration from the architecture of electronic flip-flops, and we propose to build biomolecular period-doubling networks by combining a bistable switch with negative feedback modules that preprocess the circuit inputs. We identify a network motif and we show it can be "realized" using different biomolecular components; two of the realizations we propose rely on transcriptional gene networks and one on nucleic acid strand displacement systems. We examine the capacity of each realization to perform period-doubling by studying how bistability of the motif is affected by the presence of the input; for this purpose, we employ mathematical tools from algebraic geometry that provide us with valuable insights on the input/output behavior as a function of the realization parameters. We show that transcriptional network realizations operate correctly also in a stochastic regime when processing oscillations from the repressilator, a canonical synthetic in vivo oscillator. Finally, we compare the performance of different realizations in a range of realistic parameters via numerical sensitivity analysis of the period-doubling region, computed with respect to the input period and amplitude. Our mathematical and computational analysis suggests that the motif we propose is generally robust with respect to specific implementation details: functionally equivalent circuits can be built as long as the species-interaction topology is respected. This indicates that experimental construction of the circuit is possible with a variety of components within the rapidly expanding libraries available in synthetic biology.
Viger, Roland J.; Hay, Lauren E.; Jones, John W.; Buell, Gary R.
2010-01-01
This report documents an extension of the Precipitation Runoff Modeling System that accounts for the effect of a large number of water-holding depressions in the land surface on the hydrologic response of a basin. Several techniques for developing the inputs needed by this extension also are presented. These techniques include the delineation of the surface depressions, the generation of volume estimates for the surface depressions, and the derivation of model parameters required to describe these surface depressions. This extension is valuable for applications in basins where surface depressions are too small or numerous to conveniently model as discrete spatial units, but where the aggregated storage capacity of these units is large enough to have a substantial effect on streamflow. In addition, this report documents several new model concepts that were evaluated in conjunction with the depression storage functionality, including 'hydrologically effective' imperviousness, rates of hydraulic conductivity, and daily streamflow routing. All of these techniques are demonstrated as part of an application in the Upper Flint River Basin, Georgia. Simulated solar radiation, potential evapotranspiration, and water balances match observations well, with small errors for the first two simulated quantities in June and August because of differences in temperatures between the calibration and evaluation periods for those months. Daily runoff simulations show increasing accuracy with streamflow and a good fit overall. Including surface depression storage in the model has the effect of decreasing daily streamflow for all but the lowest flow values. The report discusses the choices and resultant effects involved in delineating and parameterizing these features. The remaining enhancements to the model and its application provide a more realistic description of basin geography and hydrology that serve to constrain the calibration process to more physically realistic parameter values.
Building test data from real outbreaks for evaluating detection algorithms.
Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve
2017-01-01
Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
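A minimal sketch of the homothetic-rescaling-plus-resampling idea, shown here with the binomial resampler; the interpolation used for the rescaling step is an assumption about implementation detail:

```python
# Rescale a historical daily outbreak curve to a target duration and case
# count, then draw one binomial sample per day.
import numpy as np

rng = np.random.default_rng(2)
historical = np.array([1, 3, 8, 15, 22, 18, 10, 5, 2, 1], dtype=float)

def simulate_outbreak(hist, target_days, target_cases):
    """Homothetic rescaling of the historical shape + binomial resampling."""
    x_old = np.linspace(0, 1, hist.size)
    x_new = np.linspace(0, 1, target_days)
    shape = np.interp(x_new, x_old, hist)         # stretch/compress in time
    p = shape / shape.sum()                       # daily case probabilities
    return rng.binomial(target_cases, p)          # one binomial draw per day

print(simulate_outbreak(historical, target_days=14, target_cases=120))
```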
NASA Astrophysics Data System (ADS)
Yang, Masaki J. S.
2017-03-01
In this paper, we attempt to build a unified model with the democratic texture that has some unification between the up-type Yukawa interactions Y_ν and Y_u. Since the S_{3L} × S_{3R} flavor symmetry is chiral, the unified gauge group is assumed to be of Pati-Salam type, SU(4)_c × SU(2)_L × SU(2)_R. The breaking scheme of the flavor symmetry is considered to be S_{3L} × S_{3R} → S_{2L} × S_{2R} → 0. In this picture, the four-zero texture is desirable for realistic masses and mixings. This texture is realized by a specific representation for the second breaking of the S_{3L} × S_{3R} flavor symmetry. Assuming only renormalizable Yukawa interactions and the type-I seesaw mechanism, and neglecting CP phases for simplicity, the right-handed neutrino mass matrix M_R can be reconstructed from low energy input values. Numerical analysis shows that the texture of M_R basically behaves like the "waterfall texture." Since M_R tends to be the "cascade texture" in the democratic texture approach, a model with type-I seesaw and up-type Yukawa unification Y_ν ≃ Y_u basically requires fine-tunings between parameters. Therefore, it seems to be more realistic to consider universal waterfall textures for both Y_f and M_R, e.g., by the radiative mass generation or the Froggatt-Nielsen mechanism. Moreover, analysis of eigenvalues shows that the lightest mass eigenvalue M_{R1} is too light to achieve successful thermal leptogenesis. Although resonant leptogenesis might be possible, it also requires fine-tuning of parameters.
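For context, reconstructing M_R from low-energy inputs amounts to inverting the type-I seesaw relation. In one common convention (normalizations and signs vary between papers, and the abstract does not fix them):

```latex
% Type-I seesaw: light neutrino mass matrix from Dirac couplings and M_R,
% and its inversion used to reconstruct M_R from low-energy inputs.
m_\nu \simeq -\frac{v^2}{2}\, Y_\nu\, M_R^{-1}\, Y_\nu^{T}
\qquad\Longrightarrow\qquad
M_R \simeq -\frac{v^2}{2}\, Y_\nu^{T}\, m_\nu^{-1}\, Y_\nu
```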
Universal Responses of Cyclic-Oxidation Models Studied
NASA Technical Reports Server (NTRS)
Smialek, James L.
2003-01-01
Oxidation is an important degradation process for materials operating in the high-temperature air or oxygen environments typical of jet turbine or rocket engines. Reaction of the combustion gases with the component material forms surface layer scales during these oxidative exposures. Typically, the instantaneous rate of reaction is inversely proportional to the existing scale thickness, giving rise to parabolic kinetics. However, more realistic applications entail periodic startup and shutdown. Some scale spallation may occur upon cooling, resulting in loss of the protective diffusion barrier provided by a fully intact scale. Upon reheating, the component will experience accelerated oxidation due to this spallation. Cyclic-oxidation testing has, therefore, been a mainstay of characterization and performance ranking for high-temperature materials. Models simulate this process by calculating how a scale spalls upon cooling and regrows upon heating (refs. 1 to 3). Recently released NASA software (COSP for Windows) allows researchers to specify a uniform layer or discrete segments of spallation (ref. 4). Families of model curves exhibit consistent regularity and trends with input parameters, and characteristic features have been empirically described in terms of these parameters. Although much insight has been gained from experimental and model curves, no equation has been derived that can describe this behavior explicitly as functions of the key oxidation parameters.
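A minimal sketch of the kind of cyclic-oxidation bookkeeping such models perform: parabolic scale growth during each hot dwell, spallation of a uniform fraction of the retained scale on cooldown, and a net specimen mass balance. The rate constant, spall fraction, and Al2O3 stoichiometry are illustrative inputs, not COSP defaults:

```python
# COSP-style cyclic oxidation sketch: grow scale parabolically each hot dwell,
# spall a uniform fraction on each cooldown, track net specimen mass change.
import numpy as np

kp = 0.01               # parabolic rate constant [(mg/cm^2)^2 per hour]
dwell = 1.0             # hot time per cycle [h]
q = 0.05                # fraction of retained scale spalled per cooldown
f_O, f_M = 0.47, 0.53   # oxygen / metal mass fractions in Al2O3 (approx.)

retained, spalled = 0.0, 0.0
history = []
for cycle in range(500):
    retained = np.sqrt(retained**2 + kp * dwell)  # parabolic regrowth
    loss = q * retained                           # spallation on cooldown
    retained -= loss
    spalled += loss
    # net mass change: oxygen gained in retained scale minus metal lost in spall
    history.append(f_O * retained - f_M * spalled)

print(history[0], max(history), history[-1])      # rise, peak, eventual loss
```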
Lattice QCD inputs to the CKM unitarity triangle analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laiho, Jack; Lunghi, E.
2010-02-01
We perform a global fit to the Cabibbo-Kobayashi-Maskawa unitarity triangle using the latest experimental and theoretical constraints. Our emphasis is on the hadronic weak matrix elements that enter the analysis, which must be computed using lattice QCD or other nonperturbative methods. Realistic lattice QCD calculations which include the effects of the dynamical up, down, and strange quarks are now available for all of the standard inputs to the global fit. We therefore present lattice averages for all of the necessary hadronic weak matrix elements. We attempt to account for correlations between lattice QCD results in a reasonable but conservative manner: whenever there are reasons to believe that an error is correlated between two lattice calculations, we take the degree of correlation to be 100%. These averages are suitable for use as inputs both in the global Cabibbo-Kobayashi-Maskawa unitarity triangle fit and other phenomenological analyses. In order to illustrate the impact of the lattice averages, we make standard model predictions for the parameters B̂_K, |V_cb|, and |V_ub|/|V_cb|. We find a (2-3)σ tension in the unitarity triangle, depending upon whether we use the inclusive or exclusive determination of |V_cb|. If we interpret the tension as a sign of new physics in either neutral kaon or B mixing, we find that the scenario with new physics in kaon mixing is preferred by present data.
SMSynth: An Imagery Synthesis System for Soil Moisture Retrieval
NASA Astrophysics Data System (ADS)
Cao, Y.; Xu, L.; Peng, J.
2018-04-01
Soil moisture (SM) is an important variable in various research areas, such as weather and climate forecasting, agriculture, drought and flood monitoring and prediction, and human health. An ongoing challenge in estimating SM via synthetic aperture radar (SAR) is the development of SM retrieval methods; in particular, the empirical models need as training samples many measurements of SM and soil roughness parameters, which are very difficult to acquire. As such, it is difficult to develop empirical models using real SAR imagery, and it is necessary to develop methods to synthesize SAR imagery. To tackle this issue, a SAR imagery synthesis system based on SM, named SMSynth, is presented, which can simulate radar signals that are as realistic as possible compared to real SAR imagery. In SMSynth, SAR backscatter coefficients for each soil type are simulated via the Oh model under the Bayesian framework, where the spatial correlation is modeled by the Markov random field (MRF) model. The backscattering coefficients simulated from the designed soil parameters and sensor parameters are incorporated into the Bayesian framework through the data likelihood, where the soil parameters and sensor parameters are set as close as possible to realistic ground conditions and within the validity range of the Oh model. In this way, a complete and coherent Bayesian probabilistic framework is established. Experimental results show that SMSynth is capable of generating realistic SAR images that suit the needs of a large amount of training samples for empirical models.
Optimization Under Uncertainty for Electronics Cooling Design
NASA Astrophysics Data System (ADS)
Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.
Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as the mean and standard deviation of the output quantities, auxiliary data from an uncertainty-based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
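A minimal sketch of Monte Carlo uncertainty propagation of this kind: sample the uncertain inputs, evaluate a thermal model, and rank input sensitivities by correlation with the output. The lumped model and input distributions below are placeholders for a real electronics-cooling simulation:

```python
# Monte Carlo propagation of input uncertainty through a lumped thermal model,
# with a simple correlation-based sensitivity ranking of the inputs.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
h = rng.normal(50.0, 5.0, n)      # heat transfer coefficient [W/m^2 K]
q = rng.normal(20.0, 2.0, n)      # heat load [W]
A = rng.normal(0.01, 0.0005, n)   # cooled surface area [m^2]

T_rise = q / (h * A)              # placeholder lumped thermal model [K]

print("mean, std:", T_rise.mean(), T_rise.std())
for name, x in [("h", h), ("q", q), ("A", A)]:
    print(name, "sensitivity (corr):", np.corrcoef(x, T_rise)[0, 1])
```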
Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, J.; Winkler, J.; Christensen, D.
2014-08-01
Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
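A minimal sketch of the parameter-extraction step, assuming a single-exponential step response and synthetic 'measured' data (the study fits an analytical EMPD solution with three parameters, so this is a simplified stand-in):

```python
# Least-squares fit of an exponential step response to a measured
# moisture-absorption curve; A and tau are the extracted buffering parameters.
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, A, tau):
    """Moisture absorbed after a step change in RH: saturating exponential."""
    return A * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 48, 49)                       # hours after the RH step
rng = np.random.default_rng(4)
measured = step_response(t, 250.0, 12.0) + rng.normal(0, 5, t.size)  # synthetic data

(A_fit, tau_fit), _ = curve_fit(step_response, t, measured, p0=(100.0, 5.0))
print(A_fit, tau_fit)
```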
Galaxy And Mass Assembly (GAMA): M_star - R_e relations of z = 0 bulges, discs and spheroids
NASA Astrophysics Data System (ADS)
Lange, Rebecca; Moffett, Amanda J.; Driver, Simon P.; Robotham, Aaron S. G.; Lagos, Claudia del P.; Kelvin, Lee S.; Conselice, Christopher; Margalef-Bentabol, Berta; Alpaslan, Mehmet; Baldry, Ivan; Bland-Hawthorn, Joss; Bremer, Malcolm; Brough, Sarah; Cluver, Michelle; Colless, Matthew; Davies, Luke J. M.; Häußler, Boris; Holwerda, Benne W.; Hopkins, Andrew M.; Kafle, Prajwal R.; Kennedy, Rebecca; Liske, Jochen; Phillipps, Steven; Popescu, Cristina C.; Taylor, Edward N.; Tuffs, Richard; van Kampen, Eelco; Wright, Angus H.
2016-10-01
We perform automated bulge + disc decomposition on a sample of ˜7500 galaxies from the Galaxy And Mass Assembly (GAMA) survey in the redshift range of 0.002 < z < 0.06 using Structural Investigation of Galaxies via Model Analysis, a wrapper around GALFIT3. To achieve robust profile measurements, we use a novel approach of repeatedly fitting the galaxies, varying the input parameters to sample a large fraction of the input parameter space. Using this method, we reduce the catastrophic failure rate significantly and verify the confidence in the fit independently of χ2. Additionally, using the median of the final fitting values and the 16th and 84th percentiles produces more realistic error estimates than those provided by GALFIT, which are known to be underestimated. We use the results of our decompositions to analyse the stellar mass - half-light radius relations of bulges, discs and spheroids. We further investigate the association of components with a parent disc or elliptical relation to provide definitive z = 0 disc and spheroid M_star - R_e relations. We conclude by comparing our local disc and spheroid M_star - R_e relations to simulated data from EAGLE and high-redshift data from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey-Ultra Deep Survey. We show the potential of using the M_star - R_e relation to study galaxy evolution in both cases but caution that, for a fair comparison, all data sets need to be processed and analysed in the same manner.
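The aggregation idea, fit repeatedly from randomized starting points and then take the median and the 16th/84th percentiles as value and error, can be sketched on a toy least-squares problem standing in for a GALFIT run:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 200)
y = 3.0 * np.exp(-x / 2.5) + rng.normal(0, 0.05, x.size)  # stand-in "galaxy profile"

def resid(p):
    amp, scale = p
    return amp * np.exp(-x / scale) - y

fits = []
for _ in range(100):                        # re-fit with randomized starting points,
    p0 = rng.uniform([0.1, 0.1], [10, 10])  # sampling a large part of input space
    sol = least_squares(resid, p0)
    if sol.success:
        fits.append(sol.x)

fits = np.array(fits)
median = np.median(fits, axis=0)
lo, hi = np.percentile(fits, [16, 84], axis=0)  # empirical error estimate
```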
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.
1991-01-01
A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
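A minimal sketch of this approach follows, with a bitmask genetic algorithm selecting input subsets and a linear least-squares fit standing in for the neural network function approximator; the data, fitness penalty, and GA settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cand, n_obs = 12, 400
X = rng.normal(size=(n_obs, n_cand))                  # candidate sensor channels
y = X[:, 2] - 0.5 * X[:, 7] * X[:, 2] + 0.1 * rng.normal(size=n_obs)

def fitness(mask):
    """Score an input subset by validation error of a quick linear fit (toy
    stand-in for training the function approximator on that subset)."""
    if not mask.any():
        return -np.inf
    A = X[:, mask]
    coef, *_ = np.linalg.lstsq(A[:300], y[:300], rcond=None)
    err = np.mean((A[300:] @ coef - y[300:])**2)
    return -err - 0.01 * mask.sum()                   # penalize long input lists

pop = rng.integers(0, 2, (30, n_cand)).astype(bool)   # chromosomes = input bitmasks
for _ in range(60):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]           # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, n_cand)
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        child ^= rng.random(n_cand) < 0.05            # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]      # selected input subset
```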
A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.
Yu, Jun; Wang, Zeng-Fu
2015-05-01
A multiple-input-driven realistic facial animation system based on a 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. The combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., pixel color values of the input image and Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence in the construction of the online appearance model. The tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.
Realistic Data-Driven Traffic Flow Animation Using Texture Synthesis.
Chao, Qianwen; Deng, Zhigang; Ren, Jiaping; Ye, Qianqian; Jin, Xiaogang
2018-02-01
We present a novel data-driven approach to populate virtual road networks with realistic traffic flows. Specifically, given a limited set of vehicle trajectories as the input samples, our approach first synthesizes a large set of vehicle trajectories. By taking the spatio-temporal information of traffic flows as a 2D texture, the generation of new traffic flows can be formulated as a texture synthesis process, which is solved by minimizing a newly developed traffic texture energy. The synthesized output captures the spatio-temporal dynamics of the input traffic flows, and the vehicle interactions in it strictly follow traffic rules. After that, we position the synthesized vehicle trajectory data to virtual road networks using a cage-based registration scheme, where a few traffic-specific constraints are enforced to maintain each vehicle's original spatial location and synchronize its motion in concert with its neighboring vehicles. Our approach is intuitive to control and scalable to the complexity of virtual road networks. We validated our approach through many experiments and paired comparison user studies.
Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1993-01-01
The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single-pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming, with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple input design capability, with optional inclusion of a constraint that only one control moves at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications that demonstrate the quality and expanded capabilities of the input designs produced by the new technique. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.
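As a toy illustration of input design for parameter estimation (not Morelli's dynamic programming algorithm), one can brute-force the switch time of a unit square-wave doublet for a first-order system and keep the input maximizing the Fisher information of the parameter; the inverse square root of that information bounds the achievable estimate accuracy. All model and noise values below are assumptions.

```python
import numpy as np

a_true, b, dt, T, sigma = 1.5, 1.0, 0.01, 4.0, 0.05
t = np.arange(0.0, T, dt)

def fisher_info(t_switch):
    """Fisher information for parameter a of dx/dt = -a*x + b*u under a unit
    doublet reversing sign at t_switch, via the sensitivity equation."""
    u = np.where(t < t_switch, 1.0, -1.0)
    x = s = 0.0
    J = 0.0
    for uk in u:                       # Euler integration of state + sensitivity
        J += (s / sigma)**2 * dt
        dx = -a_true * x + b * uk
        ds = -a_true * s - x           # ds/dt = d/da (dx/dt)
        x += dx * dt
        s += ds * dt
    return J

switches = np.linspace(0.2, 3.8, 73)
best = switches[np.argmax([fisher_info(ts) for ts in switches])]
# 1/sqrt(J) at the optimum bounds the achievable std. dev. of the estimate of a.
```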
DREAM-3D and the importance of model inputs and boundary conditions
NASA Astrophysics Data System (ADS)
Friedel, Reiner; Tu, Weichao; Cunningham, Gregory; Jorgensen, Anders; Chen, Yue
2015-04-01
Recent work on radiation belt 3D diffusion codes such as the Los Alamos "DREAM-3D" code have demonstrated the ability of such codes to reproduce realistic magnetospheric storm events in the relativistic electron dynamics - as long as sufficient "event-oriented" boundary conditions and code inputs such as wave powers, low energy boundary conditions, background plasma densities, and last closed drift shell (outer boundary) are available. In this talk we will argue that the main limiting factor in our modeling ability is no longer our inability to represent key physical processes that govern the dynamics of the radiation belts (radial, pitch angle and energy diffusion) but rather our limitations in specifying accurate boundary conditions and code inputs. We use here DREAM-3D runs to show the sensitivity of the modeled outcomes to these boundary conditions and inputs, and also discuss alternate "proxy" approaches to obtain the required inputs from other (ground-based) sources.
Multiphase porous media modelling: A novel approach to predicting food processing performance.
Khan, Md Imran H; Joardder, M U H; Kumar, Chandan; Karim, M A
2018-03-04
The development of a physics-based model of food processing is essential to improve the quality of processed food and optimize energy consumption. Food materials, particularly plant-based food materials, are complex in nature as they are porous and have hygroscopic properties. A multiphase porous media model for simultaneous heat and mass transfer can provide a realistic understanding of transport processes and thus can help to optimize energy consumption and improve food quality. Although the development of a multiphase porous media model for food processing is a challenging task because of its complexity, many researchers have attempted it. The primary aim of this paper is to present a comprehensive review of the multiphase models available in the literature for different methods of food processing, such as drying, frying, cooking, baking, heating, and roasting. A critical review of the parameters that should be considered for multiphase modelling is presented, covering input parameters, material properties, simulation techniques, and the underlying hypotheses. A discussion of the general trends in outcomes, such as moisture saturation, temperature profile, pressure variation, and evaporation patterns, is also presented. The paper concludes by considering key issues in the existing multiphase models and future directions for the development of multiphase models.
Analysis of LH Launcher Arrays (Like the ITER One) Using the TOPLHA Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maggiora, R.; Milanesio, D.; Vecchi, G.
2009-11-26
TOPLHA (Torino Polytechnic Lower Hybrid Antenna) code is an innovative tool for the 3D/1D simulation of Lower Hybrid (LH) antennas, i.e. accounting for realistic 3D waveguide geometry and for accurate 1D plasma models, without restrictions on waveguide shape, including curvature. This tool provides a detailed performance prediction of any LH launcher, by computing the antenna scattering parameters, the current distribution, electric field maps, and power spectra for any user-specified waveguide excitation. In addition, a fully parallelized and multi-cavity version of TOPLHA permits the analysis of large and complex waveguide arrays in a reasonable simulation time. A detailed analysis of the performance of the proposed ITER LH antenna geometry has been carried out, underlining the strong dependence of the antenna input parameters on plasma conditions. A preliminary optimization of the antenna dimensions has also been accomplished. Electric current distribution on conductors, electric field distribution at the interface with plasma, and power spectra have been calculated as well. The analysis shows the strong capabilities of the TOPLHA code as a predictive tool and its usefulness for the detailed design of LH launcher arrays.
Blade Tip Rubbing Stress Prediction
NASA Technical Reports Server (NTRS)
Davis, Gary A.; Clough, Ray C.
1991-01-01
An analytical model was constructed to predict the magnitude of stresses produced by rubbing a turbine blade against its tip seal. The model used a linearized approach to the problem after a parametric study found that the nonlinear effects were of insignificant magnitude. The important input parameters to the model were: the arc through which rubbing occurs, the turbine rotor speed, the normal force exerted on the blade, and the rubbing coefficient of friction. Since it is not possible to specify some of these parameters exactly, values were entered into the model which bracket likely values. The form of the forcing function was another variable which was impossible to specify precisely, but the assumption of a half-sine wave with a period equal to the duration of the rub was taken as realistic. The analytical model predicted resonances between harmonics of the forcing function decomposition and known harmonics of the blade. Thus, it seemed probable that blade tip rubbing could be at least a contributor to the blade-cracking phenomenon. A full-scale, full-speed test was conducted on the space shuttle main engine high pressure fuel turbopump Whirligig tester at speeds between 33,000 and 28,000 RPM to confirm the analytical predictions.
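The resonance argument can be sketched numerically: decompose a periodic half-sine rub pulse into engine-order harmonics and compare them against blade natural frequencies. The rub arc, speed, and mode frequencies below are hypothetical placeholders, not the report's values.

```python
import numpy as np

rpm = 30000.0
f_rot = rpm / 60.0                      # rotor frequency, Hz
rub_arc_deg = 40.0                      # assumed rub arc per revolution
T_rev = 1.0 / f_rot
T_rub = T_rev * rub_arc_deg / 360.0     # half-sine pulse width

# Harmonic content of a periodic half-sine pulse train, one period sampled.
t = np.linspace(0.0, T_rev, 4096, endpoint=False)
pulse = np.where(t < T_rub, np.sin(np.pi * t / T_rub), 0.0)
c = np.fft.rfft(pulse) / t.size         # coefficient at n * f_rot

blade_modes_hz = np.array([4200.0, 9800.0, 16500.0])  # hypothetical blade modes
for f_n in blade_modes_hz:
    n = int(round(f_n / f_rot))         # nearest engine-order harmonic
    print(f"mode {f_n:7.0f} Hz ~ order {n}, forcing amplitude {abs(c[n]):.3e}")
```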
Ivanov, J.; Miller, R.D.; Markiewicz, R.D.; Xia, J.
2008-01-01
We apply the P-wave refraction-tomography method to seismic data collected with a landstreamer. Refraction-tomography inversion solutions were determined using regularization parameters that provided the most realistic near-surface solutions, i.e., those that best matched the dipping-layer structure of nearby outcrops. A reasonably well-matched solution was obtained using an unusual set of optimal regularization parameters. In comparison, conventional regularization parameters did not provide as realistic results. Thus, we consider that even when only qualitative (i.e., visual) a priori information about a site is available, as in the case of the East Canyon Dam, Utah, it might be possible to minimize the refraction nonuniqueness by estimating the most appropriate regularization parameters.
Time-scale invariance as an emergent property in a perceptron with realistic, noisy neurons
Buhusi, Catalin V.; Oprisan, Sorinel A.
2013-01-01
In most species, interval timing is time-scale invariant: errors in time estimation scale up linearly with the estimated duration. In mammals, time-scale invariance is ubiquitous over behavioral, lesion, and pharmacological manipulations. For example, dopaminergic drugs induce an immediate, whereas cholinergic drugs induce a gradual, scalar change in timing. Behavioral theories posit that time-scale invariance derives from particular computations, rules, or coding schemes. In contrast, we discuss a simple neural circuit, the perceptron, whose output neurons fire in a clockwise fashion (interval timing) based on the pattern of coincidental activation of its input neurons. We show numerically that time-scale invariance emerges spontaneously in a perceptron with realistic neurons, in the presence of noise. Under the assumption that dopaminergic drugs modulate the firing of input neurons, and that cholinergic drugs modulate the memory representation of the criterion time, we show that a perceptron with realistic neurons reproduces the pharmacological clock and memory patterns, and their time-scale invariance, in the presence of noise. These results suggest that rather than being a signature of higher-order cognitive processes or specific computations related to timing, time-scale invariance may spontaneously emerge in a massively-connected brain from the intrinsic noise of neurons and circuits, thus providing the simplest explanation for the ubiquity of scale invariance of interval timing. PMID:23518297
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picard, Richard Roy; Bhat, Kabekode Ghanasham
2017-07-18
We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.
Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zabaras, Nicolas J.
2016-11-08
Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties, and understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models, and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.
Experiments on Linguistically-Based Term Associations.
ERIC Educational Resources Information Center
Ruge, Gerda
1992-01-01
Describes the hyperterm system REALIST (Retrieval Aids by Linguistics and Statistics) with emphasis on its semantic component, which generates term relations from free-text input. Experiments with various similarity measures are discussed, and the quality of the associated terms is evaluated using term recall and term precision measures. (22…
Implementation of radiation shielding calculation methods. Volume 2: Seminar/Workshop notes
NASA Technical Reports Server (NTRS)
Capo, M. A.; Disney, R. K.
1971-01-01
Detailed descriptions are presented of the input data for each of the MSFC computer codes applied to the analysis of a realistic nuclear-propelled vehicle. The analytical techniques employed include cross-section data preparation, one- and two-dimensional discrete ordinates transport, point kernel, and single-scatter methods.
Application and Evaluation of MODIS LAI, FPAR, and Albedo Products in the WRF/CMAQ System
MODIS vegetation and albedo products provide a more realistic representation of surface conditions for input to the WRF/CMAQ modeling system. However, the initial evaluation of ingesting MODIS data into the system showed mixed results, with increased bias and error for 2-m temper...
USDA-ARS?s Scientific Manuscript database
The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...
Is realistic neuronal modeling realistic?
Almog, Mara
2016-01-01
Scientific models are abstractions that aim to explain natural phenomena. A successful model shows how a complex phenomenon arises from relatively simple principles while preserving major physical or biological rules and predicting novel experiments. A model should not be a facsimile of reality; it is an aid for understanding it. Contrary to this basic premise, with the 21st century has come a surge in computational efforts to model biological processes in great detail. Here we discuss the oxymoronic, realistic modeling of single neurons. This rapidly advancing field is driven by the discovery that some neurons don't merely sum their inputs and fire if the sum exceeds some threshold. Thus researchers have asked what the computational abilities of single neurons are and have attempted to give answers using realistic models. We briefly review the state of the art of compartmental modeling, highlighting recent progress and intrinsic flaws. We then attempt to address two fundamental questions. Practically, can we realistically model single neurons? Philosophically, should we realistically model single neurons? We use layer 5 neocortical pyramidal neurons as a test case to examine these issues. We subject three publicly available models of layer 5 pyramidal neurons to three simple computational challenges. Based on their performance and a partial survey of published models, we conclude that current compartmental models are ad hoc, unrealistic models functioning poorly once they are stretched beyond the specific problems for which they were designed. We then attempt to plot possible paths for generating realistic single neuron models. PMID:27535372
Integrating Brain and Biomechanical Models—A New Paradigm for Understanding Neuro-muscular Control
James, Sebastian S.; Papapavlou, Chris; Blenkinsop, Alexander; Cope, Alexander J.; Anderson, Sean R.; Moustakas, Konstantinos; Gurney, Kevin N.
2018-01-01
To date, realistic models of how the central nervous system governs behavior have been restricted in scope to the brain, brainstem or spinal column, as if these existed as disembodied organs. Further, the model is often exercised in relation to an in vivo physiological experiment with input comprising an impulse, a periodic signal or constant activation, and output as a pattern of neural activity in one or more neural populations. Any link to behavior is inferred only indirectly via these activity patterns. We argue that to discover the principles of operation of neural systems, it is necessary to express their behavior in terms of physical movements of a realistic motor system, and to supply inputs that mimic sensory experience. To do this with confidence, we must connect our brain models to neuro-muscular models and provide relevant visual and proprioceptive feedback signals, thereby closing the loop of the simulation. This paper describes an effort to develop just such an integrated brain and biomechanical system using a number of pre-existing models. It describes a model of the saccadic oculomotor system incorporating a neuromuscular model of the eye and its six extraocular muscles. The position of the eye determines how illumination of a retinotopic input population projects information about the location of a saccade target into the system. A pre-existing saccadic burst generator model was incorporated into the system, which generated motoneuron activity patterns suitable for driving the biomechanical eye. The model was demonstrated to make accurate saccades to a target luminance under a set of environmental constraints. Challenges encountered in the development of this model showed the importance of this integrated modeling approach. Thus, we exposed shortcomings in individual model components which were only apparent when these were supplied with the more plausible inputs available in a closed loop design. Consequently we were able to suggest missing functionality which the system would require to reproduce more realistic behavior. The construction of such closed-loop animal models constitutes a new paradigm of computational neurobehavior and promises a more thoroughgoing approach to our understanding of the brain's function as a controller for movement and behavior. PMID:29467606
Enzymatic AND logic gates operated under conditions characteristic of biomedical applications.
Melnikov, Dmitriy; Strack, Guinevere; Zhou, Jian; Windmiller, Joshua Ray; Halámek, Jan; Bocharova, Vera; Chuang, Min-Chieh; Santhosh, Padmanabhan; Privman, Vladimir; Wang, Joseph; Katz, Evgeny
2010-09-23
Experimental and theoretical analyses of the lactate dehydrogenase and glutathione reductase based enzymatic AND logic gates in which the enzymes and their substrates serve as logic inputs are performed. These two systems are examples of the novel, previously unexplored class of biochemical logic gates that illustrate potential biomedical applications of biochemical logic. They are characterized by input concentrations at logic 0 and 1 states corresponding to normal and pathophysiological conditions. Our analysis shows that the logic gates under investigation have similar noise characteristics. Both significantly amplify random noise present in inputs; however, we establish that for realistic widths of the input noise distributions, it is still possible to differentiate between the logic 0 and 1 states of the output. This indicates that reliable detection of pathophysiological conditions is indeed possible with such enzyme logic systems.
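A schematic simulation of this noise analysis follows; a saturating product response stands in for the actual enzyme kinetics, and the Gaussian input spreads and logic-level concentrations are assumptions. It shows that the 0 and 1 output states can remain separable for realistic input noise widths even though the gate amplifies noise.

```python
import numpy as np

rng = np.random.default_rng(7)

def gate_output(x, y):
    """Illustrative normalized AND-gate response surface: high output only
    when both inputs sit near their logic-1 concentrations."""
    return (x * y) / (0.25 + x * y)     # Michaelis-Menten-like saturation

def sample(logic_x, logic_y, width, n=20000):
    x = np.clip(rng.normal(logic_x, width, n), 0, None)
    y = np.clip(rng.normal(logic_y, width, n), 0, None)
    return gate_output(x, y)

width = 0.15                            # assumed relative spread of the inputs
out_11 = sample(1.0, 1.0, width)        # both inputs at logic 1
out_01 = sample(0.1, 1.0, width)        # logic 0 taken as 10% of logic 1

# Separation of the output distributions despite noise amplification:
thresh = 0.5 * (out_11.mean() + out_01.mean())
error = 0.5 * ((out_01 > thresh).mean() + (out_11 < thresh).mean())
```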
Significance of Input Correlations in Striatal Function
Yim, Man Yi; Aertsen, Ad; Kumar, Arvind
2011-01-01
The striatum is the main input station of the basal ganglia and is strongly associated with motor and cognitive functions. Anatomical evidence suggests that individual striatal neurons are unlikely to share their inputs from the cortex. Using a biologically realistic large-scale network model of striatum and cortico-striatal projections, we provide a functional interpretation of the special anatomical structure of these projections. Specifically, we show that weak pairwise correlation within the pool of inputs to individual striatal neurons enhances the saliency of signal representation in the striatum. By contrast, correlations among the input pools of different striatal neurons render the signal representation less distinct from background activity. We suggest that for the network architecture of the striatum, there is a preferred cortico-striatal input configuration for optimal signal representation. It is further enhanced by the low-rate asynchronous background activity in striatum, supported by the balance between feedforward and feedback inhibitions in the striatal network. Thus, an appropriate combination of rates and correlations in the striatal input sets the stage for action selection presumably implemented in the basal ganglia. PMID:22125480
Virtual reality welder training
NASA Astrophysics Data System (ADS)
White, Steven A.; Reiners, Dirk; Prachyabrued, Mores; Borst, Christoph W.; Chambers, Terrence L.
2010-01-01
This document describes the Virtual Reality Simulated MIG Lab (sMIG), a system for Virtual Reality welder training. It is designed to reproduce the experience of metal inert gas (MIG) welding faithfully enough to be used as a teaching tool for beginning welding students. To make the experience as realistic as possible it employs physically accurate and tracked input devices, a real-time welding simulation, real-time sound generation and a 3D display for output. Thanks to being a fully digital system it can go beyond providing just a realistic welding experience by giving interactive and immediate feedback to the student to avoid learning wrong movements from day 1.
Covey, Curt; Lucas, Donald D.; Tannahill, John; ...
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high-dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM's behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid-scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT's ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience.
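A compact sketch of the MOAT elementary-effects method on the unit cube follows, illustrating the linear cost scaling (n_traj * (N + 1) model runs). The grid handling follows the standard Morris construction; the toy model at the end is an assumption.

```python
import numpy as np

def morris_screen(model, n_params, n_traj=20, n_levels=8, rng=None):
    """Morris one-at-a-time screening: random trajectories of one-parameter
    moves on a grid over [0, 1]^n_params, collecting elementary effects."""
    rng = rng or np.random.default_rng(0)
    delta = n_levels / (2.0 * (n_levels - 1))
    effects = np.zeros((n_traj, n_params))
    for r in range(n_traj):
        # start on the grid, low enough that x + delta stays inside [0, 1]
        x = rng.integers(0, n_levels // 2, n_params) / (n_levels - 1)
        y = model(x)
        for i in rng.permutation(n_params):       # move one parameter at a time
            x_new = x.copy()
            x_new[i] += delta
            y_new = model(x_new)
            effects[r, i] = (y_new - y) / delta   # elementary effect
            x, y = x_new, y_new
    mu_star = np.abs(effects).mean(axis=0)        # overall importance
    sigma = effects.std(axis=0)                   # nonlinearity / interactions
    return mu_star, sigma

# Toy model: parameters 0 and 1 interact nonlinearly, parameter 2 is inert.
mu_star, sigma = morris_screen(lambda x: x[0] * x[1] + x[0]**2 + 0.0 * x[2], 3)
```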
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a mathematical description of neuron activity, which can better understand and quantify neural computations and corresponding biophysical mechanisms evoked by stimulus. In this paper, based on the output spike train evoked by the acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system to achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, considering the neuronal spiking event as a Gamma stochastic process. The scale parameter and the shape parameter of Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, leaky integrate-and-fire (LIF) model is used to mimic the response system and the estimated spiking characteristics are transformed into two temporal input parameters of LIF model, through two conversion formulas. We test this reconstruction method by three different groups of simulation data. All three groups of estimates reconstruct input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus conditions, estimated input parameters have an obvious difference. The higher the frequency of the acupuncture stimulus is, the higher the accuracy of reconstruction is.
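A schematic version of the two-step reconstruction is sketched below, with method-of-moments Gamma estimates standing in for the paper's state-space estimator and explicitly hypothetical conversion formulas feeding an Euler-integrated LIF neuron.

```python
import numpy as np

rng = np.random.default_rng(4)
isis = rng.gamma(shape=3.0, scale=0.01, size=500)   # surrogate inter-spike intervals

# Step 1 (stand-in): method-of-moments estimates of the Gamma spiking
# characteristics; the paper uses a state-space estimator instead.
mean, var = isis.mean(), isis.var()
shape_hat = mean**2 / var
scale_hat = var / mean

# Step 2: map the spiking characteristics onto two temporal input parameters
# of a leaky integrate-and-fire neuron. These conversion formulas are
# hypothetical illustrations, not the paper's.
I_mean = 1.2 + 0.1 * shape_hat
I_std = 0.5 * scale_hat / mean

def lif_spike_times(I_mean, I_std, t_max=5.0, dt=1e-4, tau=0.02, v_th=1.0):
    """Euler-integrated LIF: tau dv/dt = -v + I(t), reset to 0 at threshold."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        I = I_mean + I_std * np.sqrt(tau / dt) * rng.normal()
        v += dt / tau * (-v + I)
        if v >= v_th:
            spikes.append(t)
            v = 0.0
        t += dt
    return np.array(spikes)

spikes = lif_spike_times(I_mean, I_std)
```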
NASA Astrophysics Data System (ADS)
Georgiou, K.; Tang, J.; Riley, W. J.; Torn, M. S.
2014-12-01
Soil organic matter (SOM) decomposition is regulated by biotic and abiotic processes. Feedback interactions between such processes may act to dampen oscillatory responses to perturbations from equilibrium. Indeed, although biological oscillations have been observed in small-scale laboratory incubations, the overlying behavior at the plot-scale exhibits a relatively stable response to disturbances in input rates and temperature. Recent studies have demonstrated the ability of microbial models to capture nonlinear feedbacks in SOM decomposition that linear Century-type models are unable to reproduce, such as soil priming in response to increased carbon input. However, these microbial models often exhibit strong oscillatory behavior that is deemed unrealistic. The inherently nonlinear dynamics of SOM decomposition have important implications for global climate-carbon and carbon-concentration feedbacks. It is therefore imperative to represent these dynamics in Earth System Models (ESMs) by introducing sub-models that accurately represent microbial and abiotic processes. In the present study we explore, both analytically and numerically, four microbe-enabled model structures of varying levels of complexity. The most complex model combines microbial physiology, a non-linear mineral sorption isotherm, and enzyme dynamics. Based on detailed stability analysis of the nonlinear dynamics, we calculate the system modes as functions of model parameters. This dependence provides insight into the source of state oscillations. We find that feedback mechanisms that emerge from careful representation of enzyme and mineral interactions, with parameter values in a prescribed range, are critical for both maintaining system stability and capturing realistic responses to disturbances. Corroborating and expanding upon the results of recent studies, we explain the emergence of oscillatory responses and discuss the appropriate microbe-enabled model structure for inclusion in ESMs.
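The flavor of such a stability analysis can be shown on the simplest microbe-enabled structure, a two-pool model with forward Michaelis-Menten uptake (a common 'German/Wang-type' form, used here purely as an illustration): linearize at equilibrium and check whether the Jacobian has a complex eigenvalue pair, which signals damped oscillations after a disturbance. The parameter values are hypothetical.

```python
import numpy as np

# Two-pool microbial SOM model:
#   dC/dt = I - V*B*C/(K+C)        (substrate carbon)
#   dB/dt = eps*V*B*C/(K+C) - m*B  (microbial biomass)
I, V, K, eps, m = 0.5, 8.0, 200.0, 0.4, 0.02   # hypothetical parameter values

C_star = m * K / (eps * V - m)                 # equilibrium substrate
B_star = eps * I / m                           # equilibrium microbial biomass

f = V * C_star / (K + C_star)                  # uptake rate per unit biomass
fp = V * K / (K + C_star)**2                   # d(uptake)/dC at equilibrium

jac = np.array([[-B_star * fp, -f],
                [eps * B_star * fp, 0.0]])     # Jacobian at the fixed point
lam = np.linalg.eigvals(jac)
oscillatory = np.any(np.abs(lam.imag) > 1e-12) # complex pair => oscillations
print(lam, "oscillatory:", oscillatory)
```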
Fast repurposing of high-resolution stereo video content for mobile use
NASA Astrophysics Data System (ADS)
Karaoglu, Ali; Lee, Bong Ho; Boev, Atanas; Cheong, Won-Sik; Gotchev, Atanas
2012-06-01
3D video content is captured and created mainly in high resolution, targeting big cinema or home TV screens. For 3D mobile devices, equipped with small-size auto-stereoscopic displays, such content has to be properly repurposed, preferably in real time. The repurposing requires not only spatial resizing but also properly maintaining the output stereo disparity, as it should deliver realistic, pleasant and harmless 3D perception. In this paper, we propose an approach to adapt the disparity range of the source video to the comfort disparity zone of the target display. To achieve this, we adapt the scale and the aspect ratio of the source video. We aim at maximizing the disparity range of the retargeted content within the comfort zone, and minimizing the letterboxing of the cropped content. The proposed algorithm consists of five stages. First, we analyse the display profile, which characterises what 3D content can be comfortably observed on the target display. Then, we perform fast disparity analysis of the input stereoscopic content. Instead of returning a dense disparity map, it returns an estimate of the disparity statistics (min, max, mean, and variance) per frame. Additionally, we detect scene cuts, where sharp transitions in disparities occur. Based on the estimated input and desired output disparity ranges, we derive the optimal cropping parameters and the scale of the cropping window, which yield the targeted disparity range and minimize the area of cropped and letterboxed content. Once the rescaling and cropping parameters are known, we perform the resampling procedure using spline-based and perceptually optimized resampling (anti-aliasing) kernels, which also have a very efficient computational structure. Perceptual optimization is achieved by adjusting the cut-off frequency of the anti-aliasing filter to the throughput of the target display.
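The core retargeting arithmetic, mapping the measured source disparity span into the display comfort zone before cropping, reduces to a scale factor. This sketch ignores the cropping/letterboxing trade-off the paper optimizes jointly; the example numbers are invented.

```python
def retarget_scale(d_min_src, d_max_src, d_min_disp, d_max_disp):
    """Scale factor that fits the measured source disparity range into the
    display's comfort zone (disparities in pixels at the target resolution)."""
    span_src = d_max_src - d_min_src
    span_disp = d_max_disp - d_min_disp
    return min(1.0, span_disp / span_src) if span_src > 0 else 1.0

# Example: a frame with disparities in [-30, +55] px shown on a display whose
# comfort zone (from the display profile) is [-12, +20] px.
s = retarget_scale(-30, 55, -12, 20)   # ~0.376: shrink before cropping
```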
A Graphical User Interface for a Method to Infer Kinetics and Network Architecture (MIKANA)
Mourão, Márcio A.; Srividhya, Jeyaraman; McSharry, Patrick E.; Crampin, Edmund J.; Schnell, Santiago
2011-01-01
One of the main challenges in the biomedical sciences is the determination of reaction mechanisms that constitute a biochemical pathway. During the last decades, advances have been made in building complex diagrams showing the static interactions of proteins. The challenge for systems biologists is to build realistic models of the dynamical behavior of reactants, intermediates and products. For this purpose, several methods have been recently proposed to deduce the reaction mechanisms or to estimate the kinetic parameters of the elementary reactions that constitute the pathway. One such method is MIKANA: Method to Infer Kinetics And Network Architecture. MIKANA is a computational method to infer both reaction mechanisms and estimate the kinetic parameters of biochemical pathways from time course data. To make it available to the scientific community, we developed a Graphical User Interface (GUI) for MIKANA. Among other features, the GUI validates and processes input time course data, displays the inferred reactions, generates the differential equations for the chemical species in the pathway, and plots the prediction curves on top of the input time course data. We also added a new feature to MIKANA that allows the user to exclude a priori known reactions from the inferred mechanism. This addition improves the performance of the method. In this article, we illustrate the GUI for MIKANA with three examples: an irreversible Michaelis–Menten reaction mechanism; the interaction map of chemical species of the muscle glycolytic pathway; and the glycolytic pathway of Lactococcus lactis. We also describe the code and methods in sufficient detail to allow researchers to further develop the code or reproduce the experiments described. The code for MIKANA is open source, free for academic and non-academic use and is available for download (Information S1). PMID:22096591
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, while others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
NASA Astrophysics Data System (ADS)
Gordeev, E.; Sergeev, V.; Honkonen, I.; Kuznetsova, M.; Rastätter, L.; Palmroth, M.; Janhunen, P.; Tóth, G.; Lyon, J.; Wiltberger, M.
2015-12-01
Global magnetohydrodynamic (MHD) modeling is a powerful tool in space weather research and predictions. There are several advanced and still developing global MHD (GMHD) models that are publicly available via the Community Coordinated Modeling Center's (CCMC) Run on Request system, which allows users to simulate the magnetospheric response to different solar wind conditions, including extraordinary events like geomagnetic storms. Systematic validation of GMHD models against observations continues to be a challenge, as does comparative benchmarking of different models against each other. In this paper we describe and test a new approach in which (i) a set of critical large-scale system parameters is explored/tested, which are produced by (ii) a specially designed set of computer runs that simulate realistic statistical distributions of critical solar wind parameters and are compared to (iii) observation-based empirical relationships for these parameters. Being tested in approximately similar conditions (similar inputs, comparable grid resolution, etc.), the four models publicly available at the CCMC predict rather well the absolute values and variations of those key parameters (magnetospheric size, magnetic field, and pressure) which are directly related to the large-scale magnetospheric equilibrium in the outer magnetosphere, for which MHD is supposed to be a valid approach. At the same time, the models have systematic differences in other parameters, being especially different in predicting the global convection rate, total field-aligned current, and magnetic flux loading into the magnetotail after a north-south interplanetary magnetic field turning. According to the validation results, none of the models emerges as an absolute leader. The new approach suggested for evaluating model performance against reality may be used by model users when planning their investigations, as well as by model developers and those interested in quantitatively evaluating progress in magnetospheric modeling.
NASA Astrophysics Data System (ADS)
Lewis, Elizabeth; Kilsby, Chris; Fowler, Hayley
2014-05-01
The impact of climate change on hydrological systems requires further quantification in order to inform water management. This study intends to conduct such analysis using hydrological models. Such models are of varying forms, of which conceptual, lumped parameter models and physically-based models are two important types. The majority of hydrological studies use conceptual models calibrated against measured river flow time series in order to represent catchment behaviour. This method often shows impressive results for specific problems in gauged catchments. However, the results may not be robust under non-stationary conditions such as climate change, as physical processes and relationships amenable to change are not accounted for explicitly. Moreover, conceptual models are less readily applicable to ungauged catchments, in which hydrological predictions are also required. As such, the physically based, spatially distributed model SHETRAN is used in this study to develop a robust and reliable framework for modelling historic and future behaviour of gauged and ungauged catchments across the whole of Great Britain. In order to achieve this, a large array of data completely covering Great Britain for the period 1960-2006 has been collated and efficiently stored ready for model input. The data processed include a DEM, rainfall, PE and maps of geology, soil and land cover. A desire to make the modelling system easy for others to work with led to the development of a user-friendly graphical interface. This allows non-experts to set up and run a catchment model in a few seconds, a process that can normally take weeks or months. The quality and reliability of the extensive dataset for modelling hydrological processes has also been evaluated. One aspect of this has been an assessment of error and uncertainty in rainfall input data, as well as the effects of temporal resolution in precipitation inputs on model calibration. SHETRAN has been updated to accept gridded rainfall inputs, and UKCP09 gridded daily rainfall data has been disaggregated using hourly records to analyse the implications of using realistic sub-daily variability. Furthermore, the development of a comprehensive dataset and computationally efficient means of setting up and running catchment models has allowed for examination of how a robust parameter scheme may be derived. This analysis has been based on collective parameterisation of multiple catchments in contrasting hydrological settings and subject to varied processes. 350 gauged catchments all over the UK have been simulated, and a robust set of parameters is being sought by examining the full range of hydrological processes and calibrating to a highly diverse flow data series. The modelling system will be used to generate flow time series based on historical input data and also downscaled Regional Climate Model (RCM) forecasts using the UKCP09 Weather Generator. This will allow for analysis of flow frequency and associated future changes, which cannot be determined from the instrumental record or from lumped parameter model outputs calibrated only to historical catchment behaviour. This work will be based on the existing and functional modelling system described following some further improvements to calibration, particularly regarding simulation of groundwater-dominated catchments.
Geometry and the Physics of Seasons
ERIC Educational Resources Information Center
Khavrus, Vyacheslav; Shelevytsky, Ihor
2012-01-01
By means of a simple mathematical model recently developed by the authors (2010 "Phys. Educ." 45 641), the passage of the seasons on the Earth is simulated for arbitrary latitudes, taking into account sunlight attenuation in the atmosphere. The method developed can be used to predict a realistic value of the solar energy input (insolation) that…
Realistic simplified gaugino-higgsino models in the MSSM
NASA Astrophysics Data System (ADS)
Fuks, Benjamin; Klasen, Michael; Schmiemann, Saskia; Sunder, Marthijn
2018-03-01
We present simplified MSSM models for light neutralinos and charginos with realistic mass spectra and realistic gaugino-higgsino mixing, that can be used in experimental searches at the LHC. The formerly used naive approach of defining mass spectra and mixing matrix elements manually and independently of each other does not yield genuine MSSM benchmarks. We suggest the use of less simplified, but realistic MSSM models, whose mass spectra and mixing matrix elements are the result of a proper matrix diagonalisation. We propose a novel strategy targeting the design of such benchmark scenarios, accounting for user-defined constraints in terms of masses and particle mixing. We apply it to the higgsino case and implement a scan in the four relevant underlying parameters {μ , tan β , M1, M2} for a given set of light neutralino and chargino masses. We define a measure for the quality of the obtained benchmarks, that also includes criteria to assess the higgsino content of the resulting charginos and neutralinos. We finally discuss the distribution of the resulting models in the MSSM parameter space as well as their implications for supersymmetric dark matter phenomenology.
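The "proper matrix diagonalisation" step amounts to diagonalizing the tree-level neutralino mass matrix for each sampled {μ, tanβ, M1, M2} point. The sketch below uses the standard MSSM mass matrix in the (bino, wino, higgsino) basis; the scan ranges and quality measure are illustrative assumptions, not the paper's benchmark definitions.

```python
import numpy as np

MZ, sw2 = 91.1876, 0.2312              # Z mass (GeV), sin^2(theta_W)
sw, cw = np.sqrt(sw2), np.sqrt(1 - sw2)

def neutralino_spectrum(M1, M2, mu, tan_beta):
    """Tree-level neutralino masses and mixing from the standard MSSM mass
    matrix in the (B, W3, H1, H2) basis; conventions follow common usage."""
    b = np.arctan(tan_beta)
    sb, cb = np.sin(b), np.cos(b)
    M = np.array([
        [M1, 0.0, -MZ * sw * cb,  MZ * sw * sb],
        [0.0, M2,  MZ * cw * cb, -MZ * cw * sb],
        [-MZ * sw * cb,  MZ * cw * cb, 0.0, -mu],
        [ MZ * sw * sb, -MZ * cw * sb, -mu, 0.0]])
    masses, Z = np.linalg.eigh(M)      # real symmetric: orthogonal diagonalisation
    order = np.argsort(np.abs(masses))
    return masses[order], Z[:, order]  # signed masses, sorted light -> heavy

# Crude higgsino-like scan: light |mu| with heavy gauginos (ranges illustrative).
for mu in (150.0, 300.0):
    m, Z = neutralino_spectrum(M1=2000.0, M2=2000.0, mu=mu, tan_beta=10.0)
    higgsino_frac = Z[2, 0]**2 + Z[3, 0]**2   # higgsino content, lightest state
    print(mu, m[:2].round(1), round(higgsino_frac, 3))
```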
A simple, approximate model of parachute inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macha, J.M.
1992-11-01
A simple, approximate model of parachute inflation is described. The model is based on the traditional, practical treatment of the fluid resistance of rigid bodies in nonsteady flow, with appropriate extensions to accommodate the change in canopy inflated shape. Correlations for the steady drag and steady radial force as functions of the inflated radius are required as input to the dynamic model. In a novel approach, the radial force is expressed in terms of easily obtainable drag and reefing line tension measurements. A series of wind tunnel experiments provides the needed correlations. Coefficients associated with the added mass of fluid are evaluated by calibrating the model against an extensive and reliable set of flight data. A parameter is introduced which appears to universally govern the strong dependence of the axial added mass coefficient on motion history. Through comparisons with flight data, the model is shown to realistically predict inflation forces for ribbon and ringslot canopies over a wide range of sizes and deployment conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ye, Jia; Li, Youhong
Theoretical predictions indicate that ordered alloys can spontaneously develop a steady-state nanoscale microstructure when irradiated with energetic particles. This behavior derives from a dynamical competition between disordering in cascades and thermally activated reordering, which leads to self-organization of the chemical order parameter. We test this possibility by combining molecular dynamics (MD) and kinetic Monte Carlo (KMC) simulations. We first generate realistic distributions of disordered zones for Ni{sub 3}Al irradiated with 70 keV He and 1 MeV Kr ions using MD and then input this data into KMC to obtain predictions of steady state microstructures as a function of the irradiation flux. Nanoscale patterning is observed for Kr ion irradiations but not for He ion irradiations. We illustrate, moreover, using image simulations of these KMC microstructures, that high-resolution transmission electron microscopy can be employed to identify nanoscale patterning. Finally, we indicate how this method could be used to synthesize functional thin films, with potential for magnetic applications.
Thermodynamic metrics for measuring the ``sustainability'' of design for recycling
NASA Astrophysics Data System (ADS)
Reuter, Markus; van Schaik, Antoinette
2008-08-01
In this article, exergy is applied as a parameter to measure the “sustainability” of a recycling system in addition to the fundamental prediction of material recycling and energy recovery, summarizing a development of over 20 years by the principal author supported by various co-workers, Ph.D., and M.Sc. students. In order to achieve this, recyclate qualities and particle size distributions throughout the system must be predicted as a function of product design, liberation during shredding, process dynamics, physical separation physics, and metallurgical thermodynamics. This crucial development enables the estimation of the true exergy of a recycling system from its inputs and outputs including all its realistic industrial traits. These models have among others been linked to computer aided design tools of the automotive industry and have been used to evaluate the performance of waste electric and electronic equipment recycling systems in The Netherlands. This paper also suggests that the complete system must be optimized to find a “truer” optimum of the material production system linked to the consumer market.
Unified tensor model for space-frequency spreading-multiplexing (SFSM) MIMO communication systems
NASA Astrophysics Data System (ADS)
de Almeida, André LF; Favier, Gérard
2013-12-01
This paper presents a unified tensor model for space-frequency spreading-multiplexing (SFSM) multiple-input multiple-output (MIMO) wireless communication systems that combine space- and frequency-domain spreadings, followed by a space-frequency multiplexing. Spreading across space (transmit antennas) and frequency (subcarriers) adds resilience against deep channel fades and provides space and frequency diversities, while orthogonal space-frequency multiplexing enables multi-stream transmission. We adopt a tensor-based formulation for the proposed SFSM MIMO system that incorporates space, frequency, time, and code dimensions by means of the parallel factor model. The developed SFSM tensor model unifies the tensorial formulation of some existing multiple-access/multicarrier MIMO signaling schemes as special cases, while revealing interesting tradeoffs due to combined space, frequency, and time diversities which are of practical relevance for joint symbol-channel-code estimation. The performance of the proposed SFSM MIMO system using either a zero forcing receiver or a semi-blind tensor-based receiver is illustrated by means of computer simulation results under realistic channel and system parameters.
Virtual geotechnical laboratory experiments using a simulator
NASA Astrophysics Data System (ADS)
Penumadu, Dayakar; Zhao, Rongda; Frost, David
2000-04-01
The details of a test simulator that provides a realistic environment for performing virtual laboratory experiments in soil mechanics are presented. A computer program, Geo-Sim, can be used to perform virtual experiments and allows for real-time observation of material response. The results of experiments, for a given set of input parameters, are obtained with the test simulator using well-trained artificial neural network (ANN) based soil models for different soil types and stress paths. Multimedia capabilities are integrated in Geo-Sim, using software that links and controls a laser disc player with real-time parallel processing ability. During the simulation of a virtual experiment, relevant portions of the video image of a previously recorded test on an actual soil specimen are displayed along with the graphical presentation of the response from the feedforward ANN model predictions. The pilot simulator developed to date includes all aspects related to performing a triaxial test on cohesionless soil under undrained and drained conditions. The benefits of the test simulator are also presented.
Aircraft noise synthesis system
NASA Technical Reports Server (NTRS)
Mccurdy, David A.; Grandle, Robert E.
1987-01-01
A second-generation Aircraft Noise Synthesis System has been developed to provide test stimuli for studies of community annoyance to aircraft flyover noise. The computer-based system generates realistic, time-varying, audio simulations of aircraft flyover noise at a specified observer location on the ground. The synthesis takes into account the time-varying aircraft position relative to the observer; specified reference spectra consisting of broadband, narrowband, and pure-tone components; directivity patterns; Doppler shift; atmospheric effects; and ground effects. These parameters can be specified and controlled in such a way as to generate stimuli in which certain noise characteristics, such as duration or tonal content, are independently varied, while the remaining characteristics, such as broadband content, are held constant. The system can also generate simulations of the predicted noise characteristics of future aircraft. A description of the synthesis system and a discussion of the algorithms and methods used to generate the simulations are provided. An appendix describing the input data and providing user instructions is also included.
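Of the listed effects, the Doppler shift is easy to make concrete. The sketch below applies the standard moving-source formula for a level flyover; the geometry, speed, and tone frequency are chosen purely for illustration.

```python
import numpy as np

def doppler_frequency(f_src, v_ac, pos_ac, pos_obs, c=340.0):
    """Observed frequency for a moving source and fixed observer:
    f_obs = f_src / (1 - M cos(theta)), with M the source Mach number and
    theta the angle between the velocity and the source->observer ray."""
    r = np.asarray(pos_obs, float) - np.asarray(pos_ac, float)
    cos_theta = np.dot(v_ac, r) / (np.linalg.norm(v_ac) * np.linalg.norm(r))
    mach = np.linalg.norm(v_ac) / c
    return f_src / (1.0 - mach * cos_theta)

# 80 m/s level flyover at 300 m altitude: a 1 kHz tone shifts down as the
# aircraft approaches, passes overhead, and recedes.
for x in (-600.0, 0.0, 600.0):   # aircraft position along track, observer at origin
    f = doppler_frequency(1000.0, [80.0, 0.0, 0.0], [x, 0.0, 300.0], [0.0, 0.0, 0.0])
    print(x, round(f, 1))
```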
Noise suppression methods for robust speech processing
NASA Astrophysics Data System (ADS)
Boll, S. F.; Ravindra, H.; Randall, G.; Armantrout, R.; Power, R.
1980-05-01
Robust speech processing in practical operating environments requires effective environmental and processor noise suppression. This report describes the technical findings and accomplishments during this reporting period for the research program funded to develop real-time, compressed-speech analysis-synthesis algorithms whose performance is invariant under signal contamination. Fulfillment of this requirement is necessary to ensure reliable, secure, compressed-speech transmission within realistic military command and control environments. Overall contributions resulting from this research program include an understanding of how environmental noise degrades narrowband coded speech, development of appropriate real-time noise suppression algorithms, and development of speech parameter identification methods that treat signal contamination as a fundamental element of the estimation process. This report describes the current research and results in the areas of noise suppression using dual-input adaptive noise cancellation with short-time Fourier transform algorithms, articulation rate change techniques, and an experiment which demonstrated that the spectral subtraction noise suppression algorithm can improve the intelligibility of 2400 bps, LPC-10-coded helicopter speech by 10.6 points.
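Spectral subtraction itself reduces to subtracting an averaged noise magnitude spectrum frame by frame while retaining the noisy phase. A minimal sketch follows, assuming a separately recorded noise-only segment; it illustrates the basic technique, not the report's real-time implementation.

    import numpy as np
    from scipy.signal import stft, istft

    def spectral_subtraction(noisy, noise, fs=8000, nperseg=256):
        # Average noise magnitude spectrum from a noise-only recording
        _, _, N = stft(noise, fs=fs, nperseg=nperseg)
        noise_mag = np.abs(N).mean(axis=1, keepdims=True)
        _, _, X = stft(noisy, fs=fs, nperseg=nperseg)
        # Subtract and floor negative magnitudes at zero (half-wave rectification)
        mag = np.maximum(np.abs(X) - noise_mag, 0.0)
        _, clean = istft(mag * np.exp(1j * np.angle(X)), fs=fs, nperseg=nperseg)
        return clean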
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
Bizios, Dimitrios; Heijl, Anders; Hougaard, Jesper Leth; Bengtsson, Boel
2010-02-01
To compare the performance of two machine learning classifiers (MLCs), artificial neural networks (ANNs) and support vector machines (SVMs), with input based on retinal nerve fibre layer thickness (RNFLT) measurements by optical coherence tomography (OCT), for the diagnosis of glaucoma, and to assess the effects of different input parameters. We analysed Stratus OCT data from 90 healthy persons and 62 glaucoma patients. Performance of MLCs was compared using conventional OCT RNFLT parameters plus novel parameters such as minimum RNFLT values, 10th and 90th percentiles of measured RNFLT, and transformations of A-scan measurements. For each input parameter and MLC, the area under the receiver operating characteristic curve (AROC) was calculated. There were no statistically significant differences between ANNs and SVMs. The best AROCs for both ANN (0.982, 95% CI: 0.966-0.999) and SVM (0.989, 95% CI: 0.979-1.0) were based on input of transformed A-scan measurements. Our SVM trained on this input performed better than ANNs or SVMs trained on any of the single RNFLT parameters (p ≤ 0.038). The performance of ANNs and SVMs trained on minimum thickness values and the 10th and 90th percentiles was at least as good as that of ANNs and SVMs with input based on the conventional RNFLT parameters. No differences between ANN and SVM were observed in this study. Both MLCs performed very well, with similar diagnostic performance. Input parameters have a larger impact on diagnostic performance than the type of machine classifier. Our results suggest that parameters based on transformed A-scan thickness measurements of the RNFL processed by machine classifiers can improve OCT-based glaucoma diagnosis.
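The AROC comparison can be reproduced in outline with scikit-learn; the sketch below uses placeholder arrays in place of the Stratus OCT features and labels, and cross-validated probabilities in place of the study's actual evaluation protocol.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((152, 10))   # placeholder RNFLT-derived features
    y = rng.integers(0, 2, 152)          # placeholder labels: 1 = glaucoma

    svm = SVC(probability=True)
    scores = cross_val_predict(svm, X, y, cv=5, method='predict_proba')[:, 1]
    print('AROC: %.3f' % roc_auc_score(y, scores))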
Performance Prediction Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chennupati, Gopinath; Santhi, Nanadakishore; Eidenbenz, Stephen
The Performance Prediction Toolkit (PPT) is a scalable co-design tool that contains hardware and middleware models, which accept proxy applications as input for runtime prediction. PPT relies on Simian, a parallel discrete event simulation engine in Python or Lua, that uses the process concept, where each computing unit (host, node, core) is a Simian entity. Processes perform their task through message exchanges to remain active, sleep, wake up, and begin and end. The PPT hardware model of a compute core (such as a Haswell core) consists of a set of parameters, such as clock speed, memory hierarchy levels, their respective sizes, cache lines, access times for different cache levels, average cycle counts of ALU operations, etc. These parameters are ideally read off a spec sheet or are learned using regression models trained on hardware counter (PAPI) data. The compute core model offers an API to the software model, a function called time_compute(), which takes as input a tasklist. A tasklist is an unordered set of ALU and other CPU-type operations (in particular virtual memory loads and stores). The PPT application model mimics the loop structure of the application and replaces the computational kernels with a call to the hardware model's time_compute() function, giving tasklists as input that model the compute kernel. A PPT application model thus consists of tasklists representing kernels and the higher-level loop structure that we like to think of as pseudo code. The key challenge for the hardware model's time_compute() function is to translate virtual memory accesses into actual cache hierarchy level hits and misses. PPT also contains another CPU core level hardware model, the Analytical Memory Model (AMM). The AMM solves this challenge soundly, whereas our previous alternatives explicitly included the L1, L2, and L3 hit rates as inputs to the tasklists. Explicit hit rates inevitably only reflect the application modeler's best guess, perhaps informed by a few small test problems using hardware counters; also, hard-coded hit rates make the hardware model insensitive to changes in cache sizes. Instead, we use reuse distance distributions in the tasklists. In general, reuse profiles require the application modeler to run a very expensive trace analysis on the real code that realistically can be done at best for small examples.
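The abstract names the time_compute() interface but not its internals; the toy sketch below shows how such a function might map a tasklist to seconds under the criticized explicit-hit-rate approach. All parameter names, values, and the tasklist format are assumptions for illustration, not the actual PPT API; the AMM replaces the hard-coded hit rates with reuse distance distributions.

    # Hypothetical PPT-style core model; not the actual toolkit code.
    CORE = {
        'clock_hz': 2.3e9,        # Haswell-like clock speed
        'cycles_per_alu': 1.0,    # average ALU op cost in cycles
        'l1_hit_time': 4,         # cache access times in cycles
        'l2_hit_time': 12,
        'dram_time': 200,
    }

    def time_compute(tasklist, l1_rate=0.9, l2_rate=0.08):
        # Translate operation counts into predicted seconds, using
        # assumed (not measured) cache hit rates.
        cycles = tasklist['alu_ops'] * CORE['cycles_per_alu']
        mem = tasklist['mem_ops']
        cycles += mem * (l1_rate * CORE['l1_hit_time']
                         + l2_rate * CORE['l2_hit_time']
                         + (1 - l1_rate - l2_rate) * CORE['dram_time'])
        return cycles / CORE['clock_hz']

    print(time_compute({'alu_ops': 1e9, 'mem_ops': 2e8}))  # ~1.2 s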
Ly, Cheng
2013-10-01
The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population, which encompasses the proportion of neurons in a particular state, rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used both for analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods, and as a more pragmatic tool it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial integro-differential equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed the modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.
Passive simulation of the nonlinear port-Hamiltonian modeling of a Rhodes Piano
NASA Astrophysics Data System (ADS)
Falaize, Antoine; Hélie, Thomas
2017-03-01
This paper deals with the time-domain simulation of an electro-mechanical piano: the Fender Rhodes. A simplified description of this multi-physical system is considered. It is composed of a hammer (nonlinear mechanical component), a cantilever beam (linear damped vibrating component) and a pickup (nonlinear magneto-electronic transducer). The approach is to propose a power-balanced formulation of the complete system, from which a guaranteed-passive simulation is derived to generate physically-based realistic sound synthesis. These issues are addressed in four steps. First, a class of Port-Hamiltonian Systems is introduced: these input-to-output systems fulfill a power balance that can be decomposed into conservative, dissipative and source parts. Second, physical models are proposed for each component and are recast in the port-Hamiltonian formulation. In particular, a finite-dimensional model of the cantilever beam is derived, based on a standard modal decomposition applied to the Euler-Bernoulli model. Third, these systems are interconnected, providing a nonlinear finite-dimensional Port-Hamiltonian System of the piano. Fourth, a passivity-guaranteed numerical method is proposed. This method is built to preserve the power balance in the discrete-time domain, and more precisely its decomposition structured into conservative, dissipative and source parts. Finally, simulations are performed for a set of physical parameters, based on empirical but realistic values. They provide a variety of audio signals which are perceptively relevant and qualitatively similar to signals measured on a real instrument.
Simulation Studies of the Effect of Forest Spatial Structure on InSAR Signature
NASA Technical Reports Server (NTRS)
Sun, Guoqing; Liu, Dawei; Ranson, K. Jon; Koetz, Benjamin
2007-01-01
The height of the scattering phase center retrieved from InSAR data is considered to be correlated with tree height and the spatial structure of the forest stand. Though some researchers have used simple backscattering models to estimate tree height from the height of the scattering center, the effect of forest spatial structure on InSAR data is not yet well understood. A three-dimensional coherent radar backscattering model for forest canopies, based on realistic three-dimensional scenes, was used in this paper to investigate this effect. The realistic spatial structure of forest canopies was established either from field measurements (stem maps) or through use of a forest growth model. Field measurements, or a forest growth model parameterized using local environmental parameters, provide information on forest species composition and tree sizes in certain growth phases. A fractal tree model (L-system) was used to simulate individual 3-D tree structures of different ages or heights. Trees were positioned in a stand in certain patterns, resulting in a 3-D medium of discrete scatterers. The coherent radar backscatter model takes the 3-D forest scene as input and simulates the coherent radar backscattering signature. Interferometric SAR images of 3-D scenes were simulated, and heights of scattering phase centers were estimated from the simulated InSAR data. The effects of tree height, crown cover, crown depth, and the spatial distribution patterns of trees on the scattering phase center were analyzed. The results will be presented in the paper.
Collective behaviour in vertebrates: a sensory perspective
Collignon, Bertrand; Fernández-Juricic, Esteban
2016-01-01
Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases these models make biologically unrealistic assumptions about the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and unrealistically constrain their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
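The metric/topological distinction the study builds on can be stated compactly: a metric rule interacts with every conspecific inside a fixed radius, while a topological rule interacts with the k nearest regardless of distance; a sensory model would further mask neighbours outside the visual coverage and acuity limits. A minimal sketch, with assumed positions and thresholds:

    import numpy as np

    def metric_neighbours(pos, i, radius):
        # All conspecifics within a fixed interaction radius of agent i
        d = np.linalg.norm(pos - pos[i], axis=1)
        return np.where((d > 0) & (d <= radius))[0]

    def topological_neighbours(pos, i, k):
        # The k nearest conspecifics of agent i, regardless of distance
        d = np.linalg.norm(pos - pos[i], axis=1)
        return np.argsort(d)[1:k + 1]   # skip agent i itself

    pos = np.random.default_rng(1).uniform(0, 50, size=(30, 2))
    print(metric_neighbours(pos, 0, radius=10.0))
    print(topological_neighbours(pos, 0, k=6))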
Quantum model of light transmission in array waveguide gratings.
Capmany, J; Mora, J; Fernández-Pousa, C R; Muñoz, P
2013-06-17
We develop, to the best of our knowledge, the first model for an array waveguide grating (AWG) device subject to quantum inputs and analyze its basic transformation functionalities for single-photon states. A commercial, cyclic AWG is experimentally characterized with weak input coherent states as a means of exploring its behaviour under realistic quantum detection. In particular, we show that there is a cutoff value of the average photon number below which quantum crosstalk between AWG ports is negligible with respect to dark counts. These results can be useful when considering the application of AWG devices to integrated quantum photonic systems.
Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun
2016-04-07
To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. A well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was also applied to observe the correlations among all equivalent parameters. The tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (Fp = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in the kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively). All equivalent pharmacokinetic parameters, except for ve, were correlated in the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual-input extended Tofts model, ve was significantly less than that in the dual-input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). Dual-input two-compartment pharmacokinetic models (the dual-input extended Tofts model and the dual-input 2CXM) can be used to assess microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring the ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.
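For reference, the dual-input extended Tofts model combines arterial and portal-venous input functions weighted by the HPI and convolves the result with an exponential leakage kernel. The discrete sketch below uses toy input curves and assumed parameter values; it shows the model form only, not the study's fitting procedure.

    import numpy as np

    def dual_input_extended_tofts(t, Ca, Cpv, Ktrans, kep, vp, hpi):
        # C_in = HPI*Ca + (1-HPI)*Cpv (arterial-fraction weighting);
        # Ct(t) = vp*C_in(t) + Ktrans * int_0^t C_in(u) exp(-kep*(t-u)) du
        dt = t[1] - t[0]
        Cin = hpi * Ca + (1.0 - hpi) * Cpv
        kernel = np.exp(-kep * t)
        leak = Ktrans * np.convolve(Cin, kernel)[:len(t)] * dt
        return vp * Cin + leak

    t = np.linspace(0.0, 3.0, 181)                # minutes
    Ca = np.exp(-((t - 0.4) / 0.15) ** 2)         # toy arterial input
    Cpv = 0.6 * np.exp(-((t - 0.7) / 0.3) ** 2)   # toy portal-venous input
    Ct = dual_input_extended_tofts(t, Ca, Cpv, 0.3, 1.5, 0.05, 0.65)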
Field measurement of moisture-buffering model inputs for residential buildings
Woods, Jason; Winkler, Jon
2016-02-05
Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term: the moisture sorption into the materials. We validated this method with laboratory measurements and then used it to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.
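In outline, an EMPD-type model treats the buffering materials as a thin layer that exchanges moisture with the zone air through a lumped conductance. The two-node sketch below mimics the square-wave excitation used in the experiments; all parameter names and values are illustrative assumptions, not the validated whole-house inputs.

    import numpy as np

    def simulate_empd(hours, w_gen, h_m_A=50.0, cap_zone=40.0, cap_buf=400.0):
        # Zone-air humidity ratio w_air exchanges moisture with a thin
        # buffering layer w_buf; h_m_A: moisture conductance (kg_air/h),
        # cap_*: moisture capacitances (kg dry air equivalent).
        w_air, w_buf = 0.008, 0.008         # kg water / kg dry air
        out, dt = [], 1.0                   # 1 h explicit Euler steps
        for h in range(hours):
            flux = h_m_A * (w_air - w_buf)  # sorption into the materials
            w_air += dt * (w_gen(h) - flux) / cap_zone
            w_buf += dt * flux / cap_buf
            out.append(w_air)
        return np.array(out)

    # 12-h-on / 12-h-off moisture generation, mimicking the RH square wave
    traj = simulate_empd(72, lambda h: 0.06 if (h // 12) % 2 == 0 else 0.0)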
When the Ostrich-Algorithm Fails: Blanking Method Affects Spike Train Statistics.
Joseph, Kevin; Mottaghi, Soheil; Christ, Olaf; Feuerstein, Thomas J; Hofmann, Ulrich G
2018-01-01
Modern electroceuticals commonly employ high-frequency (130-180 Hz) electrical stimulation carried out under closed-loop control, most prominently in the case of movement disorders. However, particular challenges are faced when electrical recordings of neuronal tissue are carried out during high-frequency electrical stimulation, both in vivo and in vitro. This stimulation produces undesired artifacts and can render the recorded signal only partially useful. The extent of these artifacts is often reduced by temporarily grounding the recording input during stimulation pulses. In the following study, we quantify the effects of this method, "blanking," on the spike count and spike train statistics. Starting from a theoretical standpoint, we calculate a loss in the absolute number of action potentials, depending on the width of the blanking window, the frequency of stimulation, and the intrinsic neuronal activity. These calculations were then corroborated by actual high signal-to-noise ratio (SNR) single-cell recordings. We state that, for clinically relevant frequencies of 130 Hz (used for movement disorders) and realistic blanking windows of 2 ms, up to 27% of actually existing spikes are lost. We strongly advise cautious use of the blanking method when spike rate quantification is attempted. Blanking (artifact removal by temporarily grounding the input), depending on recording parameters, can lead to significant spike loss. Very careful use of blanking circuits is advised.
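A first-order check of the reported loss: if spike times are independent of the stimulus train (a simplifying assumption), the expected fraction of spikes hidden by blanking is simply the fraction of time blanked, i.e. stimulation frequency times window width.

    def expected_loss_fraction(stim_hz, blank_ms):
        # Fraction of recording time, and hence of spikes for
        # stimulus-independent firing, hidden by blanking windows
        return stim_hz * (blank_ms / 1000.0)

    print(expected_loss_fraction(130, 2.0))  # 0.26, near the reported 27%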
NASA Astrophysics Data System (ADS)
Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming
2016-07-01
Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output by measuring the specific variations of hydrological responses. A case study is conducted for addressing parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results show that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact the hydrological processes of this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model's capability for simulating/predicting water resources.
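The MCMC ingredient of such an approach can be as simple as a random-walk Metropolis sampler over the parameter vector; the sketch below is generic (a toy Gaussian log-posterior stands in for the hydrological likelihood) and is not the authors' implementation.

    import numpy as np

    def metropolis(log_post, theta0, n_iter=5000, step=0.1, seed=0):
        # Random-walk Metropolis: propose, then accept with
        # probability min(1, post(prop)/post(current))
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        lp = log_post(theta)
        chain = np.empty((n_iter, theta.size))
        for i in range(n_iter):
            prop = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain[i] = theta
        return chain

    # Toy two-parameter posterior with correlation
    prec = np.array([[2.0, 0.8], [0.8, 1.0]])
    chain = metropolis(lambda th: -0.5 * th @ prec @ th, np.zeros(2))
    print(chain[2500:].mean(axis=0), chain[2500:].std(axis=0))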
Parametric analysis of parameters for electrical-load forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael
1997-04-01
Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal, and neural networks should provide an excellent means of doing so. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
Path integrals with higher order actions: Application to realistic chemical systems
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.
2018-02-01
Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3, corresponding to an equal weighting of all force terms in the thermal density matrix, and similar to previous studies, the optimal α parameter in the SCA was ~0.31. Importantly, poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.
NASA Technical Reports Server (NTRS)
Otaguro, W. S.; Kesler, L. O.; Land, K. C.; Rhoades, D. E.
1987-01-01
An intelligent tracker capable of robotic applications requiring guidance and control of platforms, robotic arms, and end effectors has been developed. This packaged system, capable of supervised autonomous robotic functions, is partitioned into a multiple-processor/parallel-processing configuration. The system currently interfaces to cameras but can also use three-dimensional inputs from scanning laser rangers. The inputs are fed into an image processing and tracking section where the camera inputs are conditioned for the multiple tracker algorithms. An executive section monitors the image processing and tracker outputs and performs all the control and decision processes. The present architecture of the system is presented with discussion of its evolutionary growth for space applications. An autonomous rendezvous demonstration of this system was performed last year. More realistic demonstrations now in planning are discussed.
NASA Astrophysics Data System (ADS)
Sakimoto, S. E. H.
2016-12-01
Planetary volcanism has redefined what is considered volcanism. "Magma" now may be considered to be anything from the molten rock familiar at terrestrial volcanoes to cryovolcanic ammonia-water mixes erupted on an outer solar system moon. However, even with unfamiliar compositions and source mechanisms, we find familiar landforms such as volcanic channels, lakes, flows, and domes, and thus a multitude of possibilities for modeling. As on Earth, these landforms lend themselves to analysis for estimating storage, eruption and/or flow rates. This has potential pitfalls, as extension of the simplified analytic models we often use for terrestrial features into unfamiliar parameter space might yield misleading results. Our most commonly used tools for estimating flow and cooling have tended to lag significantly behind the state of the art; the easiest methods to use are neither realistic nor accurate, while the more realistic and accurate computational methods are not simple to use. Since the latter computational tools tend to be expensive and to require a significant learning curve, there is a need for a user-friendly approach that still takes advantage of their accuracy. One method is to use the computational package to generate a server-based tool that allows less computationally inclined users to get accurate results over their range of input parameters for a given problem geometry. A second method is to use the computational package to generate a polynomial empirical solution for each class of flow geometry that can be fairly easily evaluated by anyone with a spreadsheet. In this study, we demonstrate both approaches for several channel flow and lava lake geometries with terrestrial and extraterrestrial examples and compare their results. Specifically, we model cooling rectangular channel flow with a yield-strength material, with applications to Mauna Loa, Kilauea, Venus, and Mars. This approach also shows promise for model applications to lava lakes, magma flow through cracks, and volcanic dome formation.
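The second approach reduces to fitting a polynomial surrogate to a batch of simulator runs so that end users can evaluate it in a spreadsheet. A generic sketch, with a stand-in function in place of a real lava-flow code and an assumed input range:

    import numpy as np

    def simulator(effusion_rate):
        # Stand-in for an expensive computational run (output: flow length, km)
        return 12.0 * effusion_rate ** 0.4

    rates = np.linspace(1.0, 200.0, 40)    # m^3/s design points
    lengths = simulator(rates)
    coeffs = np.polyfit(rates, lengths, deg=3)

    # Spreadsheet-ready cubic: length ~ c3*Q^3 + c2*Q^2 + c1*Q + c0
    print(coeffs)
    print(np.polyval(coeffs, 50.0), simulator(50.0))  # surrogate vs simulator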
Firing rate of noisy integrate-and-fire neurons with synaptic current dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrieux, David; Monnai, Takaaki; Department of Applied Physics, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555
2009-08-15
We derive analytical formulas for the firing rate of integrate-and-fire neurons endowed with realistic synaptic dynamics. In particular, we include the possibility of multiple synaptic inputs as well as the effect of an absolute refractory period in the description. The latter affects the firing rate through its interaction with the synaptic dynamics.
Robust global identifiability theory using potentials--Application to compartmental models.
Wongvanich, N; Hann, C E; Sirisena, H R
2015-04-01
This paper presents a global practical identifiability theory for analyzing and identifying linear and nonlinear compartmental models. The compartmental system is prolonged onto the potential jet space to formulate a set of input-output equations that are integrals in terms of the measured data, which allows for robust identification of parameters without requiring any simulation of the model differential equations. Two classes of linear and nonlinear compartmental models are considered. The theory is first applied to analyze the linear nitrous oxide (N2O) uptake model. The fitting accuracy of the identified models from the differential jet space and potential jet space identifiability theories is compared with a realistic noise level of 3%, derived from sensor noise data in the literature. The potential jet space approach gave a match that was well within the coefficient of variation. The differential jet space formulation was unstable and not suitable for parameter identification. The proposed theory is then applied to a nonlinear immunological model for mastitis in cows. In addition, the model formulation is extended to include an iterative method which allows initial conditions to be accurately identified. With up to 10% noise, the potential jet space theory predicts the normalized population concentration infected with pathogens to within 9% of the true curve.
NASA Astrophysics Data System (ADS)
Rawal, Amit; Rao, P. V. Kameswara; Kumar, Vijay
2018-04-01
Absorptive glass mat (AGM) separators are a vital technical component in valve regulated lead acid (VRLA) batteries that can be tailored for a desired application. To selectively design and tailor the AGM separator, its intricate three-dimensional (3D) structure needs to be unraveled. Herein, a toolkit of 3D analytical models of pore size distribution and electrolyte uptake, expressed via the wicking characteristics of AGM separators under unconfined and confined states, is presented. 3D data of fiber orientation distributions obtained previously through X-ray micro-computed tomography (microCT) analysis are used as a key set of input parameters. The predictive ability of the pore size distribution model is assessed through the commonly used experimental set-up, which usually applies high levels of compressive stress. Further, the existing analytical model of the wicking characteristics of AGM separators has been extended to account for 3D characteristics and subsequently compared with experimental results. A good agreement between theory and experiments paves the way to simulating the realistic charge-discharge modes of the battery by applying cyclic loading conditions. A threshold criterion describing the invariant behavior of pore size and wicking characteristics, in terms of the maximum permissible limit of key structural parameters during the charge-discharge mode of the battery, has also been proposed.
Responses of two nonlinear microbial models to warming and increased carbon input
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Y. P.; Jiang, J.; Chen-Charpentier, Benito
A number of nonlinear microbial models of soil carbon decomposition have been developed. Some of them have been applied globally but have yet to be shown to realistically represent soil carbon dynamics in the field. A thorough analysis of their key differences is needed to inform future model developments. In this paper, we compare two nonlinear microbial models of soil carbon decomposition: one based on reverse Michaelis–Menten kinetics (model A) and the other on regular Michaelis–Menten kinetics (model B). Using analytic approximations and numerical solutions, we find that the oscillatory responses of carbon pools to a small perturbation in their initial pool sizes dampen faster in model A than in model B. Soil warming always decreases carbon storage in model A, but in model B it predominantly decreases carbon storage in cool regions and increases carbon storage in warm regions. For both models, the CO2 efflux from soil carbon decomposition reaches a maximum value some time after increased carbon input (as in priming experiments). This maximum CO2 efflux (Fmax) decreases with an increase in soil temperature in both models. However, the sensitivity of Fmax to the increased amount of carbon input increases with soil temperature in model A but decreases monotonically with an increase in soil temperature in model B. These differences in the responses to soil warming and carbon input between the two nonlinear models can be used to discern which model is more realistic when compared to results from field or laboratory experiments. Lastly, these insights will contribute to an improved understanding of the significance of soil microbial processes in soil carbon responses to future climate change.
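The contrast between the two models comes down to where the Michaelis–Menten saturation sits: in model A the decomposition flux saturates in microbial biomass, in model B in substrate. A schematic sketch with illustrative parameter values (not the paper's calibration):

    import numpy as np
    from scipy.integrate import solve_ivp

    # y = [Cs, Cb]: substrate and microbial biomass carbon; I: carbon input,
    # eps: carbon-use efficiency, mu: microbial turnover returned to substrate.
    def model_A(t, y, V=1.0, K=0.5, eps=0.4, mu=0.2, I=0.1):   # reverse M-M
        Cs, Cb = y
        F = V * Cs * Cb / (K + Cb)
        return [I - F + mu * Cb, eps * F - mu * Cb]

    def model_B(t, y, V=1.0, K=2.0, eps=0.4, mu=0.2, I=0.1):   # regular M-M
        Cs, Cb = y
        F = V * Cb * Cs / (K + Cs)
        return [I - F + mu * Cb, eps * F - mu * Cb]

    for f in (model_A, model_B):
        sol = solve_ivp(f, (0.0, 200.0), [5.0, 0.5], rtol=1e-8)
        print(f.__name__, sol.y[:, -1])   # long-run pool sizes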
NASA Astrophysics Data System (ADS)
Romanelli, Fabio; Vaccari, Franco; Altin, Giorgio; Panza, Giuliano
2016-04-01
The procedure we developed, and applied to a few relevant cases, leads to the seismic verification of a building by: a) use of a scenario-based neodeterministic approach (NDSHA) for the calculation of the seismic input, and b) control of the numerical modeling of an existing building, using free vibration measurements of the real structure. The key point of this approach is the strict collaboration, from the seismic input definition to the monitoring of the response of the building in the calculation phase, of the seismologist and the civil engineer. The vibrometry study allows the engineer to adjust the computational model in the direction suggested by the experimental result of a physical measurement. Once the model has been calibrated by vibrometric analysis, one can select in the design spectrum the proper range of periods of interest for the structure. Then, the realistic values of spectral acceleration, which include the appropriate amplification obtained through the modeling of a "scenario" input to be applied to the final model, can be selected. Generally, but not necessarily, the "scenario" spectra lead to higher accelerations than those deduced by taking the spectra from the national codes (i.e. NTC 2008, for Italy). The task of the verifying engineer is to ensure that the outcome of the verification is both conservative and realistic. We show some examples of the application of the procedure to relevant buildings (e.g. schools) of the Trieste Province. The adoption of the scenario input has given in most cases an increase in the number of critical elements that have to be taken into account in the design of reinforcements. However, the higher cost associated with the increase of elements to reinforce is reasonable, especially considering the important reduction of the risk level.
Kim, Hojeong; Heckman, C. J.
2014-01-01
Neuromodulatory inputs from brainstem systems modulate the normal function of spinal motoneurons by altering the activation properties of persistent inward currents (PICs) in their dendrites. However, the effect of the PIC on firing outputs also depends on its location in the dendritic tree. To investigate the interaction between PIC neuromodulation and PIC location dependence, we used a two-compartment model that was biologically realistic in that it retains directional and frequency-dependent electrical coupling between the soma and the dendrites, as seen in multi-compartment models based on full anatomical reconstructions of motoneurons. Our two-compartment approach allowed us to systematically vary the coupling parameters between the soma and the dendrite to accurately reproduce the effect of location of the dendritic PIC on the generation of nonlinear (hysteretic) motoneuron firing patterns. Our results show that as a single parameter value for PIC activation was either increased or decreased by 20% from its default value, the solution space of the coupling parameter values for nonlinear firing outputs was drastically reduced by approximately 80%. As a result, the model tended to fire only in a linear mode at the majority of dendritic PIC sites. The same results were obtained when all parameters for the PIC activation simultaneously changed only by approximately ±10%. Our results suggest the democratization effect of neuromodulation: the neuromodulation by the brainstem systems may play a role in switching the motoneurons with PICs at different dendritic locations to a similar mode of firing by reducing the effect of the dendritic location of PICs on the firing behavior. PMID:25309410
NASA Technical Reports Server (NTRS)
Hughes, D. L.; Ray, R. J.; Walton, J. T.
1985-01-01
The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
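The influence-coefficient procedure itself is simple finite differencing: perturb each input by 1 percent, recompute net thrust, and record the percent change; the overall accuracy then follows from combining these coefficients with the parameter measurement accuracies. The sketch below uses a stand-in thrust function, not the F404 calculation.

    def net_thrust(inputs):
        # Stand-in gross-thrust-minus-ram-drag form, for illustration only
        return (inputs['mass_flow'] * inputs['exit_velocity']
                - inputs['mass_flow'] * inputs['flight_velocity'])

    nominal = {'mass_flow': 70.0, 'exit_velocity': 600.0, 'flight_velocity': 250.0}
    base = net_thrust(nominal)

    for name in nominal:
        perturbed = dict(nominal)
        perturbed[name] *= 1.01   # +1 percent
        dF = 100.0 * (net_thrust(perturbed) - base) / base
        print('%-16s %+.2f %% thrust per +1 %%' % (name, dF))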
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rawnsley, K.; Swaby, P.
1996-08-01
It is increasingly acknowledged that in order to understand and forecast the behavior of fracture-influenced reservoirs we must attempt to reproduce the fracture system geometry and use this as a basis for fluid flow calculation. This article aims to present a recently developed fracture modelling prototype designed specifically for use in hydrocarbon reservoir environments. The prototype "FRAME" (FRActure Modelling Environment) aims to provide a tool which will allow the generation of realistic 3D fracture systems within a reservoir model, constrained to the known geology of the reservoir by both mechanical and statistical considerations, and which can be used as a basis for fluid flow calculation. Two newly developed modelling techniques are used. The first is an interactive tool which allows complex fault surfaces and their associated deformations to be reproduced. The second is a "genetic" model which grows fracture patterns from seeds using conceptual models of fracture development. The user defines the mechanical input and can retrieve all the statistics of the growing fractures to allow comparison to assumed statistical distributions for the reservoir fractures. Input parameters include growth rate, fracture interaction characteristics, orientation maps and density maps. More traditional statistical stochastic fracture models are also incorporated. FRAME is designed to allow the geologist to input hard or soft data including seismically defined surfaces, well fractures, outcrop models, analogue or numerical mechanical models or geological "feeling". The geologist is not restricted to "a priori" models of fracture patterns that may not correspond to the data.
Patyk, Kelly A; Helm, Julie; Martin, Michael K; Forde-Folle, Kimberly N; Olea-Popelka, Francisco J; Hokanson, John E; Fingerlin, Tasha; Reeves, Aaron
2013-07-01
Epidemiologic simulation modeling of highly pathogenic avian influenza (HPAI) outbreaks provides a useful conceptual framework with which to estimate the consequences of HPAI outbreaks and to evaluate disease control strategies. The purposes of this study were to establish detailed and informed input parameters for an epidemiologic simulation model of the H5N1 strain of HPAI among commercial and backyard poultry in the state of South Carolina in the United States using a highly realistic representation of this poultry population; to estimate the consequences of an outbreak of HPAI in this population with a model constructed from these parameters; and to briefly evaluate the sensitivity of model outcomes to several parameters. Parameters describing disease state durations; disease transmission via direct contact, indirect contact, and local-area spread; and disease detection, surveillance, and control were established through consultation with subject matter experts, a review of the current literature, and the use of several computational tools. The stochastic model constructed from these parameters produced simulated outbreaks ranging from 2 to 111 days in duration (median 25 days), during which 1 to 514 flocks were infected (median 28 flocks). Model results were particularly sensitive to the rate of indirect contact that occurs among flocks. The baseline model established in this study can be used in the future to evaluate various control strategies, as a tool for emergency preparedness and response planning, and to assess the costs associated with disease control and the economic consequences of a disease outbreak.
Managing geological uncertainty in CO2-EOR reservoir assessments
NASA Astrophysics Data System (ADS)
Welkenhuysen, Kris; Piessens, Kris
2014-05-01
Recently the European Parliament has agreed that an atlas of the storage potential for CO2 is of high importance for a successful commercial introduction of CCS (CO2 capture and geological storage) technology in Europe. CO2-enhanced oil recovery (CO2-EOR) is often proposed as a promising business case for CCS, and likely has a high potential in the North Sea region. Traditional economic assessments for CO2-EOR largely neglect the geological reality of reservoir uncertainties because these are difficult to introduce realistically into such calculations. There is indeed a gap between the outcome of a reservoir simulation and the input values for e.g. cost-benefit evaluations, especially where it concerns uncertainty. The approach outlined here is to turn the procedure around, and to start from the geological data that are typically (or minimally) requested for an economic assessment. We then evaluate how these data can realistically be provided by geologists and reservoir engineers. For the storage of CO2 these parameters are total and yearly CO2 injection capacity, and containment or potential for leakage. Specifically for the EOR operation, two additional parameters can be defined: the EOR ratio, or the ratio of recovered oil over injected CO2, and the recycling ratio of CO2 that is reproduced after breakthrough at the production well. A critical but typically estimated parameter for CO2-EOR projects is the EOR ratio, taken in this brief outline as an example. The EOR ratio depends mainly on local geology (e.g. injection per well), field design (e.g. number of wells), and time. Costs related to engineering can be estimated fairly well, given some uncertainty range. The problem is usually to reliably estimate the geological parameters that define the EOR ratio. Reliable data are only available from (onshore) CO2-EOR projects in the US. Published studies for the North Sea generally refer to these data in a simplified form, without uncertainty ranges, and are therefore not suited for cost-benefit analysis. They likely give overly optimistic results because onshore configurations are cheaper and differ from offshore ones. We propose to translate the detailed US data to the North Sea, retaining their uncertainty ranges. In a first step, a general cost correction can be applied to account for costs specific to the EU and the offshore setting. In a second step, site-specific data, including laboratory tests and reservoir modelling, are used to further adapt the EOR ratio values, taking into account all available geological reservoir-specific knowledge. Lastly, an evaluation of the field configuration will have an influence on both the cost and the local geology dimension, because e.g. horizontal drilling is needed (cost) to improve injectivity (geology). As such, a dataset of the EOR field is obtained which contains all aspects and their uncertainty ranges. With these, a geologically realistic basis is obtained for further cost-benefit analysis of a specific field, where the uncertainties are accounted for using a stochastic evaluation. Such ad-hoc evaluation of geological parameters will provide a better assessment of the CO2-EOR potential of the North Sea oil fields.
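The stochastic evaluation referred to at the end amounts to Monte Carlo propagation of the uncertain inputs. A toy sketch with assumed distributions (not North Sea data) of how an uncertain EOR ratio feeds through to recovered-oil percentiles:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    eor_ratio = rng.triangular(0.1, 0.3, 0.6, n)   # bbl oil per t CO2, assumed
    co2_injected = rng.normal(1.0e6, 1.0e5, n)     # t CO2 per year, assumed
    oil = eor_ratio * co2_injected                 # bbl per year

    p10, p50, p90 = np.percentile(oil, [10, 50, 90])
    print('P10 %.2e  P50 %.2e  P90 %.2e bbl/yr' % (p10, p50, p90))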
Empirical and numerical investigation of mass movements - data fusion and analysis
NASA Astrophysics Data System (ADS)
Schmalz, Thilo; Eichhorn, Andreas; Buhl, Volker; Tinkhof, Kurt Mair Am; Preh, Alexander; Tentschert, Ewald-Hans; Zangerl, Christian
2010-05-01
Increasing settlement activities of people in mountainous regions and the appearance of extreme climatic conditions motivate the investigation of landslides. Within the last few years a significant rise in disastrous slides has been registered, which has generated broad public interest and requests for security measures. The FWF (Austrian Science Fund) funded project 'KASIP' (Knowledge-based Alarm System with Identified Deformation Predictor) deals with the development of a new type of alarm system based on calibrated numerical slope models for the realistic calculation of failure scenarios. In KASIP, calibration is the optimal adaptation of a numerical model to available monitoring data by least-squares techniques (e.g. adaptive Kalman-filtering). Adaptation means the determination of a priori uncertain physical parameters such as the strength of the geological structure. The object of our studies in KASIP is the landslide 'Steinlehnen' near Innsbruck (Northern Tyrol, Austria). The first part of the presentation is focussed on the determination of geometrical surface information. This also includes the description of the monitoring system for the collection of the displacement data and filter approaches for the estimation of the slope's kinematic behaviour. The necessity of continuous monitoring and the effect of data gaps on reliable filter results and the prediction of the future state are discussed. The second part of the presentation is focussed on the numerical modelling of the slope by FD (Finite Difference) methods and the development of the adaptive Kalman-filter. The numerical slope model is implemented in FLAC3D (software company HCItasca Ltd.). The model contains different geomechanical approaches (such as Mohr-Coulomb) and enables the calculation of large deformations and the failure of the slope. Stability parameters (such as the factor of safety, FS) allow the evaluation of the current state of the slope. Until now, the adaptation of relevant material parameters has often been performed by trial-and-error methods. This common method shall be improved by adaptive Kalman-filtering, which, in contrast to trial and error, also considers the stochastic information of the input data. Especially the estimation of strength parameters (cohesion c, angle of internal friction phi) in a dynamic consideration of the slope is discussed. Problems with conditioning and numerical stability of the filter matrices, memory overflow and computing time are outlined. It is shown that the Kalman-filter is in principle suitable for a semi-automated adaptation process and obtains realistic values for the unknown material parameters.
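In its simplest scalar form, the parameter-adapting filter treats an uncertain strength parameter as a state updated from displacement measurements through a linearized forward model. The sketch below is schematic; the toy sensitivity stands in for the FLAC3D response and all values are assumptions.

    import numpy as np

    def kalman_parameter_update(theta, P, z, h, H, R):
        # theta, P: parameter estimate and its variance; z: measured
        # displacement; h: model-predicted displacement; H: dh/dtheta
        S = H * P * H + R              # innovation variance
        K = P * H / S                  # Kalman gain
        return theta + K * (z - h), (1.0 - K * H) * P

    theta, P = 30.0, 25.0              # prior: friction angle 30 deg +/- 5
    for z in [0.052, 0.049, 0.047]:    # measured displacements (m)
        h = 0.1 - 0.0017 * theta       # toy linearized slope response
        theta, P = kalman_parameter_update(theta, P, z, h, -0.0017, 1e-6)
    print(theta, np.sqrt(P))           # posterior mean and std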
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. 
The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
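As a rough illustration of the Fisher-information machinery described above, the following sketch computes a D-optimality-style identifiability metric for a hypothetical first-order equivalent-circuit cell and compares a constant current profile against a periodic one. The model structure, parameter values, and noise level are illustrative assumptions, not the dissertation's models:

    import numpy as np

    def simulate(params, current, dt=1.0):
        # Hypothetical first-order equivalent-circuit cell: series resistance r0
        # plus one RC branch (r1, c1); fixed open-circuit voltage for simplicity.
        r0, r1, c1 = params
        v1, out = 0.0, []
        for i in current:
            v1 += dt * (-v1 / (r1 * c1) + i / c1)
            out.append(3.7 - i * r0 - v1)
        return np.array(out)

    def fisher_information(params, current, sigma=0.01, eps=1e-6):
        # Finite-difference output sensitivities; FIM = S^T S / sigma^2
        params = np.asarray(params, float)
        base = simulate(params, current)
        S = np.empty((base.size, params.size))
        for j in range(params.size):
            p = params.copy()
            p[j] += eps * params[j]
            S[:, j] = (simulate(p, current) - base) / (eps * params[j])
        return S.T @ S / sigma**2

    t = np.arange(600.0)
    theta = [0.05, 0.03, 2000.0]   # r0 [ohm], r1 [ohm], c1 [F] (assumed values)
    for name, u in [("constant", np.ones_like(t)),
                    ("periodic", 1.0 + np.sin(2 * np.pi * t / 60.0))]:
        F = fisher_information(theta, u)
        print(name, "log det FIM =", np.linalg.slogdet(F)[1])

A larger log-determinant indicates better joint identifiability, which is the sense in which input shaping can speed up and sharpen estimation.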
Bilinearity in Spatiotemporal Integration of Synaptic Inputs
Li, Songting; Liu, Nan; Zhang, Xiao-hui; Zhou, Douglas; Cai, David
2014-01-01
Neurons process information via the integration of synaptic inputs from dendrites. Many experimental results demonstrate that dendritic integration can be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus a third additional bilinear term proportional to their product with a proportionality coefficient k. The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. The coefficient k is demonstrated to be nearly independent of the input strengths but dependent on input times and input locations. This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments on rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integrations, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse. PMID:25521832
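In symbols, the bilinear rule described above can be rendered (our notation; the coefficient symbol is garbled in this record and supplied here as k) as

    V_S(t) \approx V_1(t) + V_2(t) + k\,V_1(t)\,V_2(t),

with the multi-input generalization

    V_S(t) \approx \sum_i V_i(t) + \sum_{i<j} k_{ij}\,V_i(t)\,V_j(t),

where V_i is the somatic potential elicited by input i alone and each coefficient k_{ij} depends on the input times and locations but only weakly on the input strengths.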
The use of cowl camber and taper to reduce rotor/stator interaction noise
NASA Technical Reports Server (NTRS)
Martinez, R.
1995-01-01
The project had two specific technical objectives: (1) to develop a realistic three-dimensional model of tonal noise due to rotor/stator interaction, as the input field for predictions of diffraction and dissipation by a lined cowl; and (2) to determine whether the generator curve of that cowl, or duct, could be 'steered' to yield substantially lower values of propulsor noise along the engine's fore and aft open sectors. The more general and important aim of this research is to provide the commercial aircraft industry with a useful predictive tool to help it meet its noise-reduction goals. The work has produced a tractable and yet realistic model of rotor/stator interaction noise. The blades in the fan stage are radially divergent, twisted, and of realistically wide chords to match the high frequencies and speeds of the sound-production process. The resulting three-dimensional acoustic nearfield insonifies the interior wall of the diffracting cowl, whose shape, incidentally, does not affect fore or aft noise significantly (but other factors do).
Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT
NASA Astrophysics Data System (ADS)
Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang
2015-03-01
In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, and their effects on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ±0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (<2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer by 4 cm than the true value, dose errors of up to 160% were found. The results answer the important question of the level of accuracy to which each input parameter needs to be determined in order to obtain accurate organ dose results.
Bioadsorber efficiency, design, and performance forecasting for alachlor removal.
Badriyha, Badri N; Ravindran, Varadarajan; Den, Walter; Pirbazari, Massoud
2003-10-01
This study discusses a mathematical modeling and design protocol for bioactive granular activated carbon (GAC) adsorbers employed for the purification of drinking water contaminated by chlorinated pesticides, exemplified by alachlor. A thin biofilm model is discussed that incorporates the following phenomenological aspects: film transfer from the bulk fluid to the adsorbent particles, diffusion through the biofilm immobilized on the adsorbent surface, and adsorption of the contaminant into the adsorbent particle. The modeling approach involved independent laboratory-scale experiments to determine the model input parameters. These experiments included adsorption isotherm studies, adsorption rate studies, and biokinetic studies. Bioactive expanded-bed adsorber experiments were conducted to obtain realistic experimental data for assessing the ability of the model to predict adsorber dynamics under different operating conditions. The model equations were solved using a computationally efficient hybrid numerical technique combining orthogonal collocation and finite difference methods. The model provided accurate predictions of adsorber dynamics for bioactive and non-bioactive scenarios. Sensitivity analyses demonstrated the significance of various model parameters and focused on enhancement of certain key parameters to improve the overall process efficiency. Scale-up simulation studies for bioactive and non-bioactive adsorbers provided comparisons between their performances and illustrated the advantages of bioregeneration for enhancing their effective service life spans. Isolation of microbial species revealed that fungal strains were more efficient than bacterial strains in metabolizing alachlor. Microbial degradation pathways for alachlor were proposed and confirmed by the detection of biotransformation metabolites and byproducts using gas chromatography/mass spectrometry.
Particle Dark Matter constraints: the effect of Galactic uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benito, Maria; Bernal, Nicolás; Iocco, Fabio
2017-02-01
Collider, space, and Earth-based experiments are now able to probe several extensions of the Standard Model of particle physics which provide viable dark matter candidates. Direct and indirect dark matter searches rely on inputs of an astrophysical nature, such as the local dark matter density or the shape of the dark matter density profile in the target object. The determination of these quantities is highly affected by astrophysical uncertainties. The latter, especially those for our own Galaxy, are ill-known, and often not fully accounted for when analyzing the phenomenology of particle physics models. In this paper we present a systematic, quantitative estimate of how astrophysical uncertainties on Galactic quantities (such as the local galactocentric distance, circular velocity, or the morphology of the stellar disk and bulge) propagate to the determination of the phenomenology of particle physics models, thus eventually affecting the determination of new physics parameters. We present results in the context of two specific extensions of the Standard Model (the Singlet Scalar and the Inert Doublet) that we adopt as case studies for their simplicity in illustrating the magnitude and impact of such uncertainties on the parameter space of the particle physics model itself. Our findings point toward very relevant effects of current Galactic uncertainties on the determination of particle physics parameters, and urge a systematic estimate of such uncertainties in more complex scenarios, in order to achieve constraints on the determination of new physics that realistically include all known uncertainties.
Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J
2014-01-01
Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed at improving the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, gamepad, or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures of task performance. Kinect motion tracking resulted in lower performance compared to classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulate real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to familiarity with established input methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprung, J.L.; Jow, H-N; Rollstin, J.A.
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
Double β-decay nuclear matrix elements for the A=48 and A=58 systems
NASA Astrophysics Data System (ADS)
Skouras, L. D.; Vergados, J. D.
1983-11-01
The nuclear matrix elements entering the double β decays of the 48Ca-48Ti and 58Ni-58Fe systems have been calculated using a realistic two-nucleon interaction and realistic shell model spaces. Effective transition operators corresponding to a variety of gauge theory models have been considered. The stability of such matrix elements against variations of the nuclear parameters is examined. Appropriate lepton-violating parameters are extracted from the A=48 data and predictions are made for the lifetimes of the positron decays of the A=58 system. RADIOACTIVITY: double β decay; gauge theories; lepton nonconservation; neutrino mass; shell model calculations.
Using model order tests to determine sensory inputs in a motion study
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Junker, A. M.
1977-01-01
In the study of motion effects on tracking performance, a problem of interest is determining which sensory inputs a human uses in controlling his tracking task. In the approach presented here a simple canonical model (PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives, and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters which have the greatest effect on significantly reducing the loss function are obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
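A minimal sketch of this model-order test with synthetic data and ordinary least squares (the signals, gains, and noise below are invented for illustration): nested P, PI, and PID regressions are fit to the operator's output, and the drop in residual loss when a column is added indicates whether that sensory cue carries information:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 60.0, 3000)
    e = np.sin(0.5 * t) + 0.3 * rng.standard_normal(t.size)     # tracking error
    de = np.gradient(e, t)                                      # error rate
    ie = np.cumsum(e) * (t[1] - t[0])                           # integrated error
    u = 2.0 * e + 0.8 * de + 0.1 * rng.standard_normal(t.size)  # operator output

    def loss(columns):
        # residual sum of squares of the least-squares fit
        X = np.column_stack(columns)
        beta, *_ = np.linalg.lstsq(X, u, rcond=None)
        r = u - X @ beta
        return float(r @ r)

    print({"P": loss([e]), "PI": loss([e, ie]), "PID": loss([e, ie, de])})
    # A large loss reduction from adding the derivative column suggests the
    # tracker uses rate (e.g., motion/vestibular) cues; a negligible reduction
    # from the integral column suggests that cue is not used.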
Modal Parameter Identification of a Flexible Arm System
NASA Technical Reports Server (NTRS)
Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard
1998-01-01
In this paper an experiment is designed for the modal parameter identification of a flexible arm system. This experiment uses a function generator to provide the input signal and an oscilloscope to record the input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of this mode. A least-squares technique is then used to analyze the experimental input/output data and obtain the identified parameters for this mode. The identified results are compared with the analytical model obtained by applying finite element analysis.
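The least-squares step can be sketched as fitting a single-mode magnitude response to measured steady-state amplitudes near resonance (the frequencies, amplitudes, and starting values below are made up; the experiment's actual data reduction may differ):

    import numpy as np
    from scipy.optimize import curve_fit

    def mag(w, a, wn, zeta):
        # single-mode magnitude response:
        # |H| = a / sqrt((wn^2 - w^2)^2 + (2 zeta wn w)^2)
        return a / np.sqrt((wn**2 - w**2)**2 + (2.0 * zeta * wn * w)**2)

    w = np.linspace(8.0, 12.0, 15)   # excitation frequencies [rad/s] (assumed)
    meas = mag(w, 50.0, 10.0, 0.02) * \
        (1.0 + 0.03 * np.random.default_rng(1).standard_normal(w.size))

    (a, wn, zeta), _ = curve_fit(mag, w, meas, p0=[40.0, 9.5, 0.05])
    print(f"identified: wn = {wn:.3f} rad/s, zeta = {zeta:.4f}")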
Certification Testing Methodology for Composite Structure. Volume 2. Methodology Development
1986-10-01
parameter, sample size, and fatigue test duration. The required inputs are: 1. Residual strength Weibull shape parameter (ALPR); 2. Fatigue life Weibull shape parameter (ALPL) ...; 3. Sample size (N); 4. Test duration (T). The interactive FORTRAN prompt sequence reads:

    ... FORMAT(2X,'PLEASE INPUT STRENGTH ALPHA')
        READ(*,*) ALPR
        ALPRI = 1.0/ALPR
        WRITE(*,2)
      2 FORMAT(2X,'PLEASE INPUT LIFE ALPHA')
        READ(*,*) ALPL
        ALPLI = 1.0/ALPL
        WRITE(*,3)
      3 FORMAT(2X,'PLEASE INPUT SAMPLE SIZE')
        READ(*,*) N
        AN = N
        WRITE(*,4)
      4 FORMAT(2X,'PLEASE INPUT TEST DURATION')
        READ(*,*) T
        RALP = ALPL/ALPR
        ARGR = 1 ...
NASA Technical Reports Server (NTRS)
Kanning, G.
1975-01-01
A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.
Sparse Reconstruction of Electric Fields from Radial Magnetic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeates, Anthony R.
2017-02-10
Accurate estimates of the horizontal electric field on the Sun’s visible surface are important not only for estimating the Poynting flux of magnetic energy into the corona but also for driving time-dependent magnetohydrodynamic models of the corona. In this paper, a method is developed for estimating the horizontal electric field from a sequence of radial-component magnetic field maps. This problem of inverting Faraday’s law has no unique solution. Unfortunately, the simplest solution (a divergence-free electric field) is not realistically localized in regions of nonzero magnetic field, as would be expected from Ohm’s law. Our new method generates instead a localized solution, using a basis pursuit algorithm to find a sparse solution for the electric field. The method is shown to perform well on test cases where the input magnetic maps are flux balanced in both Cartesian and spherical geometries. However, we show that if the input maps have a significant imbalance of flux, usually arising from data assimilation, then it is not possible to find a localized, realistic, electric field solution. This is the main obstacle to driving coronal models from time sequences of solar surface magnetic maps.
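Basis pursuit itself reduces to a linear program: minimize ||x||_1 subject to Ax = b by splitting x into positive and negative parts. A generic toy version follows (the random matrix is only a stand-in for the discretized Faraday operator):

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(2)
    m, n = 20, 60
    A = rng.standard_normal((m, n))              # stand-in forward operator
    x_true = np.zeros(n)
    x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]       # sparse "electric field"
    b = A @ x_true

    # x = u - v with u, v >= 0; minimize sum(u) + sum(v) s.t. A(u - v) = b
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    x = res.x[:n] - res.x[n:]
    print(np.flatnonzero(np.abs(x) > 1e-6))      # recovers the sparse support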
NASA Astrophysics Data System (ADS)
Majumder, Himadri; Maity, Kalipada
2018-03-01
Shape memory alloy has a unique capability to return to its original shape after physical deformation by applying heat, thermo-mechanical, or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was used to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF), and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min, and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also attained using Taguchi’s signal-to-noise ratio. A confirmation test was performed to validate the optimum machining parameter combination, which affirmed that DFA is a competent approach for selecting optimum input parameters for the ideal response quality in WEDM of Ni-Ti shape memory alloy.
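The core of DFA is converting each response to a dimensionless desirability and combining them geometrically; a compact sketch with invented response values (cutting speed treated as larger-the-better, kerf width and roughness as smaller-the-better):

    import numpy as np

    # hypothetical responses per run: [cutting speed, kerf width, roughness]
    runs = np.array([[2.1, 0.32, 2.8],
                     [2.6, 0.35, 3.1],
                     [1.8, 0.29, 2.5],
                     [2.4, 0.30, 2.6]])

    def d_larger(y):            # larger-the-better desirability
        return (y - y.min()) / (y.max() - y.min())

    def d_smaller(y):           # smaller-the-better desirability
        return (y.max() - y) / (y.max() - y.min())

    # composite desirability = geometric mean of individual desirabilities
    D = (d_larger(runs[:, 0]) * d_smaller(runs[:, 1])
         * d_smaller(runs[:, 2])) ** (1.0 / 3.0)
    print("best run:", int(np.argmax(D)),
          "composite desirability:", round(float(D.max()), 3))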
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, Jason; Winkler, Jon
Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term: the moisture sorption into the materials. We validated this method with laboratory measurements, which we used to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.
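The whole-house extraction step is essentially a moisture mass balance solved for its one unmeasured term; a stripped-down sketch with invented hourly numbers:

    # hypothetical hourly moisture terms for one house [kg of water]
    generated = 0.80        # injected by the humidification equipment
    infiltration = -0.20    # net vapor exchange with outdoors
    condensate = 0.30       # removed by the air conditioner
    air_storage = 0.05      # change of moisture held in the house air

    # every term measured except sorption, so solve the balance for it
    sorption = generated + infiltration - condensate - air_storage
    print(f"moisture sorbed into materials this hour: {sorption:.2f} kg")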
Ormand, W. E.; Brown, B. A.; Hjorth-Jensen, M.
2017-08-01
We present calculations for the c coefficients of the isobaric mass multiplet equation for nuclei from A = 42 to A = 54 based on input from three realistic nucleon-nucleon interactions. We demonstrate that there is a clear dependence on the short-range charge-symmetry-breaking (CSB) part of the strong interaction and that there is significant disagreement in the CSB part between the commonly used CD-Bonn, chiral effective field theory at next-to-next-to-next-to-leading-order, and Argonne V18 nucleon-nucleon interactions. In addition, we show that all three interactions give a CSB contribution to the c coefficient that is too large when compared to experiment.
Automated forward mechanical modeling of wrinkle ridges on Mars
NASA Astrophysics Data System (ADS)
Nahm, Amanda; Peterson, Samuel
2016-04-01
One of the main goals of the InSight mission to Mars is to understand the internal structure of Mars [1], in part through passive seismology. Understanding the shallow surface structure of the landing site is critical to the robust interpretation of recorded seismic signals. Faults, such as the wrinkle ridges abundant in the proposed landing site in Elysium Planitia, can be used to determine the subsurface structure of the regions they deform. Here, we test a new automated method for modeling the topography of a wrinkle ridge (WR) in Elysium Planitia, allowing faster and more robust determination of subsurface fault geometry for interpretation of the local subsurface structure. We perform forward mechanical modeling of fault-related topography [e.g., 2, 3], utilizing the modeling program Coulomb [4, 5] to model surface displacements induced by blind thrust faulting. Fault lengths are difficult to determine for WR; we initially assume a fault length of 30 km, but also test the effects of different fault lengths on model results. At present, we model the wrinkle ridge as a single blind thrust fault with a constant fault dip, though WR are likely to have more complicated fault geometry [e.g., 6-8]. Typically, the modeling is performed using the Coulomb GUI. This approach can be time-consuming, requiring user inputs to change model parameters and to calculate the associated displacements for each model, which limits the number of models and the parameter space that can be tested. To reduce active user computation time, we have developed a method in which the Coulomb GUI is bypassed. The general modeling procedure remains unchanged, and a set of input files is generated before modeling with ranges of pre-defined parameter values. The displacement calculations are divided into two suites. For Suite 1, a total of 3770 input files were generated in which the fault displacement (D), dip angle (δ), depth to upper fault tip (t), and depth to lower fault tip (B) were varied. A second set of input files (Suite 2) was created after the best-fit model from Suite 1 was determined, in which fault parameters were varied over a smaller range with finer increments, resulting in a total of 28,080 input files. RMS values were calculated for each Coulomb model. RMS values for Suite 1 models were calculated over the entire profile and for a restricted x range; the latter shows a reduced RMS misfit by 1.2 m. The minimum RMS value for Suite 2 models decreases again by 0.2 m, resulting in an overall reduction of the RMS value of ~1.4 m (18%). Models with different fault lengths (15, 30, and 60 km) are visually indistinguishable. Values for δ, t, B, and RMS misfit are either the same or very similar for each best-fit model. These results indicate that the subsurface structure can be reliably determined from forward mechanical modeling even with uncertainty in fault length. Future work will test this method with more realistic WR fault geometry. References: [1] Banerdt et al. (2013), 44th LPSC, #1915. [2] Cohen (1999), Adv. Geophys., 41, 133-231. [3] Schultz and Lin (2001), JGR, 106, 16549-16566. [4] Lin and Stein (2004), JGR, 109, B02303, doi:10.1029/2003JB002607. [5] Toda et al. (2005), JGR, 103, 24543-24565. [6] Okubo and Schultz (2004), GSAB, 116, 597-605. [7] Watters (2004), Icarus, 171, 284-294. [8] Schultz (2000), JGR, 105, 12035-12052.
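The batch workflow amounts to a brute-force grid search over pre-generated parameter sets, scoring each forward model by RMS misfit against the observed profile. A toy version follows; the Gaussian "forward model" is only a stand-in for the Coulomb dislocation calculation, and all parameter values are illustrative:

    import itertools
    import numpy as np

    x = np.linspace(-20e3, 20e3, 200)            # profile coordinate [m]

    def forward(D, dip, t, B):
        # stand-in for Coulomb: ridge height ~ slip, width ~ fault-tip depths
        width = (B - t) / np.tan(np.radians(dip))
        return D * np.exp(-(x / width) ** 2)

    observed = forward(80.0, 30.0, 500.0, 4000.0) \
        + 2.0 * np.random.default_rng(3).standard_normal(x.size)

    grid = itertools.product([60.0, 80.0, 100.0],        # displacement D [m]
                             [20.0, 30.0, 40.0],         # dip angle [deg]
                             [250.0, 500.0, 1000.0],     # upper tip depth t [m]
                             [2000.0, 4000.0, 8000.0])   # lower tip depth B [m]
    rms = lambda p: float(np.sqrt(np.mean((forward(*p) - observed) ** 2)))
    best = min(grid, key=rms)
    print("best fit (D, dip, t, B):", best, "RMS:", round(rms(best), 2), "m")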
NASA Astrophysics Data System (ADS)
Baig, Anisullah; Gamzina, Diana; Barchfeld, Robert; Domier, Calvin; Barnett, Larry R.; Luhmann, Neville C.
2012-09-01
In this paper, we describe the micro-fabrication, RF measurements, and particle-in-cell (PIC) simulation modeling analysis of the 0.22 THz double-vane half-period staggered traveling wave tube amplifier (TWTA) circuit. The TWTA slow-wave structure comprises two sections separated by two sever ports loaded with loss material, with integrated broadband input/output couplers. The micro-metallic structures were fabricated using nano-CNC milling and diffusion bonded in a three-layer process. The 3D optical microscopy and SEM analysis showed that the fabrication error was within 2-3 μm and the surface roughness was measured within 30-50 nm. The RF measurements were conducted with an Agilent PNA-X network analyzer employing WR5.1 T/R modules with a frequency range of 178-228 GHz. The in-band insertion loss (S21) for both the short section and the long section (separated by a sever) was measured as ~-5 dB while the return loss was generally around ~-15 dB or better. The measurements matched well with the S-matrix simulation analysis, which predicted a 3 dB bandwidth of ~45 GHz with an operating frequency at 220 GHz. However, the measured S21 was ~3 dB less than the design values, which is attributed to surface roughness and alignment issues. Confirmation measurements were conducted over the full frequency band up to 270 GHz using a backward wave oscillator (BWO) scalar network analyzer setup employing a BWO in the frequency range 190 GHz-270 GHz. PIC simulations were conducted for realistic TWT output power performance analysis with incorporation of a corner radius of 127 μm, which is inevitably induced by nano-machining. Furthermore, the S21 value in both sections of the TWT structure was reduced to correspond to the measurements by using a degraded conductivity of 10% International Annealed Copper Standard. At 220 GHz, for an elliptic sheet electron beam of 20 kV and 0.25 A, the average output power of the tube was predicted to be reduced from 90 W (for ideal conductivity/design S-parameters) to 70 W (for the measured S-parameters/inferred conductivity) for an average input power of 50 mW. The gain of the tube remains reasonable: ~31.4 dB with an electronic efficiency of ~1.4%. The same analysis was also conducted for several frequencies between 190 GHz and 260 GHz. This detailed realistic PIC analysis demonstrated that this nano-machined TWT circuit has slightly reduced S-parameters and output power from design, but within an acceptable range, and still has promising output power, gain, and bandwidth. Thus, we expect to meet the specification of 1000 W-GHz for the DARPA program goals.
Case studies in Bayesian microbial risk assessments.
Kennedy, Marc C; Clough, Helen E; Turner, Joanne
2009-12-21
The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in 2 stages, first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). We estimated that the expected total number of children aged 1.5-4.5 who become ill due to VTEC O157 in milk is 8.6 per year, with 95% uncertainty interval (0,11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with 95% interval (0,11). In the second case study the effective number of inputs was reduced from 30 to 7 in the screening stage, and just 2 inputs were found to explain 82.8% of the output variance. A combined total of 500 runs of the computer code were used. These case studies illustrate the use of Bayesian statistics to perform detailed uncertainty and sensitivity analyses, integrating multiple information sources in a way that is both rigorous and efficient.
Crevillén-García, D
2018-04-01
Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
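One common way to realize this idea is principal-component reduction on both the input and output spaces with a Gaussian process per retained output component; the sketch below uses scikit-learn and synthetic data (the paper's specific reduction technique may differ):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(4)
    X = rng.uniform(size=(80, 50))               # high-dimensional inputs
    Y = np.sin(3.0 * X[:, :1]) * np.linspace(1.0, 2.0, 200) \
        + 0.01 * rng.standard_normal((80, 200))  # spatial-field outputs

    in_pca = PCA(n_components=5).fit(X)          # reduce input dimension
    out_pca = PCA(n_components=3).fit(Y)         # reduce output dimension
    Z, W = in_pca.transform(X), out_pca.transform(Y)

    gps = [GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(5))).fit(Z, W[:, k])
           for k in range(W.shape[1])]

    def emulate(x_new):
        # predict reduced coefficients, then map back to the spatial field
        z = in_pca.transform(x_new)
        w = np.column_stack([gp.predict(z) for gp in gps])
        return out_pca.inverse_transform(w)

    print(emulate(X[:2]).shape)                  # -> (2, 200)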
Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J
2011-09-01
When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters, and the model, are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove that a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate the application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
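A toy version of the Gröbner-basis idea, much simpler than the paper's algorithm: if the coefficients estimable from input-output data are c1 = a + b*c and c2 = b*c, eliminating the raw parameters exposes which combinations the data actually determine:

    import sympy as sp

    a, b, c = sp.symbols('a b c', positive=True)
    c1, c2 = sp.symbols('c1 c2')    # coefficients recoverable from I/O data

    # Groebner basis of the coefficient relations with a lex order that
    # eliminates b and c first
    G = sp.groebner([c1 - (a + b * c), c2 - b * c],
                    b, c, a, c1, c2, order='lex')
    print(G.exprs)   # e.g. [b*c - c2, a - c1 + c2]: a and the product b*c
                     # are identifiable, but b and c are not separately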
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that also preserve the corresponding physics being simulated in the model. In order to effectively simulate real-world processes, the model's output data must be close to the observed measurements. To achieve this optimal simulation, input parameters are tuned until we have minimized the objective function, which is the error between the simulation model outputs and the observed measurements. We developed an auxiliary package, which serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for the heat flow model, which is commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with one minimum. Otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence in order to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters within 2% accuracy of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox could be used to perform analysis and optimization on a `black box' scientific model more efficiently than using Dakota alone.
Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...
2014-12-31
Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and the monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the percent response of an output, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has a larger impact on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We propose a local sensitivity coefficient method and apply it to the FutureGen 2.0 Site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs, and 3/4 of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbour layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of the composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
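In its simplest form the LSC is a one-sided finite difference in relative terms; a sketch with a toy closed-form "simulator" (the study itself uses full reservoir simulations, and the parameter names here are invented):

    import numpy as np

    def model(p):
        # toy stand-in: injectivity ~ permeability * thickness / viscosity
        perm, visc, thick = p
        return perm * thick / visc

    def lsc(p0, j, rel_step=0.01):
        # percent response of the output per percent change in input j
        p = np.array(p0, float)
        y0 = model(p)
        p[j] *= 1.0 + rel_step
        return (model(p) - y0) / y0 / rel_step

    p0 = [100.0, 0.5, 30.0]
    print([round(lsc(p0, j), 3) for j in range(3)])
    # composite sensitivity of a subset of inputs = sum of their individual LSCs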
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.
2017-09-01
A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach jointly estimates the unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating the unknown FE model parameters and unknown input excitations.
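The sigma-point machinery at the heart of the unscented Kalman filter can be sketched compactly; below, a generic unscented transform is applied to an augmented vector holding a structural state, a model parameter, and an input-excitation sample (toy dynamics and tuning constants, not the paper's FE models):

    import numpy as np

    def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
        # scaled sigma points and their mean/covariance weights
        n = mean.size
        lam = alpha**2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * cov)
        pts = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 points
        wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = wm[0] + 1.0 - alpha**2 + beta
        return pts, wm, wc

    def unscented_transform(f, mean, cov):
        # propagate sigma points through f, no sensitivity computations needed
        pts, wm, wc = sigma_points(mean, cov)
        Y = np.array([f(p) for p in pts])
        y = wm @ Y                                        # propagated mean
        P = (Y - y).T @ np.diag(wc) @ (Y - y)             # propagated covariance
        return y, P

    m = np.array([0.0, 1.0, 0.0])       # [state, parameter, input sample]
    P = np.diag([0.1, 0.5, 1.0])
    f = lambda z: np.array([z[0] + 0.01 * (z[2] - z[1] * z[0]), z[1], z[2]])
    print(unscented_transform(f, m, P))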
ERIC Educational Resources Information Center
Matthews-Lopez, Joy L.; Hombo, Catherine M.
The purpose of this study was to examine the recovery of item parameters in simulated Automatic Item Generation (AIG) conditions, using Markov chain Monte Carlo (MCMC) estimation methods to attempt to recover the generating distributions. To do this, variability in item and ability parameters was manipulated. Realistic AIG conditions were…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Hove, W.; Van Laeken, K.; Bartsoen, L.
1995-09-01
To enable a more realistic and accurate calculation of the radiological consequences of a SGTR, a fission product transport model was developed. As the radiological releases strongly depend on the thermal-hydraulic transient, the model was included in the RELAP5 input decks of the Belgian NPPs. This enables the coupled calculation of the thermal-hydraulic transient and the radiological release. The fission product transport model tracks the concentration of the fission products in the primary circuit, in each of the SGs, as well as in the condenser. This leads to a system of 6 coupled, first order ordinary differential equations with time-dependent coefficients. Flashing, scrubbing, atomisation and dry out of the break flow are accounted for. Coupling with the thermal-hydraulic calculation and correct modelling of the break position enables an accurate calculation of the mixture level above the break. Pre- and post-accident spiking in the primary circuit are introduced. The transport times in the FW-system and the SG blowdown system are also taken into account, as is the decontaminating effect of the primary make-up system and of the SG blowdown system. Physical input parameters such as the partition coefficients, half life times and spiking coefficients are explicitly introduced so that the same model can be used for iodine, caesium and noble gases.
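Structurally, such a transport model is a small ODE system integrated alongside the thermal-hydraulics; a reduced three-compartment sketch with invented, time-dependent coefficients (the actual model tracks six compartments with flashing, scrubbing, atomisation, and dry-out logic):

    import numpy as np
    from scipy.integrate import solve_ivp

    def rates(t, c):
        # c = [primary circuit, faulted SG, condenser] activity inventories
        k_break = 1e-4 * (1.0 + 0.5 * np.sin(t / 600.0))  # break flow varies in time
        flashed = 0.3                                     # fraction carried onward
        k_makeup = 5e-5                                   # cleanup by make-up system
        d_prim = -(k_break + k_makeup) * c[0]
        d_sg = k_break * c[0] * (1.0 - flashed)
        d_cond = k_break * c[0] * flashed
        return [d_prim, d_sg, d_cond]

    sol = solve_ivp(rates, (0.0, 7200.0), [1.0, 0.0, 0.0], max_step=10.0)
    print(sol.y[:, -1])   # end-of-transient inventory per compartment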
Ascarrunz, F G; Kisley, M A; Flach, K A; Hamilton, R W; MacGregor, R J
1995-07-01
This paper applies a general mathematical system for characterizing and scaling functional connectivity and information flow across the diffuse (EC) and discrete (DG) input junctions to the CA3 hippocampus. Both gross connectivity and coordinated multiunit informational firing patterns are quantitatively characterized in terms of 32 defining parameters interrelated by 17 equations, and then scaled down according to rules for uniformly proportional scaling and for partial representation. The diffuse EC-CA3 junction is shown to be uniformly scalable with realistic representation of both essential spatiotemporal cooperativity and coordinated firing patterns down to populations of a few hundred neurons. Scaling of the discrete DG-CA3 junction can be effected with a two-step process, which necessarily deviates from uniform proportionality but nonetheless produces a valuable and readily interpretable reduced model, also utilizing a few hundred neurons in the receiving population. Partial representation produces a reduced model of only a portion of the full network where each model neuron corresponds directly to a biological neuron. The mathematical analysis illustrated here shows that although omissions and distortions are inescapable in such an application, satisfactorily complete and accurate models the size of pattern modules are possible. Finally, the mathematical characterization of these junctions generates a theory which sees the DG as a definer of the fine structure of embedded traces in the hippocampus and entire coordinated patterns of sequences of 14-cell links in CA3 as triggered by the firing of sequences of individual neurons in DG.
NASA Astrophysics Data System (ADS)
Glassmeier, F.; Arnold, L.; Lohmann, U.; Dietlicher, R.; Paukert, M.
2016-12-01
Our current understanding of charge generation in thunderclouds is based on collisional charge transfer between graupel and ice crystals in the presence of liquid water droplets as dominant mechanism. The physical process of charge transfer and the sign of net charge generated on graupel and ice crystals under different cloud conditions is not yet understood. The Relative-Diffusional-Growth-Rate (RDGR) theory (Baker et al. 1987) suggests that the particle with the faster diffusional radius growth is charged positively. In this contribution, we use simulations of idealized thunderclouds with two-moment warm and cold cloud microphysics to generate realistic combinations of RDGR-parameters. We find that these realistic parameter combinations result in a relationship between sign of charge, cloud temperature and effective water content that deviates from previous theoretical and laboratory studies. This deviation indicates that the RDGR theory is sensitive to correlations between parameters that occur in clouds but are not captured in studies that vary temperature and water content while keeping other parameters at fixed values. In addition, our results suggest that diffusional growth from the riming-related local water vapor field, a key component of the RDGR theory, is negligible for realistic parameter combinations. Nevertheless, we confirm that the RDGR theory results in positive or negative charging of particles under different cloud conditions. Under specific conditions, charge generation via the RDGR theory alone might thus be sufficient to explain tripolar charge structures in thunderclouds. In general, however, additional charge generation mechanisms and adaptations to the RDGR theory that consider riming other than via local vapor deposition seem necessary.
2016 Standard Scenarios Report: A U.S. Electricity Sector Outlook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, Wesley; Mai, Trieu; Logan, Jeffrey
The National Renewable Energy Laboratory is conducting a study sponsored by the Office of Energy Efficiency and Renewable Energy (EERE) that aims to document and implement an annual process designed to identify a realistic and timely set of input assumptions (e.g., technology cost and performance, fuel costs), and a diverse set of potential futures (standard scenarios), initially for electric sector analysis.
NASA Technical Reports Server (NTRS)
Glasser, M. E.; Rundel, R. D.
1978-01-01
A method for formulating these changes into the model input parameters using a preprocessor program run on a programmed data processor was implemented. The results indicate that any changes in the input parameters are small enough to be negligible in comparison to meteorological inputs and the limitations of the model, and that such changes will not substantially increase the number of meteorological cases for which the model will predict surface hydrogen chloride concentrations exceeding public safety levels.
NASA Technical Reports Server (NTRS)
Szallasi, Zoltan; Liang, Shoudan
2000-01-01
In this paper we show how Boolean genetic networks could be used to address complex problems in cancer biology. First, we describe a general strategy to generate Boolean genetic networks that incorporate all relevant biochemical and physiological parameters and cover all of their regulatory interactions in a deterministic manner. Second, we introduce 'realistic Boolean genetic networks' that produce time series measurements very similar to those detected in actual biological systems. Third, we outline a series of essential questions related to cancer biology and cancer therapy that could be addressed by the use of 'realistic Boolean genetic network' modeling.
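A deterministic synchronous Boolean network of the kind described is only a few lines of code; here is a generic random instance (the wiring and update rules are drawn at random, purely for illustration):

    import numpy as np

    rng = np.random.default_rng(5)
    N, K = 12, 2                                  # genes, regulatory inputs per gene
    wiring = rng.integers(0, N, size=(N, K))      # which genes feed each gene
    tables = rng.integers(0, 2, size=(N, 2**K))   # random Boolean update rules

    def step(state):
        # index each gene's truth table by the states of its regulators
        idx = (state[wiring] * (2 ** np.arange(K))).sum(axis=1)
        return tables[np.arange(N), idx]

    state = rng.integers(0, 2, size=N)
    trajectory = [state]
    for _ in range(20):                           # deterministic synchronous updates
        trajectory.append(step(trajectory[-1]))
    print(np.array(trajectory))                   # rows = successive network states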
Effects of damping on mode shapes, volume 1
NASA Technical Reports Server (NTRS)
Gates, R. M.
1977-01-01
Displacement, velocity, and acceleration admittances were calculated for a realistic NASTRAN structural model of the space shuttle for three conditions: liftoff, maximum dynamic pressure, and end of solid rocket booster burn. The realistic model of the orbiter, external tank, and solid rocket motors included the representation of structural joint transmissibilities by finite stiffness and damping elements. Methods were developed to incorporate structural joints and their damping characteristics into a finite element model of the space shuttle, to determine the point damping parameters required to produce realistic damping in the primary modes, and to calculate the effect of distributed damping on structural resonances through the calculation of admittances.
Siebert, Tobias; Leichsenring, Kay; Rode, Christian; Wick, Carolin; Stutzig, Norman; Schubert, Harald; Blickhan, Reinhard; Böl, Markus
2015-01-01
The vastly increasing number of neuro-muscular simulation studies (with increasing numbers of muscles used per simulation) is in sharp contrast to a narrow database of necessary muscle parameters. Simulation results depend heavily on rough parameter estimates often obtained by scaling of one muscle parameter set. However, in vivo muscles differ in their individual properties and architecture. Here we provide a comprehensive dataset of dynamic (n = 6 per muscle) and geometric (three-dimensional architecture, n = 3 per muscle) muscle properties of the rabbit calf muscles gastrocnemius, plantaris, and soleus. For completeness we provide the dynamic muscle properties for further important shank muscles (flexor digitorum longus, extensor digitorum longus, and tibialis anterior; n = 1 per muscle). Maximum shortening velocity (normalized to optimal fiber length) of the gastrocnemius is about twice that of soleus, while plantaris showed an intermediate value. The force-velocity relation is similar for gastrocnemius and plantaris but is much more bent for the soleus. Although the muscles vary greatly in their three-dimensional architecture their mean pennation angle and normalized force-length relationships are almost similar. Forces of the muscles were enhanced in the isometric phase following stretching and were depressed following shortening compared to the corresponding isometric forces. While the enhancement was independent of the ramp velocity, the depression was inversely related to the ramp velocity. The lowest effect strength for soleus supports the idea that these effects adapt to muscle function. The careful acquisition of typical dynamical parameters (e.g. force-length and force-velocity relations, force elongation relations of passive components), enhancement and depression effects, and 3D muscle architecture of calf muscles provides valuable comprehensive datasets for e.g. simulations with neuro-muscular models, development of more realistic muscle models, or simulation of muscle packages. PMID:26114955
NASA Astrophysics Data System (ADS)
Skaugen, Thomas; Mengistu, Zelalem
2016-12-01
In this study, we propose a new formulation of subsurface water storage dynamics for use in rainfall-runoff models. Under the assumption of a strong relationship between storage and runoff, the temporal distribution of catchment-scale storage is considered to have the same shape as the distribution of observed recessions (measured as the difference between the log of runoff values). The mean subsurface storage is estimated as the storage at steady state, where moisture input equals the mean annual runoff. An important contribution of the new formulation is that its parameters are derived directly from observed recession data and the mean annual runoff. The parameters are hence estimated prior to model calibration against runoff. The new storage routine is implemented in the parameter parsimonious distance distribution dynamics (DDD) model and has been tested for 73 catchments in Norway of varying size, mean elevation and landscape type. Runoff simulations for the 73 catchments from two model structures (DDD with calibrated subsurface storage and DDD with the new estimated subsurface storage) were compared. Little loss in precision of runoff simulations was found using the new estimated storage routine. For the 73 catchments, an average of the Nash-Sutcliffe efficiency criterion of 0.73 was obtained using the new estimated storage routine compared with 0.75 using calibrated storage routine. The average Kling-Gupta efficiency criterion was 0.80 and 0.81 for the new and old storage routine, respectively. Runoff recessions are more realistically modelled using the new approach since the root mean square error between the mean of observed and simulated recession characteristics was reduced by almost 50 % using the new storage routine. The parameters of the proposed storage routine are found to be significantly correlated to catchment characteristics, which is potentially useful for predictions in ungauged basins.
Self-organized criticality occurs in non-conservative neuronal networks during `up' states
NASA Astrophysics Data System (ADS)
Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst
2010-10-01
During sleep, under anaesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between so-called up and down states, which are characterized by distinct membrane potentials and spike rates. Another phenomenon observed in preparations similar to those that exhibit up and down states-such as anaesthetized rats, brain slices and cultures devoid of sensory input, as well as awake monkey cortex-is self-organized criticality (SOC). SOC is characterized by activity `avalanches' with a branching parameter near unity and size distribution that obeys a power law with a critical exponent of about -3/2. Recent work has demonstrated SOC in conservative neuronal network models, but critical behaviour breaks down when biologically realistic `leaky' neurons are introduced. Here, we report robust SOC behaviour in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have two stable activity levels, corresponding to up and down states, that the networks switch spontaneously between these states and that up states are critical and down states are subcritical.
Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard
2014-01-01
Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to the order of a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes the process challenging and expensive, rendering building energy modeling unfeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can run in a fraction of the time, thereby allowing cost-effective calibration of building models.
Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and a comparison of Paracousti-UQ results with MC simulations using simple models.
NASA Technical Reports Server (NTRS)
Sun, S. F.
1985-01-01
The Ground Hydrologic Model (GHM) developed for use in an atmospheric general circulation model (GCM) has been refined. A series of sensitivity studies of the new version of the GHM was conducted for the purpose of understanding the role played by various physical parameters in the GHM. The following refinements have been made: (1) the GHM is coupled directly with the planetary boundary layer (PBL); (2) a bulk vegetation layer is added with a more realistic large-scale parameterization; and (3) the infiltration rate is modified. This version of the GHM has been tested using input data derived from a GCM simulation run for eight North American regions for 45 days. The results are compared with those of the resident GHM in the GCM. The daily averages of grid surface temperatures from both models agree reasonably well in phase and magnitude. However, large differences exist in one or two regions on some days. The daily average evapotranspiration is in general 10 to 30% less than the corresponding value given by the resident GHM.
Sensitivities of seismic velocities to temperature, pressure and composition in the lower mantle
NASA Astrophysics Data System (ADS)
Trampert, Jeannot; Vacher, Pierre; Vlaar, Nico
2001-08-01
We calculated temperature, pressure and compositional sensitivities of seismic velocities in the lower mantle using the latest mineral physics data. The compositional variable refers to the volume proportion of perovskite in a simplified perovskite-magnesiowüstite mantle assemblage. The novelty of our approach is the exploration of a reasonable range of the input parameters which enter the lower mantle extrapolations. This leads to realistic error bars on the sensitivities. Temperature variations can be inferred throughout the lower mantle with a good degree of precision. Contrary to the uppermost mantle, modest compositional changes in the lower mantle can be detected by seismic tomography, albeit with larger uncertainty. A likely trade-off between temperature and composition will be largely determined by uncertainties in tomography itself. Given current sources of uncertainty in recent data, anelastic contributions to the temperature sensitivities (calculated using Karato's approach) appear less significant than previously thought. Recent seismological determinations of the ratio of relative S to P velocity heterogeneity can be entirely explained by thermal effects, although isolated spots beneath Africa and the Central Pacific in the lowermost mantle may call for a compositional origin.
NASA Technical Reports Server (NTRS)
Dulikravich, D. S.
1982-01-01
A fast computer program, GRID3C, was developed to generate multilevel three dimensional, C type, periodic, boundary conforming grids for the calculation of realistic turbomachinery and propeller flow fields. The technique is based on two analytic functions that conformally map a cascade of semi-infinite slits to a cascade of doubly infinite strips on different Riemann sheets. Up to four consecutively refined three dimensional grids are automatically generated and permanently stored on four different computer tapes. Grid nonorthogonality is introduced by a separate coordinate shearing and stretching performed in each of three coordinate directions. The grids are easily clustered closer to the blade surface, the trailing and leading edges and the hub or shroud regions by changing appropriate input parameters. Hub and duct (or outer free boundary) have different axisymmetric shapes. A vortex sheet of arbitrary thickness emanating smoothly from the blade trailing edge is generated automatically by GRID3C. Blade cross sectional shape, chord length, twist angle, sweep angle, and dihedral angle can vary in an arbitrary smooth fashion in the spanwise direction.
Self-organized criticality occurs in non-conservative neuronal networks during Up states
Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst
2010-01-01
During sleep, under anesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between Up and Down states (UDS) characterized by distinct membrane potentials and spike rates [1, 2, 3, 4, 5]. Another phenomenon observed in preparations similar to those that exhibit UDS, such as anesthetized rats [6], brain slices and cultures devoid of sensory input [7], as well as awake monkey cortex [8], is self-organized criticality (SOC). This is characterized by activity “avalanches” whose size distributions obey a power law with critical exponent of about −3/2 and a branching parameter near unity. Recent work has demonstrated SOC in conservative neuronal network models [9, 10]; however, critical behavior breaks down when biologically realistic non-conservatism is introduced [9]. Here we report robust SOC behavior in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have two stable activity levels corresponding to Up and Down states, that the networks switch spontaneously between them, and that Up states are critical and Down states are subcritical. PMID:21804861
Lamp pumped Nd:YAG laser. Space-qualifiable Nd:YAG laser for optical communications
NASA Technical Reports Server (NTRS)
Ward, K. B.
1973-01-01
Results are given of a program concerned with the design, fabrication, and evaluation of alkali pump lamps for eventual use in a space-qualified Nd:YAG laser system. The study included evaluation of 2 mm through 6 mm bore devices. Primary emphasis was placed upon the optimization of the 4 mm bore lamp and later on the 6 mm bore lamp. As part of this effort, reference was made to the Sylvania work concerned with the theoretical modeling of the Nd:YAG laser. With the knowledge gained, a projection of laser performance was made based upon realistic lamp parameters which should easily be achieved during subsequent development efforts. Measurements were made on the lamp performance both in and out of the cavity configuration. One significant observation was that for a constant vapor pressure device, the spectral and fluorescent output did not vary between vacuum and argon environments. Therefore, the laser can be operated in an inert environment (e.g., argon) with no degradation in output. Laser output of 3.26 watts at 430 watts input was obtained for an optimized 4 mm bore lamp.
NASA Astrophysics Data System (ADS)
Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.
2015-03-01
Model observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters, and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system and the Hotelling observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.
Extension of the PC version of VEPFIT with input and output routines running under Windows
NASA Astrophysics Data System (ADS)
Schut, H.; van Veen, A.
1995-01-01
The fitting program VEPFIT has been extended with applications running under the Microsoft Windows environment, facilitating the input and output of the VEPFIT fitting module. We have exploited the Microsoft Windows graphical user interface by making use of dialog windows, scrollbars, command buttons, etc. The user communicates with the program simply by clicking and dragging with the mouse pointing device. Keyboard actions are limited to a minimum. Upon changing one or more input parameters, the results of the modeling of the S-parameter and Ps fractions versus positron implantation energy are updated and displayed. This action can be considered the first step in the fitting procedure, upon which the user can decide to further adapt the input parameters or to forward these parameters as initial values to the fitting routine. The modeling step has proven to be helpful for designing positron beam experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
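A compact illustration of the variance-based sensitivity idea, using the linear-quadratic effect αd + βd² with assumed (hypothetical) input distributions and a simple binning estimator of S = Var(E[Y|X]) / Var(Y); the study's actual implementation and parameter values are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Assumed uncertainties on the biological inputs (illustrative only).
alpha = rng.normal(0.10, 0.03, n)   # alpha (1/Gy), ~30% rel. uncertainty
beta  = rng.normal(0.05, 0.01, n)   # beta (1/Gy^2), ~20% rel. uncertainty
dose  = rng.normal(2.00, 0.10, n)   # dose per fraction (Gy)

effect = alpha * dose + beta * dose**2  # linear-quadratic effect

def first_order_sensitivity(x, y, bins=50):
    """Estimate S = Var(E[Y|X]) / Var(Y) by quantile-binning on X."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    weights = np.bincount(idx, minlength=bins) / len(y)
    return np.sum(weights * (cond_means - y.mean())**2) / y.var()

for name, x in [("alpha", alpha), ("beta", beta), ("dose", dose)]:
    print(name, round(first_order_sensitivity(x, effect), 3))
```

Values near 1 mark the input whose uncertainty reduction is most rewarding, mirroring the ranking role of S in the abstract.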
Shift-variant linear system modeling for multispectral scanners
NASA Astrophysics Data System (ADS)
Amini, Abolfazl M.; Ioup, George E.; Ioup, Juliette W.
1995-07-01
Multispectral scanner data are affected both by the spatial impulse response of the sensor and by the spectral response of each channel. To achieve a realistic representation of the output data for a given scene spectral input, both of these effects must be incorporated into a forward model. Each channel can have a different spatial response, and each has its characteristic spectral response. A forward model is built which includes the shift-invariant spatial broadening of the input for the channels and the shift-variant spectral response across channels. The model is applied to the calibrated airborne multispectral scanner as well as the airborne terrestrial applications sensor developed at NASA Stennis Space Center.
Control design methods for floating wind turbines for optimal disturbance rejection
NASA Astrophysics Data System (ADS)
Lemmer, Frank; Schlipf, David; Cheng, Po Wen
2016-09-01
An analysis of the floating wind turbine as a multi-input-multi-output system is presented, investigating the effect of the control inputs on the system outputs. These effects are compared to those of the disturbances from wind and waves in order to give insights for the selection of the control layout. The frequencies with the largest impact on the outputs, due to the limited effect of the controlled variables, are identified. Finally, an optimal controller is designed as a benchmark and compared to a conventional PI controller using only the rotor speed as input. Here, the previously found system properties, especially the difficulty of damping responses to wave excitation, are confirmed and verified through a spectral analysis with realistic environmental conditions. This comparison also assesses the quality of the employed simplified linear simulation model relative to the nonlinear model and shows that such an efficient frequency-domain evaluation for control design is feasible.
Aircraft Hydraulic Systems Dynamic Analysis Component Data Handbook
1980-04-01
[Table-of-contents and figure-list fragment: sections on the Quincke tube and heat exchanger; figures listing Quincke tube input parameters with hole locations, prototype Quincke tube data, and HSFR input data for a PULSCO-type acoustic filter. The Quincke tube is described as a means to dampen acoustic noise at resonance.]
Analysis and selection of optimal function implementations in massively parallel computer
Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN
2011-05-31
An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
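A small sketch of the benchmark-then-dispatch pattern the patent describes, assuming the competing implementations are interchangeable Python callables and using problem size as the single input dimension; the function names are made up for illustration:

```python
import time
import numpy as np

def impl_sort_builtin(x):
    return sorted(x)

def impl_sort_numpy(x):
    return np.sort(x).tolist()

IMPLEMENTATIONS = [impl_sort_builtin, impl_sort_numpy]

def benchmark(sizes, trials=5):
    """Collect performance data for each implementation across one
    input dimension (problem size); keep the winner per size."""
    table = {}
    for n in sizes:
        data = list(np.random.default_rng(0).random(n))
        best, best_t = None, float("inf")
        for impl in IMPLEMENTATIONS:
            t0 = time.perf_counter()
            for _ in range(trials):
                impl(data)
            t = (time.perf_counter() - t0) / trials
            if t < best_t:
                best, best_t = impl, t
        table[n] = best
    return table

table = benchmark([100, 10_000, 1_000_000])

def dispatch_sort(x):
    """Generated 'selection code': call the implementation that won the
    benchmark for the nearest measured problem size."""
    n = min(table, key=lambda k: abs(k - len(x)))
    return table[n](x)

print(dispatch_sort([3.0, 1.0, 2.0]))
```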
Suggestions for CAP-TSD mesh and time-step input parameters
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1991-01-01
Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.
Wind effect on salt transport variability in the Bay of Bengal
NASA Astrophysics Data System (ADS)
Sandeep, K. K.; Pant, V.
2017-12-01
The Bay of Bengal (BoB) exhibits large spatial variability in its sea surface salinity (SSS) pattern, caused by its unique hydrological, meteorological and oceanographic characteristics. This SSS variability is largely controlled by the seasonally reversing monsoon winds and the associated currents. Further, the BoB receives substantial freshwater inputs through excess precipitation over evaporation and river discharge. Rivers such as the Ganges, Brahmaputra, Mahanadi, Krishna, Godavari, and Irrawaddy annually discharge a freshwater volume of between 1.5 × 10^12 and 1.83 × 10^13 m³ into the bay. A major volume of this freshwater input to the bay occurs during the southwest monsoon (June-September) period. In the present study, the relative role of winds in the SSS variability of the bay is investigated using an eddy-resolving three-dimensional Regional Ocean Modeling System (ROMS) numerical model. The model is configured with realistic bathymetry and coastline of the study region and forced with a daily climatology of atmospheric variables. River discharges from the major rivers are distributed at the model grid points representing their respective geographic locations. Salt transport estimates from the model simulation for the realistic case are compared with standard reference datasets. Further, different experiments were carried out with idealized surface wind forcing representing normal, low, high, and very high wind speed conditions in the bay, while retaining the realistic daily varying directions in all cases. The experimental simulations exhibit distinct dispersal patterns of the freshwater plume and SSS in response to the idealized winds. Comparison of the meridional and zonal surface salt transport estimated for each experiment showed strong seasonality with varying magnitude in the bay, with maximum spatial and temporal variability in the western and northern parts of the BoB.
Unsteady hovering wake parameters identified from dynamic model tests, part 1
NASA Technical Reports Server (NTRS)
Hohenemser, K. H.; Crews, S. T.
1977-01-01
The development of a 4-bladed model rotor is reported that can be excited with a simple eccentric mechanism in progressing and regressing modes with either harmonic or transient inputs. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch stirring transient inputs, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting the perturbation models from transients are demonstrated through these combined analytical and experimental studies.
Bernard, Pierre-Yves; Benoît, Marc; Roger-Estrade, Jean; Plantureux, Sylvain
2016-12-01
The objectives of this comparison of two biophysical models of nitrogen losses were to evaluate, first, whether the results were similar and, second, whether both were equally practical for use by non-scientist users. Results were obtained with the crop model STICS and the environmental model AGRIFLUX, based on nitrogen loss simulations across a small groundwater catchment area (<1 km²) located in the Lorraine region in France. Both models simulate the influences of leaching and cropping systems on nitrogen losses in a relevant manner. The authors conclude that limiting the simulations to areas where soils with a greater risk of leaching cover a significant spatial extent would likely yield acceptable results, because those soils have more predictable leaching of nitrogen. In addition, the choice of an environmental model such as AGRIFLUX, which requires fewer parameters and input variables, seems more user-friendly for agro-environmental assessment. The authors then discuss additional challenges for non-scientists, such as the lack of parameter optimization, which is essential for accurately assessing nitrogen fluxes and, indirectly, for not limiting the diversity of uses of the simulated results. Despite current restrictions, with some improvement, biophysical models could become useful environmental assessment tools for non-scientists. Copyright © 2016 Elsevier Ltd. All rights reserved.
Computational Difficulties in the Identification and Optimization of Control Systems.
1980-01-01
As more realistic models for resource management are developed, the need for efficient computational techniques for parameter identification and optimization (optimal control) in "state" models increases. This research was supported in part by the National Science Foundation under grant NSF-MCS 79-05774.
The Electrostatic Instability for Realistic Pair Distributions in Blazar/EBL Cascades
NASA Astrophysics Data System (ADS)
Vafin, S.; Rafighi, I.; Pohl, M.; Niemiec, J.
2018-04-01
This work revisits the electrostatic instability for blazar-induced pair beams propagating through the intergalactic medium (IGM) using linear analysis and PIC simulations. We study the impact of the realistic distribution function of pairs resulting from the interaction of high-energy gamma-rays with the extragalactic background light. We present analytical and numerical calculations of the linear growth rate of the instability for arbitrary orientation of the wave vectors. Our results explicitly demonstrate that the finite angular spread of the beam dramatically affects the growth rate of the waves, leading to the fastest growth for wave vectors quasi-parallel to the beam direction and a growth rate in oblique directions that is only a factor of 2–4 smaller than the maximum. To study the nonlinear beam relaxation, we performed PIC simulations that take into account a realistic wide-energy distribution of beam particles. The parameters of the simulated beam-plasma system provide an adequate physical picture that can be extrapolated to realistic blazar-induced pairs. In our simulations, the beam loses only 1% of its energy, and we analytically estimate that the beam would lose its total energy over about 100 simulation times. An analytical scaling is then used to extrapolate the parameters of realistic blazar-induced pair beams. We find that they can dissipate their energy slightly faster through the electrostatic instability than through inverse-Compton scattering. The uncertainties arising from, e.g., details of the primary gamma-ray spectrum are too large to make firm statements for individual blazars, and an analysis based on their specific properties is required.
Sensitivity analysis and nonlinearity assessment of steam cracking furnace process
NASA Astrophysics Data System (ADS)
Rosli, M. N.; Sudibyo, Aziz, N.
2017-11-01
In this paper, a sensitivity analysis and nonlinearity assessment of the steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables, and to identify the interactions between parameters. The result of the factorial design method is used as a screening step to reduce the number of parameters and, subsequently, the complexity of the model. It shows that, out of six input parameters, four are significant. After the screening is completed, a step test is performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and the feed composition.
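A sketch of two-level factorial screening for main effects; for brevity it uses a full 2^6 design and an invented response surface standing in for the furnace model, whereas the paper uses a fractional design to cut the number of runs:

```python
import itertools
import numpy as np

def furnace_output(levels):
    """Hypothetical response surface standing in for the cracking
    furnace model; coefficients are invented for illustration."""
    afr, feed, t_in, p_in, flow, dilution = levels
    return (3.0 * afr + 2.0 * feed + 0.3 * t_in + 0.1 * p_in
            + 0.05 * flow + 0.02 * dilution + 0.8 * afr * feed)

# Two-level full factorial in coded units (-1, +1); a fractional
# design would use a carefully chosen subset of these 64 runs.
runs = np.array(list(itertools.product([-1, 1], repeat=6)), dtype=float)
y = np.apply_along_axis(furnace_output, 1, runs)

# Main effect of factor i = mean(y at +1) - mean(y at -1)
#                         = (2/N) * sum(y * x_i) for an orthogonal design.
effects = (y[:, None] * runs).mean(axis=0) * 2
print(effects)  # large magnitudes flag the significant inputs
```

With the toy coefficients above, the first two factors dominate, reproducing the screening outcome (a few significant inputs out of six) that the abstract reports.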
2017-05-01
Environmental Security Technology Certification Program (ESTCP) report ERDC/EL TR-17-7, Evaluation of Uncertainty in Constituent Input Parameters. The Environmental Evaluation and Characterization System (TREECS™) was applied to a groundwater site and a surface water site to evaluate the sensitivity of model outputs to uncertainty in the constituent input parameters.
Patel, Mainak
2018-01-15
The spiking of barrel regular-spiking (RS) cells is tuned for both whisker deflection direction and velocity. Velocity tuning arises due to thalamocortical (TC) synchrony (but not spike quantity) varying with deflection velocity, coupled with feedforward inhibition, while direction selectivity is not fully understood, though it may be due partly to direction tuning of TC spiking. Data show that as deflection direction deviates from the preferred direction of an RS cell, excitatory input to the RS cell diminishes minimally, but temporally shifts to coincide with the time-lagged inhibitory input. This work constructs a realistic large-scale model of a barrel; model RS cells exhibit velocity and direction selectivity due to TC input dynamics, with the experimentally observed sharpening of direction tuning with decreasing velocity. The model puts forth the novel proposal that RS→RS synapses can naturally and simply account for the unexplained direction dependence of RS cell inputs - as deflection direction deviates from the preferred direction of an RS cell, and TC input declines, RS→RS synaptic transmission buffers the decline in total excitatory input and causes a shift in timing of the excitatory input peak from the peak in TC input to the delayed peak in RS input. The model also provides several experimentally testable predictions on the velocity dependence of RS cell inputs. This model is the first, to my knowledge, to study the interaction of direction and velocity and to propose physiological mechanisms for the stimulus dependence in the timing and amplitude of RS cell inputs. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
NASA Astrophysics Data System (ADS)
Gao, Hua; Ho, Luis C.
2017-08-01
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy for measuring the bulge parameters of typical disk galaxies, as well as prescriptions for estimating realistic uncertainties on them, which will benefit subsequent decomposition of a larger galaxy sample.
Modenese, Luca; Montefiori, Erica; Wang, Anqi; Wesarg, Stefan; Viceconti, Marco; Mazzà, Claudia
2018-05-17
The generation of subject-specific musculoskeletal models of the lower limb has become a feasible task thanks to improvements in medical imaging technology and musculoskeletal modelling software. Nevertheless, clinical use of these models in paediatric applications is still limited with regard to the estimation of muscle and joint contact forces. Aiming to improve the current state of the art, a methodology to generate highly personalized subject-specific musculoskeletal models of the lower limb based on magnetic resonance imaging (MRI) scans was codified as a step-by-step procedure and applied to data from eight juvenile individuals. The generated musculoskeletal models were used to simulate 107 gait trials using stereophotogrammetric and force platform data as input. To ensure completeness of the modelling procedure, the muscles' architecture needs to be estimated. Four methods to estimate the muscles' maximum isometric force and two methods to estimate musculotendon parameters (optimal fiber length and tendon slack length) were assessed and compared, in order to quantify their influence on the models' output. The reported results represent the first comprehensive subject-specific model-based characterization of juvenile gait biomechanics, including profiles of joint kinematics and kinetics, muscle forces and joint contact forces. Our findings suggest that, when musculotendon parameters were linearly scaled from a reference model and the muscle force-length-velocity relationship was accounted for in the simulations, realistic knee contact forces could be estimated, and these forces were not sensitive to the method used to compute muscle maximum isometric force. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the NASA 1A control law. Each maneuver is to be realized by the pilot applying square wave inputs to specific pilot station controls. Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
2017-10-31
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increased prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
Aero/structural tailoring of engine blades (AERO/STAEBL)
NASA Technical Reports Server (NTRS)
Brown, K. W.
1988-01-01
This report describes the Aero/Structural Tailoring of Engine Blades (AERO/STAEBL) program, which is a computer code used to perform engine fan and compressor blade aero/structural numerical optimizations. These optimizations seek a blade design of minimum operating cost that satisfies realistic blade design constraints. This report documents the overall program (i.e., input, optimization procedures, approximate analyses) and also provides a detailed description of the validation test cases.
Wiesmeier, Martin; Poeplau, Christopher; Sierra, Carlos A.; Maier, Harald; Frühauf, Cathleen; Hübner, Rico; Kühnel, Anna; Spörlein, Peter; Geuß, Uwe; Hangen, Edzard; Schilling, Bernd; von Lützow, Margit; Kögel-Knabner, Ingrid
2016-01-01
Climate change and stagnating crop yields may cause a decline of soil organic carbon (SOC) stocks in agricultural soils, leading to considerable CO2 emissions and reduced agricultural productivity. Regional model-based SOC projections are needed to evaluate these potential risks. In this study, we simulated the future SOC development in cropland and grassland soils of Bavaria in the 21st century. Soils from 51 study sites representing the most important soil classes of Central Europe were fractionated, and the derived SOC pools were used to initialize the RothC soil carbon model. For each site, long-term C inputs were determined using the C allocation method. Model runs were performed for three different C input scenarios spanning a realistic range of projected yield development. Our modelling approach revealed substantial SOC decreases of 11–16% under an expected mean temperature increase of 3.3 °C, assuming unchanged C inputs. For the scenario of 20% reduced C inputs, agricultural SOC stocks are projected to decline by 19–24%. Remarkably, even the optimistic scenario of 20% increased C inputs led to SOC decreases of 3–8%. Projected SOC changes largely differed among the investigated soil classes. Our results indicated that C inputs would have to increase by 29% to maintain present SOC stocks in agricultural soils. PMID:27585648
INDES User's guide multistep input design with nonlinear rotorcraft modeling
NASA Technical Reports Server (NTRS)
1979-01-01
The INDES computer program, a multistep input design program used as part of a data processing technique for rotorcraft systems identification, is described. Flight test inputs based on INDES improve the accuracy of parameter estimates. The input design algorithm, program input, and program output are presented.
NASA Astrophysics Data System (ADS)
Grova, C.; Jannin, P.; Biraben, A.; Buvat, I.; Benali, H.; Bernard, A. M.; Scarabin, J. M.; Gibaud, B.
2003-12-01
Quantitative evaluation of brain MRI/SPECT fusion methods for normal and in particular pathological datasets is difficult, due to the frequent lack of relevant ground truth. We propose a methodology to generate MRI and SPECT datasets dedicated to the evaluation of MRI/SPECT fusion methods and illustrate the method when dealing with ictal SPECT. The method consists in generating normal or pathological SPECT data perfectly aligned with a high-resolution 3D T1-weighted MRI using realistic Monte Carlo simulations that closely reproduce the response of a SPECT imaging system. Anatomical input data for the SPECT simulations are obtained from this 3D T1-weighted MRI, while functional input data result from an inter-individual analysis of anatomically standardized SPECT data. The method makes it possible to control the 'brain perfusion' function by proposing a theoretical model of brain perfusion from measurements performed on real SPECT images. Our method provides an absolute gold standard for assessing MRI/SPECT registration method accuracy since, by construction, the SPECT data are perfectly registered with the MRI data. The proposed methodology has been applied to create a theoretical model of normal brain perfusion and ictal brain perfusion characteristic of mesial temporal lobe epilepsy. To approach realistic and unbiased perfusion models, real SPECT data were corrected for uniform attenuation, scatter and partial volume effect. An anatomic standardization was used to account for anatomic variability between subjects. Realistic simulations of normal and ictal SPECT deduced from these perfusion models are presented. The comparison of real and simulated SPECT images showed relative differences in regional activity concentration of less than 20% in most anatomical structures, for both normal and ictal data, suggesting realistic models of perfusion distributions for evaluation purposes. Inter-hemispheric asymmetry coefficients measured on simulated data were found within the range of asymmetry coefficients measured on corresponding real data. The features of the proposed approach are compared with those of other methods previously described to obtain datasets appropriate for the assessment of fusion methods.
Incorporating uncertainty in RADTRAN 6.0 input files.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John
Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the parameter shape and minimum and maximum of the distribution, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provides installation instructions as well as a description and user guide for the uncertainty engine.
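An illustrative sketch of the sample-then-write-batch workflow described above, with hypothetical parameter names and a made-up file layout; the real RADTRAN input format and the MELCOR Uncertainty Engine's distribution-fitting steps are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)
N_CASES = 100

# Assumed (hypothetical) distributions for two RADTRAN-style inputs.
samples = {
    "RELEASE_FRACTION": rng.triangular(0.001, 0.01, 0.05, size=N_CASES),
    "SHIELDING_FACTOR": rng.uniform(0.5, 1.0, size=N_CASES),
}

# Write one batch case per sampled parameter set (illustrative format,
# not the actual RADTRAN input-file syntax).
with open("radtran_batch.inp", "w") as f:
    for i in range(N_CASES):
        f.write(f"CASE {i}\n")
        for name, vals in samples.items():
            f.write(f"  {name} = {vals[i]:.5g}\n")
```

Note that the two parameters are sampled independently, mirroring the abstract's caveat that coupling input parameters is not possible in the initial application.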
Analysis of a decision model in the context of equilibrium pricing and order book pricing
NASA Astrophysics Data System (ADS)
Wagner, D. C.; Schmitt, T. A.; Schäfer, R.; Guhr, T.; Wolf, D. E.
2014-12-01
An agent-based model for financial markets has to incorporate two aspects: decision making and price formation. We introduce a simple decision model and consider its implications in two different pricing schemes. First, we study its parameter dependence within a supply-demand balance setting. We find realistic behavior in a wide parameter range. Second, we embed our decision model in an order book setting. Here, we observe interesting features which are not present in the equilibrium pricing scheme. In particular, we find a nontrivial behavior of the order book volumes which is reminiscent of a trend-switching phenomenon. Thus, the decision making model alone does not realistically represent the trading and the stylized facts. The order book mechanism is crucial.
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters and inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on (1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and (2) how coherently the methods rank these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
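A minimal sketch comparing Sobol' and FAST first-order indices, assuming the SALib Python package (its saltelli/fast_sampler samplers and sobol/fast analyzers) and using the classic Ishigami function as a stand-in for SAC-SMA:

```python
import numpy as np
from SALib.sample import saltelli, fast_sampler
from SALib.analyze import sobol, fast

# Toy stand-in for SAC-SMA: any deterministic model f(params) -> output.
def model(x):
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2
            + 0.1 * x[:, 2]**4 * np.sin(x[:, 0]))  # Ishigami function

problem = {
    "num_vars": 3,
    "names": ["p1", "p2", "p3"],
    "bounds": [[-np.pi, np.pi]] * 3,
}

# Sobol' first-order indices via Saltelli sampling.
X = saltelli.sample(problem, 1024)
s1_sobol = sobol.analyze(problem, model(X))["S1"]

# FAST first-order indices on the same problem.
Xf = fast_sampler.sample(problem, 1025)
s1_fast = fast.analyze(problem, model(Xf))["S1"]

print(s1_sobol)  # the two methods should rank p1, p2, p3 coherently
print(s1_fast)
```

Agreement between the two index vectors, and between their rankings, is exactly the reliability check the study applies across sub-basins and simulation lengths.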
Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2015-12-01
For computationally expensive climate models, Monte-Carlo approaches of exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
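A sketch of the sparse-surrogate idea using an ordinary Lasso as a stand-in for weighted iterative Bayesian compressive sensing: many polynomial basis terms, few model evaluations, and most coefficients driven to zero; the 10-input toy model below is invented:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)

# Toy "model": 10 inputs, but only a few actually matter (sparsity).
d, n = 10, 300
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = (2.0 * X[:, 0] + X[:, 1]**2 - 1.5 * X[:, 0] * X[:, 2]
     + 0.01 * rng.normal(size=n))

# Total-degree-2 polynomial dictionary (66 basis terms for d=10).
basis = PolynomialFeatures(degree=2, include_bias=True)
Phi = basis.fit_transform(X)

# Sparse regression: few evaluations, many basis terms, most
# coefficients shrunk to zero, as in compressive-sensing surrogates.
pc = Lasso(alpha=1e-3, max_iter=50_000).fit(Phi, y)
active = np.flatnonzero(np.abs(pc.coef_) > 1e-2)
print(basis.get_feature_names_out()[active])
print(pc.coef_[active])
```

The surviving basis terms identify the important input relationships, after which the cheap surrogate can replace the model in forward uncertainty propagation or variance-based sensitivity studies.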
Cao, Cong; Wang, Chuan; He, Ling-Yan; Zhang, Ru
2013-02-25
We investigate an atomic entanglement purification protocol based on the coherent state input-output process by working in low-Q cavity in the atom-cavity intermediate coupling region. The information of entangled states are encoded in three-level configured single atoms confined in separated one-side optical micro-cavities. Using the coherent state input-output process, we design a two-qubit parity check module (PCM), which allows the quantum nondemolition measurement for the atomic qubits, and show its use for remote parities to distill a high-fidelity atomic entangled ensemble from an initial mixed state ensemble nonlocally. The proposed scheme can further be used for unknown atomic states entanglement concentration. Also by exploiting the PCM, we describe a modified scheme for atomic entanglement concentration by introducing ancillary single atoms. As the coherent state input-output process is robust and scalable in realistic applications, and the detection in the PCM is based on the intensity of outgoing coherent state, the present protocols may be widely used in large-scaled and solid-based quantum repeater and quantum information processing.
Karmakar, Chandan; Udhayakumar, Radhagayathri K; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we have analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length heart rate variability (HRV) signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn and its performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and the best performance among the reported complexity measures in differentiating physiological and pathological conditions across variations of the input parameters. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series.
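A minimal transcription of the DistEn definition with its three input parameters N (data length), m, and M, useful for reproducing parameter-sensitivity experiments like those described; normalization by log2(M) keeps the value in [0, 1]:

```python
import numpy as np

def dist_en(x, m=2, M=512):
    """Distribution entropy of a 1-D series x with embedding dimension m
    and M histogram bins."""
    x = np.asarray(x, dtype=float)
    # Embed: overlapping vectors of m consecutive samples.
    emb = np.lib.stride_tricks.sliding_window_view(x, m)  # (N-m+1, m)
    # Chebyshev distances between all distinct vector pairs.
    d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
    dists = d[np.triu_indices(len(emb), k=1)]
    # Empirical distribution of distances, then normalized Shannon entropy.
    p, _ = np.histogram(dists, bins=M)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum() / np.log2(M)

rng = np.random.default_rng(5)
print(dist_en(rng.normal(size=500)))          # white noise: high complexity
print(dist_en(np.sin(np.arange(500) / 5.0)))  # regular signal: lower value
```

Sweeping m, M, and the series length N around this function reproduces the kind of three-parameter stability analysis the study performs.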
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, Peter
2014-01-24
This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.
Pressley, Joanna; Troyer, Todd W
2011-05-01
The leaky integrate-and-fire (LIF) is the simplest neuron model that captures the essential properties of neuronal signaling. Yet common intuitions are inadequate to explain basic properties of LIF responses to sinusoidal modulations of the input. Here we examine responses to low and moderate frequency modulations of both the mean and variance of the input current and quantify how these responses depend on baseline parameters. Across parameters, responses to modulations in the mean current are low pass, approaching zero in the limit of high frequencies. For very low baseline firing rates, the response cutoff frequency matches that expected from membrane integration. However, the cutoff shows a rapid, supralinear increase with firing rate, with a steeper increase in the case of lower noise. For modulations of the input variance, the gain at high frequency remains finite. Here, we show that the low-frequency responses depend strongly on baseline parameters and derive an analytic condition specifying the parameters at which responses switch from being dominated by low versus high frequencies. Additionally, we show that the resonant responses for variance modulations have properties not expected for common oscillatory resonances: they peak at frequencies higher than the baseline firing rate and persist when oscillatory spiking is disrupted by high noise. Finally, the responses to mean and variance modulations are shown to have a complementary dependence on baseline parameters at higher frequencies, resulting in responses to modulations of Poisson input rates that are independent of baseline input statistics.
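A small simulation sketch of the mean-modulation experiment: a noisy LIF population driven by a sinusoidally modulated mean current, with the gain read off at the drive frequency; all parameter values are illustrative, not those of the study:

```python
import numpy as np

DT = 0.001  # time step (s)

def lif_population(f_mod, t_max=10.0, n=500, seed=0):
    """Simulate n leaky integrate-and-fire neurons driven by a
    sinusoidally modulated mean current plus white noise; return the
    population spike count per time step."""
    tau, v_th, v_reset = 0.020, 1.0, 0.0   # membrane time const., threshold
    mu0, mu1, sigma = 1.1, 0.1, 0.5        # baseline, modulation, noise
    rng = np.random.default_rng(seed)
    steps = int(t_max / DT)
    v = np.zeros(n)
    counts = np.zeros(steps)
    for i in range(steps):
        mu = mu0 + mu1 * np.sin(2 * np.pi * f_mod * i * DT)
        v += DT / tau * (mu - v) + sigma * np.sqrt(DT / tau) * rng.normal(size=n)
        fired = v >= v_th
        counts[i] = fired.sum()
        v[fired] = v_reset
    return counts

# Gain of the population rate at the drive frequency: responses to
# mean-current modulations are low pass, as the abstract describes.
for f in (1.0, 10.0, 100.0):
    s = lif_population(f)
    t = np.arange(len(s)) * DT
    print(f, 2 * abs(np.mean(s * np.exp(-2j * np.pi * f * t))))
```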
Generalized compliant motion primitive
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor)
1994-01-01
This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, which produce telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return-spring motion input is provided by a restoration spring subsystem. The novel features of this invention include the use of a single general motion primitive at a remote site to permit shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.
Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters
NASA Astrophysics Data System (ADS)
Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei
2018-05-01
In this paper, we address the problem of adaptive bounded control for the trajectory tracking of a Quadrotor Aerial Vehicle (QAV) while input saturations and uncertain parameters with known bounds are simultaneously taken into account. First, to deal with the underactuated property of the QAV model, we decouple and construct the QAV model as a cascaded structure which consists of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of a smooth saturation function and a smooth projection operator in the control design. Third, to ensure the stability of the overall QAV system, we develop the technique for the cascaded system in the presence of both input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures the asymptotic convergence of the tracking errors to the origin. Simulation results are provided to illustrate the effectiveness of the proposed method.
Translating landfill methane generation parameters among first-order decay models.
Krause, Max J; Chickering, Giles W; Townsend, Timothy G
2016-11-01
Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weighted averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction, because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon was presented (kc) and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC), which indicates that decreasing the uncertainty of the input parameters will make the model more accurate, rather than adding multiple phases or input parameters.
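A minimal annual-resolution sketch of the single-phase FOD calculation and the weighted-average translation described above; the stream tonnages and (k, L0) values are hypothetical, and the yearly time-stepping is a simplification of LandGEM's sub-annual form:

```python
import numpy as np

def fod_methane(mass_by_year, k, L0, years=80):
    """Single-phase first-order-decay methane generation: each year's
    waste M_i contributes k * L0 * M_i * exp(-k * age) thereafter."""
    q = np.zeros(years)
    for i, m in enumerate(mass_by_year):
        age = np.arange(years) - i
        q += np.where(age >= 0, k * L0 * m * np.exp(-k * age), 0.0)
    return q  # m^3 CH4 per year

# Two waste streams with different FOD parameters (illustrative values):
# (tonnes/yr over 20 yr of filling, k in 1/yr, L0 in m^3 CH4/tonne).
streams = [
    (np.full(20, 10_000.0), 0.05, 100.0),
    (np.full(20, 5_000.0), 0.15, 60.0),
]

# Multiphase: model each stream with its own parameters, then sum.
q_multi = sum(fod_methane(m, k, L0) for m, k, L0 in streams)

# Single-phase: mass-weighted-average parameters on the total tonnage.
w = np.array([m.sum() for m, _, _ in streams])
w = w / w.sum()
k_avg = w @ np.array([k for _, k, _ in streams])
L0_avg = w @ np.array([L0 for _, _, L0 in streams])
q_single = fod_methane(sum(m for m, _, _ in streams), k_avg, L0_avg)

print(q_multi.sum(), q_single.sum())  # cumulative totals stay close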
Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the Wsa-Enlil+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.
2014-12-01
Ensemble forecasting of coronal mass ejections (CMEs) provides significant information in that it provides an estimation of the spread or uncertainty in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits, and using the actual arrival time, an average absolute error of 10.0 hours (RMSE=11.4 hours) was found for all 28 ensembles, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
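A sketch of the ensemble arrival-time bookkeeping described above, with a hypothetical 48-member ensemble; it computes the spread, whether the observed arrival falls inside the predicted range, and the absolute and RMS errors of the ensemble-average prediction:

```python
import numpy as np

# Hypothetical ensemble of predicted CME arrival times (hours after a
# reference epoch) from n=48 perturbed input-parameter sets.
rng = np.random.default_rng(7)
predicted = rng.normal(72.0, 8.0, size=48)
observed = 80.5

spread = predicted.max() - predicted.min()
hit_in_range = predicted.min() <= observed <= predicted.max()
abs_err = abs(predicted.mean() - observed)
rms_err = np.sqrt(np.mean((predicted - observed)**2))

print(f"spread={spread:.1f} h, in-range={hit_in_range}, "
      f"mean-abs-err={abs_err:.1f} h, rms-err={rms_err:.1f} h")
```

As the abstract notes, a miss outside the predicted range is still informative: it rules out the tested input parameters as the source of the error and points at ambient-model limitations instead.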
Measurand transient signal suppressor
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1994-01-01
A transient signal suppressor is presented for use in a control system which is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values with respect to the threshold value and is sustained for a selected discrete time interval. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times, producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal which is sustained beyond the selected time interval.
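A software analogue of the suppressor logic, offered only as a sketch: a threshold crossing in the selected direction must persist for a selected number of samples before the output drive signal is asserted (the patented device realizes this with a timer and an electronic gate in hardware):

```python
def suppress_transients(samples, threshold, hold_steps, rising=True):
    """Assert the drive signal only when the sensed parameter stays past
    the threshold in the selected direction for hold_steps consecutive
    samples; shorter excursions (transients) are suppressed."""
    run = 0
    out = []
    for v in samples:
        crossed = v > threshold if rising else v < threshold
        run = run + 1 if crossed else 0
        out.append(run >= hold_steps)
    return out

# A 2-sample spike is suppressed; a sustained crossing passes through.
sig = [0, 0, 5, 5, 0, 0, 5, 5, 5, 5]
print(suppress_transients(sig, threshold=3, hold_steps=3))
```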
A Lagrangian particle model to predict the airborne spread of foot-and-mouth disease virus
NASA Astrophysics Data System (ADS)
Mayer, D.; Reiczigel, J.; Rubel, F.
Airborne spread of bioaerosols in the boundary layer over complex terrain is simulated using a Lagrangian particle model, applied here to the airborne spread of foot-and-mouth disease (FMD) virus. Two case studies are presented, with study domains located in a hilly region northwest of the Styrian capital Graz, the second-largest city in Austria. Mountainous terrain and inhomogeneous, time-varying meteorological conditions preclude the application of the Gaussian dispersion models used so far, whereas the proposed model handles these conditions realistically. In the model, the trajectories of several thousand particles are computed and the distribution of virus concentration near the ground is calculated. This allows infection-risk areas to be assessed with respect to animal species of interest, such as cattle, swine or sheep. Meteorological input data, such as the wind field and other variables needed to compute turbulence, were taken from the new pre-operational version of the non-hydrostatic numerical weather prediction model LMK (Lokal-Modell-Kürzestfrist) running at the German weather service DWD (Deutscher Wetterdienst). The LMK model provides meteorological parameters with a spatial resolution of about 2.8 km. To match the 400 m spatial resolution used by the Lagrangian particle model, the initial wind field is interpolated onto the finer grid by a mass-consistent interpolation method. The case studies show a significant influence of local wind systems on the spread of the virus. Higher virus concentrations on the upwind side of the hills and marginal concentrations in the lee are clearly observable, as are channeling effects in valleys. The study demonstrates that the Lagrangian particle model is an appropriate tool for risk assessment of the airborne spread of virus, taking into account realistic orographic and meteorological conditions.
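A minimal sketch of the core of such a model: each particle is advected by the mean wind and perturbed by a random turbulent velocity. The release setup, wind vector, and turbulence strength below are illustrative, and the concentration kernels and full turbulence closure of an operational model are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_particles(pos, mean_wind, sigma_turb, dt):
    """Advance particle positions one step: deterministic advection by the
    interpolated mean wind plus a random turbulent velocity (a crude
    zeroth-order Langevin closure, for illustration only)."""
    turb = rng.normal(0.0, sigma_turb, size=pos.shape)
    return pos + (mean_wind + turb) * dt

# 5000 particles released at a farm location (x, y, z in metres)
pos = np.zeros((5000, 3))
for _ in range(360):  # one hour at dt = 10 s
    pos = step_particles(pos, mean_wind=np.array([3.0, 1.0, 0.0]),
                         sigma_turb=0.6, dt=10.0)
    pos[:, 2] = np.abs(pos[:, 2])  # crude reflection at the ground surface

# Near-ground concentration: count particles below some reference height
near_ground = pos[pos[:, 2] < 50.0]
print(len(near_ground), "particles below 50 m")
```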
Neo-deterministic seismic hazard scenarios for India—a preventive tool for disaster mitigation
NASA Astrophysics Data System (ADS)
Parvez, Imtiyaz A.; Magrin, Andrea; Vaccari, Franco; Ashish; Mir, Ramees R.; Peresan, Antonella; Panza, Giuliano Francesco
2017-11-01
Current computational resources and physical knowledge of the seismic wave generation and propagation processes allow for reliable numerical and analytical models of waveform generation and propagation. From the simulation of ground motion, it is easy to extract the desired earthquake hazard parameters. Accordingly, a scenario-based approach to seismic hazard assessment has been developed, namely the neo-deterministic seismic hazard assessment (NDSHA), which allows a wide range of possible seismic sources to be used in the definition of reliable scenarios by means of realistic waveform modelling. Such reliable and comprehensive characterization of expected earthquake ground motion is essential to improve building codes, particularly for the protection of critical infrastructures and for land use planning. Parvez et al. (Geophys J Int 155:489-508, 2003) published the first ever neo-deterministic seismic hazard map of India by computing synthetic seismograms with an input data set consisting of structural models, seismogenic zones, focal mechanisms and earthquake catalogues. As described in Panza et al. (Adv Geophys 53:93-165, 2012), the NDSHA methodology evolved with respect to the original formulation used by Parvez et al. (Geophys J Int 155:489-508, 2003): the computer codes were improved to better meet the need to produce realistic ground shaking maps and ground shaking scenarios, at different scale levels, exploiting the most significant pertinent advances in data acquisition and modelling. Accordingly, the present study supplies a revised NDSHA map for India. The seismic hazard, expressed in terms of maximum displacement (Dmax), maximum velocity (Vmax) and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid over the studied territory.
A big data approach to the development of mixed-effects models for seizure count data.
Tharayil, Joseph J; Chiang, Sharon; Moss, Robert; Stern, John M; Theodore, William H; Goldenholz, Daniel M
2017-05-01
Our objective was to develop a generalized linear mixed model for predicting seizure count that is useful in the design and analysis of clinical trials. This model also may benefit the design and interpretation of seizure-recording paradigms. Most existing seizure count models do not include children, and there is currently no consensus regarding the most suitable model that can be applied to children and adults. Therefore, an additional objective was to develop a model that accounts for both adult and pediatric epilepsy. Using data from SeizureTracker.com, a patient-reported seizure diary tool with >1.2 million recorded seizures across 8 years, we evaluated the appropriateness of Poisson, negative binomial, zero-inflated negative binomial, and modified negative binomial models for seizure count data based on minimization of the Bayesian information criterion. Generalized linear mixed-effects models were used to account for demographic and etiologic covariates and for autocorrelation structure. Holdout cross-validation was used to evaluate predictive accuracy in simulating seizure frequencies. For both adults and children, we found that a negative binomial model with autocorrelation over 1 day was optimal. Using holdout cross-validation, the proposed model was found to provide accurate simulation of seizure counts for patients with up to four seizures per day. The optimal model can be used to generate more realistic simulated patient data with very few input parameters. The availability of a parsimonious, realistic virtual patient model can be of great utility in simulations of phase II/III clinical trials, epilepsy monitoring units, outpatient biosensors, and mobile Health (mHealth) applications. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
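For illustration, daily seizure counts with the properties the study identifies (negative binomial marginals with 1-day autocorrelation) can be simulated as a gamma-Poisson mixture driven by an AR(1) latent log-rate. The parameterization below is our own illustrative choice, not the fitted SeizureTracker model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_diary(n_days, mean_rate, dispersion, rho):
    """Simulate a daily seizure diary: negative binomial counts via a
    gamma-Poisson mixture, with AR(1) correlation in the latent log-rate
    to mimic day-to-day autocorrelation (illustrative parameters)."""
    counts = np.empty(n_days, dtype=int)
    z = 0.0
    for t in range(n_days):
        z = rho * z + np.sqrt(1 - rho**2) * rng.normal()
        lam = np.exp(np.log(mean_rate) + 0.5 * z)
        # gamma-Poisson mixture => marginally negative binomial
        counts[t] = rng.poisson(rng.gamma(dispersion, lam / dispersion))
    return counts

diary = simulate_diary(n_days=90, mean_rate=0.8, dispersion=0.5, rho=0.3)
print(diary[:14])
```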
Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process
NASA Astrophysics Data System (ADS)
Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.
2016-12-01
Spatially distributed continuous-simulation hydrologic models have a large number of parameters available for adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, and RMSE - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter sets. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions, which evaluate the parameter decision space. As a result, Pareto solutions are identified that not only achieve high degrees of fitness but also maintain and utilize available expert knowledge, resulting in more realistic and consistent parameter sets. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting method (SAC-SMA) within the Animas River basin in Colorado. With three elevation zones, each with its own range of parameters, over 35 model parameters were calibrated simultaneously. As a result, high degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
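One way to realize this idea is to score each candidate parameter set on both hydrologic skill and agreement with expert-preferred parameter ranges, and then build the Pareto front over both objectives. A minimal sketch with hypothetical names; the actual objectives used with SNOW-17/SAC-SMA are not given in the abstract:

```python
import numpy as np

def combined_objectives(params, sim, obs, expert_ranges):
    """Return two objectives to minimize: hydrologic misfit (1 - NSE) and an
    expert-knowledge penalty that grows only when a parameter leaves its
    expert-preferred range. Illustrative stand-in for the study's objectives."""
    nse = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    penalty = 0.0
    for p, (lo, hi) in zip(params, expert_ranges):
        mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
        penalty += max(0.0, abs(p - mid) - half) / half  # zero inside the range
    return 1 - nse, penalty  # a Pareto front is built over both objectives

obs = np.array([1.0, 2.0, 4.0, 3.0])
sim = np.array([1.1, 1.8, 4.3, 2.9])
print(combined_objectives([0.9, 120.0], sim, obs, [(0.2, 0.8), (50.0, 150.0)]))
```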
NASA Astrophysics Data System (ADS)
Oware, E. K.
2017-12-01
Geophysical quantification of hydrogeological parameters typically involves limited noisy measurements coupled with inadequate understanding of the target phenomenon. Hence, a deterministic solution is unrealistic in light of the largely uncertain inputs. Stochastic imaging (SI), in contrast, provides multiple equiprobable realizations that enable probabilistic assessment of aquifer properties in a realistic manner. Generation of geologically realistic prior models is central to SI frameworks. Higher-order statistics for representing prior geological features in SI are, however, usually borrowed from training images (TIs), which may produce undesirable outcomes if the TIs are unrepresentative of the target structures. The Markov random field (MRF)-based SI strategy provides a data-driven alternative to TI-based SI algorithms. In the MRF-based method, the simulation of spatial features is guided by Gibbs energy (GE) minimization. Local configurations with smaller GEs have a higher likelihood of occurrence and vice versa. The parameters of the Gibbs distribution for computing the GE are estimated from the hydrogeophysical data, thereby enabling the generation of site-specific structures in the absence of reliable TIs. In Metropolis-like SI methods, the variance of the transition probability controls the jump size. The procedure is a standard Markov chain Monte Carlo (McMC) method when a constant variance is assumed, and becomes simulated annealing (SA) when the variance (cooling temperature) is allowed to decrease gradually with time. We observe that in certain problems, the large variance typically employed at the beginning to hasten burn-in may be ill-suited for sampling at the equilibrium state. The power of SA stems from its flexibility to adaptively scale the variance at different stages of the sampling. Degeneration of results was reported in a previous implementation of the MRF-based SI strategy based on a constant variance. Here, we present an updated version of the algorithm based on SA that appears to resolve the degeneration problem, with seemingly improved results. We illustrate the performance of the SA version with a joint inversion of time-lapse concentration and electrical resistivity measurements in a hypothetical trinary hydrofacies aquifer characterization problem.
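The constant-variance versus annealed distinction can be illustrated in a few lines. A toy sketch with a quadratic stand-in for the Gibbs energy; the actual MRF energy and update schedule are more involved:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample(energy, x0, n_iter, sigma0, cooling=None):
    """Metropolis-style sampler. With cooling=None the proposal variance is
    constant (standard McMC); with a decay factor the jump size shrinks over
    time, giving the simulated-annealing behaviour described above."""
    x, e = x0, energy(x0)
    for k in range(n_iter):
        sigma = sigma0 if cooling is None else sigma0 * cooling ** k
        cand = x + rng.normal(0.0, sigma, size=x.shape)
        e_cand = energy(cand)
        if rng.random() < np.exp(min(0.0, e - e_cand)):  # Gibbs-energy ratio
            x, e = cand, e_cand
    return x

# Toy Gibbs energy: quadratic misfit standing in for the data term
energy = lambda x: np.sum((x - 1.5) ** 2)
x_const = sample(energy, np.zeros(3), 5000, sigma0=1.0)              # constant
x_sa = sample(energy, np.zeros(3), 5000, sigma0=1.0, cooling=0.999)  # annealed
print(x_const, x_sa)
```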
New optical tomographic & topographic techniques for biomedical applications
NASA Astrophysics Data System (ADS)
Buytaert, Jan
The mammalian middle ear contains the eardrum and the three auditory ossicles, and forms an impedance match between sound in air and pressure waves in the fluid of the inner ear. Without this intermediate system, with its unsurpassed efficiency and dynamic range, we would be practically deaf. Physics-based modeling of this extremely complex mechanical system is necessary to advance our basic understanding of the functioning of hearing. Highly realistic models will make it possible to predict the outcome of surgical interventions and to optimize the design of ossicle prostheses and active middle ear implants. To obtain such models with realistic output, essential input data are still missing. In this dissertation I developed and used two new optical techniques to obtain two essential sets of data: accurate three-dimensional morphology of the middle ear structures, and elasticity parameters of the eardrum. The first technique is a new method for optical tomography of macroscopic biomedical objects, which makes it possible to measure the three-dimensional geometry of the middle ear ossicles and the soft tissues that connect and suspend them. I made a new, high-resolution version of this orthogonal-plane fluorescence optical sectioning method to obtain micrometer resolution in macroscopic specimens. The result is a complete 3-D model of the middle (and inner) ear of the gerbil at unprecedented quality. Beyond high-resolution morphological models of the middle ear structures, I applied the technique in other fields of research as well. The second device works according to a new optical profilometry technique that allows the shape and deformations of the eardrum and other membranes or objects to be measured. The approach, called projection moire profilometry, creates moire interference fringes that contain the height information. I developed a setup that uses liquid crystal panels for grid projection and optical demodulation; hence no moving parts are present and the setup is entirely digitally controlled. This measurement method was developed to determine the elasticity parameters of the eardrum in situ, although other surface shapes can also be measured.
Schiffer, Johannes; Efimov, Denis; Ortega, Romeo; Barabanov, Nikita
2017-08-13
Conditions for almost global stability of an operating point of a realistic model of a synchronous generator with constant field current connected to an infinite bus are derived. The analysis is conducted by employing the recently proposed concept of input-to-state stability (ISS)-Leonov functions, an extension of the powerful cell structure principle developed by Leonov and Noldus to the ISS framework. Compared with the original ideas of Leonov and Noldus, the ISS-Leonov approach has the advantage of providing additional robustness guarantees. The efficiency of the derived sufficient conditions is illustrated via numerical experiments. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX
2015-07-01
exercise was to evaluate the importance of chemical-specific model input parameters, the impacts of their uncertainty, and the potential benefits of... chemical-specific inputs for RDX that were determined to be sensitive with relatively high uncertainty: these included the soil-water linear... Koc for organic chemicals. The EFS values provided for log Koc of RDX were 1.72 and 1.95. OBJECTIVE: TREECS™ (http://el.erdc.usace.army.mil/treecs
NASA Astrophysics Data System (ADS)
Hingerl, Ferdinand F.; Wagner, Thomas; Kulik, Dmitrii A.; Kosakowski, Georg; Driesner, Thomas; Thomsen, Kaj
2010-05-01
A consortium of research groups from ETH Zurich, EPF Lausanne, the Paul Scherrer Institut and the University of Bonn collaborates in a comprehensive program of basic research on key aspects of Enhanced Geothermal Systems (EGSs). As part of this GEOTHERM project (www.geotherm.ethz.ch), we concentrate on the fundamental investigation of thermodynamic models suitable for describing fluid-rock interactions at geothermal conditions. Predictions of fluid-rock interaction in EGSs still face several major challenges. Slight variations in the input thermodynamic and kinetic parameters may result in significant differences in the predicted mineral solubilities and stable assemblages. Realistic modeling of mineral precipitation in turn has implications for our understanding of the permeability evolution of the geothermal reservoir, as well as scaling in technical installations. To reasonably model an EGS, thermodynamic databases and activity models must be tailored to geothermal conditions. We therefore implemented in the GEMS code the Pitzer formalism, which is the standard model used for computing thermodynamic excess properties of brines at elevated temperatures and pressures. This model, however, depends on a vast number of interaction parameters, which are to a substantial extent unknown. Furthermore, its high-order polynomial temperature interpolation makes extrapolation unreliable, if not impossible. As an alternative, we additionally implemented the EUNIQUAC activity model. EUNIQUAC requires fewer empirical fit parameters (only binary interaction parameters are needed) and uses simpler and more stable temperature and pressure extrapolations. This results in an increase in computation speed, which is of crucial importance when performing coupled long-term simulations of geothermal reservoirs. To achieve better performance under geothermal conditions, we are currently partly reformulating EUNIQUAC and refitting the existing parameter set. First results of the Pitzer-EUNIQUAC benchmark applied to relevant aqueous solutions at elevated temperature, pressure and ionic strength will be presented.
NASA Technical Reports Server (NTRS)
Wallace, Terryl A.; Bey, Kim S.; Taminger, Karen M. B.; Hafley, Robert A.
2004-01-01
A study was conducted to evaluate the relative significance of input parameters on Ti-6Al-4V deposits produced by an electron beam free form fabrication process under development at the NASA Langley Research Center. Five input parameters were chosen (beam voltage, beam current, translation speed, wire feed rate, and beam focus), and a design of experiments (DOE) approach was used to develop a set of 16 experiments to evaluate the relative importance of these parameters on the resulting deposits. Both single-bead and multi-bead stacks were fabricated using the 16 combinations, and the resulting heights and widths of the stack deposits were measured. The resulting microstructures were also characterized to determine the impact of these parameters on the size of the melt pool and heat-affected zone. The relative importance of each input parameter on the height and width of the multi-bead stacks will be discussed.
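Sixteen runs for five two-level factors matches the standard 2^(5-1) fractional factorial construction, in which the fifth column is generated as the product of the other four (defining relation E = ABCD). Whether the study used exactly this defining relation is not stated in the abstract; the sketch below shows the standard construction:

```python
from itertools import product

# 2^(5-1) fractional factorial: 16 runs for five two-level factors, with the
# fifth column generated as the product of the first four (E = ABCD).
factors = ["voltage", "current", "speed", "wire_feed", "focus"]
runs = [(a, b, c, d, a * b * c * d) for a, b, c, d in product((-1, 1), repeat=4)]

for run in runs:
    print(dict(zip(factors, run)))
print(len(runs), "runs")  # 16
```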
A Workshop on the Gathering of Information for Problem Formulation
1991-06-01
the AI specialists is to design "artificially intelligent" computer environments that tutor students in much the same way that a human teacher might...tuning the interface between student and machine, and are using a technique of in situ development to tune the system toward realistic user needs. ...of transferability to new domains, while the latter suffers from extreme fragility: the inability to cope with any input not strictly conforming with
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1995-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
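Such piecewise-constant square-wave inputs are easy to reconstruct from their time/amplitude specification. A sketch with hypothetical amplitude values; the report's actual tables are not reproduced here:

```python
import numpy as np

def square_wave_input(time, points):
    """Build a square-wave control input from (time, amplitude) pairs held
    piecewise-constant, mirroring a time/amplitude specification format."""
    t_pts = np.array([p[0] for p in points])
    a_pts = np.array([p[1] for p in points])
    idx = np.searchsorted(t_pts, time, side="right") - 1
    return a_pts[np.clip(idx, 0, len(a_pts) - 1)]

t = np.linspace(0, 8, 801)
# e.g. a doublet-style sequence in degrees (hypothetical amplitudes)
u = square_wave_input(t, [(0, 0), (1, 2), (4, -2), (6, 2), (7, 0)])
print(u[[0, 150, 450, 650, 750]])
```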
Thalamic neuron models encode stimulus information by burst-size modulation
Elijah, Daniel H.; Samengo, Inés; Montemurro, Marcelo A.
2015-01-01
Thalamic neurons have long been assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here, we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information about the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes in instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of these classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about these more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons. PMID:26441623
Crops Models for Varying Environmental Conditions
NASA Technical Reports Server (NTRS)
Jones, Harry; Cavazzoni, James; Keas, Paul
2001-01-01
New variable-environment Modified Energy Cascade (MEC) crop models were developed for all the Advanced Life Support (ALS) candidate crops and implemented in SIMULINK. The MEC models are based on the Volk, Bugbee, and Wheeler Energy Cascade (EC) model and are derived from more recent Top-Level Energy Cascade (TLEC) models. The MEC models simulate crop plant responses to day-to-day changes in photosynthetic photon flux, photoperiod, carbon dioxide level, temperature, and relative humidity. The original EC model allows changes in light energy but uses a less accurate linear approximation. The simulation outputs of the new MEC models for constant nominal environmental conditions are very similar to those of earlier EC models that use parameters produced by the TLEC models. There are a few differences. The new MEC models allow setting the time for seed emergence, have realistic exponential canopy growth, and have corrected harvest dates for potato and tomato. The new MEC models indicate that the maximum edible biomass per meter squared per day is produced at the maximum allowed carbon dioxide level, the nominal temperatures, and the maximum light input. Reducing the carbon dioxide level from the maximum to the minimum allowed in the model reduces crop production significantly. Increasing temperature decreases production more than it decreases the time to harvest, so productivity in edible biomass per meter squared per day is greater at nominal than at maximum temperatures. The productivity in edible biomass per meter squared per day is greatest at the maximum light energy input allowed in the model, but the edible biomass produced per unit of light energy input is lower than at nominal light levels. Reducing light levels increases light and power use efficiency. The MEC models suggest we can adjust the light energy day-to-day to accommodate power shortages or use excess power while monitoring and controlling edible biomass production.
Passive dendrites enable single neurons to compute linearly non-separable functions.
Cazé, Romain Daniel; Humphries, Mark; Gutkin, Boris
2013-01-01
Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic sub-units, or only passive dendrites where input summation is purely sub-linear, and where dendritic sub-units are only saturating. To determine if such neurons can also compute linearly non-separable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear sub-unit and either a single spiking or a saturating dendritic sub-unit. We then analytically generalize these numerical results to an arbitrary number of non-linear sub-units. First, we show that a single non-linear dendritic sub-unit, in addition to the somatic non-linearity, is sufficient to compute linearly non-separable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic sub-units, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly non-separable functions can be implemented with at least two strategies: one where a dendritic sub-unit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic sub-units. We formally prove that implementing the latter architecture is possible with both types of dendritic sub-units whereas the former is only possible with spiking dendrites. Finally, we show how linearly non-separable functions can be computed by a generic two-compartment biophysical model and a realistic neuron model of the cerebellar stellate cell interneuron. Taken together our results demonstrate that passive dendrites are sufficient to enable neurons to compute linearly non-separable functions.
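The enumeration strategy can be reproduced at toy scale. The sketch below scans a small parameter grid of our own choosing for a neuron with a linear somatic sum plus one saturating sub-unit, and checks that a linearly non-separable function (XOR) appears among the implementable truth tables; negative somatic weights stand in for inhibitory input, whereas the paper's excitatory-only results concern positive functions:

```python
import itertools

# Binary neuron: somatic threshold on (linear sum) + (saturating sub-unit).
# Two binary inputs and small integer parameter grids, chosen for illustration;
# the paper uses its own parameter range, more inputs, and more sub-units.
X = [(0, 0), (1, 0), (0, 1), (1, 1)]

def table(a, b, sat, theta):
    bits = 0
    for i, x in enumerate(X):
        dend = min(b[0] * x[0] + b[1] * x[1], sat)  # sub-linear (saturating)
        soma = a[0] * x[0] + a[1] * x[1] + dend
        bits |= int(soma >= theta) << i
    return bits

implementable = {table(a, b, sat, theta)
                 for a in itertools.product((-1, 0, 1), repeat=2)
                 for b in itertools.product((0, 1, 2), repeat=2)
                 for sat in (1, 2) for theta in (1, 2)}

XOR = 0b0110  # fires for (1,0) and (0,1) only: linearly non-separable
print(XOR in implementable)  # True, e.g. with a=(-1,-1), b=(2,2), sat=2, theta=1
```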
Quantifying uncertainty and sensitivity in sea ice models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark
The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
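Variance-based (Sobol-type) sensitivity indices of this kind can be estimated with a sampling scheme like the following sketch, which applies a standard pick-and-freeze estimator to a toy three-parameter stand-in for the sea ice model:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(p):
    """Toy stand-in for the sea ice model: parameter vector -> scalar output
    (e.g. hemispheric ice area); non-linear and non-additive on purpose."""
    return p[..., 0] + 2.0 * p[..., 1] ** 2 + p[..., 0] * p[..., 2]

n, k = 20000, 3
A = rng.uniform(0, 1, (n, k))
B = rng.uniform(0, 1, (n, k))
fA, fB = model(A), model(B)
var_y = np.concatenate([fA, fB]).var()

# Pick-and-freeze (Saltelli-style): replace column i of A with column i of B,
# so paired runs share only parameter i -> first-order index S1[i]
for i in range(k):
    AB = A.copy()
    AB[:, i] = B[:, i]
    s1 = np.mean(fB * (model(AB) - fA)) / var_y
    print(f"S1[{i}] ~ {s1:.2f}")
```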
A Methodological Review of US Budget-Impact Models for New Drugs.
Mauskopf, Josephine; Earnshaw, Stephanie
2016-11-01
A budget-impact analysis is required by many jurisdictions when adding a new drug to the formulary. However, previous reviews have indicated that adherence to methodological guidelines is variable. In this methodological review, we assess the extent to which US budget-impact analyses for new drugs use recommended practices. We describe recommended practice for seven key elements in the design of a budget-impact analysis. Targeted literature searches for US studies reporting estimates of the budget impact of a new drug were performed, and we prepared a summary of how each study addressed the seven key elements. The primary finding from this review is that recommended practice is not followed in many budget-impact analyses. For example, we found that growth in the treated population size and/or changes in disease-related costs expected during the model time horizon for more effective treatments were not included in several analyses of chronic conditions. In addition, not all drug-related costs were captured in the majority of the models. Finally, for most studies, one-way sensitivity and scenario analyses were very limited, and the ranges used in one-way sensitivity analyses were frequently arbitrary percentages rather than being data driven. The conclusions from our review are that changes in population size, disease-severity mix, and/or disease-related costs should be properly accounted for to avoid over- or underestimating the budget impact. Since each budget holder might have different perspectives and different values for many of the input parameters, it is also critical for published budget-impact analyses to include extensive sensitivity and scenario analyses based on realistic input values.
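A minimal sketch of the kind of data-driven one-way sensitivity analysis the review recommends, using low/high values from data (e.g., confidence intervals) rather than arbitrary percentages; the model, names and numbers are all illustrative:

```python
# One-way sensitivity analysis for a toy budget-impact model.

def budget_impact(pop, uptake, cost_new, cost_old, offset):
    treated = pop * uptake
    return treated * (cost_new - cost_old - offset)

base = dict(pop=100_000, uptake=0.10, cost_new=12_000, cost_old=9_000, offset=500)
ranges = {                      # e.g. 95% CIs from registry or trial data
    "pop": (80_000, 120_000),
    "uptake": (0.05, 0.20),
    "offset": (0, 1_500),
}

base_value = budget_impact(**base)
for name, (lo, hi) in ranges.items():
    vals = [budget_impact(**{**base, name: v}) for v in (lo, hi)]
    print(f"{name:8s} swing: {min(vals) - base_value:+,.0f} "
          f"to {max(vals) - base_value:+,.0f}")
```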
The use of Meteonorm weather generator for climate change studies
NASA Astrophysics Data System (ADS)
Remund, J.; Müller, S. C.; Schilter, C.; Rihm, B.
2010-09-01
The global climatological database Meteonorm (www.meteonorm.com) is widely used as meteorological input for the simulation of solar applications and buildings. It is a combination of a climate database, a spatial interpolation tool and a stochastic weather generator. In this way, typical years with hourly or one-minute time resolution can be calculated for any site. The input of Meteonorm for global radiation is the Global Energy Balance Archive (GEBA, http://proto-geba.ethz.ch). All other meteorological parameters are taken from databases of WMO and NCDC (periods 1961-90 and 1996-2005). The stochastic generation of global radiation is based on a Markov chain model for daily values and an autoregressive model for hourly and minute values (Aguiar and Collares-Pereira, 1988 and 1992). The generation of temperature is based on global radiation and measured distributions of daily temperature values from approx. 5000 sites. Meteonorm also generates additional parameters like precipitation, wind speed or radiation parameters such as diffuse and direct normal irradiance. Meteonorm can also be used for climate change studies. Instead of climatological values, IPCC AR4 results are used as input. An average of all 18 publicly available models was computed at a resolution of 1°. Anomalies of temperature, precipitation and global radiation for the three scenarios B1, A1B and A2 have been included. With the combination of Meteonorm's current database 1961-90, the interpolation algorithms and the stochastic generation, typical years can be calculated for any site, for different scenarios and for any period between 2010 and 2200. From the analysis of year-to-year and month-to-month variations of temperature, precipitation and global radiation over the past ten years, as well as of climate model forecasts (from the PRUDENCE project, http://prudence.dmi.dk), a simple autoregressive model has been formed that is used to generate realistic monthly time series for future periods. Meteonorm can therefore be used as a relatively simple method to enhance spatial and temporal resolution, instead of using complicated and time-consuming downscaling methods based on regional climate models. The combination of Meteonorm, gridded historical data (based on the work of Luterbach et al.) and IPCC results has been used for vegetation simulation studies between 1660 and 2600 (first version, based on the IS92a scenario and the limited time period 1950-2100, published at: http://www.pbl.nl/images/H5_Part2_van%20CCE_opmaak%28def%29_tcm61-46625.pdf). It is also applicable to other adaptation studies, e.g. for road surfaces or building simulation. In Meteonorm 6.1, one scenario (IS92a) and one climate model (Hadley CM3) have been included. In the new Meteonorm 7 (coming spring 2011), the model averages of the three above-mentioned IPCC AR4 scenarios will be included.
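The daily/hourly stochastic generation can be caricatured in a few lines: a Markov chain selects a daily clearness state, and an AR(1) process adds hourly deviations. The transition matrix and coefficients below are made up for illustration and are not Meteonorm's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-state (clear/overcast) daily Markov chain plus AR(1) hourly deviations:
# a toy version of the Aguiar and Collares-Pereira style scheme cited above.
P = np.array([[0.7, 0.3],      # clear -> clear/overcast
              [0.4, 0.6]])     # overcast -> clear/overcast
daily_kt = {0: 0.65, 1: 0.30}  # mean daily clearness index per state

state, series = 0, []
for day in range(30):
    state = rng.choice(2, p=P[state])
    eps = 0.0
    for hour in range(24):
        eps = 0.8 * eps + rng.normal(0.0, 0.05)  # AR(1) hourly deviations
        series.append(max(0.0, daily_kt[state] + eps))

print(len(series), "hourly clearness values; first day:", series[:4])
```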
Karmakar, Chandan; Udhayakumar, Radhagayathri K.; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we have analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length heart rate variability (HRV) signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variation of DistEn and its performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and the best performance in differentiating physiological and pathological conditions across input parameter choices among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series. PMID:28979215
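For reference, a compact sketch of the DistEn computation as defined in the literature (embed the series, collect pairwise Chebyshev distances, histogram into M bins, take the normalized Shannon entropy); the parameter values follow common choices rather than this paper's exact settings:

```python
import numpy as np

def dist_en(x, m=2, M=512):
    """Distribution entropy: embed in m dimensions, take all pairwise
    Chebyshev distances, estimate their distribution with an M-bin histogram,
    and return the normalized Shannon entropy of that histogram."""
    x = np.asarray(x, dtype=float)
    n = len(x) - m + 1
    emb = np.lib.stride_tricks.sliding_window_view(x, m)        # (n, m) vectors
    d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)   # Chebyshev
    d = d[np.triu_indices(n, k=1)]                              # i < j pairs
    p, _ = np.histogram(d, bins=M)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(M)

rr = np.random.default_rng(5).normal(0.8, 0.05, 300)  # synthetic RR intervals (s)
print(dist_en(rr, m=2, M=512))
```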
Application of artificial neural networks to assess pesticide contamination in shallow groundwater
Sahoo, G.B.; Ray, C.; Mehnert, E.; Keefer, D.A.
2006-01-01
In this study, a feed-forward back-propagation neural network (BPNN) was developed and applied to predict pesticide concentrations in groundwater monitoring wells. Pesticide concentration data are challenging to analyze because they tend to be highly censored. Input data to the neural network included the categorical indices of depth to aquifer material, pesticide leaching class, aquifer sensitivity to pesticide contamination, time (month) of sample collection, well depth, depth to water from land surface, and additional travel distance in the saturated zone (i.e., distance from land surface to midpoint of well screen). The output of the neural network was the total pesticide concentration detected in the well. The model predictions produced good agreement with observed data in terms of correlation coefficient (R = 0.87) and pesticide detection efficiency (E = 89%), as well as a good match between the observed and predicted "class" groups. The relative importance of input parameters to pesticide occurrence in groundwater was examined in terms of R, E, mean error (ME), root mean square error (RMSE), and pesticide occurrence "class" groups by eliminating some key input parameters from the model. Well depth and time of sample collection were the most sensitive input parameters for predicting the pesticide contamination potential of a well. This implies that wells tapping shallow aquifers are more vulnerable to pesticide contamination than wells tapping deeper aquifers. Pesticide occurrences during post-application months (June through October) were found to be 2.5 to 3 times higher than pesticide occurrences during other months (November through April). The BPNN was used to rank the input parameters with the highest potential to contaminate groundwater, including two original and five ancillary parameters. The two original parameters are depth to aquifer material and pesticide leaching class. When these two parameters were the only input parameters for the BPNN, they were not able to predict contamination potential. However, when they were used with other parameters, the predictive performance of the BPNN in terms of R, E, ME, RMSE, and pesticide occurrence "class" groups increased. Ancillary data include data collected during the study, such as well depth and time of sample collection. The BPNN indicated that the ancillary data had more predictive power than the original data. The BPNN results will help researchers identify parameters to improve maps of aquifer sensitivity to pesticide contamination. © 2006 Elsevier B.V. All rights reserved.
Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Douglas, Freddie; Bourgeois, Edit Kaminsky
2005-01-01
The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file containing the parameters of a planned test or series of tests, selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).
Comparisons of Solar Wind Coupling Parameters with Auroral Energy Deposition Rates
NASA Technical Reports Server (NTRS)
Elsen, R.; Brittnacher, M. J.; Fillingim, M. O.; Parks, G. K.; Germany, G. A.; Spann, J. F., Jr.
1997-01-01
Measurement of the global rate of energy deposition in the ionosphere via auroral particle precipitation is one of the primary goals of the Polar UVI program and is an important component of the ISTP program. The instantaneous rate of energy deposition for the entire month of January 1997 has been calculated by applying models to the UVI images and is presented by Fillingim et al. in this session. A number of parameters that predict the rate of coupling of solar wind energy into the magnetosphere have been proposed in the last few decades. Some of these parameters, such as the epsilon parameter of Perrault and Akasofu, depend on the instantaneous values in the solar wind. Other parameters depend on the integrated values of solar wind parameters, especially IMF Bz, e.g., applied flux, which predicts the net transfer of magnetic flux to the tail. While these parameters have often been used successfully in substorm studies, their validity in terms of global energy input has not yet been ascertained, largely because data such as those supplied by the ISTP program were lacking. We have calculated these and other energy coupling parameters for January 1997 using solar wind data provided by WIND and other solar wind monitors. The rates of energy input predicted by these parameters are compared to those measured through UVI data, and correlations are sought. Whether these parameters are better at providing an instantaneous rate of energy input or an average input over some time period is addressed. We also study whether either type of parameter may provide better correlations if a time delay is introduced; if so, this time delay may provide a characteristic time for energy transport in the coupled solar wind-magnetosphere-ionosphere system.
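As an example of such a coupling function, the epsilon parameter can be evaluated directly from solar wind measurements. The sketch below uses the commonly quoted SI form with an empirical scale length of about 7 Earth radii; it is written from the general literature, not from this paper's exact formulation:

```python
import numpy as np

MU0 = 4e-7 * np.pi
L0 = 7 * 6.371e6  # empirical scale length, ~7 Earth radii (m)

def epsilon_coupling(v, by, bz):
    """Epsilon coupling parameter (W) in a common SI form:
    epsilon = (4*pi/mu0) * v * B^2 * sin^4(theta/2) * l0^2,
    with theta the IMF clock angle. Inputs: v in m/s, By/Bz in tesla."""
    b = np.hypot(by, bz)
    theta = np.arctan2(by, bz)
    return (4 * np.pi / MU0) * v * b**2 * np.sin(theta / 2) ** 4 * L0**2

# Typical southward-IMF conditions: 450 km/s, By = 3 nT, Bz = -5 nT
print(f"{epsilon_coupling(450e3, 3e-9, -5e-9):.2e} W")
```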
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative priors for the error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter-subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface water-groundwater interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has a substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweetser, John David
2013-10-01
This report details Sculpt's implementation from a user's perspective. Sculpt is an automatic hexahedral mesh generation tool developed at Sandia National Labs by Steve Owen. 54 predetermined test cases are studied while varying the input parameters (Laplace iterations, optimization iterations, optimization threshold, number of processors) and measuring the quality of the resultant mesh. This information is used to determine the optimal input parameters to use for an unknown input geometry. The overall characteristics are covered in Chapter 1. The specific details of every case are then given in Appendix A. Finally, example Sculpt inputs are given in B.1 and B.2.
Bustamante, Carlos D.; Valero-Cuevas, Francisco J.
2010-01-01
The field of complex biomechanical modeling has begun to rely on Monte Carlo techniques to investigate the effects of parameter variability and measurement uncertainty on model outputs, search for optimal parameter combinations, and define model limitations. However, advanced stochastic methods to perform data-driven explorations, such as Markov chain Monte Carlo (MCMC), become necessary as the number of model parameters increases. Here, we demonstrate the feasibility of, and what is to our knowledge the first use of, an MCMC approach to improve the fitness of realistically large biomechanical models. We used a Metropolis-Hastings algorithm to search increasingly complex parameter landscapes (3, 8, 24, and 36 dimensions) to uncover underlying distributions of anatomical parameters of a "truth model" of the human thumb on the basis of simulated kinematic data (thumbnail location, orientation, and linear and angular velocities) polluted by zero-mean, uncorrelated multivariate Gaussian "measurement noise." Driven by these data, ten Markov chains searched each model parameter space for the subspace that best fit the data (posterior distribution). As expected, the convergence time increased, more local minima were found, and marginal distributions broadened as the parameter space complexity increased. In the 36-D scenario, some chains found local minima, but the majority of chains converged to the true posterior distribution (confirmed using a cross-validation dataset), thus demonstrating the feasibility and utility of these methods for realistically large biomechanical problems. PMID:19272906
Technical Note: Artificial coral reef mesocosms for ocean acidification investigations
NASA Astrophysics Data System (ADS)
Leblud, J.; Moulin, L.; Batigny, A.; Dubois, P.; Grosjean, P.
2014-11-01
The design and evaluation of replicated artificial mesocosms are presented in the context of a thirteen-month experiment on the effects of ocean acidification on tropical coral reefs. They are defined here as (semi-)closed laboratory mesocosms (i.e., with or without water exchange with the reef) offering a more realistic physico-chemical environment than microcosms. Important physico-chemical parameters (i.e., pH, pO2, pCO2, total alkalinity, temperature, salinity, total alkaline earth metals and nutrient availability) were successfully monitored and controlled. Daily variations of irradiance and pH were applied to approach field conditions. The results highlight that it was possible to maintain realistic physico-chemical parameters, including daily changes, in the artificial mesocosms. On the other hand, the two identical artificial mesocosms evolved differently in terms of global community oxygen budgets, although the initial biological communities and physico-chemical parameters were comparable. Artificial reef mesocosms thus seem to leave enough degrees of freedom for the enclosed community of living organisms to organize and change along possibly diverging pathways.
A baroclinic quasigeostrophic open ocean model
NASA Technical Reports Server (NTRS)
Miller, R. N.; Robinson, A. R.; Haidvogel, D. B.
1983-01-01
A baroclinic quasigeostrophic open ocean model is presented, calibrated by a series of test problems, and demonstrated to be feasible and efficient for application to realistic mid-oceanic mesoscale eddy flow regimes. Two methods of treating the depth dependence of the flow, a finite difference method and a collocation method, are tested and intercompared. Sample Rossby wave calculations with and without advection are performed with constant stratification and two levels of nonlinearity, one weaker than and one typical of real ocean flows. Using exact analytical solutions for comparison, the accuracy and efficiency of the model are tabulated as functions of the computational parameters and stability limits set; typically, errors were controlled to between 1 percent and 10 percent RMS after two wave periods. Further Rossby wave tests with realistic stratification and wave parameters chosen to mimic real ocean conditions were performed to determine computational parameters for use with real and simulated data. Finally, a prototype calculation with quasi-turbulent simulated data was performed successfully, which demonstrates the practicality of the model for scientific use.
NASA Astrophysics Data System (ADS)
Seo, Hyeon; Kim, Donghyeon; Jun, Sung Chan
2016-06-01
Electrical brain stimulation (EBS) is an emerging therapy for the treatment of neurological disorders, and computational modeling studies of EBS have been used to determine the optimal parameters for highly cost-effective electrotherapy. Recent notable growth in computing capability has enabled researchers to consider an anatomically realistic head model that represents the full head and complex geometry of the brain rather than the previous simplified partial head model (extruded slab) that represents only the precentral gyrus. In this work, subdural cortical stimulation (SuCS) was found to offer a better understanding of the differential activation of cortical neurons in the anatomically realistic full-head model than in the simplified partial-head models. We observed that layer 3 pyramidal neurons had comparable stimulation thresholds in both head models, while layer 5 pyramidal neurons showed a notable discrepancy between the models; in particular, layer 5 pyramidal neurons demonstrated asymmetry in the thresholds and action potential initiation sites in the anatomically realistic full-head model. Overall, the anatomically realistic full-head model may offer a better understanding of layer 5 pyramidal neuronal responses. Accordingly, the effects of using the realistic full-head model in SuCS are compelling in computational modeling studies, even though this modeling requires substantially more effort.
Knowledge system and method for simulating chemical controlled release device performance
Cowan, Christina E.; Van Voris, Peter; Streile, Gary P.; Cataldo, Dominic A.; Burton, Frederick G.
1991-01-01
A knowledge system for simulating the performance of a controlled release device is provided. The system includes an input device through which the user selectively inputs one or more data parameters. The data parameters comprise first parameters including device parameters, media parameters, active chemical parameters and device release rate; and second parameters including the minimum effective inhibition zone of the device and the effective lifetime of the device. The system also includes a judgemental knowledge base which includes logic for 1) determining at least one of the second parameters from the release rate and the first parameters and 2) determining at least one of the first parameters from the other of the first parameters and the second parameters. The system further includes a device for displaying the results of the determinations to the user.
Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.
1998-01-01
A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input, including a preset tolerance, against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input, including a preset tolerance, against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from each of all the sensors are compared against the last validated measurement and the value from the sensor input that deviates the least from the last valid measurement is displayed.
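The two-pass scheme translates directly into code. A minimal sketch of the logic described above, with illustrative readings and tolerance:

```python
def validate(inputs, tol, last_valid):
    """Two-pass validation of redundant sensor inputs: average all inputs,
    flag deviators, re-average the good ones; on a validation fault, fall
    back to the sensor closest to the last validated measurement."""
    avg1 = sum(inputs) / len(inputs)
    good = [v for v in inputs if abs(v - avg1) <= tol]   # first deviation check
    if len(good) >= 2:
        avg2 = sum(good) / len(good)
        if all(abs(v - avg2) <= tol for v in good):      # second deviation check
            return avg2, "validated"
    # validation fault: use the input closest to the last validated measurement
    return min(inputs, key=lambda v: abs(v - last_valid)), "fault"

print(validate([100.1, 99.8, 100.3, 112.0], tol=4.0, last_valid=100.0))
# -> (~100.07, 'validated'); the 112.0 reading is flagged and excluded
```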
Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.
2013-01-01
Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived and inherently uncertain. While earlier methods have used a brute force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. We observe that prestretch of elastin and collagen are most critical to maintaining homeostasis, while values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters. PMID:23626380
Statistics of optimal information flow in ensembles of regulatory motifs
NASA Astrophysics Data System (ADS)
Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan
2018-02-01
Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
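Maximizing mutual information over the input distribution alone is the classical channel-capacity problem, which for a small discrete channel can be solved by Blahut-Arimoto iteration. A toy sketch; the paper's field-theoretic ensemble calculation is far more general:

```python
import numpy as np

def capacity_bits(p_y_given_x, n_iter=500):
    """Blahut-Arimoto iteration for max_q I(X;Y) of a discrete memoryless
    channel: the 'tuning of the input variable only' optimization, here for
    a toy noisy input-output channel standing in for a regulatory motif."""
    nx, _ = p_y_given_x.shape
    q = np.full(nx, 1.0 / nx)
    for _ in range(n_iter):
        p_y = q @ p_y_given_x
        d = np.sum(p_y_given_x * np.log(p_y_given_x / p_y), axis=1)
        q = q * np.exp(d)
        q /= q.sum()
    p_y = q @ p_y_given_x
    d = np.sum(p_y_given_x * np.log(p_y_given_x / p_y), axis=1)
    return np.sum(q * d) / np.log(2)

# Two expression levels of the controller, three (noisy) output levels
channel = np.array([[0.80, 0.15, 0.05],
                    [0.10, 0.20, 0.70]])
print(f"capacity ~ {capacity_bits(channel):.3f} bits")
```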
NASA Technical Reports Server (NTRS)
Batterson, James G. (Technical Monitor); Morelli, E. A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed-loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Thrust Vectoring (TV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
Synaptic control of the shape of the motoneuron pool input-output function
Heckman, Charles J.
2017-01-01
Although motoneurons have often been considered to be fairly linear transducers of synaptic input, recent evidence suggests that strong persistent inward currents (PICs) in motoneurons allow neuromodulatory and inhibitory synaptic inputs to induce large nonlinearities in the relation between the level of excitatory input and motor output. To try to estimate the possible extent of this nonlinearity, we developed a pool of model motoneurons designed to replicate the characteristics of motoneuron input-output properties measured in medial gastrocnemius motoneurons in the decerebrate cat with voltage-clamp and current-clamp techniques. We drove the model pool with a range of synaptic inputs consisting of various mixtures of excitation, inhibition, and neuromodulation. We then looked at the relation between excitatory drive and total pool output. Our results revealed that the PICs not only enhance gain but also induce a strong nonlinearity in the relation between the average firing rate of the motoneuron pool and the level of excitatory input. The relation between the total simulated force output and input was somewhat more linear because of higher force outputs in later-recruited units. We also found that the nonlinearity can be increased by increasing neuromodulatory input and/or balanced inhibitory input and minimized by a reciprocal, push-pull pattern of inhibition. We consider the possibility that a flexible input-output function may allow motor output to be tuned to match the widely varying demands of the normal motor repertoire. NEW & NOTEWORTHY Motoneuron activity is generally considered to reflect the level of excitatory drive. However, the activation of voltage-dependent intrinsic conductances can distort the relation between excitatory drive and the total output of a pool of motoneurons. Using a pool of realistic motoneuron models, we show that pool output can be a highly nonlinear function of synaptic input but linearity can be achieved through adjusting the time course of excitatory and inhibitory synaptic inputs. PMID:28053245
Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blankenship, Doug; Sonnenthal, Eric
Archive contains thermal-mechanical simulation input/output files. Included are files which fall into the following categories: (1) Spreadsheets with various input parameter calculations; (2) Final Simulation Inputs; (3) Native-State Thermal-Hydrological Model Input File Folders; (4) Native-State Thermal-Hydrological-Mechanical Model Input Files; (5) THM Model Stimulation Cases. See the 'File Descriptions.xlsx' resource below for additional information on individual files.
An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates
NASA Astrophysics Data System (ADS)
Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin
2014-03-01
The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.
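For reference, the LKB NTCP model evaluates a probit function of the generalized equivalent uniform dose, NTCP = Φ((gEUD − TD50)/(m·TD50)), and the MLE fit maximizes the Bernoulli likelihood of the observed toxicities. A minimal sketch on synthetic data (cohort and parameter values hypothetical):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def lkb_ntcp(eud, td50, m):
    """Lyman-Kutcher-Burman NTCP: probit in the generalized EUD."""
    return norm.cdf((eud - td50) / (m * td50))

def neg_log_likelihood(params, eud, tox):
    td50, m = params
    if td50 <= 0 or m <= 0:          # keep the search in the valid region
        return np.inf
    p = np.clip(lkb_ntcp(eud, td50, m), 1e-9, 1 - 1e-9)
    return -np.sum(tox * np.log(p) + (1 - tox) * np.log(1 - p))

# Synthetic cohort (illustrative only): per-patient gEUD and 0/1 toxicity.
rng = np.random.default_rng(1)
eud = rng.uniform(40, 80, 200)                        # Gy
tox = rng.random(200) < lkb_ntcp(eud, 65.0, 0.18)     # "true" TD50=65, m=0.18

fit = minimize(neg_log_likelihood, x0=[60.0, 0.25], args=(eud, tox),
               method="Nelder-Mead")
print("fitted TD50, m:", fit.x)
```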
From grid cells to place cells with realistic field sizes
2017-01-01
While grid cells in the medial entorhinal cortex (MEC) of rodents have multiple, regularly arranged firing fields, place cells in the cornu ammonis (CA) regions of the hippocampus mostly have single spatial firing fields. Since there are extensive projections from MEC to the CA regions, many models have suggested that a feedforward network can transform grid cell firing into robust place cell firing. However, these models generate place fields that are consistently too small compared to those recorded in experiments. Here, we argue that it is implausible that grid cell activity alone can be transformed into place cells with robust place fields of realistic size in a feedforward network. We propose two solutions to this problem. Firstly, weakly spatially modulated cells, which are abundant throughout EC, provide input to downstream place cells along with grid cells. This simple model reproduces many place cell characteristics as well as results from lesion studies. Secondly, the recurrent connections between place cells in the CA3 network generate robust and realistic place fields. Both mechanisms could work in parallel in the hippocampal formation and this redundancy might account for the robustness of place cell responses to a range of disruptions of the hippocampal circuitry. PMID:28750005
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
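The bias mechanism is the classic errors-in-variables attenuation: noise in the input deflates the least-squares slope by the factor σ²ₓ/(σ²ₓ + σ²ₑ). A numerical illustration with synthetic rainfall-runoff numbers (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_rain = rng.gamma(2.0, 10.0, n)              # "true" storm rainfall (mm)
runoff = 0.6 * true_rain + rng.normal(0, 3, n)   # linear watershed response
meas_rain = true_rain + rng.normal(0, 8, n)      # gauge / sampling error

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print(ols_slope(true_rain, runoff))   # ~0.60: unbiased with error-free input
print(ols_slope(meas_rain, runoff))   # ~0.45: attenuated by the input error
```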
Reliability of system for precise cold forging
NASA Astrophysics Data System (ADS)
Krušič, Vid; Rodič, Tomaž
2017-07-01
The influence of the scatter of the principal input parameters of the forging system on the dimensional accuracy of the product and on tool life for a closed-die forging process is presented in this paper. Scatter of the essential input parameters for the closed-die upsetting process was adjusted to the maximal values that enabled reliable production of a dimensionally accurate product at optimal tool life. An operating window was created within which the maximal scatter of the principal input parameters for the closed-die upsetting process still ensures the desired dimensional accuracy of the product and optimal tool life. Application of the adjustment of the process input parameters is shown on the example of an inner race of a homokinetic joint from mass production. High productivity in the manufacture of elements by cold massive extrusion is often achieved by multiple forming operations performed simultaneously on the same press. By redesigning the time sequence of the forming operations in the multistage forming of a starter barrel during the working stroke, the course of the resultant force is optimized.
Statistical properties of superimposed stationary spike trains.
Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan
2012-06-01
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities, such as count variability, inter-spike interval (ISI) variability, and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
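A minimal sketch of the basic PPD construction and its superposition — exponential intervals shifted by a dead time, then pooled across trains — with illustrative rate and dead-time values (the paper's generators are more general and include gamma processes):

```python
import numpy as np

def ppd_spike_train(rate, dead_time, t_max, rng):
    """Poisson process with dead-time: exponential ISIs shifted by d.
    The exponential hazard is chosen so the stationary rate equals `rate`."""
    lam = rate / (1.0 - rate * dead_time)
    isis = dead_time + rng.exponential(1.0 / lam, size=int(2 * rate * t_max) + 10)
    spikes = np.cumsum(isis)
    return spikes[spikes < t_max]

def superposition(n_trains, rate, dead_time, t_max, rng):
    """Pooled spike train of n independent PPD processes."""
    return np.sort(np.concatenate(
        [ppd_spike_train(rate, dead_time, t_max, rng) for _ in range(n_trains)]))

rng = np.random.default_rng(0)
pooled = superposition(100, rate=10.0, dead_time=0.02, t_max=100.0, rng=rng)

# Count statistics in 100 ms windows: a Fano factor below 1 is the
# refractoriness signature that distinguishes the pool from a Poisson process.
counts, _ = np.histogram(pooled, bins=np.arange(0.0, 100.0, 0.1))
print("count Fano factor:", counts.var() / counts.mean())
```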
Mastin, Larry G.; Guffanti, Marianne C.; Servranckx, R.; Webley, P.; Barsotti, S.; Dean, K.; Durant, A.; Ewert, John W.; Neri, A.; Rose, W.I.; Schneider, David J.; Siebert, L.; Stunder, B.; Swanson, G.; Tupper, A.; Volentik, A.; Waythomas, Christopher F.
2009-01-01
During volcanic eruptions, volcanic ash transport and dispersion models (VATDs) are used to forecast the location and movement of ash clouds over hours to days in order to define hazards to aircraft and to communities downwind. Those models use input parameters, called “eruption source parameters”, such as plume height H, mass eruption rate Ṁ, duration D, and the mass fraction m63 of erupted debris finer than about 4ϕ or 63 μm, which can remain in the cloud for many hours or days. Observational constraints on the value of such parameters are frequently unavailable in the first minutes or hours after an eruption is detected. Moreover, observed plume height may change during an eruption, requiring rapid assignment of new parameters. This paper reports on a group effort to improve the accuracy of source parameters used by VATDs in the early hours of an eruption. We do so by first compiling a list of eruptions for which these parameters are well constrained, and then using these data to review and update previously studied parameter relationships. We find that the existing scatter in plots of H versus Ṁ yields an uncertainty within the 50% confidence interval of plus or minus a factor of four in eruption rate for a given plume height. This scatter is not clearly attributable to biases in measurement techniques or to well-recognized processes such as elutriation from pyroclastic flows. Sparse data on total grain-size distribution suggest that the mass fraction of fine debris m63 could vary by nearly two orders of magnitude between small basaltic eruptions (∼ 0.01) and large silicic ones (> 0.5). We classify eleven eruption types; four types each for different sizes of silicic and mafic eruptions; submarine eruptions; “brief” or Vulcanian eruptions; and eruptions that generate co-ignimbrite or co-pyroclastic flow plumes. For each eruption type we assign source parameters. We then assign a characteristic eruption type to each of the world's ∼ 1500 Holocene volcanoes. These eruption types and associated parameters can be used for ash-cloud modeling in the event of an eruption, when no observational constraints on these parameters are available.
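The plume-height relationship at the heart of this compilation is commonly quoted as the power law H = 2.00 V^0.241, with H the plume height in km above the vent and V the volumetric (dense-rock equivalent) flow rate in m³/s; inverting it gives the eruption-rate estimate used when only plume height is observed. A sketch, treating the coefficients and the 2500 kg/m³ DRE density as commonly cited values rather than ones stated here:

```python
def mass_eruption_rate(plume_height_km, dre_density=2500.0):
    """Invert the plume height vs. eruption rate power law H = 2.00 * V**0.241
    (H in km above the vent, V in m^3/s dense-rock equivalent) to estimate
    mass eruption rate in kg/s. Coefficients as commonly quoted from this work."""
    volume_rate = (plume_height_km / 2.00) ** (1.0 / 0.241)   # m^3/s DRE
    return volume_rate * dre_density

# A 10 km plume implies roughly 2e6 kg/s -- with the paper's caveat that the
# scatter spans about a factor of four either way.
print(f"{mass_eruption_rate(10.0):.2e} kg/s")
```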
Influence of speckle image reconstruction on photometric precision for large solar telescopes
NASA Astrophysics Data System (ADS)
Peck, C. L.; Wöger, F.; Marino, J.
2017-11-01
Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
Informing soil models using pedotransfer functions: challenges and perspectives
NASA Astrophysics Data System (ADS)
Pachepsky, Yakov; Romano, Nunzio
2015-04-01
Pedotransfer functions (PTFs) are empirical relationships between parameters of soil models and more easily obtainable data on soil properties. PTFs have become an indispensable tool in modeling soil processes. As alternative methods to direct measurements, they bridge the data we have and the data we need by using soil survey and monitoring data to enable modeling for real-world applications. Pedotransfer is extensively used in soil models addressing the most pressing environmental issues. The following is an attempt to provoke a discussion by listing current issues faced by PTF development.
1. As more intricate biogeochemical processes are being modeled, development of PTFs for parameters of those processes becomes essential.
2. Since the equations to express PTF relationships are essentially unknown, there has been a trend to employ highly nonlinear equations, e.g. neural networks, which in theory are flexible enough to simulate any dependence. This, however, comes with the penalty of a large number of coefficients that are difficult to estimate reliably. A preliminary classification applied to PTF inputs, with PTF development for each of the resulting groups, may provide simple, transparent, and more reliable pedotransfer equations.
3. The multiplicity of models, i.e. the presence of several models producing the same output variables, is commonly found in soil modeling and is a typical feature of the PTF research field. However, PTF intercomparisons are lagging behind PTF development. This is aggravated by the fact that coefficients of PTFs based on machine-learning methods are usually not reported.
4. The existence of PTFs is the result of some soil processes. Using models of those processes to generate PTFs and, more generally, developing physics-based PTFs remains to be explored.
5. Estimating the variability of soil model parameters becomes increasingly important as newer modeling technologies such as data assimilation, ensemble modeling, and model abstraction become progressively more popular. Variability PTFs rely on the spatio-temporal dynamics of soil variables, which opens new sources of PTF inputs stemming from technology advances such as monitoring networks, remote and proximal sensing, and omics.
6. Burgeoning PTF development has so far not addressed several persistent regional knowledge gaps. Remarkably little effort has gone into PTF development for saline soils, calcareous and gypsiferous soils, peat soils, paddy soils, soils with well-expressed shrink-swell behavior, and soils affected by freeze-thaw cycles.
7. Soils from tropical regions are quite often considered as a pseudo-entity for which a single PTF can be applied. This assumption will not be needed as more regional data are accumulated and analyzed.
8. Other advances in regional PTFs will be possible due to the presence of large databases on region-specific useful PTF inputs such as moisture equivalent, laser diffractometry data, or soil specific surface.
9. Most flux models in soils, be it for water, solutes, gas, or heat, involve parameters that are scale-dependent. Including scale dependencies in PTFs will be critical to improve PTF usability.
10. Another scale-related matter is pedotransfer for coarse-scale soil modeling, for example, in weather or climate models. Soil hydraulic parameters in these models cannot be measured, and the efficiency of the pedotransfer can be evaluated only in terms of its utility. There is a pressing need to determine combinations of pedotransfer and upscaling procedures that can lead to the derivation of suitable coarse-scale soil model parameters.
11. The spatially coarse scale often assumes a coarse temporal support, and that may lead to including in PTFs other environmental variables such as topographic, weather, and management attributes.
12. Some PTF inputs are time- or space-dependent, and yet little is known about whether the spatial or temporal structure of PTF outputs is properly predicted from such inputs.
13. Further exploration is needed to use PTFs as a source of hypotheses on, and insights into, relationships between soil processes and soil composition, as well as between soil structure and soil functioning.
PTFs are empirical relationships, and their accuracy outside the database used for PTF development is essentially unknown. Therefore they should never be considered an ultimate source of parameters in soil modeling. Rather, they strive to provide a balance between accuracy and availability. The primary role of PTFs is to assist in modeling for screening and comparative purposes, establishing ranges and/or probability distributions of model parameters, and creating realistic synthetic soil datasets and scenarios. Developing and improving PTFs will remain the mainstream way of packaging data and knowledge for applications of soil modeling.
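Point 2 above contrasts flexible machine-learning PTFs with simple, transparent regression equations. A minimal sketch of the transparent end of that spectrum — an ordinary-least-squares PTF predicting water content at field capacity from texture and organic matter — fitted to synthetic data, with all coefficients hypothetical:

```python
import numpy as np

# Synthetic soil survey data (hypothetical): sand, clay, organic matter (%)
rng = np.random.default_rng(0)
n = 500
sand = rng.uniform(5, 90, n)
clay = rng.uniform(2, 60, n)
om = rng.uniform(0.5, 8, n)

# "Measured" water content at field capacity (cm3/cm3), synthetic relation
theta_fc = (0.40 - 0.0025 * sand + 0.0015 * clay + 0.01 * om
            + rng.normal(0, 0.02, n))

# Fit the PTF by ordinary least squares: theta_fc ~ a + b*sand + c*clay + d*om
X = np.column_stack([np.ones(n), sand, clay, om])
coef, *_ = np.linalg.lstsq(X, theta_fc, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - theta_fc) ** 2))
print("PTF coefficients:", np.round(coef, 4), "RMSE:", round(rmse, 4))
```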
Rosen, I G; Luczak, Susan E; Weiss, Jordan
2014-03-15
We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
NASA Astrophysics Data System (ADS)
Šimkanin, Ján; Kyselica, Juraj
2017-12-01
Numerical simulations of the geodynamo are becoming more realistic because of advances in computer technology. Here, the geodynamo model is investigated numerically at extremely low Ekman and magnetic Prandtl numbers using the PARODY dynamo code. These parameters are more realistic than those used in previous numerical studies of the geodynamo. Our model is based on the Boussinesq approximation, and the temperature gradient between the upper and lower boundaries is the source of convection. This study attempts to answer the question of how realistic the geodynamo models are. Numerical results show that our dynamo belongs to the strong-field dynamos. The generated magnetic field is dipolar and large-scale, while convection is small-scale and sheet-like flows (plumes) are preferred to columnar convection. The scales of the magnetic and velocity fields are separated, which enables hydromagnetic dynamos to maintain the magnetic field at low magnetic Prandtl numbers. The inner core rotation rate is lower than that in previous geodynamo models. On the other hand, the dimensional magnitudes of the velocity and magnetic fields, and those of the magnetic and viscous dissipation, are larger than those expected in the Earth's core, owing to the chosen parameter range.
NASA Astrophysics Data System (ADS)
Shukla, Hemant; Bonissent, Alain
2017-04-01
We present the parameterized simulation of an integral-field unit (IFU) slicer spectrograph and its applications in spectroscopic studies, namely, for probing dark energy with type Ia supernovae. The simulation suite is called the fast-slicer IFU simulator (FISim). The data flow of FISim realistically models the optics of the IFU along with the propagation effects, including cosmological, zodiacal, instrumentation and detector effects. FISim simulates the spectrum extraction by computing the error matrix on the extracted spectrum. The applications for type Ia supernova spectroscopy are used to establish the efficacy of the simulator in exploring the wider parametric space, in order to optimize the science and mission requirements. The input spectral models utilize observables such as the optical depth and velocity of the Si II absorption feature in the supernova spectrum as the measured parameters for various studies. Using FISim, we introduce a mechanism for preserving the complete state of a system, called the ∂p/∂f matrix, which allows for compression, reconstruction and spectrum extraction. We also introduce a novel and efficient method for spectrum extraction, called super-optimal spectrum extraction, and we conduct various studies such as optimal point spread function, optimal resolution, and parameter estimation. We demonstrate that for space-based telescopes, the optimal resolution lies in the region near R ≈ 117 for read noise of 1 e⁻ and 7 e⁻, using a 400 km s⁻¹ error threshold on the Si II velocity.
Continuous rainfall simulation for regional flood risk assessment - application in the Austrian Alps
NASA Astrophysics Data System (ADS)
Salinas, Jose Luis; Nester, Thomas; Komma, Jürgen; Blöschl, Günter
2017-04-01
Generation of realistic synthetic spatial rainfall is of pivotal importance for assessing regional hydroclimatic hazard as the input for long-term rainfall-runoff simulations. The correct reproduction of observed rainfall characteristics, such as regional intensity-duration-frequency curves, is necessary to adequately model the magnitude and frequency of flood peaks. Furthermore, the replication of the observed spatial and temporal rainfall correlations makes it possible to model other important hydrological features, such as antecedent soil moisture conditions before extreme rainfall events. In this work, we present an application in the Tirol region (Austrian Alps) of a modification of the model presented by Bardossy and Plate (1992), where precipitation is modeled on a station basis as a multivariate autoregressive (mAR) process in a Normal space and then transformed to a Gamma-distributed space. For the sake of simplicity, the parameters of the Gamma distributions are assumed to vary monthly according to a sinusoidal function, and are calibrated to simultaneously reproduce i) mean annual rainfall, ii) mean daily rainfall amounts, iii) standard deviations of daily rainfall amounts, and iv) the 24-hour intensity-duration-frequency curve. The calibration of the spatial and temporal correlation parameters is performed so that the intensity-duration-frequency curves aggregated at different spatial and temporal scales reproduce the measured ones. Bardossy, A., and E. J. Plate (1992), Space-time model for daily rainfall using atmospheric circulation patterns, Water Resour. Res., 28(5), 1247-1259, doi:10.1029/91WR02589.
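A single-station sketch of the core transform in this model family: an AR(1) process in standard-Normal space mapped through the probability integral transform to censored Gamma-distributed daily amounts. The parameter values and the dry-day treatment are illustrative; the actual model is multivariate across stations with seasonally varying Gamma parameters:

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(0)
n_days, rho = 10_000, 0.45            # temporal correlation in Normal space

# AR(1) in standard-Normal space (unit marginal variance preserved)
z = np.empty(n_days)
z[0] = rng.standard_normal()
eps = rng.standard_normal(n_days) * np.sqrt(1 - rho**2)
for t in range(1, n_days):
    z[t] = rho * z[t - 1] + eps[t]

# Transform to Gamma space; dry days below the censoring threshold.
# Shape/scale here are fixed for brevity; in the model they vary monthly
# and are calibrated against observed rainfall statistics and IDF curves.
p_dry = 0.6                                      # fraction of dry days
u = norm.cdf(z)
rain = np.where(u > p_dry,
                gamma.ppf((u - p_dry) / (1 - p_dry), a=0.8, scale=8.0),
                0.0)
print("wet-day mean:", rain[rain > 0].mean(), "mm; P(dry):", (rain == 0).mean())
```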
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches to the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well. Keywords: Monte Carlo simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
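The Bayesian machinery referred to above can be sketched with a generic random-walk Metropolis sampler; here a trivial surrogate stands in for the expensive Delft3D run, and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
obs = 1.2 + rng.normal(0, 0.1, 50)           # synthetic "observed wave heights"

def log_likelihood(theta):
    """Gaussian misfit of a (stand-in) model prediction against observations.
    In practice, the hydrodynamic model run for parameter theta goes here."""
    model_output = theta[0]                  # trivially simple surrogate
    return -0.5 * np.sum((obs - model_output) ** 2) / 0.1**2

def metropolis(logp, x0, step, n_iter=20_000):
    x, lp = np.array(x0, float), logp(x0)
    chain = np.empty((n_iter, len(x0)))
    for i in range(n_iter):
        prop = x + step * rng.standard_normal(len(x))
        lp_prop = logp(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

chain = metropolis(log_likelihood, x0=[0.5], step=0.05)
print("posterior mean, sd:", chain[5000:].mean(), chain[5000:].std())
```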
Bari, Quazi H; Koenig, Albert
2012-11-01
The aeration rate is a key process control parameter in the forced-aeration composting process because it greatly affects different physico-chemical parameters such as temperature and moisture content, and indirectly influences the biological degradation rate. In this study, the effect of a constant airflow rate on vertical temperature distribution and organic waste degradation in the composting mass is analyzed using a previously developed mathematical model of the composting process. The model was applied to analyze the effect of two different ambient conditions, namely hot and cold, and four different airflow rates (1.5, 3.0, 4.5, and 6.0 m³ m⁻² h⁻¹) on the temperature distribution and organic waste degradation in a given waste mixture. The typical waste mixture had 59% moisture content and 96% volatile solids; however, the proportions could be varied as required. The results suggested that the model can be efficiently used to analyze composting under variable ambient and operating conditions. A lower airflow rate of around 1.5-3.0 m³ m⁻² h⁻¹ was found to be suitable for cold ambient conditions, while a higher airflow rate of around 4.5-6.0 m³ m⁻² h⁻¹ was preferable for hot ambient conditions. The model is flexible in application, allowing changes to any input parameter within a realistic range. It can be widely used for conceptual process design, studies on the effect of ambient conditions, optimization studies in existing composting plants, and process control. Copyright © 2012 Elsevier Ltd. All rights reserved.
Toward more realistic projections of soil carbon dynamics by Earth system models
Luo, Y.; Ahlström, Anders; Allison, Steven D.; Batjes, Niels H.; Brovkin, V.; Carvalhais, Nuno; Chappell, Adrian; Ciais, Philippe; Davidson, Eric A.; Finzi, Adien; Georgiou, Katerina; Guenet, Bertrand; Hararuk, Oleksandra; Harden, Jennifer; He, Yujie; Hopkins, Francesca; Jiang, L.; Koven, Charles; Jackson, Robert B.; Jones, Chris D.; Lara, M.; Liang, J.; McGuire, A. David; Parton, William; Peng, Changhui; Randerson, J.; Salazar, Alejandro; Sierra, Carlos A.; Smith, Matthew J.; Tian, Hanqin; Todd-Brown, Katherine E. O; Torn, Margaret S.; van Groenigen, Kees Jan; Wang, Ying; West, Tristram O.; Wei, Yaxing; Wieder, William R.; Xia, Jianyang; Xu, Xia; Xu, Xiaofeng; Zhou, T.
2016-01-01
Soil carbon (C) is a critical component of Earth system models (ESMs), and its diverse representations are a major source of the large spread across models in the terrestrial C sink from the third to fifth assessment reports of the Intergovernmental Panel on Climate Change (IPCC). Improving soil C projections is a high priority for Earth system modeling in future IPCC and other assessments. To achieve this goal, we suggest that (1) model structures should reflect real-world processes, (2) parameters should be calibrated to match model outputs with observations, and (3) external forcing variables should accurately prescribe the environmental conditions that soils experience. First, most soil C cycle models simulate C input from litter production and C release through decomposition. The latter process has traditionally been represented by first-order decay functions, regulated primarily by temperature, moisture, litter quality, and soil texture. While this formulation captures macroscopic soil organic C (SOC) dynamics well, better understanding is needed of the underlying mechanisms as related to microbial processes, depth-dependent environmental controls, and other processes that strongly affect soil C dynamics. Second, incomplete use of observations in model parameterization is a major cause of bias in soil C projections from ESMs. Optimal parameter calibration with both pool- and flux-based data sets through data assimilation is among the highest priorities for near-term research to reduce biases among ESMs. Third, external variables are represented inconsistently among ESMs, leading to differences in modeled soil C dynamics. We recommend the implementation of traceability analyses to identify how external variables and model parameterizations influence SOC dynamics in different ESMs. Overall, projections of the terrestrial C sink can be substantially improved when reliable data sets are available to select the most representative model structure, constrain parameters, and prescribe forcing fields.
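The traditional first-order formulation mentioned above can be written compactly as dC/dt = I − k·f(T)·g(W)·C. A one-pool sketch with a Q10 temperature modifier and a linear moisture scalar, all parameter values illustrative:

```python
import numpy as np

def soc_first_order(c0, litter_in, k_base, temp, moist, dt=1.0):
    """One-pool first-order soil C model: dC/dt = I - k * f(T) * g(W) * C.
    Q10 temperature response and a linear moisture scalar, both illustrative."""
    f_T = 2.0 ** ((temp - 15.0) / 10.0)      # Q10 = 2, reference 15 degC
    g_W = np.clip(moist, 0.0, 1.0)           # relative soil moisture in [0, 1]
    c = np.empty(len(temp) + 1)
    c[0] = c0
    for t in range(len(temp)):
        decomp = k_base * f_T[t] * g_W[t] * c[t]
        c[t + 1] = c[t] + dt * (litter_in[t] - decomp)
    return c

# 100 years of annual forcing (synthetic)
rng = np.random.default_rng(0)
years = 100
c = soc_first_order(c0=5.0,                       # kg C / m2
                    litter_in=np.full(years, 0.3),
                    k_base=0.05,                  # 1/yr at reference T, W
                    temp=15 + rng.normal(0, 1, years),
                    moist=np.clip(rng.normal(0.7, 0.1, years), 0, 1))
print("stock after 100 yr:", c[-1])   # relaxes toward I / (k * f * g)
```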
How much expert knowledge is it worth to put in conceptual hydrological models?
NASA Astrophysics Data System (ADS)
Antonetti, Manuel; Zappa, Massimiliano
2017-04-01
Both modellers and experimentalists agree on using expert knowledge to improve conceptual hydrological simulations in ungauged basins. However, they use expert knowledge differently, both for hydrologically mapping the landscape and for parameterising a given hydrological model. Modellers generally use very simplified (e.g. topography-based) mapping approaches and invest most of their knowledge in constraining the model by defining parameter and process relational rules. In contrast, experimentalists tend to invest all their detailed and qualitative knowledge about processes to obtain a spatial distribution of areas with different dominant runoff generation processes (DRPs) that is as realistic as possible, and to define plausible narrow value ranges for each model parameter. Since, most of the time, the modelling goal is exclusively to simulate runoff at a specific site, even strongly simplified hydrological classifications can lead to satisfying results due to the equifinality of hydrological models, overfitting problems, and the numerous uncertainty sources affecting runoff simulations. Therefore, to test to what extent expert knowledge can improve simulation results under uncertainty, we applied a typical modellers' framework, relying on parameter and process constraints defined from expert knowledge, to several catchments on the Swiss Plateau. To map the spatial distribution of the DRPs, mapping approaches with increasing involvement of expert knowledge were used. Simulation results highlighted the potential added value of using all the expert knowledge available on a catchment. Combinations of event types and landscapes where even a simplified mapping approach can lead to satisfying results were also identified. Finally, the uncertainty originating from the different mapping approaches was compared with that linked to the meteorological input data and the catchment initial conditions.
A novel method for predicting the power outputs of wave energy converters
NASA Astrophysics Data System (ADS)
Wang, Yingguang
2018-03-01
This paper focuses on realistically predicting the power outputs of wave energy converters operating in shallow water nonlinear waves. A heaving two-body point absorber is utilized as a specific calculation example, and the generated power of the point absorber has been predicted by using a novel method (a nonlinear simulation method) that incorporates a second order random wave model into a nonlinear dynamic filter. It is demonstrated that the second order random wave model in this article can be utilized to generate irregular waves with realistic crest-trough asymmetries, and consequently, more accurate generated power can be predicted by subsequently solving the nonlinear dynamic filter equation with the nonlinearly simulated second order waves as inputs. The research findings demonstrate that the novel nonlinear simulation method in this article can be utilized as a robust tool for ocean engineers in their design, analysis and optimization of wave energy converters.
Geostatistical borehole image-based mapping of karst-carbonate aquifer pores
Sukop, Michael; Cunningham, Kevin J.
2016-01-01
Quantification of the character and spatial distribution of porosity in carbonate aquifers is important as input into computer models used in the calculation of intrinsic permeability and for next-generation, high-resolution groundwater flow simulations. Digital, optical, borehole-wall image data from three closely spaced boreholes in the karst-carbonate Biscayne aquifer in southeastern Florida are used in geostatistical experiments to assess the capabilities of various methods to create realistic two-dimensional models of vuggy megaporosity and matrix-porosity distribution in the limestone that composes the aquifer. When the borehole image data alone were used as the model training image, multiple-point geostatistics failed to detect the known spatial autocorrelation of vuggy megaporosity and matrix porosity among the three boreholes, which were only 10 m apart. Variogram analysis and subsequent Gaussian simulation produced results that showed a realistic conceptualization of horizontal continuity of strata dominated by vuggy megaporosity and matrix porosity among the three boreholes.
Futscher, Moritz H; Ehrler, Bruno
2017-09-08
Perovskite/Si tandem solar cells have the potential to considerably out-perform conventional solar cells. Under standard test conditions, perovskite/Si tandem solar cells already outperform the Si single junction. Under realistic conditions, however, as we show, tandem solar cells made from current record cells are hardly more efficient than the Si cell alone. We model the performance of realistic perovskite/Si tandem solar cells under real-world climate conditions, by incorporating parasitic cell resistances, nonradiative recombination, and optical losses into the detailed-balance limit. We show quantitatively that when optimizing these parameters in the perovskite top cell, perovskite/Si tandem solar cells could reach efficiencies above 38% under realistic conditions, even while leaving the Si cell untouched. Despite the rapid efficiency increase of perovskite solar cells, our results emphasize the need for further material development, careful device design, and light management strategies, all necessary for highly efficient perovskite/Si tandem solar cells. PMID:28920081
Heliocentric interplanetary low thrust trajectory optimization program, supplement 1, part 2
NASA Technical Reports Server (NTRS)
Mann, F. I.; Horsewood, J. L.
1978-01-01
The improvements made to the HILTOP electric propulsion trajectory computer program are described. A more realistic propulsion system model was implemented in which the various thrust subsystem efficiencies and the specific impulse are modeled as variable functions of the power available to the propulsion system. The number of operating thrusters is staged, and the beam voltage is selected from a set of five (or fewer) constant voltages, based upon the application of variational calculus. The constant beam voltages may be optimized individually or collectively. The propulsion system logic is activated by a single program input key in such a manner as to preserve the HILTOP logic. An analysis describing these features, a complete description of program input quantities, and sample cases of computer output illustrating the program capabilities are presented.
Generating Scenarios When Data Are Missing
NASA Technical Reports Server (NTRS)
Mackey, Ryan
2007-01-01
The Hypothetical Scenario Generator (HSG) is being developed in conjunction with other components of artificial-intelligence systems for automated diagnosis and prognosis of faults in spacecraft, aircraft, and other complex engineering systems. The HSG accepts, as input, possibly incomplete data on the current state of a system. The HSG models a potential fault scenario as an ordered disjunctive tree of conjunctive consequences, wherein the ordering is based upon the likelihood that a particular conjunctive path will be taken for the given set of inputs. The computation of likelihood is based partly on a numerical ranking of the degree of completeness of the data with respect to satisfaction of the antecedent conditions of prognostic rules. The results from the HSG are then used by a model-based artificial-intelligence subsystem to predict realistic scenarios and states.
NASA Technical Reports Server (NTRS)
Allgood, Daniel C.; Graham, Jason S.; McVay, Greg P.; Langford, Lester L.
2008-01-01
A unique assessment of acoustic similarity scaling laws and acoustic analogy methodologies in predicting the far-field acoustic signature from a sub-scale altitude rocket test facility at the NASA Stennis Space Center was performed. A directional, point-source similarity analysis was implemented for predicting the acoustic far-field. In this approach, experimental acoustic data obtained from "similar" rocket engine tests were appropriately scaled using key geometric and dynamic parameters. The accuracy of this engineering-level method is discussed by comparing the predictions with acoustic far-field measurements obtained. In addition, a CFD solver was coupled with a Lilley's acoustic analogy formulation to determine the improvement of using a physics-based methodology over an experimental correlation approach. In the current work, steady-state Reynolds-averaged Navier-Stokes calculations were used to model the internal flow of the rocket engine and altitude diffuser. These internal flow simulations provided the necessary realistic input conditions for external plume simulations. The CFD plume simulations were then used to provide the spatial turbulent noise source distributions in the acoustic analogy calculations. Preliminary findings of these studies will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.
2016-01-06
Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration, as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.
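A compact sketch of the pipeline described above — rotate the sea-state data with PCA, then trace the I-FORM circle of radius β in standard-normal space and map it back. Synthetic data throughout, and simple Gaussian fits to the rotated components where the paper develops improved distribution and parameter fitting:

```python
import numpy as np
from scipy.stats import norm, weibull_min

# Synthetic hourly sea-state record standing in for hindcast data
rng = np.random.default_rng(0)
n = 20 * 365 * 24                                  # 20 years of hourly states
hs = weibull_min.rvs(1.6, scale=1.8, size=n, random_state=rng)  # Hs (m)
te = 5.0 + 1.8 * np.sqrt(hs) + rng.normal(0, 0.4, n)            # Te (s)

# PCA: rotate (Hs, Te) into uncorrelated components
X = np.column_stack([hs, te])
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
pcs = (X - mu) @ Vt.T

# I-FORM: points on the circle of radius beta for a 100-year return period
p_exceed = 1.0 / (100 * 365 * 24)                  # one sea state per hour
beta = norm.ppf(1.0 - p_exceed)
theta = np.linspace(0.0, 2.0 * np.pi, 361)
u = beta * np.column_stack([np.cos(theta), np.sin(theta)])

# Map the standard-normal coordinates through fits to the rotated data
# (Gaussian here for brevity; the paper fits improved distributions)
m, s = pcs.mean(axis=0), pcs.std(axis=0)
contour = (m + s * u) @ Vt + mu                    # back to (Hs, Te) space
print("max Hs on contour: %.1f m" % contour[:, 0].max())
```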
Properties of the electron cloud in a high-energy positron and electron storage ring
Harkay, K. C.; Rosenberg, R. A.
2003-03-20
Low-energy, background electrons are ubiquitous in high-energy particle accelerators. Under certain conditions, interactions between this electron cloud and the high-energy beam can give rise to numerous effects that can seriously degrade the accelerator performance. These effects range from vacuum degradation to collective beam instabilities and emittance blowup. Although electron-cloud effects were first observed two decades ago in a few proton storage rings, they have in recent years been widely observed and intensely studied in positron and proton rings. Electron-cloud diagnostics developed at the Advanced Photon Source enabled for the first time detailed, direct characterization of the electron-cloud properties in a positron and electron storage ring. From in situ measurements of the electron flux and energy distribution at the vacuum chamber wall, electron-cloud production mechanisms and details of the beam-cloud interaction can be inferred. A significant longitudinal variation of the electron cloud is also observed, due primarily to geometrical details of the vacuum chamber. Furthermore, such experimental data can be used to provide realistic limits on key input parameters in modeling efforts, leading ultimately to greater confidence in predicting electron-cloud effects in future accelerators.
User's Guide for the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS)
NASA Technical Reports Server (NTRS)
Frederick, Dean K.; DeCastro, Jonathan A.; Litt, Jonathan S.
2007-01-01
This report is a user's guide for the NASA-developed Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) software, which is a transient simulation of a large commercial turbofan engine (up to 90,000-lb thrust) with a realistic engine control system. The software supports easy access to health, control, and engine parameters through a graphical user interface (GUI). C-MAPSS provides the user with a graphical turbofan engine simulation environment in which advanced algorithms can be implemented and tested. C-MAPSS can run user-specified transient simulations, and it can generate state-space linear models of the nonlinear engine model at an operating point. The code has a number of GUI screens that allow point-and-click operation and have editable fields for user-specified input. The software includes an atmospheric model which allows simulation of engine operation at altitudes from sea level to 40,000 ft, Mach numbers from 0 to 0.90, and ambient temperatures from -60 to 103 °F. The package also includes a power-management system that allows the engine to be operated over a wide range of thrust levels throughout the full range of flight conditions.
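State-space linear models at an operating point, as mentioned above, are commonly extracted by numerically differentiating the nonlinear state equations; a generic central-difference sketch (the two-state dynamics below are a toy stand-in, not the C-MAPSS engine model):

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians A = df/dx, B = df/du at a trim point."""
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Toy two-state "engine" (spool speeds) with one input (fuel flow) -- purely
# illustrative dynamics for the linearization procedure.
def f(x, u):
    return np.array([-0.8 * x[0] + 0.3 * x[1] + 1.5 * u[0],
                     0.2 * x[0] - 1.1 * x[1] + 0.4 * u[0]])

A, B = linearize(f, x0=np.array([1.0, 0.9]), u0=np.array([0.5]))
print(A, B, sep="\n")
```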
Scheme for the generation of freely traveling optical trio coherent states
NASA Astrophysics Data System (ADS)
Duc, Truong Minh; Dat, Tran Quang; An, Nguyen Ba; Kim, Jaewan
2013-08-01
Trio coherent states (TCSs) are non-Gaussian three-mode entangled states that can serve as a useful resource for continuous-variable quantum tasks, so their generation is of primary importance. Schemes exist to generate stable TCSs in terms of the vibrational motion of a trapped ion inside a crystal. However, to perform quantum communication and distributed quantum computation, the states should be shared beforehand among distant parties; that is, their modes should be able to be directed to different desired locations in space. In this work, we propose an experimental setup to generate such freely traveling TCSs in terms of optical fields. Our scheme uses standard physical resources, such as coherent states, balanced beam splitters, phase shifters, nonideal on-off photodetectors, and realistic weak cross-Kerr nonlinearities, without the need for single photons or homodyne or heterodyne measurements. We study the dependence of the fidelity of the state generated by our scheme with respect to the target TCS, and the corresponding generation probability, on the parameters involved. In theory, the fidelity can be nearly perfect for arbitrarily weak nonlinearities τ and low photodetector efficiency η, provided that the amplitude |α| of an input coherent state is large enough, namely |α| ≥ 5/(ητ).
Band Offsets at the Interface between Crystalline and Amorphous Silicon from First Principles
NASA Astrophysics Data System (ADS)
Jarolimek, K.; Hazrati, E.; de Groot, R. A.; de Wijs, G. A.
2017-07-01
The band offsets between crystalline and hydrogenated amorphous silicon (a-Si:H) are key parameters governing the charge transport in modern silicon heterojunction solar cells. They are an important input for macroscopic simulators that are used to further optimize the solar cell. Past experimental studies, using x-ray photoelectron spectroscopy (XPS) and capacitance-voltage measurements, have yielded conflicting results on the band offset. Here, we present a computational study on the band offsets. It is based on atomistic models and density-functional theory (DFT). The amorphous part of the interface is obtained by relatively long DFT first-principles molecular-dynamics runs at an elevated temperature on 30 statistically independent samples. In order to obtain a realistic conduction-band position, the electronic structure of the interface is calculated with a hybrid functional. We find a slight asymmetry in the band offsets, where the offset in the valence band (0.29 eV) is larger than in the conduction band (0.17 eV). Our results are in agreement with the latest XPS measurements that report a valence-band offset of 0.3 eV [M. Liebhaber et al., Appl. Phys. Lett. 106, 031601 (2015), 10.1063/1.4906195].
Control Theoretic Modeling and Generated Flow Patterns of a Fish-Tail Robot
NASA Astrophysics Data System (ADS)
Massey, Brian; Morgansen, Kristi; Dabiri, Dana
2003-11-01
Many real-world engineering problems involve understanding and manipulating fluid flows. One of the challenges to further progress in the area of active flow control is the lack of appropriate models that are amenable to control-theoretic studies and algorithm design and also incorporate reasonably realistic fluid dynamic effects. We focus here on modeling and model-verification of bio-inspired actuators (fish-fin type structures) used to control fluid dynamic artifacts that will affect speed, agility, and stealth of Underwater Autonomous Vehicles (UAVs). Vehicles using fish-tail type systems are more maneuverable, can turn in much shorter and more constrained spaces, have lower drag, are quieter and potentially more efficient than those using propellers. We will present control-theoretic models for a simple prototype coupled fluid and mechanical actuator where fluid effects are crudely modeled by assuming only lift, drag, and added mass, while neglecting boundary effects. These models will be tested with different control input parameters on an experimental fish-tail robot with the resulting flow captured with DPIV. Relations between the model, the control function choices, the obtained thrust and drag, and the corresponding flow patterns will be presented and discussed.
A TCP model for external beam treatment of intermediate-risk prostate cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walsh, Sean; Putten, Wil van der
2013-03-15
Purpose: Biological models offer the ability to predict clinical outcomes. The authors describe a model to predict the clinical response of intermediate-risk prostate cancer to external beam radiotherapy for a variety of fractionation regimes. Methods: A fully heterogeneous population-averaged tumor control probability model was fit to clinical outcome data for hyper-, standard, and hypofractionated treatments. The tumor control probability model was then employed to predict the clinical outcome of extreme hypofractionation regimes, as utilized in stereotactic body radiotherapy. Results: The tumor control probability model achieves an excellent level of fit, with an R² value of 0.93 and a root mean squared error of 1.31%, to the clinical outcome data for hyper-, standard, and hypofractionated treatments using realistic values for the biological input parameters. Residuals ≤ 1.0% are produced by the tumor control probability model when compared to clinical outcome data for stereotactic body radiotherapy. Conclusions: The authors conclude that this tumor control probability model, used with the optimized radiosensitivity values obtained from the fit, is an appropriate mechanistic model for the analysis and evaluation of external beam RT plans with regard to tumor control for these clinical conditions.
NASA Astrophysics Data System (ADS)
Nossent, Jiri; Pereira, Fernando; Bauwens, Willy
2015-04-01
Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of the uncertainty of rainfall records, it becomes more important to understand this input-output dynamic. Yet modellers often still intend to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? In other words, how important is the rainfall uncertainty for the model output relative to the importance of the model parameters? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and of the model parameters for the output of the hydrological model. In order to treat the regular model parameters and the input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To this end, we apply so-called rainfall multipliers to hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly, ...), they contain uncertainty due to a latent lack of spatial and temporal representativeness. The influence of this variability can also differ between hydrological models with different spatial and temporal scales. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM). The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.
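A sketch of how rainfall multipliers can enter a Sobol' analysis alongside regular parameters, here using the SALib package (our choice for illustration) and a toy runoff function with hypothetical parameter ranges:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Two "regular" model parameters plus one rainfall multiplier per storm event
problem = {
    "num_vars": 4,
    "names": ["k_soil", "cn", "mult_event1", "mult_event2"],
    "bounds": [[0.01, 0.5], [40, 90], [0.7, 1.3], [0.7, 1.3]],
}

rain = np.array([25.0, 60.0])        # observed event rainfall depths (mm)

def toy_runoff(x):
    """Stand-in for the hydrological model: runoff summed over both events."""
    k, cn, m1, m2 = x
    eff_rain = rain * np.array([m1, m2])       # perturbed "true" rainfall
    return np.sum(eff_rain * (cn / 100.0) * np.exp(-k))

X = saltelli.sample(problem, 1024)
Y = np.apply_along_axis(toy_runoff, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))   # first-order indices
```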
Elastic properties of continental carbonate rocks: controlling factors and applicable model
NASA Astrophysics Data System (ADS)
Regnet, Jean-Baptiste; Fortin, Jérôme; Guéguen, Yves; Pellerin, Matthieu
2016-04-01
Continental carbonates gained interest following the discovery of the supergiant fields in the post- and pre-salt deposits offshore Brazil, as they account for a large portion of deepwater production. The genesis of continental carbonates is generally associated with physico-chemical and biological precipitation of carbonates, coupled with a strong influence of clastic mineralogical inputs. This results in a complex mineralogical mixing, associated with a wide heterogeneity of pore types due to the intense diagenetic overprint potential of carbonate deposits (cementation, dissolution, recrystallisation, dolomitisation...). With that in mind, we propose insights into the controlling factors of elastic properties in a continental carbonate dataset, an analogue of the Brazilian pre-salt deposits. An applicable model based on effective medium theory is proposed and discussed with regard to the experimental results, and tries to account for the wide variability of the elastic properties. The analyzed samples exhibit large variations of (1) sedimentary texture (coquina grainstones, muddy facies (mudstones to packstones), travertines and stromatolites), (2) pore types (moldic, intercrystalline, vuggy and micropores) and shapes (aspect ratio values fall between 0.2 and 0.5), and (3) physical properties (porosity, acoustic velocity). Regarding composition, the samples are characterized by three major mineralogical assemblages, from pure calcite and dolomite to quartz/clay mixing. While porosity is clearly the first-order parameter controlling P-wave velocities, the mineralogical overprint can account for the wide variability of the P-wave velocities at a given porosity (figure 1). The lower porosity-velocity trend is dominated by samples with low to large quartz/clay proportions, whereas the higher trend is dominated by dolomitized samples. Two input parameters are extracted from the previous experimental observations: porosity and the mineralogical composition of each sample. The pore aspect ratio is used as a fitting parameter, but always takes realistic values, falling closely in the range of those extracted from the samples. Figure 2 shows that the prediction is fairly good. For porosities lower than 20%, the predicted velocity is accurate to 9% with an aspect ratio of 0.15. For higher porosities, the prediction is accurate to 13% with an aspect ratio of 0.3. All in all, the analytical predictions of the acoustic wave velocities are in very good agreement with the experimental data.
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Farda, A.; Huth, R.
2012-12-01
The regional-scale simulations of weather-sensitive processes (e.g. hydrology, agriculture and forestry) for the present and/or future climate often require high-resolution meteorological inputs in terms of time series of selected surface weather characteristics (typically temperature, precipitation, solar radiation, humidity, wind) for a set of stations or on a regular grid. As even the latest Global and Regional Climate Models (GCMs and RCMs) do not provide a realistic representation of the statistical structure of surface weather, the model outputs must be postprocessed (downscaled) to achieve the desired statistical structure of the weather data before being used as input to the follow-up simulation models. One of the downscaling approaches, which is employed also here, is based on a weather generator (WG), which is calibrated using the observed weather series and then modified (in the case of simulations for the future climate) according to GCM- or RCM-based climate change scenarios. The present contribution uses the parametric daily weather generator M&Rfi to pursue two aims: (1) Validation of the new simulations of the present climate (1961-1990) made by the ALADIN-Climate/CZ (v.2) Regional Climate Model at 25 km resolution. The WG parameters will be derived from the RCM-simulated surface weather series and compared to those derived from observational data at the Czech meteorological stations. The set of WG parameters will include selected statistics of surface temperature and precipitation (characteristics of the mean, variability, interdiurnal variability and extremes). (2) Testing the potential of RCM output for calibrating the WG for ungauged locations. The methodology being examined consists of using the WG whose parameters are interpolated from the surrounding stations and then corrected based on the RCM-simulated spatial variability. The quality of the weather series produced by the WG calibrated in this way will be assessed in terms of selected climatic characteristics, focusing on extreme precipitation and temperature characteristics (including characteristics of dry/wet/hot/cold spells). Acknowledgements: The present experiment is made within the frame of the projects ALARO (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports) and VALUE (COST ES 1102 action).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Ben; Zhang, Yaocun; Qian, Yun
In this study, we apply an efficient sampling approach and conduct a large number of simulations to explore the sensitivity of the simulated Asian summer monsoon (ASM) precipitation, including the climatological state and interannual variability, to eight parameters related to the cloud and precipitation processes in the Beijing Climate Center AGCM version 2.1 (BCC_AGCM2.1). Our results show that BCC_AGCM2.1 has large biases in simulating the ASM precipitation. The precipitation efficiency and evaporation coefficient for deep convection are the most sensitive parameters in simulating the ASM precipitation. With optimal parameter values, the simulated precipitation climatology could be remarkably improved, e.g. increased precipitation over the equatorial Indian Ocean, suppressed precipitation over the Philippine Sea, and a more realistic Meiyu distribution over Eastern China. The ASM precipitation interannual variability is further analyzed, with a focus on the ENSO impacts. It shows that the simulations with better ASM precipitation climatology also produce more realistic precipitation anomalies during the El Niño decaying summer. In the low-skill experiments for precipitation climatology, the ENSO-induced precipitation anomalies are most significant over continents (vs. over ocean in observations) in the South Asian monsoon region. More realistic results are derived from the higher-skill experiments, with stronger anomalies over the Indian Ocean and weaker anomalies over India and the western Pacific, favoring the more evident easterly anomalies forced by the tropical Indian Ocean warming and the stronger Indian Ocean-western Pacific teleconnection, as observed. Our model results reveal a strong connection between the simulated ASM precipitation climatological state and interannual variability in BCC_AGCM2.1 when key parameters are perturbed.
Update on ɛK with lattice QCD inputs
NASA Astrophysics Data System (ADS)
Jang, Yong-Chull; Lee, Weonjong; Lee, Sunkyu; Leem, Jaehoon
2018-03-01
We report updated results for ɛK, the indirect CP violation parameter in neutral kaons, which is evaluated directly from the standard model with lattice QCD inputs. We use lattice QCD inputs to fix B̂_K, |V_cb|, ξ_0, ξ_2, |V_us|, and m_c(m_c). Since Lattice 2016, the UTfit group has updated the Wolfenstein parameters in the angle-only-fit method, and the HFLAV group has also updated |V_cb|. Our results show that the evaluation of ɛK with exclusive |V_cb| (lattice QCD inputs) has a 4.0σ tension with the experimental value, while that with inclusive |V_cb| (heavy quark expansion based on the OPE and QCD sum rules) shows no tension.
NASA Technical Reports Server (NTRS)
Duong, N.; Winn, C. B.; Johnson, G. R.
1975-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.
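A minimal sketch of the second approach under stated assumptions (a generic linear-reservoir model standing in for the Prasad model, and hypothetical noise levels): the unknown parameter is appended to the state vector and estimated sequentially with an extended Kalman filter.

```python
# Hedged sketch: augmented-state extended Kalman estimation of an unknown
# recession parameter k in a toy rainfall-runoff model with noisy input.
import numpy as np

rng = np.random.default_rng(0)
k_true, n = 0.3, 200
u = rng.gamma(2.0, 2.0, size=n)                # noisy rainfall input
S = np.zeros(n)
for t in range(1, n):                          # synthetic "true" storage
    S[t] = S[t-1] + u[t-1] - k_true * S[t-1]
y = k_true * S + rng.normal(0.0, 0.2, size=n)  # observed runoff

x = np.array([0.0, 0.1])                       # augmented state [S, k]
P = np.diag([1.0, 1.0])
Q = np.diag([0.1, 1e-4])                       # small random walk on k
R = 0.04
for t in range(1, n):
    # predict: S <- S + u - k*S, k <- k (random-walk parameter model)
    F = np.array([[1.0 - x[1], -x[0]],
                  [0.0, 1.0]])                 # Jacobian of the prediction
    x = np.array([x[0] + u[t-1] - x[1] * x[0], x[1]])
    P = F @ P @ F.T + Q
    # update with the runoff observation y = k*S
    H = np.array([x[1], x[0]])                 # gradient of k*S w.r.t. [S, k]
    K = P @ H / (H @ P @ H + R)
    x = x + K * (y[t] - x[1] * x[0])
    P = P - np.outer(K, H @ P)

print("estimated k:", x[1])                    # should move toward k_true = 0.3
```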
CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.
Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos
2013-12-31
Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates into variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.
Cavarretta, Francesco; Marasco, Addolorata; Hines, Michael L; Shepherd, Gordon M; Migliore, Michele
2016-01-01
The olfactory bulb processes inputs from olfactory receptor neurons (ORNs) through two levels: the glomerular layer at the site of input, and the granule cell level at the site of output to the olfactory cortex. The sequence of action of these two levels has not yet been examined. We analyze this issue using a novel computational framework that is scaled up, in three-dimensions (3D), with realistic representations of the interactions between layers, activated by simulated natural odors, and constrained by experimental and theoretical analyses. We suggest that the postulated functions of glomerular circuits have as their primary role transforming a complex and disorganized input into a contrast-enhanced and normalized representation, but cannot provide for synchronization of the distributed glomerular outputs. By contrast, at the granule cell layer, the dendrodendritic interactions mediate temporal decorrelation, which we show is dependent on the preceding contrast enhancement by the glomerular layer. The results provide the first insights into the successive operations in the olfactory bulb, and demonstrate the significance of the modular organization around glomeruli. This layered organization is especially important for natural odor inputs, because they activate many overlapping glomeruli.
NASA Astrophysics Data System (ADS)
Boley, Aaron C.; Mejía, Annie C.; Durisen, Richard H.; Cai, Kai; Pickett, Megan K.; D'Alessio, Paola
2006-11-01
This paper presents a fully three-dimensional radiative hydrodynamics simulation with realistic opacities for a gravitationally unstable 0.07 Msolar disk around a 0.5 Msolar star. We address the following aspects of disk evolution: the strength of gravitational instabilities (GIs) under realistic cooling, mass transport in the disk that arises from GIs, comparisons between the gravitational and Reynolds stresses measured in the disk and those expected in an α-disk, and comparisons between the SED derived for the disk and SEDs derived from observationally determined parameters. The mass transport in this disk is dominated by global modes, and the cooling times are too long to permit fragmentation at all radii. Moreover, our results suggest a plausible explanation for the FU Ori outburst phenomenon.
Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting
NASA Astrophysics Data System (ADS)
Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang
In this paper we studied the geometry of a three-dimensional tableau from a single realist painting - Scott Fraser's Three way vanitas (2006). The tableau contains a carefully chosen complex arrangement of objects, including a moth, an egg, a cup, a strand of string, a glass of water, a bone, and a hand mirror. Each of the three plane mirrors presents a different view of the tableau from a virtual camera behind each mirror, symmetric to the artist's viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and the images in the three plane mirrors depicted within the painting.
Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment
Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...
2016-03-30
Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on model output parameters: the total potential power and the number of potential locations (stream-reach). These parameters are quantified through Monte Carlo Simulation (MCS) linked with a geospatial merit matrix based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. Output parameters are more sensitive to hydraulic head in steep terrain than in flat and mild terrains. Furthermore, output parameters are more sensitive to mean annual streamflow in flat terrain.
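The Monte Carlo step can be sketched as follows; the basic hydropower relation P = ρgQH and all distributions are illustrative stand-ins for the GMM-HRA internals, which are not reproduced here.

```python
# Minimal sketch: propagate assumed uncertainties in hydraulic head and
# mean annual streamflow through P = rho * g * Q * H.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
H = rng.normal(12.0, 12.0 * 0.20, n)            # head (m), ~20% uncertainty
Q = rng.normal(8.0, 8.0 * 0.16, n)              # flow (m^3/s), ~16% uncertainty
P = 1000.0 * 9.81 * Q * H / 1e6                 # potential power (MW)

mean, sd = P.mean(), P.std()
print(f"power = {mean:.2f} MW +/- {100 * sd / mean:.1f}%")
```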
Dynamic modal estimation using instrumental variables
NASA Technical Reports Server (NTRS)
Salzwedel, H.
1980-01-01
A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.
Econometric analysis of fire suppression production functions for large wildland fires
Thomas P. Holmes; David E. Calkin
2013-01-01
In this paper, we use operational data collected for large wildland fires to estimate the parameters of economic production functions that relate the rate of fireline construction with the level of fire suppression inputs (handcrews, dozers, engines and helicopters). These parameter estimates are then used to evaluate whether the productivity of fire suppression inputs...
A mathematical model for predicting fire spread in wildland fuels
Richard C. Rothermel
1972-01-01
A mathematical fire model for predicting rate of spread and intensity that is applicable to a wide range of wildland fuels and environments is presented. Methods of incorporating mixtures of fuel sizes are introduced by weighting input parameters by surface area. The input parameters do not require prior knowledge of the burning characteristics of the fuel.
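A hedged sketch of surface-area weighting for a fuel-size mixture, in the spirit of the model described above; the size-class values and the particle density are hypothetical.

```python
# Weight a per-class fuel property by each class's relative surface area.
import numpy as np

sigma = np.array([3500.0, 110.0, 30.0])   # surface-area-to-volume ratios (1/ft)
w0 = np.array([0.05, 0.10, 0.20])         # fuel loading per size class (lb/ft^2)
rho_p = 32.0                              # particle density (lb/ft^3)

area = sigma * w0 / rho_p                 # relative surface area per class
f = area / area.sum()                     # surface-area weighting factors

moisture = np.array([0.06, 0.09, 0.12])   # per-class fuel moisture
print("weighted moisture:", f @ moisture) # characteristic mixture value
```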
The application of remote sensing to the development and formulation of hydrologic planning models
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.
1976-01-01
A hydrologic planning model is developed based on remotely sensed inputs. Data from LANDSAT 1 are used to supply the model's quantitative parameters and coefficients. The use of LANDSAT data as information input to all categories of hydrologic models requiring quantitative surface parameters for their effective functioning is also investigated.
Harbaugh, Arien W.
2011-01-01
The MFI2005 data-input (entry) program was developed for use with the U.S. Geological Survey modular three-dimensional finite-difference groundwater model, MODFLOW-2005. MFI2005 runs on personal computers and is designed to be easy to use; data are entered interactively through a series of display screens. MFI2005 supports parameter estimation using the UCODE_2005 program. Data for MODPATH, a particle-tracking program for use with MODFLOW-2005, also can be entered using MFI2005. MFI2005 can be used in conjunction with other data-input programs so that the different parts of a model dataset can be entered by using the most suitable program.
NASA Astrophysics Data System (ADS)
Ma, Wei-Ming
1997-06-01
An analytical two-flow model is derived from the radiative transfer equation to simulate the distribution of irradiance in coastal waters with a wind-roughened surface and bottom reflectance. The model utilizes unique boundary conditions, including the surface slope of the downwelling and upwelling irradiance as well as the influence of wind and bottom reflectance on simulated surface reflectance. The developed model provides a simple mathematical concept for understanding the irradiant light flux and associated processes in coastal or fresh water as well as turbid estuarine waters. The model is applied to data from the Banana River and coastal Atlantic Ocean water off the east coast of central Florida, USA. The two-flow irradiance model is capable of simulating realistic above-surface reflectance signatures under a wind-roughened air-water surface, given realistic input parameters including a specular flux conversion coefficient, absorption coefficient, backscattering coefficient, atmospheric visibility, bottom reflectance, and water depth. The root-mean-squared error of the calculated above-surface reflectances is approximately 3% in the Banana River and less than 15% in the coastal Atlantic Ocean off the east coast of Florida. Results of the subsurface reflectance sensitivity analysis indicate that the specular conversion coefficient is the most sensitive parameter in the model, followed by the beam attenuation coefficient, absorption coefficient, water depth, backscattering coefficient, specular irradiance, diffuse irradiance, bottom reflectance, and wind speed. On the other hand, results of the above-surface reflectance sensitivity analysis indicate that the wind speed is the most important parameter, followed by bottom reflectance, attenuation coefficient, water depth, conversion coefficient, specular irradiance, downwelling irradiance, absorption coefficient, and backscattering coefficient. Model results depend to a large degree on the accuracy of these parameters and, more importantly, on the water depth and the value of the bottom reflectance. The results of this work indicate little change of subsurface or in-water reflectances due to variations of wind speed and observation angle. Simulations of the wind effect on the total downwelling irradiance from the two-flow model indicate that the total downwelling irradiance just below a wind-roughened water surface increases by about 1% relative to that on a calm water surface when the sun is near zenith, and by about 3% when the sun is near the horizon. This analytically based model, solved utilizing the unique boundary conditions, can be applied to remote sensing of oceanic upper mixed layer dynamics, plant canopies, primary production, and shallow water environments with different bottom-type reflectances. Future applications may include determining the effects of resuspension of bottom sediments in the bottom boundary layer on remotely sensed data.
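For orientation, the generic core of a two-flow irradiance model can be written as below; this is the standard form, not the paper's exact system, whose wind- and bottom-dependent boundary conditions are unique to it.

```latex
% Generic two-flow equations for downwelling (E_d) and upwelling (E_u)
% irradiance, with absorption a, backscattering b_b, depth z increasing
% downward, bottom depth z_b and bottom reflectance R_b.
\frac{dE_d}{dz} = -(a + b_b)\,E_d + b_b\,E_u , \qquad
-\frac{dE_u}{dz} = -(a + b_b)\,E_u + b_b\,E_d , \qquad
E_u(z_b) = R_b\,E_d(z_b)
```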
Su, Fei; Wang, Jiang; Deng, Bin; Wei, Xi-Le; Chen, Ying-Yuan; Liu, Chen; Li, Hui-Yan
2015-02-01
The objective here is to explore the use of adaptive input-output feedback linearization method to achieve an improved deep brain stimulation (DBS) algorithm for closed-loop control of Parkinson's state. The control law is based on a highly nonlinear computational model of Parkinson's disease (PD) with unknown parameters. The restoration of thalamic relay reliability is formulated as the desired outcome of the adaptive control methodology, and the DBS waveform is the control input. The control input is adjusted in real time according to estimates of unknown parameters as well as the feedback signal. Simulation results show that the proposed adaptive control algorithm succeeds in restoring the relay reliability of the thalamus, and at the same time achieves accurate estimation of unknown parameters. Our findings point to the potential value of adaptive control approach that could be used to regulate DBS waveform in more effective treatment of PD.
Theoretic aspects of the identification of the parameters in the optimal control model
NASA Technical Reports Server (NTRS)
Vanwijk, R. A.; Kok, J. J.
1977-01-01
The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.
Kaklamanos, James; Baise, Laurie G.; Boore, David M.
2011-01-01
The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
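For the simplest geometry (a vertical fault), one such relation connects the rupture distance to the Joyner-Boore distance and the depth to the top of rupture; the general dipping-fault expressions in the paper are lengthier.

```latex
% Vertical-fault special case relating two of the NGA distance measures:
% rupture distance R_RUP from Joyner-Boore distance R_JB and depth to
% the top of rupture Z_TOR.
R_{RUP} = \sqrt{R_{JB}^{2} + Z_{TOR}^{2}}
```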
Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.
Schimpf, Paul H
2017-09-15
This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
Dual side control for inductive power transfer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hunter; Sealy, Kylee; Gilchrist, Aaron
An apparatus for dual side control includes a measurement module that measures a voltage and a current of an IPT system. The voltage includes an output voltage and/or an input voltage, and the current includes an output current and/or an input current. The output voltage and the output current are measured at an output of the IPT system, and the input voltage and the input current are measured at an input of the IPT system. The apparatus includes a max efficiency module that determines a maximum efficiency for the IPT system. The max efficiency module uses parameters of the IPT system to iterate to a maximum efficiency. The apparatus includes an adjustment module that adjusts one or more parameters in the IPT system consistent with the maximum efficiency calculated by the max efficiency module.
NASA Astrophysics Data System (ADS)
Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.
2015-12-01
Land Surface Models (LSMs) coupled with River Routing schemes (RRMs) are used in Global Climate Models (GCMs) to simulate the continental part of the water cycle. They are key components of GCMs, as they provide boundary conditions to the atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. In particular, those used in RRMs, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of a global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observations of river geomorphological parameters such as width and slope. Yet, before assimilating such data, the temporal sensitivity of the RRM to time-constant parameters needs to be analyzed. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by the unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then identifies the parameters to which modeled water level and discharge are most sensitive along a hydrological year. The results show that local parameters directly impact water levels, while discharge is more affected by parameters from the whole upstream drainage area. Understanding the behavior of the model output variance will have a direct impact on the design and performance of the ensemble-based data assimilation platform, for which uncertainties are also modeled by variances. It will help to select more objectively the RRM parameters to correct.
Input design for identification of aircraft stability and control derivatives
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Hall, W. E., Jr.
1975-01-01
An approach for designing inputs to identify stability and control derivatives from flight test data is presented. This approach is based on finding inputs which provide the maximum possible accuracy of derivative estimates. Two techniques of input specification are implemented for this objective - a time domain technique and a frequency domain technique. The time domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency domain technique specifies the input frequency spectrum, and is best applied to tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These techniques are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. The constraints of aircraft response limits, such as on structural loads, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals which can provide more accurate parameter estimates than conventional inputs of the same total energy. The results obtained indicate that the approach has been brought to the point where it should be used in flight tests for further evaluation.
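A toy frequency-domain version of the idea, under stated assumptions (a scalar first-order system and a single-sinusoid input; the paper's method treats full aircraft models and multiple parameters): allocate the fixed input energy to the frequency that maximizes the Fisher information for the parameter of interest.

```python
# Hedged sketch: pick the most informative input frequency for estimating
# the pole a of G(w) = b / (iw + a) under a total input energy constraint.
import numpy as np

a, b = 2.0, 1.0                 # assumed pole and gain of the system
energy, noise_var = 1.0, 0.1    # input energy budget, output noise variance

w = np.linspace(0.01, 10.0, 1000)
dG_da = -b / (1j * w + a) ** 2                   # sensitivity of G to the pole
info = energy * np.abs(dG_da) ** 2 / noise_var   # Fisher information vs. freq.

w_best = w[np.argmax(info)]
print(f"most informative single frequency: {w_best:.2f} rad/s")
print(f"Cramer-Rao bound on a there: {1.0 / info.max():.4f}")
```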
Two-compartment modeling of tissue microcirculation revisited.
Brix, Gunnar; Salehi Ravesh, Mona; Griebel, Jürgen
2017-05-01
Conventional two-compartment modeling of tissue microcirculation is used for tracer kinetic analysis of dynamic contrast-enhanced (DCE) computed tomography or magnetic resonance imaging studies, although it is well known that the underlying assumption of an instantaneous mixing of the administered contrast agent (CA) in capillaries is far from realistic. It was thus the aim of the present study to provide theoretical and computational evidence in favor of a conceptually alternative modeling approach that makes it possible to characterize the bias inherent to compartment modeling and, moreover, to approximately correct for it. Starting from a two-region distributed-parameter model that accounts for spatial gradients in CA concentrations within blood-tissue exchange units, a modified lumped two-compartment exchange model was derived. It has the same analytical structure as the conventional two-compartment model, but indicates that the apparent blood flow identifiable from measured DCE data is substantially overestimated, whereas the three other model parameters (i.e., the permeability-surface area product as well as the volume fractions of the plasma and interstitial distribution spaces) are unbiased. Furthermore, a simple formula was derived to approximately compute a bias-corrected flow from the estimates of the apparent flow and the permeability-surface area product obtained by model fitting. To evaluate the accuracy of the proposed modeling and bias correction method, representative noise-free DCE curves were analyzed. They were simulated for 36 microcirculation and four input scenarios by an axially distributed reference model. As analytically proven, the considered two-compartment exchange model is structurally identifiable from tissue residue data. The apparent flow values estimated for the 144 simulated tissue/input scenarios were considerably biased. After bias correction, the deviations between estimated and actual parameter values were (11.2 ± 6.4)% (vs. (105 ± 21)% without correction) for the flow, (3.6 ± 6.1)% for the permeability-surface area product, (5.8 ± 4.9)% for the vascular volume and (2.5 ± 4.1)% for the interstitial volume, with individual deviations of more than 20% being the exception. Increasing the duration of CA administration had a statistically significant but opposite effect on the accuracy of the estimated flow (which declined) and the intravascular volume (which improved). Physiologically well-defined tissue parameters are structurally identifiable and accurately estimable from DCE data by the conceptually modified two-compartment model in combination with the bias correction. The accuracy of the bias-corrected flow is nearly comparable to that of the three other (theoretically unbiased) model parameters. Compared to conventional two-compartment modeling, this feature constitutes a major advantage for tracer kinetic analysis of both preclinical and clinical DCE imaging studies.
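For reference, a minimal sketch of the conventional two-compartment exchange model that the study modifies; the arterial input function and all parameter values are hypothetical.

```python
# Standard two-compartment exchange model (2CXM) residue dynamics.
import numpy as np
from scipy.integrate import odeint

F, PS, v_p, v_i = 0.6, 0.15, 0.05, 0.20        # flow, perm.-surface, volumes

def aif(t):                                     # toy arterial input function
    return np.exp(-((t - 0.5) / 0.2) ** 2)

def rhs(c, t):
    c_p, c_i = c                                # plasma / interstitial conc.
    dc_p = (F * (aif(t) - c_p) - PS * (c_p - c_i)) / v_p
    dc_i = PS * (c_p - c_i) / v_i
    return [dc_p, dc_i]

t = np.linspace(0, 5, 500)                      # minutes
c = odeint(rhs, [0.0, 0.0], t)
tissue = v_p * c[:, 0] + v_i * c[:, 1]          # measured tissue concentration
print("peak tissue concentration:", tissue.max())
```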
NASA Astrophysics Data System (ADS)
Suryoputro, Nugroho; Suhardjono, Soetopo, Widandi; Suhartanto, Ery
2017-09-01
In calibrating hydrological models, there are generally two stages of activity: 1) determining realistic initial model parameters that represent the physical processes of the natural components, and 2) entering the initial parameter values, which are then refined by trial and error or automatically to obtain optimal values. Determining a realistic initial value takes experience and user knowledge of the model, which is a problem for novice model users. This paper presents another approach to estimate the infiltration parameters in the tank model. The parameters are approximated using the runoff coefficient of the rational method. The infiltration parameter value is simply described as the difference between the percentage of total rainfall and the percentage of runoff. It is expected that the results of this research will accelerate the calibration of tank model parameters. The research was conducted on the Kali Bango sub-watershed in Malang Regency, with an area of 239.71 km2. Infiltration measurements were carried out from January 2017 to March 2017. Soil samples were analyzed at the Soil Physics Laboratory, Department of Soil Science, Faculty of Agriculture, Universitas Brawijaya. Rainfall and discharge data were obtained from UPT PSAWS Bango Gedangan in Malang. Temperature, evaporation, relative humidity, and wind speed data were obtained from the BMKG station of Karang Ploso, Malang. The results showed that the initial value of the infiltration coefficient at the top tank outlet can be determined using the runoff coefficient of the rational method, with good results.
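Reduced to its arithmetic, the proposed initial-value approach looks like this (the rainfall and runoff totals are hypothetical):

```python
# Initial infiltration coefficient as one minus the rational-method
# runoff coefficient, per the approach described above.
total_rainfall = 1450.0        # mm over the calibration period (assumed)
total_runoff = 520.0           # mm of direct runoff (assumed)

c_runoff = total_runoff / total_rainfall        # rational-method coefficient
infiltration_coeff = 1.0 - c_runoff             # initial tank-model value
print(f"C = {c_runoff:.2f}, initial infiltration coeff = {infiltration_coeff:.2f}")
```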
Lowet, Eric; Roberts, Mark; Hadjipapas, Avgis; Peter, Alina; van der Eerden, Jan; De Weerd, Peter
2015-02-01
Fine-scale temporal organization of cortical activity in the gamma range (∼25-80Hz) may play a significant role in information processing, for example by neural grouping ('binding') and phase coding. Recent experimental studies have shown that the precise frequency of gamma oscillations varies with input drive (e.g. visual contrast) and that it can differ among nearby cortical locations. This has challenged theories assuming widespread gamma synchronization at a fixed common frequency. In the present study, we investigated which principles govern gamma synchronization in the presence of input-dependent frequency modulations and whether they are detrimental for meaningful input-dependent gamma-mediated temporal organization. To this aim, we constructed a biophysically realistic excitatory-inhibitory network able to express different oscillation frequencies at nearby spatial locations. Similarly to cortical networks, the model was topographically organized with spatially local connectivity and spatially-varying input drive. We analyzed gamma synchronization with respect to phase-locking, phase-relations and frequency differences, and quantified the stimulus-related information represented by gamma phase and frequency. By stepwise simplification of our models, we found that the gamma-mediated temporal organization could be reduced to basic synchronization principles of weakly coupled oscillators, where input drive determines the intrinsic (natural) frequency of oscillators. The gamma phase-locking, the precise phase relation and the emergent (measurable) frequencies were determined by two principal factors: the detuning (intrinsic frequency difference, i.e. local input difference) and the coupling strength. In addition to frequency coding, gamma phase contained complementary stimulus information. Crucially, the phase code reflected input differences, but not the absolute input level. This property of relative input-to-phase conversion, contrasting with latency codes or slower oscillation phase codes, may resolve conflicting experimental observations on gamma phase coding. Our modeling results offer clear testable experimental predictions. We conclude that input-dependency of gamma frequencies could be essential rather than detrimental for meaningful gamma-mediated temporal organization of cortical activity.
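The reduced principle can be illustrated with the Adler equation for the phase difference between two weakly coupled oscillators: phase-locking occurs only when the coupling exceeds the detuning. A sketch with illustrative values:

```python
# Two weakly coupled oscillators: d(phi)/dt = dw - K*sin(phi), where phi is
# the phase difference, dw the detuning and K the coupling strength.
import numpy as np

def phase_locks(dw, K, dt=1e-3, T=100.0):
    """Integrate the Adler equation and report whether the phases lock."""
    phi = 0.0
    for _ in range(int(T / dt)):
        phi += (dw - K * np.sin(phi)) * dt
    return abs(phi / T) < 0.01   # near-zero mean drift => phase-locked

# with detuning dw = 1.0, locking requires coupling K > 1.0
for K in (0.5, 2.0):
    print(f"K = {K}: locked = {phase_locks(1.0, K)}")
```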
Pilot Evaluations of Runway Status Light System
NASA Technical Reports Server (NTRS)
Young, Steven D.; Wills, Robert W.; Smith, R. Marshall
1996-01-01
This study focuses on the use of the Transport Systems Research Vehicle (TSRV) Simulator at the Langley Research Center to obtain pilot opinion and input on the Federal Aviation Administration's Runway Status Light System (RWSL) prior to installation in an operational airport environment. The RWSL has been designed to reduce the likelihood of runway incursions by visually alerting pilots when a runway is occupied. Demonstrations of the RWSL in the TSRV Simulator allowed pilots to evaluate the system in a realistic cockpit environment.
Leahy, P.P.
1982-01-01
The Trescott computer program for modeling groundwater flow in three dimensions has been modified to (1) treat aquifer and confining bed pinchouts more realistically and (2) reduce the computer memory requirements needed for the input data. Using the original program, simulation of aquifer systems with nonrectangular external boundaries may result in a large number of nodes that are not involved in the numerical solution of the problem, but require computer storage. (USGS)
Heavy Ion Current Transients in SiGe HBTs
NASA Technical Reports Server (NTRS)
Pellish, Jonathan A.; Reed, Robert A.; Vizkelethy, Gyorgy; McMorrow, Dale; Ferlet-Cavrois, Veronique; Baggio, Jacques; Paillet, Philipe; Duhanel, Olivier; Phillips, Stanley D.; Sutton, Akil K.;
2009-01-01
Time-resolved ion beam induced charge reveals the heavy ion response of the IBM 5AM SiGe HBT: a) position correlation; b) unique response for different bias schemes; c) similarities to TPA pulsed-laser data. Heavy ion broad-beam transients provide a more realistic device response: a) feedback using microbeam data; b) overcoming issues of LET and ion range with the microbeam. Both micro- and broad-beam data sets yield valuable input for TCAD simulations. Together they uncover detailed mechanisms for SiGe HBTs and other devices fabricated on lightly-doped substrates.
Online monitoring of a photocatalytic reaction by real-time high resolution FlowNMR spectroscopy.
Hall, Andrew M R; Broomfield-Tagg, Rachael; Camilleri, Matthew; Carbery, David R; Codina, Anna; Whittaker, David T E; Coombes, Steven; Lowe, John P; Hintermair, Ulrich
2017-12-19
We demonstrate how FlowNMR spectroscopy can readily be applied to investigate photochemical reactions that require sustained input of light and air to yield mechanistic insight under realistic conditions. The Eosin Y mediated photo-oxidation of N-allylbenzylamine is shown to produce imines as primary reaction products from which undesired aldehydes form after longer reaction times. Facile variation of reaction conditions during the reaction in flow allows for probe experiments that give information about the mode of action of the photocatalyst.
Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik
2015-02-17
Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of the left ventricle signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels against gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also performed analysis, using both models, of single bolus data obtained from five patients with coronary artery disease, and the findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as a two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference in distributed parameter estimates of myocardial blood flow was observed in these volunteers between single and dual bolus analysis. In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels, compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
Shay, Christopher F.; Ferrante, Michele; Chapman, G. William; Hasselmo, Michael E.
2015-01-01
Rebound spiking properties of medial entorhinal cortex (mEC) stellate cells induced by inhibition may underlie their functional properties in awake behaving rats, including the temporal phase separation of distinct grid cells and differences in grid cell firing properties. We investigated rebound spiking properties using whole cell patch recording in entorhinal slices, holding cells near spiking threshold and delivering sinusoidal inputs, superimposed with realistic inhibitory synaptic inputs to test the capacity of cells to selectively respond to specific phases of inhibitory input. Stellate cells showed a specific phase range of hyperpolarizing inputs that elicited spiking, but non-stellate cells did not show phase specificity. In both cell types, the phase range of spiking output occurred between the peak and subsequent descending zero crossing of the sinusoid. The phases of inhibitory inputs that induced spikes shifted earlier as the baseline sinusoid frequency increased, while spiking output shifted to later phases. Increases in magnitude of the inhibitory inputs shifted the spiking output to earlier phases. Pharmacological blockade of h-current abolished the phase selectivity of hyperpolarizing inputs eliciting spikes. A network computational model using cells possessing similar rebound properties as found in vitro produces spatially periodic firing properties resembling grid cell firing when a simulated animal moves along a linear track. These results suggest that the ability of mEC stellate cells to fire rebound spikes in response to a specific range of phases of inhibition could support complex attractor dynamics that provide completion and separation to maintain spiking activity of specific grid cell populations.
Lv, Yueyong; Hu, Qinglei; Ma, Guangfu; Zhou, Jiakang
2011-10-01
This paper treats the problem of synchronized control of spacecraft formation flying (SFF) in the presence of input constraints and parameter uncertainties. More specifically, a backstepping-based robust control is first developed for the full 6-DOF dynamic model of SFF with parameter uncertainties, in which the model consists of relative translation and attitude rotation. This controller is then redesigned to deal with the input constraint problem by incorporating a command filter, such that the generated control is implementable even under physical or operating constraints on the control input. The convergence of the proposed control algorithms is proved by the Lyapunov stability theorem. Illustrative simulations of spacecraft formation flying are conducted to verify that, compared with conventional methods, the proposed approach makes the spacecraft track the desired attitude and position trajectories in a synchronized fashion even in the presence of uncertainties, external disturbances, and control saturation constraints.
Ming, Y; Peiwen, Q
2001-03-01
The understanding of ultrasonic motor performance as a function of input parameters, such as the voltage amplitude, driving frequency, and the preload on the rotor, is key to many applications and to the control of ultrasonic motors. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency, and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With the contact model of a distributed spring-rigid body between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. The performance in terms of steady rotation speed and stall torque is then deduced. With the MATLAB computational language and an iteration algorithm, we estimate the rotation speed and stall torque as functions of the input parameters. The same experiments are completed with an optoelectronic tachometer and stand weight. Both the estimation and the experimental results reveal the pattern of performance variation as a function of the input parameters.
Lin, I-Chun; Xing, Dajun; Shapley, Robert
2012-12-01
One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
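A hedged sketch of a noisy leaky integrate-and-fire (NLIF) unit of the kind described, showing sub-Poisson variability (Fano factor below 1); all parameter values are illustrative, not fitted to LGN data.

```python
# Noisy leaky integrate-and-fire neuron; its spike counts are more regular
# than a Poisson process with the same rate.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-4, 50.0                               # time step and duration (s)
tau, v_th, v_reset = 0.02, 1.0, 0.0              # membrane constants
mu, sigma = 60.0, 4.0                            # drive and noise strength

v, spikes = 0.0, []
for i in range(int(T / dt)):
    v += dt * (-v / tau + mu) + sigma * np.sqrt(dt) * rng.normal()
    if v >= v_th:                                # threshold crossing => spike
        spikes.append(i * dt)
        v = v_reset

counts = np.histogram(spikes, bins=np.arange(0, T, 0.1))[0]
print("rate (Hz):", len(spikes) / T)
print("Fano factor:", counts.var() / counts.mean())  # < 1: non-Poisson
```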
Desktop Application Program to Simulate Cargo-Air-Drop Tests
NASA Technical Reports Server (NTRS)
Cuthbert, Peter
2009-01-01
The DSS Application is a computer program comprising a Windows version of the UNIX-based Decelerator System Simulation (DSS) coupled with an Excel front end. The DSS is an executable code that simulates the dynamics of airdropped cargo from first motion in an aircraft through landing. The bare DSS is difficult to use; the front end makes it easy to use. All inputs to the DSS, control of execution of the DSS, and postprocessing and plotting of outputs are handled in the front end. The front end is graphics-intensive. The Excel software provides the graphical elements without need for additional programming. Categories of input parameters are divided into separate tabbed windows. Pop-up comments describe each parameter. An error-checking software component evaluates combinations of parameters and alerts the user if an error results. Case files can be created from inputs, making it possible to build cases from previous ones. Simulation output is plotted in 16 charts displayed on a separate worksheet, enabling plotting of multiple DSS cases with flight-test data. Variables assigned to each plot can be changed. Selected input parameters can be edited from the plot sheet for quick sensitivity studies.
Automated method for the systematic interpretation of resonance peaks in spectrum data
Damiano, Brian; Wood, Richard T.
1997-01-01
A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process based upon model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.
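A small sketch of the training-set idea, with a damped-resonator stand-in for the patent's system model and scikit-learn's MLPRegressor as the network (both are assumptions, not the patented implementation):

```python
# Train a neural network on model-generated (spectrum, parameters) pairs,
# then invert a "measured" spectrum to the underlying physical condition.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
freq = np.linspace(0.0, 10.0, 200)

def model_spectrum(f0, damping):
    """Resonance peak produced by the (hypothetical) system model."""
    return 1.0 / np.sqrt((freq**2 - f0**2) ** 2 + (damping * freq) ** 2)

params = rng.uniform([2.0, 0.2], [8.0, 2.0], size=(2000, 2))  # (f0, damping)
spectra = np.array([model_spectrum(*p) for p in params])      # training set

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
net.fit(spectra, params)

measured = model_spectrum(5.1, 0.9)             # stand-in "actual" spectrum
print("inferred (f0, damping):", net.predict(measured[None, :]))
```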
Meter circuit for tuning RF amplifiers
NASA Technical Reports Server (NTRS)
Longthorne, J. E.
1973-01-01
Circuit computes and indicates efficiency of RF amplifier as inputs and other parameters are varied. Voltage drop across internal resistance of ammeter is amplified by operational amplifier and applied to one multiplier input. Other input is obtained through two resistors from positive terminal of power supply.
Ring rolling process simulation for microstructure optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Metal undergoes complicated microstructural evolution during Hot Ring Rolling (HRR), which determines the quality, mechanical properties and life of the ring formed. One of the principal microstructural properties that most influences the structural performance of forged components is the average grain size. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular velocity of the driver roll) on the microstructural and geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR has been developed in SFTC DEFORM V11, taking into account the microstructural evolution of the material used (the nickel superalloy Waspaloy). The Finite Element (FE) model has been used to formulate a proper optimization problem. The optimization procedure has been developed to find the combination of process parameters which minimizes the average grain size. The Response Surface Methodology (RSM) has been used to find the relationship between input and output parameters, using the exact values of the output parameters at the control points of a design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. Then, an optimization procedure based on Genetic Algorithms has been applied. Finally, the minimum value of the average grain size with respect to the input parameters has been found.
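A hedged sketch of the RSM-plus-optimizer chain described above; the design points, the grain-size responses, and the use of SciPy's differential evolution in place of a classical genetic algorithm are all assumptions.

```python
# Fit a quadratic response surface to (hypothetical) FE results, then
# minimize it with an evolutionary optimizer.
import numpy as np
from scipy.optimize import differential_evolution

# design points: (mandrel feed rate, driver roll speed) and observed
# average grain size from stand-in simulation runs
X = np.array([[0.5, 10], [0.5, 30], [1.5, 10], [1.5, 30], [1.0, 20],
              [0.75, 15], [1.25, 25], [0.75, 25], [1.25, 15]])
y = np.array([42.0, 38.5, 40.0, 35.0, 36.2, 38.8, 34.9, 37.1, 38.0])

def basis(x):
    """Quadratic basis [1, f, w, f*w, f^2, w^2] for the response surface."""
    f, w = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(f), f, w, f * w, f**2, w**2], axis=-1)

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)   # RSM fit
surface = lambda x: basis(np.asarray(x)) @ coef

res = differential_evolution(surface, bounds=[(0.5, 1.5), (10, 30)], seed=1)
print("optimal (feed, speed):", res.x, "predicted grain size:", res.fun)
```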
VizieR Online Data Catalog: Planetary atmosphere radiative transport code (Garcia Munoz+ 2015)
NASA Astrophysics Data System (ADS)
Garcia Munoz, A.; Mills, F. P.
2014-08-01
Files are:
* readme.txt
* Input files: INPUT_hazeL.txt, INPUT_L13.txt, INPUT_L60.txt; they contain explanations of the input parameters. Copy INPUT_XXXX.txt into INPUT.dat to execute one of the examples described in the reference.
* Files with scattering matrix properties: phF_hazeL.txt, phF_L13.txt, phF_L60.txt
* Script for compilation in GFortran (myscript)
(10 data files)
Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.
Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay
2015-12-01
In this paper, we propose a new blind learning algorithm, namely the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural-network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence property can be achieved: the output symbol error rate (SER) is always less than the input SER, provided the input SER is below a threshold. The BG soft-switching technique is then employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter is used to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as the stochastic quadratic distance and dual-mode constant modulus algorithms, in terms of both convergence performance and SER performance for nonlinear equalization.
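For readers unfamiliar with soft switching between blind and decision-directed errors, the sketch below implements the classic Benveniste-Goursat switch for a linear FIR equalizer on a BPSK signal; this is an illustration of the general soft-switching idea only, not the paper's BG-IOD algorithm, and the channel, step size and weights k1, k2 are hypothetical.

```python
# Illustrative Benveniste-Goursat-style soft switch between a blind (Sato)
# error and a decision-directed error, driving an LMS update of a linear
# equalizer. Not the paper's BG-IOD; all constants are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n, taps, mu, k1, k2 = 20000, 11, 5e-3, 1.0, 2.0
symbols = rng.choice([-1.0, 1.0], n)                      # BPSK source
channel = np.array([1.0, 0.35, 0.15])                     # mild linear ISI channel
received = np.convolve(symbols, channel)[:n] + 0.01 * rng.normal(size=n)

w = np.zeros(taps); w[taps // 2] = 1.0                    # center-spike init
gamma = np.mean(symbols ** 2) / np.mean(np.abs(symbols))  # Sato constant
errors = 0
for i in range(taps - 1, n):
    x = received[i - taps + 1:i + 1][::-1]
    y = w @ x
    e_sato = gamma * np.sign(y) - y                       # blind error
    e_dd = np.sign(y) - y                                 # decision-directed error
    e = k1 * e_dd + k2 * abs(e_dd) * e_sato               # BG soft switch
    w += mu * e * x
    if i >= n - 2000:                                     # count late decision errors
        errors += (np.sign(y) != symbols[i - taps // 2])
print("late-run symbol error rate:", errors / 2000)
```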
COSP for Windows: Strategies for Rapid Analyses of Cyclic Oxidation Behavior
NASA Technical Reports Server (NTRS)
Smialek, James L.; Auping, Judith V.
2002-01-01
COSP is a publicly available computer program that models the cyclic oxidation weight gain and spallation process. Inputs to the model include the selection of an oxidation growth law and a spalling geometry, plus oxide phase, growth rate, spall constant, and cycle duration parameters. Output includes weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. The present version is Windows-based and can accordingly be operated conveniently while other applications remain open for importing experimental weight change data, storing model output data, or plotting model curves. Point-and-click operating features include multiple drop-down menus for input parameters, data importing, and quick on-screen plots showing one selection of the six output parameters for up to 10 models. A run summary text lists various characteristic parameters that are helpful in describing cyclic behavior, such as the maximum weight change, the number of cycles to reach the maximum weight gain or zero weight change, the ratio of these, and the final rate of weight loss. The program includes save and print options as well as a help file. Families of model curves readily show the sensitivity to various input parameters. The cyclic behaviors of nickel aluminide (NiAl) and a complex superalloy are shown to be properly fitted by model curves. However, caution is always advised regarding the uniqueness claimed for any specific set of input parameters.
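A minimal sketch of the kind of cyclic oxidation bookkeeping COSP performs is given below, assuming a parabolic growth law and a uniform spall area fraction per cycle; the constants and the weight-change accounting are illustrative reconstructions, not the program's actual implementation.

```python
# Hedged sketch of a COSP-like cyclic oxidation model: parabolic oxide
# growth during each hot cycle, then spallation of a constant area
# fraction of the retained scale on cooldown. Constants are hypothetical.
import numpy as np

kp = 0.05        # parabolic growth constant, (mg/cm^2)^2 per cycle
q = 0.02         # spall area fraction per cycle
fm = 0.5         # assumed metal mass fraction of the oxide

oxide = 0.0      # retained oxide mass, mg/cm^2
spalled = 0.0    # cumulative spalled oxide, mg/cm^2
for cycle in range(1, 501):
    oxide = np.sqrt(oxide ** 2 + kp)      # parabolic growth this cycle
    loss = q * oxide                      # uniform spall at cooldown
    oxide -= loss
    spalled += loss
    # specimen weight change = oxygen gained in all oxide ever formed
    # minus the mass of oxide that has spalled away
    weight_change = (1 - fm) * (oxide + spalled) - spalled
    if cycle % 100 == 0:
        print(f"cycle {cycle}: weight change {weight_change:+.2f} mg/cm^2")
```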
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm, which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and the variances of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples using both flight and simulated data.
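To make step (3) concrete, the sketch below estimates a single dynamics parameter by minimizing the negative log-likelihood of Kalman filter innovations for a scalar toy model; the state-space model and all constants are hypothetical and far simpler than rotorcraft dynamics.

```python
# Toy maximum likelihood parameter estimation from Kalman innovations.
# Scalar model x[k+1] = a*x[k] + w, z[k] = x[k] + v is a hypothetical
# stand-in for the rotorcraft dynamics.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
a_true, Q, R, n = 0.8, 0.1, 0.2, 400
x, z = 0.0, np.empty(n)
for k in range(n):                               # simulate measurements
    x = a_true * x + rng.normal(0, np.sqrt(Q))
    z[k] = x + rng.normal(0, np.sqrt(R))

def neg_log_likelihood(a):
    xh, P, nll = 0.0, 1.0, 0.0
    for k in range(n):
        xh, P = a * xh, a * a * P + Q            # predict
        S = P + R                                # innovation variance
        nu = z[k] - xh                           # innovation
        nll += 0.5 * (np.log(2 * np.pi * S) + nu * nu / S)
        K = P / S                                # measurement update
        xh, P = xh + K * nu, (1 - K) * P
    return nll

est = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded")
print("true a:", a_true, "ML estimate:", round(est.x, 3))
```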
Giovannelli, J; Loury, P; Lainé, M; Spaccaferri, G; Hubert, B; Chaud, P
2015-05-01
To describe and evaluate the forecasts of the load that pandemic A(H1N1)2009 influenza would place on the general practitioner (GP) and hospital care systems, especially during its peak, in the Nord-Pas-de-Calais (NPDC) region, France. Modelling study. The epidemic curve was modelled using an assumption of a normal distribution of cases. The values for the forecast parameters were estimated from a literature review of observed data from the Southern hemisphere and French Overseas Territories, where the pandemic had already occurred. Two scenarios were considered, one realistic, the other pessimistic, enabling the authors to evaluate the 'reasonable worst case'. Forecasts were then assessed by comparing them with observed data in the NPDC region (population 4 million). The realistic scenario's forecasts estimated 300,000 cases, 1500 hospitalizations and 225 intensive care unit (ICU) admissions for the pandemic wave; 115 hospital beds and 45 ICU beds would be required per day during the peak. The pessimistic scenario's forecasts were 2-3 times higher than the realistic scenario's. Observed data were: 235,000 cases, 1585 hospitalizations, 58 ICU admissions, and a maximum of 11.6 ICU beds per day. The realistic scenario correctly estimated the temporal distribution of GP and hospitalized cases but overestimated the number of cases admitted to ICU. Obtaining more robust data for parameter estimation, particularly the rate of ICU admission among the population, which the authors recommend using, may provide better forecasts. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
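The scenario arithmetic described above can be sketched as follows, spreading a total case count over a normally distributed epidemic curve and applying severity rates; every parameter value below is a hypothetical placeholder, not the study's estimate.

```python
# Sketch of the forecast arithmetic: a Gaussian epidemic curve plus
# severity rates give peak weekly cases and peak ICU bed demand.
# All parameter values are hypothetical placeholders.
import numpy as np
from scipy.stats import norm

total_cases = 300_000          # assumed size of the wave
peak_week, sigma = 8.0, 3.0    # timing and spread of the wave (weeks)
hosp_rate = 1500 / 300_000     # hospitalizations per case
icu_rate = 225 / 300_000       # ICU admissions per case
los_icu_days = 7.0             # assumed ICU length of stay

weeks = np.arange(0, 17)
boundaries = np.append(weeks, 17)
weekly_cases = total_cases * np.diff(norm.cdf(boundaries, peak_week, sigma))
peak_weekly = weekly_cases.max()
print("peak weekly cases:", int(peak_weekly))
print("peak ICU admissions per week:", peak_weekly * icu_rate)
# bed demand per day at the peak ~ daily admissions * length of stay
print("ICU beds needed per day at peak:", peak_weekly * icu_rate / 7 * los_icu_days)
```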
Emissions-critical charge cooling using an organic rankine cycle
Ernst, Timothy C.; Nelson, Christopher R.
2014-07-15
The disclosure provides a system including a Rankine power cycle cooling subsystem providing emissions-critical charge cooling of an input charge flow. The system includes a boiler fluidly coupled to the input charge flow, an energy conversion device fluidly coupled to the boiler, a condenser fluidly coupled to the energy conversion device, a pump fluidly coupled to the condenser and the boiler, an adjuster that adjusts at least one parameter of the Rankine power cycle subsystem to change a temperature of the input charge exiting the boiler, and a sensor adapted to sense a temperature characteristic of the vaporized input charge. The system includes a controller that can determine a target temperature of the input charge sufficient to meet or exceed predetermined target emissions and cause the adjuster to adjust at least one parameter of the Rankine power cycle to achieve the predetermined target emissions.
Studies on the use of helicopters for oil spill clearance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinelli, F.N.
A program of work was undertaken to assess the use of a commercially available underslung crop-spraying bucket for spraying oil spill dispersants. The study consisted of land-based trials to measure relevant parameters of the spray and the effect of spray height and dispersant viscosity on these parameters. A sea trial was undertaken to observe the system under realistic conditions. (Copyright (c) Crown Copyright.)
Lin, Risa J; Jaeger, Dieter
2011-05-01
In previous studies we used the technique of dynamic clamp to study how temporal modulation of inhibitory and excitatory inputs control the frequency and precise timing of spikes in neurons of the deep cerebellar nuclei (DCN). Although this technique is now widely used, it is limited to interpreting conductance inputs as being location independent; i.e., all inputs that are biologically distributed across the dendritic tree are applied to the soma. We used computer simulations of a morphologically realistic model of DCN neurons to compare the effects of purely somatic vs. distributed dendritic inputs in this cell type. We applied the same conductance stimuli used in our published experiments to the model. To simulate variability in neuronal responses to repeated stimuli, we added a somatic white current noise to reproduce subthreshold fluctuations in the membrane potential. We were able to replicate our dynamic clamp results with respect to spike rates and spike precision for different patterns of background synaptic activity. We found only minor differences in the spike pattern generation between focal or distributed input in this cell type even when strong inhibitory or excitatory bursts were applied. However, the location dependence of dynamic clamp stimuli is likely to be different for each cell type examined, and the simulation approach developed in the present study will allow a careful assessment of location dependence in all cell types.
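A minimal sketch of the somatic stimulus protocol described above, assuming a leaky integrate-and-fire soma: time-varying excitatory and inhibitory conductances are applied at the soma together with white current noise. All cell and synaptic constants are hypothetical and the model is far simpler than the morphologically realistic DCN model.

```python
# Sketch of a somatic "dynamic clamp" stimulus applied to a leaky
# integrate-and-fire neuron with additive somatic current noise.
# All constants are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.1e-3, 1.0                        # s
steps = int(T / dt)
C, gL, EL = 200e-12, 10e-9, -70e-3         # farad, siemens, volt
Ee, Ei, Vth, Vreset = 0.0, -75e-3, -50e-3, -65e-3

# Conductance waveforms mimicking fluctuating synaptic background activity
ge = np.clip(2e-9 + 1e-9 * rng.normal(size=steps), 0, None)
gi = np.clip(4e-9 + 2e-9 * rng.normal(size=steps), 0, None)
noise = 20e-12 * rng.normal(size=steps)    # somatic current noise (discrete-time approximation)

V, spikes = EL, []
for k in range(steps):
    I_syn = ge[k] * (Ee - V) + gi[k] * (Ei - V)
    V += dt / C * (gL * (EL - V) + I_syn + noise[k])
    if V >= Vth:                           # threshold crossing = spike
        spikes.append(k * dt)
        V = Vreset
print("spike count:", len(spikes), "rate:", len(spikes) / T, "Hz")
```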
NASA Astrophysics Data System (ADS)
Kurz, C.; Mairani, A.; Parodi, K.
2012-08-01
Over the last decades, the application of proton and heavy-ion beams to external beam radiotherapy has rapidly increased. Due to the favourable lateral and depth dose profile, the superposition of narrow ion pencil beams may enable a highly conformal dose delivery to the tumour, with better sparing of the surrounding healthy tissue in comparison to conventional radiation therapy with photons. To fully exploit the promised clinical advantages of ion beams, an accurate planning of the patient treatments is required. The clinical treatment planning system (TPS) at the Heidelberg Ion-Beam Therapy Center (HIT) is based on a fast performing analytical algorithm for dose calculation, relying, among others, on laterally integrated depth dose distributions (DDDs) simulated with the FLUKA Monte Carlo (MC) code. Important input parameters of these simulations need to be derived from a comparison of the simulated DDDs with measurements. In this work, the first measurements of 16O ion DDDs at HIT are presented with a focus on the determined Bragg peak positions and the understanding of factors influencing the shape of the distributions. The measurements are compared to different simulation approaches aiming to reproduce the acquired data at best. A simplified geometrical model is first used to optimize important input parameters, not known a priori, in the simulations. This method is then compared to a more realistic, but also more time-consuming simulation approach better accounting for the experimental set-up and the measuring process. The results of this work contributed to a pre-clinical oxygen ion beam database, which is currently used by a research TPS for corresponding radio-biological cell experiments. A future extension to a clinical database used by the clinical TPS at HIT is foreseen. As a side effect, the performed investigations showed that the typical water equivalent calibration approach of experimental data acquired with water column systems leads to slight deviations between the experimentally determined and the real Bragg peak positions. For improved accuracy, the energy dependence of the stopping power, and herewith the water equivalent thickness, of the material downstream of the water tank should be considered in the analysis of measured data.
NASA Astrophysics Data System (ADS)
Graymer, R. W.; Simpson, R. W.
2014-12-01
Graymer and Simpson (2013, AGU Fall Meeting) showed that in a simple 2D multi-fault system (vertical, parallel, strike-slip faults bounding blocks without strong material property contrasts) slip rate on block-bounding faults can be reasonably estimated by the difference between the mean velocity of adjacent blocks if the ratio of the effective locking depth to the distance between the faults is 1/3 or less ("effective" locking depth is a synthetic parameter taking into account actual locking depth, fault creep, and material properties of the fault zone). To check the validity of that observation for a more complex 3D fault system and a realistic distribution of observation stations, we developed a synthetic suite of GPS velocities from a dislocation model, with station location and fault parameters based on the San Francisco Bay region. Initial results show that if the effective locking depth is set at the base of the seismogenic zone (about 12-15 km), about 1/2 the interfault distance, the resulting synthetic velocity observations, when clustered, do a poor job of returning the input fault slip rates. However, if the apparent locking depth is set at 1/2 the distance to the base of the seismogenic zone, or about 1/4 the interfault distance, the synthetic velocity field does a good job of returning the input slip rates except where the fault is in a strong restraining orientation relative to block motion or where block velocity is not well defined (for example west of the northern San Andreas Fault where there are no observations to the west in the ocean). The question remains as to where in the real world a low effective locking depth could usefully model fault behavior. Further tests are planned to define the conditions where average cluster-defined block velocities can be used to reliably estimate slip rates on block-bounding faults. These rates are an important ingredient in earthquake hazard estimation, and another tool to provide them should be useful.
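The block-velocity test described above can be reproduced in miniature with the standard screw-dislocation (arctan) model of interseismic deformation; the sketch below compares block-mean velocity differences to the true slip rate for locking depths of 1/2 and 1/4 the fault spacing. The two-fault geometry and rates are hypothetical.

```python
# Sketch of the screw-dislocation (arctan) interseismic velocity field
# used to test whether block-mean velocity differences recover fault
# slip rates. Geometry and rates are hypothetical.
import numpy as np

slip, spacing = 10.0, 40.0                     # mm/yr, km
faults = np.array([0.0, spacing])              # fault positions (km)
stations = np.linspace(-60.0, 100.0, 400)      # observation points (km)

def velocity(x, D):
    """Surface velocity for locked faults with effective locking depth D (km)."""
    return sum((slip / np.pi) * np.arctan((x - xf) / D) for xf in faults)

for D in (spacing / 2, spacing / 4):           # 1/2 vs 1/4 the fault spacing
    v = velocity(stations, D)
    block = [v[stations < 0].mean(),
             v[(stations > 0) & (stations < spacing)].mean(),
             v[stations > spacing].mean()]
    diffs = [round(block[i + 1] - block[i], 2) for i in range(2)]
    print(f"D = {D:4.1f} km, block-velocity differences: {diffs} (true slip = 10.0)")
```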
Chemical Transport in a Fissured Rock: Verification of a Numerical Model
NASA Astrophysics Data System (ADS)
Rasmuson, A.; Narasimhan, T. N.; Neretnieks, I.
1982-10-01
Numerical models for simulating chemical transport in fissured rocks constitute powerful tools for evaluating the acceptability of geological nuclear waste repositories. Due to the very long-term, high toxicity of some nuclear waste products, the models are required to predict, in certain cases, the spatial and temporal distribution of chemical concentrations as low as 0.001% of the concentration released from the repository. Whether numerical models can provide such accuracies is a major question addressed in the present work. To this end we have verified a numerical model, TRUMP, which solves the advective diffusion equation in general three dimensions, with or without decay and source terms. The method is based on an integrated finite difference approach. The model was verified against the known analytic solution of the one-dimensional advection-diffusion problem, as well as that of advection-diffusion in a system of parallel fractures separated by spherical particles. The studies show that as long as the magnitude of advectance is equal to or less than that of conductance for the closed surface bounding any volume element in the region (that is, a numerical Peclet number < 2), the numerical method can indeed match the analytic solution within errors of ±10^-3% or less. The realistic input parameters used in the sample calculations suggest that such a range of Peclet numbers is indeed likely to characterize deep groundwater systems in granitic and ancient argillaceous formations. Thus TRUMP in its present form provides a viable tool for use in nuclear waste evaluation studies. A sensitivity analysis based on the analytic solution suggests that the prediction errors introduced by uncertainties in input parameters are likely to be larger than the computational inaccuracies introduced by the numerical model. Currently, a disadvantage of the TRUMP model is that the iterative method of solving the set of simultaneous equations is rather slow when time constants vary widely over the flow region. Although the iterative solution may be very desirable for large three-dimensional problems in order to minimize computer storage, it seems desirable to use a direct solver in conjunction with the mixed explicit-implicit approach whenever possible. Work in this direction is in progress.
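The grid Peclet criterion invoked above translates directly into a mesh-design check, sketched below with illustrative transport values.

```python
# Quick sketch of the grid Peclet criterion cited above: the cell size
# must satisfy Pe = v*dx/D <= 2 for advection not to overwhelm the
# diffusive (conductance) term. Values are illustrative only.
v = 1.0e-7        # groundwater velocity, m/s
D = 1.0e-9        # effective dispersion/diffusion coefficient, m^2/s

dx_max = 2.0 * D / v
print(f"maximum admissible cell size: {dx_max:.3f} m")
for dx in (0.005, 0.02, 0.05):
    pe = v * dx / D
    print(f"dx = {dx} m -> Pe = {pe:.1f} ({'ok' if pe <= 2 else 'too coarse'})")
```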
Master control data handling program uses automatic data input
NASA Technical Reports Server (NTRS)
Alliston, W.; Daniel, J.
1967-01-01
General purpose digital computer program is applicable for use with analysis programs that require basic data and calculated parameters as input. It is designed to automate input data preparation for flight control computer programs, but it is general enough to permit application in other areas.
NASA Astrophysics Data System (ADS)
Adebiyi, S. J.; Adebesin, B. O.; Ikubanni, S. O.; Joshua, B. W.
2017-05-01
Empirical models of the ionosphere, such as the International Reference Ionosphere (IRI) model, play a vital role in evaluating the environmental effect on the operation of space-based communication and navigation technologies. The IRI extended to Plasmasphere (IRI-Plas) model can be adjusted with external data to update its electron density profile while still maintaining the overall integrity of the model representations. In this paper, the performance of the total electron content (TEC) assimilation option of the IRI-Plas at two equatorial stations, Jicamarca, Peru (geographic: 12°S, 77°W, dip angle 0.8°) and Cachoeira Paulista, Brazil (geographic: 22.7°S, 45°W, dip angle -26°), is examined during quiet and disturbed conditions. TEC, the F2 layer critical frequency (foF2), and the peak height (hmF2) predicted when the model is operated without external input were used as a baseline in our model evaluation. Results indicate that TEC predicted with the assimilation option generally produced smaller estimation errors than the "no extra input" option during both quiet and disturbed conditions. Generally, the error is smaller at the equatorial trough than near the crest for both quiet and disturbed days. With the assimilation option, there is a substantial improvement in storm-time estimations compared with quiet-time predictions; the improvement is, however, independent of the storm's severity. Furthermore, the modeled foF2 and hmF2 are generally poor with TEC assimilation, particularly the hmF2 prediction, at the two locations during both quiet and disturbed conditions. Consequently, the IRI-Plas model assimilated with TEC values only may not be sufficient where more realistic instantaneous values of the peak parameters are required.
Mehl, Steffen W.; Hill, Mary C.
2013-01-01
This report documents the addition of ghost-node Local Grid Refinement (LGR2) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference groundwater flow model. LGR2 provides the capability to simulate groundwater flow using multiple block-shaped higher-resolution local grids (child models) within a coarser-grid parent model. LGR2 accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the grid-refinement interface boundary. LGR2 can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems. Traditional one-way coupled telescopic mesh refinement methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled ghost-node method of LGR2 provides a more rigorous coupling in which the solution accuracy is controlled by user-defined convergence criteria. In realistic problems, this can result in substantially more accurate solutions, at the cost of increased computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflect conditions in both model grids. This report describes the method used by LGR2, evaluates its accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH2) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR2.
Mehl, Steffen W.; Hill, Mary C.
2006-01-01
This report documents the addition of shared-node Local Grid Refinement (LGR) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference ground-water flow model. LGR provides the capability to simulate ground-water flow using one block-shaped higher-resolution local grid (a child model) within a coarser-grid parent model. LGR accomplishes this by iteratively coupling two separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundary. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined ground-water systems. Traditional one-way coupled telescopic mesh refinement (TMR) methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled shared-node method of LGR provides a more rigorous coupling in which the solution accuracy is controlled by user-defined convergence criteria. In realistic problems, this can result in substantially more accurate solutions, at the cost of increased computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflect conditions in both model grids. This report describes the method used by LGR, evaluates LGR accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR.
Nonlinear Site Response Validation Studies Using KIK-net Strong Motion Data
NASA Astrophysics Data System (ADS)
Asimaki, D.; Shi, J.
2014-12-01
Earthquake simulations nowadays produce realistic ground motion time series in the range of engineering design applications. Of particular significance to engineers are simulations of near-field motions and large-magnitude events, for which observations are scarce. With the engineering community slowly adopting the use of simulated ground motions, site response models need to be re-evaluated in terms of their capabilities and limitations in 'translating' the simulated time series from rock-surface output to structural-analysis input. In this talk, we evaluate three one-dimensional site response models: linear viscoelastic, equivalent linear and nonlinear. We evaluate the performance of the models by comparing predictions to observations at 30 downhole stations of the Japanese KiK-net network that have recorded several strong events, including the 2011 Tohoku earthquake. Velocity profiles are used as the only input to all models, while additional parameters such as the quality factor, density and nonlinear dynamic soil properties are estimated from empirical correlations. We quantify the differences between ground surface predictions and observations in terms of both seismological and engineering intensity measures, including bias ratios of peak ground response, visual comparisons of elastic spectra, and the inelastic-to-elastic deformation ratio for multiple ductility ratios. We observe that PGV/Vs30, as a measure of strain, is a better predictor of site nonlinearity than PGA, and that incremental nonlinear analyses are necessary to produce reliable estimates of high-frequency ground motion components at soft sites. We finally discuss the implications of our findings for the parameterization of nonlinear amplification factors in GMPEs, and for the extensive use of equivalent linear analyses in probabilistic seismic hazard procedures.
NASA Technical Reports Server (NTRS)
Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.
2015-01-01
Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes the communication of the potential risk of using model outputs. Unfortunately, in practice, this may result in an overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results robustness (also known as model input sensitivity), to improve the credibility scoring process for specific simulations.
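The proposed alternative can be sketched as a scoring rule that weights each input's pedigree by the output's sensitivity to that input, in contrast to the standard's worst-input rule; the scores, sensitivities and weighting scheme below are hypothetical illustrations, not the presentation's actual mechanism.

```python
# Sketch contrasting the worst-input pedigree rule of NASA-STD-7009 with
# a hypothetical sensitivity-weighted alternative. All values are invented.
inputs = {                      # name: (pedigree score 0-4, normalized sensitivity)
    "metabolic_rate":  (4, 0.60),
    "tissue_density":  (3, 0.35),
    "legacy_constant": (1, 0.05),   # poor data, but the output barely depends on it
}

std7009_score = min(score for score, _ in inputs.values())
weighted_score = sum(score * sens for score, sens in inputs.values()) \
               / sum(sens for _, sens in inputs.values())
print("NASA-STD-7009 (worst input):", std7009_score)
print("sensitivity-weighted score: ", round(weighted_score, 2))
```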
NASA Astrophysics Data System (ADS)
Chowdhury, S.; Sharma, A.
2005-12-01
Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that are conditioned on the observed input variable rather than the true quantity, resulting in a biased representation of the underlying system dynamics as well as an increase in both the bias and the predictive uncertainty of simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different from that in the past. Possible examples include situations where the accuracy of the catchment-averaged rainfall has increased substantially due to an increase in rain-gauge density, or the accuracy of climatic observations (such as sea surface temperatures) has increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX) [Cook, 1994], operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts with generating a series of alternate realisations of model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs. Once several such realisations have been drawn, one is able to formulate an empirical relationship between the parameter values and the level of additive noise present. SIMEX is based on the theory that the trend in the alternate parameters can be extrapolated back to the notional error-free zone. We illustrate the utility of SIMEX in a synthetic rainfall-runoff modelling scenario, and in an application studying the dependence of uncertain, spatially distributed sea surface temperature anomalies on an indicator of the El Nino Southern Oscillation, the Southern Oscillation Index (SOI). The errors in rainfall data and their effect are explored using the Sacramento rainfall-runoff model; the rainfall uncertainty is assumed to be multiplicative and temporally invariant. The model used to relate the sea surface temperature anomalies (SSTA) to the SOI is assumed to be of a linear form; the nature of uncertainty in the SSTA is additive and varies with time. The SIMEX framework allows assessment of the relationship between the error-free inputs and the response. Reference: Cook, J. R. and Stefanski, L. A., Simulation-Extrapolation Estimation in Parametric Measurement Error Models, Journal of the American Statistical Association, 89(428), 1314-1328, 1994.
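A runnable sketch of the SIMEX procedure for the simplest case, a regression slope attenuated by additive input error, is given below; the data and noise levels are synthetic.

```python
# SIMEX sketch: recover a regression slope attenuated by additive
# measurement error in the input. Data and error variance are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n, beta, sigma_u = 2000, 2.0, 0.8
x_true = rng.normal(0, 1, n)
y = beta * x_true + rng.normal(0, 0.5, n)
x_obs = x_true + rng.normal(0, sigma_u, n)       # error-contaminated input

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # multiples of the error variance
B = 50                                           # noise replicates per lambda
means = []
for lam in lambdas:
    est = [slope(x_obs + rng.normal(0, np.sqrt(lam) * sigma_u, n), y)
           for _ in range(B)]
    means.append(np.mean(est))

# Extrapolate the trend back to lambda = -1, the notional error-free point.
coeffs = np.polyfit(lambdas, means, 2)
print("naive slope:", round(slope(x_obs, y), 3))
print("SIMEX slope:", round(np.polyval(coeffs, -1.0), 3), "(true: 2.0)")
```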
Construction of Gridded Daily Weather Data and its Use in Central-European Agroclimatic Study
NASA Astrophysics Data System (ADS)
Dubrovsky, M.; Trnka, M.; Skalak, P.
2013-12-01
The regional-scale simulation of weather-sensitive processes (e.g. hydrology, agriculture and forestry) for the present and/or future climate often requires high-resolution meteorological inputs in terms of time series of selected surface weather characteristics (typically temperature, precipitation, solar radiation, humidity and wind) for a set of stations or on a regular grid. As even the latest Global and Regional Climate Models (GCMs and RCMs) do not provide a realistic representation of the statistical structure of surface weather, the model outputs must be postprocessed (downscaled) to achieve the desired statistical structure of the weather data before being used as input to follow-up simulation models. One downscaling approach, which is employed here, is based on a weather generator (WG), which is calibrated using observed weather series, interpolated, and then modified according to GCM- or RCM-based climate change scenarios. The present contribution, in which the parametric daily weather generator M&Rfi is linked to high-resolution RCM output (ALADIN-Climate/CZ model) and GCM-based climate change scenarios, consists of two parts. The first part focuses on the methodology. Firstly, the gridded WG representing the baseline climate is created by merging information from observations and high-resolution RCM outputs. In this procedure, the WG is calibrated with RCM-simulated multivariate weather series, and the grid-specific WG parameters are then de-biased by spatially interpolated correction factors based on a comparison of WG parameters calibrated with RCM-simulated weather series versus the spatially scarcer observations. To represent the future climate, the WG parameters are modified according to 'WG-friendly' climate change scenarios. These scenarios are defined in terms of changes in WG parameters and include, apart from changes in the means, changes in WG parameters representing additional characteristics of the weather series (e.g. the probability of wet day occurrence and the lag-1 autocorrelation of daily mean temperature). The WG-friendly scenarios for the present experiment are based on a comparison of future versus baseline surface weather series simulated by GCMs from the CMIP3 database. The second part presents the results of a climate change impact study based on the above methodology applied to Central Europe. Changes in selected climatic characteristics (focusing on extreme precipitation and temperature) and agroclimatic characteristics (including the number of days during the vegetation season with heat and drought stresses) are analysed. In discussing the results, the emphasis is put on the 'added value' of various aspects of the above methodology (e.g. the inclusion of changes in 'advanced' WG parameters in the climate change scenarios). Acknowledgements: The present experiment is made within the frame of the projects WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR), ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), and VALUE (COST ES 1102 action).
Program for User-Friendly Management of Input and Output Data Sets
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard
2003-01-01
A computer program manages large, hierarchical sets of input and output (I/O) parameters (typically, sequences of alphanumeric data) involved in computational simulations in a variety of technological disciplines. This program represents sets of parameters as structures coded in object-oriented but otherwise standard American National Standards Institute C language. Each structure contains a group of I/O parameters that make sense as a unit in the simulation program with which this program is used. The addition of options and/or elements to sets of parameters amounts to the addition of new elements to data structures. By association of child data generated in response to a particular user input, a hierarchical ordering of input parameters can be achieved. Associated with child data structures are the creation and description mechanisms within the parent data structures. Child data structures can spawn further child data structures. In this program, the creation and representation of a sequence of data structures is effected by one line of code that looks for children of a sequence of structures until there are no more children to be found. A linked list of structures is created dynamically and is completely represented in the data structures themselves. Such hierarchical data presentation can guide users through otherwise complex setup procedures and it can be integrated within a variety of graphical representations.
Computing the structural influence matrix for biological systems.
Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco
2016-06-01
We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
NASA Astrophysics Data System (ADS)
Srinivas, Kadivendi; Vundavilli, Pandu R.; Manzoor Hussain, M.; Saiteja, M.
2016-09-01
Welding input parameters such as current, gas flow rate and torch angle play a significant role in determining the mechanical properties of a weld joint. Traditionally, the weld input parameters must be determined for every new welded product to obtain a quality weld joint, which is time-consuming. In the present work, the effect of plasma arc welding parameters on mild steel was studied using a neural network approach. To obtain a response equation governing the input-output relationships, conventional regression analysis was also performed. The experimental data were constructed based on a Taguchi design, and the training data required for the neural networks were randomly generated by varying the input variables within their respective ranges. The responses were calculated for each combination of input variables by using the response equations obtained through the conventional regression analysis. The performance of a Levenberg-Marquardt back-propagation neural network and a radial basis neural network (RBNN) was compared on various randomly generated test cases, different from the training cases. From the results, it is interesting to note that for these test cases the RBNN analysis gave improved results compared to the feed-forward back-propagation neural network analysis. The RBNN analysis also showed increasing performance as the data points moved away from the initial input values.
NASA Astrophysics Data System (ADS)
vellaichamy, Lakshmanan; Paulraj, Sathiya
2018-02-01
Dissimilar welding of Incoloy 800HT and P91 steel was carried out using the gas tungsten arc welding (GTAW) process. These materials are used in nuclear power plant and aerospace applications because Incoloy 800HT possesses good corrosion and oxidation resistance while P91 possesses high-temperature strength and creep resistance. This work discusses multi-objective optimization using grey relational analysis (GRA) with 9CrMoV-N filler material. The experiments were conducted using an L9 orthogonal array. The input parameters are current, voltage and speed; the output responses are tensile strength, hardness and toughness. GRA was used to optimize the input parameters against the multiple output responses. The optimal parameter combination was determined as A2B1C1, i.e., a welding current of 120 A, a voltage of 16 V and a welding speed of 0.94 mm/s. The mechanical properties for the best and least grey relational grades were validated by the metallurgical characteristics.
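The GRA computation referred to above follows a standard three-step recipe (normalization, grey relational coefficients, grades); the sketch below applies it to an L9 set of larger-the-better responses with hypothetical placeholder values, not the paper's measurements.

```python
# Worked sketch of grey relational analysis (GRA) for an L9 experiment
# with larger-the-better responses. Response values are hypothetical.
import numpy as np

# rows: L9 runs; columns: tensile strength (MPa), hardness (HV), toughness (J)
Y = np.array([[520, 210, 42], [545, 225, 48], [530, 215, 44],
              [560, 230, 50], [540, 220, 46], [525, 212, 43],
              [555, 228, 49], [535, 218, 45], [548, 226, 47]], float)

# Step 1: larger-the-better normalization to [0, 1]
Z = (Y - Y.min(axis=0)) / (Y.max(axis=0) - Y.min(axis=0))

# Step 2: grey relational coefficients with distinguishing coefficient 0.5
delta = 1.0 - Z
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: grey relational grade = mean coefficient across responses
grade = grc.mean(axis=1)
print("grades:", np.round(grade, 3))
print("best run (1-indexed):", int(grade.argmax()) + 1)
```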
NASA Astrophysics Data System (ADS)
Lou, Yang; Zhou, Weimin; Matthews, Thomas P.; Appleton, Catherine M.; Anastasio, Mark A.
2017-04-01
Photoacoustic computed tomography (PACT) and ultrasound computed tomography (USCT) are emerging modalities for breast imaging. As in all emerging imaging technologies, computer-simulation studies play a critically important role in developing and optimizing the designs of hardware and image reconstruction methods for PACT and USCT. Using computer simulations, the parameters of an imaging system can be systematically and comprehensively explored in a way that is generally not possible through experimentation. When conducting such studies, numerical phantoms are employed to represent the physical properties of the patient or object to be imaged that influence the measured image data. It is highly desirable to utilize numerical phantoms that are realistic, especially when task-based measures of image quality are to be utilized to guide system design. However, most reported computer-simulation studies of PACT and USCT breast imaging employ simple numerical phantoms that oversimplify the complex anatomical structures in the human female breast. We develop and implement a methodology for generating anatomically realistic numerical breast phantoms from clinical contrast-enhanced magnetic resonance imaging data. The phantoms depict vascular structures and the volumetric distribution of different tissue types in the breast. By assigning optical and acoustic parameters to the different tissue structures, both optical and acoustic breast phantoms are established for use in PACT and USCT studies.
An evaluation of differences due to changing source directivity in room acoustic computer modeling
NASA Astrophysics Data System (ADS)
Vigeant, Michelle C.; Wang, Lily M.
2004-05-01
This project examines the effects of changing source directivity in room acoustic computer models on objective parameters and subjective perception. Acoustic parameters and auralizations calculated from omnidirectional versus directional sources were compared. Three realistic directional sources were used, measured in a limited number of octave bands from a piano, a singing voice, and a violin. A highly directional source that beams only within one-sixteenth of a sphere was also tested. Objectively, there were differences of 5% or more in reverberation time (RT) between the realistic directional and omnidirectional sources. Between the beaming directional and omnidirectional sources, differences in clarity were close to the just-noticeable-difference (jnd) criterion of 1 dB. Subjectively, participants had great difficulty distinguishing between the realistic and omnidirectional sources; very few could discern the differences in RTs. However, a larger percentage (32% vs. 20%) could differentiate between the beaming and omnidirectional sources, as well as the respective differences in clarity. Further studies of the objective results from different beaming sources have been pursued: the direction of the beaming source in the room is changed, as well as the beamwidth. The objective results are analyzed to determine whether differences fall within the jnd of sound pressure level, RT, and clarity.
Calibration of discrete element model parameters: soybeans
NASA Astrophysics Data System (ADS)
Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal
2018-05-01
Discrete element method (DEM) simulations are broadly used to gain insight into the flow characteristics of granular materials in complex particulate systems. The DEM input parameters of a model are a critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine the DEM input parameters for the Hertz-Mindlin model using soybeans as the granular material. To achieve this aim, a widely accepted calibration approach was used with a standard box-type apparatus. Further, qualitative and quantitative findings, such as the particle profile, the height of kernels retained along the acrylic wall, and the angle of repose, from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m^{-3}); and (b) interaction parameters, particle-particle: coefficient of restitution (0.17), coefficient of static friction (0.26), coefficient of rolling friction (0.08); and particle-wall: coefficient of restitution (0.35), coefficient of static friction (0.30), coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.
A realist review of the partograph: when and how does it work for labour monitoring?
Bedwell, Carol; Levin, Karen; Pett, Celia; Lavender, Dame Tina
2017-01-13
The partograph (or partogram) is recommended by the World Health Organisation (WHO), for monitoring labour wellbeing and progress. Concerns about limitations in the way the partograph is used in the clinical context and the potential impact on its effectiveness have led to this realist systematic review of partograph use. This review aimed to answer two key questions, 1) What is it about the partograph that works (or does not work); for whom does it work; and in what circumstances? 2) What are the essential inputs required for the partograph to work? A comprehensive search strategy encompassed key databases; including papers of varying methodologies. Papers were selected for inclusion if the focus of the paper was the partograph and related to context, mechanism or outcome. Ninety five papers were included for data synthesis. Two authors completed data extraction and synthesis. The evidence synthesis relates the evidence to identified theories of health worker acceptability, health system support, effective referral systems, human resources and health worker competence, highlighting barriers and facilitators. This first comprehensive realist synthesis of the partograph, provides the international community of maternity clinicians with a picture of potential issues and solutions related to successful labour recording and management, which is also translatable to other monitoring approaches.
Development of a program theory for shared decision-making: a realist review protocol.
Groot, Gary; Waldron, Tamara; Carr, Tracey; McMullen, Linda; Bandura, Lori-Ann; Neufeld, Shelley-May; Duncan, Vicky
2017-06-17
The practicality of applying evidence to healthcare systems with the aim of implementing change is an ongoing challenge for practitioners, policy makers, and academics. Shared decision-making (SDM), a method of medical decision-making that allows a balanced relationship between patients, physicians, and other key players in the medical decision process, is purported to improve patient and system outcomes. Despite the oft-mentioned benefits, there are gaps in the current literature between theory and implementation that would benefit from a realist approach given the value of this methodology to analyze complex interventions. In this protocol, we outline a study that will explore: "In which situations, how, why, and for whom does SDM between patients and health care providers contribute to improved decision making?" A seven step iterative process will be described including preliminary theory development, establishment of a search strategy, selection and appraisal of literature, data extraction, analysis and synthesis of extracted results from literature, and formation of a revised program theory with the input of patients, physicians, nurse navigators, and policy makers from a stakeholder session. The goal of the realist review will be to identify and refine a program theory for SDM through the identification of mechanisms which shape the characteristics of when, how, and why SDM will, and will not, work. PROSPERO CRD42017062609.
Nutrient inputs into the Gulf of Finland: Trends and water protection targets
NASA Astrophysics Data System (ADS)
Knuuttila, Seppo; Räike, Antti; Ekholm, Petri; Kondratyev, Sergey
2017-07-01
The catchment area of the Gulf of Finland (GOF) is nearly 14 times larger than the sea area and is home to 12 million people. The GOF is thus heavily polluted by nutrients, and eutrophication is one of the major environmental concerns. The aim of this study was to estimate trends in the nutrient input and to evaluate whether current water protection targets (national, EU, HELCOM) will be achieved. We used both national and international (HELCOM) databases to evaluate nutrient inputs from the surrounding three countries (Estonia, Finland and Russia). The average nitrogen (N) input into the GOF was 112,000 t y-1 for the period 2009-2013, with rivers responsible for 79%, direct point sources for 10% and deposition for 11% of the input. Phosphorus (P) input was 4270 t y-1, of which rivers were responsible for 88% and point sources for 12%. The largest proportions (61% for N and 73% for P) of the inputs came from Russia, despite its specific areal inputs (input divided by land area) being smaller than those of Estonia and Finland. The changes in nutrient inputs into the GOF are largely due to the changes in Russian inputs, and in particular changes in the nutrient fluxes of the River Neva. The latest available flow-normalised data showed that N export decreased slightly from 1994 to 2010, while flow-normalised P export had clearly decreased by 2010. The P input into the GOF as a whole has decreased significantly over the past 10 years as a result of the reconstruction of wastewater treatment infrastructure in St Petersburg and the subsequent control of a P leak at the Phosphorit factory in 2012; the latter measure also explains the steep decrease in riverine P export during recent years. Further reduction of inputs to meet the ambitious nutrient reduction goals of HELCOM and of the WFD seems to be a challenge, for Finland and Estonia in particular. Russia appears to have already reached approximately 90% of the BSAP's reduction target, with fulfilment of the remainder of the P target appearing a fairly realistic aim as well.
NASA Astrophysics Data System (ADS)
Pain, F.; Dhenain, M.; Gurden, H.; Routier, A. L.; Lefebvre, F.; Mastrippolito, R.; Lanièce, P.
2008-10-01
The β-microprobe is a simple and versatile technique complementary to small-animal positron emission tomography (PET). It relies on local measurements of the concentration of positron-labeled molecules. So far, it has been successfully used in anesthetized rats for pharmacokinetics experiments and for the study of brain energy metabolism. However, the ability of the technique to provide accurate quantitative measurements using 18F, 11C and 15O tracers is likely to suffer from the contribution of the 511 keV gamma-ray background to the signal and from the contribution of positrons from brain loci surrounding the locus of interest. The aim of the present paper is to provide a method for evaluating several parameters that are expected to affect the quantification of recordings performed in vivo with this methodology. We have developed realistic voxelized phantoms of the rat whole body and brain, and used them as input geometries for Monte Carlo simulations of previously reported β-microprobe experiments. In the context of realistic experiments (binding of 11C-Raclopride to D2 dopaminergic receptors in the striatum; local glucose metabolic rate measurement with 18F-FDG; and H215O blood flow measurements in the somatosensory cortex), we have calculated the detection efficiencies and the corresponding contribution of 511 keV gammas from accumulation in peripheral organs. We confirmed that the 511 keV gamma background does not impair quantification. To evaluate the contribution of positrons from adjacent structures, we have developed β-Assistant, a program based on a rat brain voxelized atlas and matrices of local detection efficiencies calculated by Monte Carlo simulations for several probe geometries. This program was used to calculate the 'apparent sensitivity' of the probe for each brain structure included in the detection volume. For a given localization of a probe within the brain, this allows the different sources of beta signal to be quantified. Finally, since stereotaxic accuracy is crucial for quantification in most microprobe studies, the influence of stereotaxic positioning error was studied for several realistic experiments in favorable and unfavorable situations (binding of 11C-Raclopride to D2 dopaminergic receptors in the striatum; binding of 18F-MPPF to 5HT1A receptors in the dorsal raphe nucleus).
AIRCRAFT REACTOR CONTROL SYSTEM APPLICABLE TO TURBOJET AND TURBOPROP POWER PLANTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorker, G.E.
1955-07-19
Control systems proposed for direct-cycle nuclear-powered aircraft commonly involve control of engine speed, nuclear energy input, and chemical energy input. A system is proposed in which these parameters are controlled by controlling the total energy input, the ratio of nuclear to chemical energy input, and the engine speed. The system is equally applicable to turbojet or turboprop applications. (auth)
NASA Technical Reports Server (NTRS)
Briggs, Maxwell; Schifer, Nicholas
2011-01-01
Test hardware was used to validate net heat input prediction models. Problem: net heat input cannot be measured directly during operation, yet it is a key parameter in predicting convertor efficiency. Efficiency = Electrical Power Output (Measured) divided by Net Heat Input (Calculated). Efficiency is used to compare convertor designs and to trade technology advantages for mission planning.
A Revised Thermosphere for the Mars Global Reference Atmospheric Model (Mars-GRAM Version 3.4)
NASA Technical Reports Server (NTRS)
Justus, C. G.; Johnson, D. L.; James, B. F.
1996-01-01
This report describes the newly-revised model thermosphere for the Mars Global Reference Atmospheric Model (Mars-GRAM, Version 3.4). It also provides descriptions of other changes made to the program since publication of the programmer's guide for Mars-GRAM Version 3.34. The original Mars-GRAM model thermosphere was based on the global-mean model of Stewart. The revised thermosphere is based largely on parameterizations derived from output data from the three-dimensional Mars Thermospheric Global Circulation Model (MTGCM). The new thermospheric model includes revised dependence on the 10.7 cm solar flux for the global means of exospheric temperature, temperature of the base of the thermosphere, and scale height for the thermospheric temperature variations, as well as revised dependence on orbital position for global mean height of the base of the thermosphere. Other features of the new thermospheric model are: (1) realistic variations of temperature and density with latitude and time of day, (2) more realistic wind magnitudes, based on improved estimates of horizontal pressure gradients, and (3) allowance for user-input adjustments to the model values for mean exospheric temperature and for height and temperature at the base of the thermosphere. Other new features of Mars-GRAM 3.4 include: (1) allowance for user-input values of climatic adjustment factors for temperature profiles from the surface to 75 km, and (2) a revised method for computing the sub-solar longitude position in the 'ORBIT' subroutine.
Data-Driven Modeling and Rendering of Force Responses from Elastic Tool Deformation
Rakhmatov, Ruslan; Ogay, Tatyana; Jeon, Seokhee
2018-01-01
This article presents a new data-driven model design for rendering force responses from elastic tool deformation. The new design incorporates a six-dimensional input describing the initial position of the contact, as well as the state of the tool deformation. The input-output relationship of the model was represented by a radial basis functions network, which was optimized based on training data collected from real tool-surface contact. Since the input space of the model is represented in the local coordinate system of a tool, the model is independent of recording and rendering devices and can be easily deployed to an existing simulator. The model also supports complex interactions, such as self and multi-contact collisions. In order to assess the proposed data-driven model, we built a custom data acquisition setup and developed a proof-of-concept rendering simulator. The simulator was evaluated through numerical and psychophysical experiments with four different real tools. The numerical evaluation demonstrated the perceptual soundness of the proposed model, meanwhile the user study revealed the force feedback of the proposed simulator to be realistic.
Effect of Heat Input on Geometry of Austenitic Stainless Steel Weld Bead on Low Carbon Steel
NASA Astrophysics Data System (ADS)
Saha, Manas Kumar; Hazra, Ritesh; Mondal, Ajit; Das, Santanu
2018-05-01
Among different weld cladding processes, gas metal arc welding (GMAW) cladding is a cost-effective, user-friendly and versatile method for protecting the surfaces of relatively lower-grade structural steels from corrosion and/or erosion wear by depositing high-grade stainless steels onto them. The quality of cladding largely depends upon the bead geometry of the deposited weldment. Weld bead geometry parameters, like bead width, reinforcement height and depth of penetration, and ratios like the reinforcement form factor (RFF) and penetration shape factor (PSF), determine the quality of the weld bead geometry. Various process parameters of gas metal arc welding, like heat input, current, voltage, arc travel speed and mode of metal transfer, influence the formation of the bead geometry. In the current experimental investigation, austenitic stainless steel (316) weld beads were formed on low alloy structural steel (E350) by GMAW using 100% CO2 as the shielding gas. Different combinations of current, voltage and arc travel speed were chosen so that the heat input increased from 0.35 to 0.75 kJ/mm. Nine weld beads were deposited and replicated twice. The observations show that weld bead width increases linearly with heat input, whereas reinforcement height and depth of penetration do not. Regression analysis was done to establish the relationship between heat input and the different geometrical parameters of the weld bead, and the regression models developed agree well with the experimental data. Within the domain of the present experiment, it is observed that at higher heat input the weld bead gets wider with little change in penetration and reinforcement; therefore, higher heat input may be recommended for austenitic stainless steel cladding on low alloy steel.
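The regression step can be sketched as a simple linear fit of bead width against heat input over the stated domain; the width values below are hypothetical placeholders for the measured means.

```python
# Sketch of the regression step: linear fit of bead width vs heat input
# over the experimental domain (0.35-0.75 kJ/mm). Width values are
# hypothetical placeholders, not the paper's measurements.
import numpy as np

heat_input = np.linspace(0.35, 0.75, 9)                                  # kJ/mm
bead_width = np.array([6.1, 6.8, 7.4, 8.0, 8.7, 9.3, 9.9, 10.6, 11.2])  # mm

b1, b0 = np.polyfit(heat_input, bead_width, 1)    # slope, intercept
pred = b0 + b1 * heat_input
r2 = 1 - np.sum((bead_width - pred) ** 2) / np.sum((bead_width - bead_width.mean()) ** 2)
print(f"width = {b0:.2f} + {b1:.2f} * heat_input  (R^2 = {r2:.3f})")
```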
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur
2016-05-01
In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as case study and an elevation distributed HBV-model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for the interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer quality of precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about the streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and for the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improves. Less information in precipitation input resulted in a shift in the water balance parameter Pcorr, a model producing smoother streamflow predictions, giving poorer NS and CRPS, but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr, and increased the spread in evaluation scores by giving both better and worse scores.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groen, E.A., E-mail: Evelyne.Groen@gmail.com; Heijungs, R.; Leiden University, Einsteinweg 2, Leiden 2333 CC
Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict whether including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
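A small Monte Carlo sketch illustrates the effect the abstract describes: propagating two inputs through a toy multiplicative model with and without correlation changes the output variance. The model and numbers are illustrative assumptions, not taken from the electricity case study.

```python
# Sketch: effect of input correlation on output variance in uncertainty propagation.
# Toy model y = x1 * x2 (e.g., an emission factor times an activity level); values illustrative.
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([1.0, 2.0])
std = np.array([0.1, 0.3])

for rho in (0.0, 0.8, -0.8):
    cov = np.array([[std[0] ** 2, rho * std[0] * std[1]],
                    [rho * std[0] * std[1], std[1] ** 2]])
    x = rng.multivariate_normal(mean, cov, size=100_000)
    y = x[:, 0] * x[:, 1]
    print(f"rho = {rho:+.1f}: var(y) = {y.var():.4f}")
# Positive correlation inflates the output variance here and negative correlation
# deflates it, which is exactly the under/overestimation risk the abstract describes.
```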
NASA Astrophysics Data System (ADS)
Prescott, Aaron M.; Abel, Steven M.
2016-12-01
The rational design of network behavior is a central goal of synthetic biology. Here, we combine in silico evolution with nonlinear dimensionality reduction to redesign the responses of fixed-topology signaling networks and to characterize sets of kinetic parameters that underlie various input-output relations. We first consider the earliest part of the T cell receptor (TCR) signaling network and demonstrate that it can produce a variety of input-output relations (quantified as the level of TCR phosphorylation as a function of the characteristic TCR binding time). We utilize an evolutionary algorithm (EA) to identify sets of kinetic parameters that give rise to: (i) sigmoidal responses with the activation threshold varied over 6 orders of magnitude, (ii) a graded response, and (iii) an inverted response in which short TCR binding times lead to activation. We also consider a network with both positive and negative feedback and use the EA to evolve oscillatory responses with different periods in response to a change in input. For each targeted input-output relation, we conduct many independent runs of the EA and use nonlinear dimensionality reduction to embed the resulting data for each network in two dimensions. We then partition the results into groups and characterize constraints placed on the parameters by the different targeted response curves. Our approach provides a way (i) to guide the design of kinetic parameters of fixed-topology networks to generate novel input-output relations and (ii) to constrain ranges of biological parameters using experimental data. In the cases considered, the network topologies exhibit significant flexibility in generating alternative responses, with distinct patterns of kinetic rates emerging for different targeted responses.
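The following is a bare-bones sketch of the kind of evolutionary loop described, applied to a toy sigmoidal input-output relation; the model, mutation scheme and population sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal (mu + lambda)-style evolutionary algorithm sketch for fitting kinetic
# parameters to a targeted input-output relation. Toy model, illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def response(params, t):
    """Toy sigmoidal input-output relation; stands in for the signaling model."""
    k, theta = params
    return 1.0 / (1.0 + np.exp(-k * (t - theta)))

t = np.linspace(0, 10, 50)
target = response((2.0, 5.0), t)              # targeted input-output relation

def fitness(params):
    return -np.sum((response(params, t) - target) ** 2)

pop = rng.uniform(0.1, 10.0, size=(40, 2))    # initial population of parameter sets
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]   # keep the 10 best individuals
    children = np.repeat(parents, 4, axis=0)  # each parent yields 4 offspring
    children *= np.exp(0.1 * rng.standard_normal(children.shape))  # log-normal mutation
    pop = children

best = pop[np.argmax([fitness(p) for p in pop])]
print("recovered parameters:", best)          # should approach (2.0, 5.0)
```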
Benefits of detailed models of muscle activation and mechanics
NASA Technical Reports Server (NTRS)
Lehman, S. L.; Stark, L.
1981-01-01
Recent biophysical and physiological studies identified some of the detailed mechanisms involved in excitation-contraction coupling, muscle contraction, and deactivation. Mathematical models incorporating these mechanisms allow independent estimates of key parameters, direct interplay between basic muscle research and the study of motor control, and realistic model behaviors, some of which are not accessible to previous, simpler, models. The existence of previously unmodeled behaviors has important implications for strategies of motor control and identification of neural signals. New developments in the analysis of differential equations make the more detailed models feasible for simulation in realistic experimental situations.
Model parameters for representative wetland plant functional groups
USDA-ARS?s Scientific Manuscript database
Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and...
Identification of modal parameters including unmeasured forces and transient effects
NASA Astrophysics Data System (ADS)
Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.
2003-08-01
In this paper, a frequency-domain method to estimate modal parameters from short data records with known (measured) input forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data) and the combination of both. Traditional experimental and operational modal analyses in the frequency domain start from frequency response functions and spectral density functions, respectively. To estimate these functions accurately, sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known inputs. Instead of applying Hanning windows to these short data records, the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested and validated by Monte Carlo simulations and experiments. The presented method for processing short data sequences leads to unbiased estimates with a small variance in comparison to the more traditional approaches.
Modern control concepts in hydrology
NASA Technical Reports Server (NTRS)
Duong, N.; Johnson, G. R.; Winn, C. B.
1974-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and conform with results from two previous studies: the first used numerical integration of the model equation along with a trial-and-error procedure, and the second used a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.
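As a minimal illustration of the sequential, adaptive state-space idea (not the Prasad model itself), the sketch below treats the unknown parameters of a toy linear rainfall-runoff relation as states and updates them with a Kalman-type recursion.

```python
# Sketch of sequential (adaptive) parameter identification with a Kalman-type
# recursion, treating unknown model parameters as state variables. The linear
# rainfall-runoff relation below is an illustrative stand-in, not the Prasad model.
import numpy as np

rng = np.random.default_rng(2)

# "True" response: y_t = a * u_t + b * u_{t-1} + noise
a_true, b_true = 0.6, 0.3
u = rng.uniform(0, 1, 200)                       # rainfall input sequence
y = a_true * u[1:] + b_true * u[:-1] + 0.05 * rng.standard_normal(199)

theta = np.zeros(2)            # parameter estimate [a, b]
P = np.eye(2) * 10.0           # estimate covariance (large = uninformed prior)
R = 0.05 ** 2                  # measurement noise variance

for t in range(1, 200):
    H = np.array([u[t], u[t - 1]])               # regressor for this step
    S = H @ P @ H + R                            # innovation variance
    K = P @ H / S                                # Kalman gain
    theta = theta + K * (y[t - 1] - H @ theta)   # sequential update
    P = P - np.outer(K, H @ P)

print("estimated parameters:", theta)            # should approach (0.6, 0.3)
```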
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate these uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analyses for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input-to-output relations, which are typical of such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of these techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affects the model response and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters, whereas subspace selection identifies a linear combination of parameters that significantly impacts the model responses. We employ the active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
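For orientation, a bare-bones random-walk Metropolis sampler, the scheme that DRAM and DREAM extend with delayed rejection, proposal adaptation and multiple interacting chains, can be sketched as follows (toy posterior, illustrative only):

```python
# Bare-bones random-walk Metropolis sampler; DRAM adds delayed rejection and
# proposal adaptation, DREAM runs multiple interacting chains. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)

def log_posterior(theta):
    """Toy log-posterior: a standard bivariate Gaussian stands in for the model."""
    return -0.5 * np.sum(theta ** 2)

theta = np.zeros(2)
logp = log_posterior(theta)
chain = []
for _ in range(10_000):
    proposal = theta + 0.5 * rng.standard_normal(2)   # random-walk proposal
    logp_prop = log_posterior(proposal)
    if np.log(rng.uniform()) < logp_prop - logp:      # Metropolis accept/reject
        theta, logp = proposal, logp_prop
    chain.append(theta)

chain = np.array(chain)
print("posterior mean estimate:", chain.mean(axis=0))  # should approach (0, 0)
```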
The effect of collagen fibril orientation on the biphasic mechanics of articular cartilage.
Meng, Qingen; An, Shuqiang; Damion, Robin A; Jin, Zhongmin; Wilcox, Ruth; Fisher, John; Jones, Alison
2017-01-01
The highly inhomogeneous distribution of collagen fibrils may have important effects on the biphasic mechanics of articular cartilage. However, the effect of the inhomogeneity of collagen fibrils has mainly been investigated using simplified three-layered models, which may have underestimated the effect of collagen fibrils by neglecting their realistic orientation. The aim of this study was to investigate the effect of the realistic orientation of collagen fibrils on the biphasic mechanics of articular cartilage. Five biphasic material models, each of which included a different level of complexity of fibril reinforcement, were solved using two different finite element software packages (Abaqus and FEBio). Model 1 considered the realistic orientation of fibrils, which was derived from diffusion tensor magnetic resonance images. The simplified three-layered orientation was used for Model 2. Models 3-5 were three control models. The realistic collagen orientations obtained in this study were consistent with the literature. Results from the two finite element implementations were in agreement for each of the conditions modelled. The comparison between the control models confirmed some functions of collagen fibrils. The comparison between Models 1 and 2 showed that the widely used three-layered inhomogeneous model can produce similar fluid load support to the model including the realistic fibril orientation; however, an accurate prediction of the other mechanical parameters requires the inclusion of the realistic orientation of collagen fibrils. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yadav, Vinod; Singh, Arbind Kumar; Dixit, Uday Shanker
2017-08-01
Flat rolling is one of the most widely used metal forming processes. For proper control and optimization of the process, modelling of the process is essential. Modelling of the process requires input data about material properties and friction. In the batch production mode of rolling with newer materials, it may be difficult to determine the input parameters offline. In view of this, in the present work, a methodology to determine these parameters online by measuring the exit temperature and slip is verified experimentally. It is observed that the inverse prediction of input parameters can be done with reasonable accuracy. It was also assessed experimentally that there is a correlation between micro-hardness and flow stress of the material; however, the correlation between surface roughness and reduction is not as obvious.
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
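A compact sketch of regression-based GSA as described above: vary all inputs simultaneously across replicate model runs, then use standardized regression coefficients as importance measures. The three-parameter toy model is an illustrative stand-in, not the Snowy Plover PVA.

```python
# Sketch of regression-based global sensitivity analysis: vary all inputs at once
# across replicate runs, then rank inputs by standardized regression coefficients.
import numpy as np

rng = np.random.default_rng(6)
n = 500                                           # number of replicate model runs
X = rng.uniform(0, 1, size=(n, 3))                # three uncertain input parameters

def model(x):
    """Toy stand-in for a demographic model outcome (e.g., final abundance)."""
    return 3.0 * x[0] + 1.0 * x[1] + 0.1 * x[2] + 0.2 * rng.standard_normal()

y = np.apply_along_axis(model, 1, X)

# Standardized regression coefficients (SRCs) as variable-importance measures
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print("importance ranking (SRCs):", src)          # first input should dominate
```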
NASA Astrophysics Data System (ADS)
Daneji, A.; Ali, M.; Pervaiz, S.
2018-04-01
Friction stir welding (FSW) is a form of solid-state welding process for joining metals, alloys, and selected composites. Over the years, FSW development has provided an improved way of producing welded joints, and consequently it has been accepted in numerous industries such as aerospace, automotive, rail and marine. In FSW, the base metal properties control the material's plastic flow under the influence of a rotating tool, whereas the process and tool parameters play a vital role in the quality of the weld. In the current investigation, an array of square butt joints of 6061 aluminum alloy was welded under varying FSW process and tool-geometry-related parameters, after which the resulting welds were evaluated for the corresponding mechanical properties and welding defects. The study incorporates FSW process and tool parameters such as welding speed, pin height and pin thread pitch as input parameters, while the weld-quality-related defects and mechanical properties are treated as output parameters. The experimentation paves the way to investigate the correlation between the inputs and the outputs. This correlation was used as a tool to predict the optimized FSW process and tool parameters for a desired weld output of the base metals under investigation. The study also reflects on the effect of the said parameters on welding defects such as wormholes.
Dura-Bernal, S.; Neymotin, S. A.; Kerr, C. C.; Sivagnanam, S.; Majumdar, A.; Francis, J. T.; Lytton, W. W.
2017-01-01
Biomimetic simulation permits neuroscientists to better understand the complex neuronal dynamics of the brain. Embedding a biomimetic simulation in a closed-loop neuroprosthesis, which can read and write signals from the brain, will permit applications for amelioration of motor, psychiatric, and memory-related brain disorders. Biomimetic neuroprostheses require real-time adaptation to changes in the external environment, thus constituting an example of a dynamic data-driven application system. As model fidelity increases, so does the number of parameters and the complexity of finding appropriate parameter configurations. Instead of adapting synaptic weights via machine learning, we employed major biological learning methods: spike-timing dependent plasticity and reinforcement learning. We optimized the learning metaparameters using evolutionary algorithms, which were implemented in parallel and which used an island model approach to obtain sufficient speed. We employed these methods to train a cortical spiking model to utilize macaque brain activity, indicating a selected target, to drive a virtual musculoskeletal arm with realistic anatomical and biomechanical properties to reach to that target. The optimized system was able to reproduce macaque data from a comparable experimental motor task. These techniques can be used to efficiently tune the parameters of multiscale systems, linking realistic neuronal dynamics to behavior, and thus providing a useful tool for neuroscience and neuroprosthetics. PMID:29200477
Inter-Individual Variability in High-Throughput Risk ...
We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have little or no existing TK data. Chemicals are prioritized based on model estimates of hazard and exposure, to decide which chemicals should be first in line for further study. Hazard may be estimated with in vitro HT screening assays, e.g., U.S. EPA's ToxCast program. Bioactive ToxCast concentrations can be extrapolated to doses that produce equivalent concentrations in body tissues using a reverse TK approach in which generic TK models are parameterized with 1) chemical-specific parameters derived from in vitro measurements and predicted from chemical structure; and 2) physiological parameters for a virtual population. Here we draw physiological parameters from realistic estimates of distributions of demographic and anthropometric quantities in the modern U.S. population, based on the most recent CDC NHANES data. A Monte Carlo approach, accounting for the correlation structure in physiological parameters, is used to estimate ToxCast equivalent doses for the most sensitive portion of the population. To quantify risk, ToxCast equivalent doses are compared to estimates of exposure rates based on Bayesian inferences drawn from NHANES urinary analyte biomonitoring data. The inclusion
Towards adjoint-based inversion of time-dependent mantle convection with nonlinear viscosity
NASA Astrophysics Data System (ADS)
Li, Dunzhu; Gurnis, Michael; Stadler, Georg
2017-04-01
We develop and study an adjoint-based inversion method for the simultaneous recovery of initial temperature conditions and viscosity parameters in time-dependent mantle convection from the current mantle temperature and historic plate motion. Based on a realistic rheological model with temperature-dependent and strain-rate-dependent viscosity, we formulate the inversion as a PDE-constrained optimization problem. The objective functional includes the misfit of surface velocity (plate motion) history, the misfit of the current mantle temperature, and a regularization for the uncertain initial condition. The gradient of this functional with respect to the initial temperature and the uncertain viscosity parameters is computed by solving the adjoint of the mantle convection equations. This gradient is used in a pre-conditioned quasi-Newton minimization algorithm. We study the prospects and limitations of the inversion, as well as the computational performance of the method using two synthetic problems, a sinking cylinder and a realistic subduction model. The subduction model is characterized by the migration of a ridge toward a trench whereby both plate motions and subduction evolve. The results demonstrate: (1) for known viscosity parameters, the initial temperature can be well recovered, as in previous initial condition-only inversions where the effective viscosity was given; (2) for known initial temperature, viscosity parameters can be recovered accurately, despite the existence of trade-offs due to ill-conditioning; (3) for the joint inversion of initial condition and viscosity parameters, initial condition and effective viscosity can be reasonably recovered, but the high dimension of the parameter space and the resulting ill-posedness may limit recovery of viscosity parameters.
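Schematically, a PDE-constrained objective functional of the kind described might be written as below; the notation is ours, reconstructed from the abstract rather than quoted from the paper.

```latex
J(T_0, m) \;=\; \frac{1}{2} \int_0^{t_f} \!\! \int_{\Gamma_{\mathrm{top}}}
    \left| u(T_0, m) - u^{\mathrm{obs}} \right|^2 \mathrm{d}s \, \mathrm{d}t
  \;+\; \frac{\alpha}{2} \int_{\Omega}
    \left| T(t_f;\, T_0, m) - T^{\mathrm{obs}} \right|^2 \mathrm{d}x
  \;+\; \frac{\beta}{2} \, \mathcal{R}(T_0)
```

Here u is the modeled surface velocity (plate motion) on the top boundary, T the temperature field, m the viscosity parameters, and α and β weights on the present-day temperature misfit and the initial-condition regularization; the gradient of J with respect to (T_0, m) is supplied by the adjoint equations.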
The effect of welding parameters on high-strength SMAW all-weld-metal. Part 1: AWS E11018-M
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vercesi, J.; Surian, E.
Three AWS A5.5-81 all-weld-metal test assemblies were welded with an E11018-M electrode from a standard production batch, varying the welding parameters in such a way as to obtain three energy inputs: high heat input and high interpass temperature (hot), medium heat input and medium interpass temperature (medium), and low heat input and low interpass temperature (cold). Mechanical properties and metallographic studies were performed in the as-welded condition, and it was found that only the tensile properties obtained with the test specimen made with the intermediate energy input satisfied the AWS E11018-M requirements. With the cold specimen, the maximum yield strength was exceeded, and with the hot one, neither the minimum yield strength nor the minimum tensile strength was achieved. The elongation and the impact properties were high enough to fulfill the minimum requirements, but the best Charpy V-notch values were obtained with the intermediate energy input. Metallographic studies showed that as the energy input increased, the percentage of the columnar zones decreased, the grain size became larger, and in the as-welded zone there was a slight increase in both acicular ferrite and ferrite with second phase, with a consequent decrease in primary ferrite. These results showed that this type of alloy is very sensitive to the welding parameters and that very precise instructions must be given to secure the desired tensile properties in the all-weld-metal test specimens and under actual working conditions.
Spatial eigenmodes and synchronous oscillation: co-incidence detection in simulated cerebral cortex.
Chapman, Clare L; Wright, James J; Bourke, Paul D
2002-07-01
Zero-lag synchronisation arises between points on the cerebral cortex receiving concurrent independent inputs, an observation generally ascribed to nonlinear mechanisms. Using simulations of cerebral cortex and Principal Component Analysis (PCA), we show that patterns of zero-lag synchronisation (associated with empirically realistic spectral content) can arise from both linear and nonlinear mechanisms. For low levels of activation, we show the synchronous field is described by the eigenmodes of the resultant damped wave activity. The first and second spatial eigenmodes (which capture most of the signal variance) arise from the even and odd components of the independent input signals. The pattern of zero-lag synchronisation can be accounted for by the relative dominance of the first mode over the second, in the near-field of the inputs. The simulated cortical surface can act as a coincidence detector, with a response of a few milliseconds, for concurrent but uncorrelated inputs. As cortical activation levels are increased, local damped oscillations in the gamma band undergo a transition to highly nonlinear undamped activity with a 40 Hz dominant frequency. This is associated with "locking" between active sites and spatially segregated phase patterns. The damped wave synchronisation and the locked nonlinear oscillations may combine to permit fast representation of multiple patterns of activity within the same field of neurons.
Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones.
Sohn, Bong-Soo
2017-03-11
This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing.
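Our reading of the base/detail blending step can be sketched as follows; the function name, compression factor and detail weight are hypothetical choices for illustration, not the author's implementation.

```python
# Sketch of the base/detail depth-map blending step described above (our reading
# of the pipeline, not the author's code). Images are 2D numpy arrays in [0, 1].
import numpy as np

def bas_relief_depth(depth, image, compress=0.2, detail_weight=0.15):
    """Compress the depth range into a base map, add image-derived detail, blend."""
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-9)   # normalized depth map
    base = compress * d                                   # range-compressed base map
    detail = image - image.mean()                         # zero-mean detail from pixel values
    return base + detail_weight * detail                  # blended depth map

# Example with random stand-in data for the depth map and photograph:
rng = np.random.default_rng(4)
depth = rng.uniform(0, 5, (64, 64))
image = rng.uniform(0, 1, (64, 64))
relief = bas_relief_depth(depth, image)
print(relief.shape, relief.min(), relief.max())
```

A selective blur of this blended map (e.g., blurring regions away from the focal depth) would then simulate the depth-of-field effect before meshing.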
Active Vibration Control for Helicopter Interior Noise Reduction Using Power Minimization
NASA Technical Reports Server (NTRS)
Mendoza, J.; Chevva, K.; Sun, F.; Blanc, A.; Kim, S. B.
2014-01-01
This report describes work performed by United Technologies Research Center (UTRC) for NASA Langley Research Center (LaRC) under Contract NNL11AA06C. The objective of this program is to develop technology to reduce helicopter interior noise resulting from multiple gear meshing frequencies. A novel active vibration control approach called Minimum Actuation Power (MAP) is developed. MAP is an optimal control strategy that minimizes the total input power into a structure by monitoring and varying the input power of controlling sources. MAP control was implemented without explicit knowledge of the phasing and magnitude of the excitation sources by driving the real part of the input power from the controlling sources to zero. It is shown that this occurs when the total mechanical input power from the excitation and controlling sources is a minimum. MAP theory is developed for multiple excitation sources with arbitrary relative phasing for single or multiple discrete frequencies and controlled by a single or multiple controlling sources. Simulations and experimental results demonstrate the feasibility of MAP for structural vibration reduction of a realistic rotorcraft interior structure. MAP control resulted in significant average global vibration reduction of a single frequency and multiple frequency excitations with one controlling actuator. Simulations also demonstrate the potential effectiveness of the observed vibration reductions on interior radiated noise.
Food choice as a key management strategy for functional gastrointestinal symptoms.
Gibson, Peter R; Shepherd, Susan J
2012-05-01
Recognition of food components that induce functional gut symptoms in patients with functional bowel disorders (FBD) has been challenging. Food directly or indirectly provides considerable afferent input into the enteric nervous system. There is an altered relationship between the afferent input and perception/efferent response in FBD. Defining the nature of food-related stimuli may provide a means of minimizing such input and gut symptoms. Using this premise, reducing the intake of FODMAPs (fermentable oligo-, di-, and mono-saccharides and polyols) - poorly absorbed short-chain carbohydrates that, by virtue of their small molecular size and rapid fermentability, distend the intestinal lumen with liquid and gas - improves symptoms in the majority of patients. Well-developed methodologies to deliver the diet via dietician-led education are available. Another abundant source of afferent input is natural and added food chemicals (such as salicylates, amines, and glutamates). Studies are needed to assess the efficacy of the low-food-chemical dietary approach. A recent placebo-controlled trial of FODMAP-poor gluten provided the first valid evidence that non-celiac gluten intolerance might actually exist, but its prevalence and underlying mechanisms require elucidation. Food choice, via the low-FODMAP and potentially other dietary strategies, is now a realistic and efficacious therapeutic approach for functional gut symptoms.
NASA Technical Reports Server (NTRS)
Cross, P. L.
1994-01-01
Birefringent filters are often used as line-narrowing components in solid state lasers. The Birefringent Filter Model program generates a stand-alone model of a birefringent filter for use in designing and analyzing such filters. It was originally developed to aid in the design of solid state lasers to be used on aircraft or spacecraft to perform remote sensing of the atmosphere. The model is general enough to allow the user to address problems such as temperature stability requirements, manufacturing tolerances, and alignment tolerances. The input parameters for the program are divided into 7 groups: 1) general parameters which refer to all elements of the filter; 2) wavelength-related parameters; 3) filter, coating and orientation parameters; 4) input ray parameters; 5) output device specifications; 6) component-related parameters; and 7) transmission profile parameters. The program can analyze a birefringent filter with up to 12 different components, and can calculate the transmission and summary parameters for multiple passes as well as a single pass through the filter. The Jones matrix, which is calculated from the input parameters of Groups 1 through 4, is used to calculate the transmission. Output files containing the calculated transmission or the calculated Jones matrix as a function of wavelength can be created. These output files can then be used as inputs for user-written programs, for example to plot the transmission or to calculate the eigen-transmittances and the corresponding eigen-polarizations of the Jones matrix. The Birefringent Filter Model is written in Microsoft FORTRAN 2.0. The program format is interactive. It was developed on an IBM PC XT equipped with an 8087 math coprocessor, and has a central memory requirement of approximately 154K. Since Microsoft FORTRAN 2.0 does not support complex arithmetic, matrix routines for addition, subtraction, and multiplication of complex, double precision variables are included. The Birefringent Filter Model was written in 1987.
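Since the program itself is FORTRAN and not reproduced here, the sketch below shows the underlying Jones-matrix transmission calculation for a stack of birefringent plates between parallel polarizers; the plate thicknesses, angles and birefringence value are illustrative assumptions.

```python
# Sketch of a Jones-matrix transmission calculation for a stack of birefringent
# plates between parallel polarizers (illustrative physics, not the FORTRAN program).
import numpy as np

def waveplate(delta, theta):
    """Jones matrix of a retarder with phase delay delta and fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    D = np.array([[np.exp(-1j * delta / 2), 0], [0, np.exp(1j * delta / 2)]])
    return R @ D @ R.T

def transmission(wavelength, thicknesses, angles, dn=0.009):
    """Intensity through polarizer / plate stack / polarizer at one wavelength (meters)."""
    polarizer = np.array([[1, 0], [0, 0]], dtype=complex)  # transmission axis along x
    J = polarizer.copy()
    for L, th in zip(thicknesses, angles):
        delta = 2 * np.pi * dn * L / wavelength            # retardance of one plate
        J = waveplate(delta, th) @ J
    J = polarizer @ J
    E_out = J @ np.array([1.0, 0.0])                       # x-polarized input ray
    return float(np.abs(E_out[0]) ** 2 + np.abs(E_out[1]) ** 2)

for w in np.linspace(1000e-9, 1100e-9, 5):                 # scan wavelength
    print(f"{w * 1e9:.1f} nm -> T = {transmission(w, [1e-3, 2e-3], [0.4, 0.8]):.3f}")
```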
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: first, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of the functional and irreducible errors, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations, using a dataset from a direct numerical simulation of a non-premixed sooting turbulent flame.
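A minimal sketch of the histogram-based optimal estimator analysis discussed above: estimate the conditional mean E[q | φ] by binning a single input parameter and measure the residual (irreducible) error. The data and bin count are illustrative; the spurious-error issue the paper analyzes arises when this is extended to several input parameters.

```python
# Sketch of a histogram-based optimal estimator: the conditional mean E[q | phi]
# estimated by binning one input parameter. Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(5)
phi = rng.uniform(0, 1, 100_000)                   # model input parameter
q = np.sin(2 * np.pi * phi) + 0.3 * rng.standard_normal(phi.size)  # target quantity

bins = np.linspace(0, 1, 51)
idx = np.digitize(phi, bins) - 1                   # bin index 0..49 for each sample
cond_mean = np.array([q[idx == i].mean() for i in range(50)])

# Irreducible error: mean squared residual of q around the optimal estimator E[q | phi]
irr = np.mean((q - cond_mean[np.clip(idx, 0, 49)]) ** 2)
print(f"irreducible error estimate: {irr:.3f} (true noise variance 0.09)")
```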
Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed
NASA Astrophysics Data System (ADS)
Arif, N.; Danoedoro, P.; Hartono
2017-12-01
Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents actual conditions. Erosion models are complex because of uncertain data from different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the values of the network input parameters, i.e. the number of hidden layers, learning rate, momentum, and RMS. This study tested the capability of an artificial neural network application in the prediction of erosion risk with several input parameters through multiple simulations to obtain good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on the accuracy compared to the other parameters. A small number of iterations can produce good accuracy if the combination of the other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, occurring in the ANN 14 simulation with a combination of network input parameters of 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15000 iterations. The ANN training accuracy was not influenced by the number of channels, namely the input dataset (erosion factors), or by the data dimensions; rather, it was determined by changes in the network parameters.
Chasin, Marshall; Russo, Frank A
2004-01-01
Historically, the primary concern in hearing aid design and fitting has been optimization for speech inputs. However, increasingly other types of inputs are being investigated, and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression issues (both compression ratio and knee-points), and the number of channels can all deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set other parameters, such as noise reduction and feedback control mechanisms. Regardless of the existence of a "music program", unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions.
Zeng, Xiaozheng; McGough, Robert J.
2009-01-01
The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
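A minimal angular spectrum propagation sketch for a planar input pressure distribution is given below; the grid sizes, source geometry and the absence of any windowing or attenuation are simplifying assumptions for illustration.

```python
# Minimal angular spectrum propagation sketch: propagate a 2D input pressure
# plane forward by a distance z via the FFT (no windowing or attenuation).
import numpy as np

def angular_spectrum(p0, dx, z, wavelength):
    """Propagate the complex pressure field p0 (2D, grid spacing dx) a distance z."""
    ny, nx = p0.shape
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # evanescent parts decay
    H = np.exp(1j * kz * z)                               # spectral propagator
    return np.fft.ifft2(np.fft.fft2(p0) * H)

# Example: a 10-wavelength-wide piston source sampled at lambda/4
wl = 1.5e-3                                 # ~1 MHz in water, in meters
dx = wl / 4
p0 = np.zeros((256, 256), complex)
p0[108:148, 108:148] = 1.0                  # uniform square piston
p = angular_spectrum(p0, dx, z=50 * wl, wavelength=wl)
print("peak pressure magnitude:", np.abs(p).max())
```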
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
Femtosecond soliton source with fast and broad spectral tunability.
Masip, Martin E; Rieznik, A A; König, Pablo G; Grosz, Diego F; Bragas, Andrea V; Martinez, Oscar E
2009-03-15
We present a complete set of measurements and numerical simulations of a femtosecond soliton source with fast and broad spectral tunability and nearly constant pulse width and average power. Solitons generated in a photonic crystal fiber, in the low-power coupling regime, can be tuned over a broad range of wavelengths, from 850 to 1200 nm, using the input power as the control parameter. These solitons maintain almost constant durations (approximately 40 fs) and spectral widths (approximately 20 nm) over the entire measured spectrum, regardless of input power. Our numerical simulations agree well with measurements and predict a wide working wavelength range and robustness to input parameters.
Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Bacon, Diana H.; Engel, David W.
2014-08-01
In this study, we apply an uncertainty quantification (UQ) framework to CO2 sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO2 into a shallow unconfined aquifer in an urban area; in another scenario, we study the effects of reservoir heterogeneity on CO2 migration. We combine various sampling approaches (quasi-Monte Carlo, probabilistic collocation, and adaptive sampling) in order to reduce the number of forward calculations while trying to fully explore the input parameter space and quantify the input uncertainty. The CO2 migration is simulated using the PNNL-developed simulator STOMP-CO2e (the water-salt-CO2 module). For computationally demanding simulations with 3D heterogeneity fields, we combined the framework with a scalable version of the simulator, eSTOMP, as the forward modeling tool. We built response curves and response surfaces of model outputs with respect to input parameters to look at the individual and combined effects, and to identify and rank the significance of the input parameters.
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
To propose an efficient algorithm to perform dual-input compartment modeling for generating perfusion maps in the liver. We implemented whole-field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
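The LLS idea can be sketched as follows: integrating the dual-input single-compartment equation dC/dt = ka·Ca + kp·Cp - k2·C in time turns parameter estimation into a single linear solve. The toy input functions and rate constants below are illustrative assumptions, and the paper's delay compensation is omitted.

```python
# Sketch of the linear least-squares idea for a dual-input single-compartment
# model: integrating dC/dt = ka*Ca + kp*Cp - k2*C gives
# C(t) = ka*int(Ca) + kp*int(Cp) - k2*int(C), a linear system in (ka, kp, k2).
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0, 60, 240)                       # seconds, four frames per second
Ca = np.exp(-((t - 10) / 4) ** 2)                 # arterial input (toy bolus)
Cp = np.exp(-((t - 16) / 6) ** 2)                 # portal-venous input (toy bolus)

ka, kp, k2 = 0.05, 0.10, 0.04                     # "true" kinetic parameters
C = np.zeros_like(t)
dt = t[1] - t[0]
for i in range(1, t.size):                        # simulate the tissue curve
    C[i] = C[i - 1] + dt * (ka * Ca[i - 1] + kp * Cp[i - 1] - k2 * C[i - 1])

A = np.column_stack([
    cumulative_trapezoid(Ca, t, initial=0),
    cumulative_trapezoid(Cp, t, initial=0),
    -cumulative_trapezoid(C, t, initial=0),
])
est, *_ = np.linalg.lstsq(A, C, rcond=None)
print("estimated (ka, kp, k2):", est)             # should be close to the truth
```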
FAST: Fitting and Assessment of Synthetic Templates
NASA Astrophysics Data System (ADS)
Kriek, Mariska; van Dokkum, Pieter G.; Labbé, Ivo; Franx, Marijn; Illingworth, Garth D.; Marchesini, Danilo; Quadri, Ryan F.; Aird, James; Coil, Alison L.; Georgakakis, Antonis
2018-03-01
FAST (Fitting and Assessment of Synthetic Templates) fits stellar population synthesis templates to broadband photometry and/or spectra. FAST is compatible with the photometric redshift code EAzY (ascl:1010.052) when fitting broadband photometry; it uses the photometric redshifts derived by EAzY, and the input files (for example, the photometric catalog and master filter file) are the same. FAST fits spectra in combination with broadband photometric data points or simultaneously fits two components, allowing for an AGN contribution in addition to the host galaxy light. Depending on the input parameters, FAST outputs the best-fit redshift, age, dust content, star formation timescale, metallicity, stellar mass, star formation rate (SFR), and their confidence intervals. Though some of FAST's functions overlap with those of HYPERZ (ascl:1108.010), it differs by fitting fluxes instead of magnitudes, allows the user to completely define the grid of input stellar population parameters and easily input photometric redshifts and their confidence intervals, and calculates calibrated confidence intervals for all parameters. Note that FAST is not a photometric redshift code, though it can be used as one.
Conditioning 3D object-based models to dense well data
NASA Astrophysics Data System (ADS)
Wang, Yimin C.; Pyrcz, Michael J.; Catuneanu, Octavian; Boisvert, Jeff B.
2018-06-01
Object-based stochastic simulation models are used to generate categorical variable models with a realistic representation of complicated reservoir heterogeneity. A limitation of object-based modeling is the difficulty of conditioning to dense data. One method to achieve data conditioning is to apply optimization techniques. Optimization algorithms can utilize an objective function measuring the conditioning level of each object while also considering the geological realism of the object. Here, an objective function is optimized with implicit filtering which considers constraints on object parameters. Thousands of objects conditioned to data are generated and stored in a database. A set of objects are selected with linear integer programming to generate the final realization and honor all well data, proportions and other desirable geological features. Although any parameterizable object can be considered, objects from fluvial reservoirs are used to illustrate the ability to simultaneously condition multiple types of geologic features. Channels, levees, crevasse splays and oxbow lakes are parameterized based on location, path, orientation and profile shapes. Functions mimicking natural river sinuosity are used for the centerline model. Channel stacking pattern constraints are also included to enhance the geological realism of object interactions. Spatial layout correlations between different types of objects are modeled. Three case studies demonstrate the flexibility of the proposed optimization-simulation method. These examples include multiple channels with high sinuosity, as well as fragmented channels affected by limited preservation. In all cases the proposed method reproduces input parameters for the object geometries and matches the dense well constraints. The proposed methodology expands the applicability of object-based simulation to complex and heterogeneous geological environments with dense sampling.
Neural field model to reconcile structure with function in primary visual cortex.
Rankin, James; Chavane, Frédéric
2017-10-01
Voltage-sensitive dye imaging experiments in primary visual cortex (V1) have shown that local, oriented visual stimuli elicit stable orientation-selective activation within the stimulus retinotopic footprint. The cortical activation dynamically extends far beyond the retinotopic footprint, but the peripheral spread stays non-selective, a surprising finding given a number of anatomo-functional studies showing the orientation specificity of long-range connections. Here we use a computational model to investigate this apparent discrepancy by studying the expected population response using known published anatomical constraints. The dynamics of input-driven localized states were simulated in a planar neural field model with multiple sub-populations encoding orientation. The realistic connectivity profile has parameters controlling the clustering of long-range connections and their orientation bias. We found substantial overlap between the anatomically relevant parameter range and a steep decay in orientation-selective activation that is consistent with the imaging experiments. In this way our study reconciles the reported orientation bias of long-range connections with the functional expression of orientation-selective neural activity. Our results demonstrate this sharp decay is contingent on three factors: that long-range connections are sufficiently diffuse, that the orientation bias of these connections is in an intermediate range (consistent with anatomy), and that excitation is sufficiently balanced by inhibition. Conversely, our modelling results predict that, for reduced inhibition strength, spurious orientation-selective activation could be generated through long-range lateral connections. Furthermore, if the orientation bias of lateral connections is very strong, or if inhibition is particularly weak, the network operates close to an instability leading to unbounded cortical activation.
Numerical model of the circulation and dispersion in the east Adriatic coastal waters
NASA Astrophysics Data System (ADS)
Beg Paklar, Gordana; Dzoic, Tomislav; Koracin, Darko; Matijevic, Slavica; Grbec, Branka; Ivatek-Sahdan, Stjepan
2017-04-01
The Regional Ocean Modeling System (ROMS) was implemented to reproduce the physical properties of the area around the submarine outlet Stobrec in the middle Adriatic coastal area. The ROMS run was forced with realistic atmospheric fields obtained from the meteorological model Aladin, climatological river discharges, tides, and the dynamics of the surrounding area imposed at the open boundaries. Atmospheric forcing included momentum, heat and water fluxes calculated interactively from the Aladin surface fields during the ROMS simulations. Simulated fields from the Adriatic and shelf-scale models were used to prescribe the initial and open boundary conditions for the fine-resolution coastal domain. Model results were compared with available CTD measurements and discussed in the light of the climatological circulation and thermohaline properties of the middle Adriatic coastal area. Variability in the circulation is related to the prevailing atmospheric conditions, changes in the hydrological conditions, and water mass exchange at the open boundaries. Basic features of the coastal circulation are well reproduced by the ROMS model, as are temperatures and salinities, which are within the corresponding seasonal intervals, although with lower stratification than measured. In order to reproduce the dispersion of a passive tracer, the ROMS model was coupled with a Lagrangian dispersion model. Multiyear monitoring of the physical, chemical and biological parameters around the sewage outlet was used to assess the quality of the dispersion model results. Among the measured parameters, the redox potential of the surface sediment layer was selected for comparison with model results, as its negative values are a direct consequence of the increased organic matter input that can be attributed to the sewage system inflow.
Protocol - realist and meta-narrative evidence synthesis: Evolving Standards (RAMESES)
2011-01-01
Background: There is growing interest in theory-driven, qualitative and mixed-method approaches to systematic review as an alternative to (or to extend and supplement) conventional Cochrane-style reviews. These approaches offer the potential to expand the knowledge base in policy-relevant areas, for example by explaining the success, failure or mixed fortunes of complex interventions. However, the quality of such reviews can be difficult to assess. This study aims to produce methodological guidance, publication standards and training resources for those seeking to use the realist and/or meta-narrative approach to systematic review.

Methods/design: We will: [a] collate and summarise existing literature on the principles of good practice in realist and meta-narrative systematic review; [b] consider the extent to which these principles have been followed by published and in-progress reviews, thereby identifying how rigour may be lost and how existing methods could be improved; [c] using an online Delphi method with an interdisciplinary panel of experts from academia and policy, produce a draft set of methodological steps and publication standards; [d] produce training materials with learning outcomes linked to these steps; [e] pilot these standards and training materials prospectively on real reviews-in-progress, capturing methodological and other challenges as they arise; [f] synthesise expert input, evidence review and real-time problem analysis into more definitive guidance and standards; [g] disseminate outputs to audiences in academia and policy. The outputs of the study will be threefold:

1. Quality standards and methodological guidance for realist and meta-narrative reviews for use by researchers, research sponsors, students and supervisors.

2. A 'RAMESES' (Realist And Meta-narrative Evidence Synthesis: Evolving Standards) statement (comparable to CONSORT or PRISMA) of publication standards for such reviews, published in an open-access academic journal.

3. A training module for researchers, including learning outcomes, outline course materials and assessment criteria.

Discussion: Realist and meta-narrative review are relatively new approaches to systematic review whose overall place in the secondary research toolkit is not yet fully established. As with all secondary research methods, guidance on quality assurance and uniform reporting is an important step towards improving quality and consistency of studies. PMID:21843376
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Yoojin; Doughty, Christine
Input and output files used for fault characterization through numerical simulation using iTOUGH2. The synthetic data for the push period are generated by running a forward simulation (input parameters are provided in iTOUGH2 Brady GF6 Input Parameters.txt [InvExt6i.txt]). In general, the permeability of the fault gouge, damage zone, and matrix are assumed to be unknown. The input and output files are for the inversion scenario where only pressure transients are available at the monitoring well located 200 m above the injection well and only the fault gouge permeability is estimated. The input files are named InvExt6i, INPUT.tpl, FOFT.ins, and CO2TAB, and the output files are InvExt6i.out, pest.fof, and pest.sav. A table graphic included with the data files summarizes the inversion results and indicates that the fault gouge permeability can be estimated even if imperfect guesses are used for the matrix and damage zone permeabilities and permeability anisotropy is not taken into account.
Systematic Construction of Kinetic Models from Genome-Scale Metabolic Networks
Smallbone, Kieran; Klipp, Edda; Mendes, Pedro; Liebermeister, Wolfram
2013-01-01
The quantitative effects of environmental and genetic perturbations on metabolism can be studied in silico using kinetic models. We present a strategy for large-scale model construction based on a logical layering of data such as reaction fluxes, metabolite concentrations, and kinetic constants. The resulting models contain realistic standard rate laws and plausible parameters, adhere to the laws of thermodynamics, and reproduce a predefined steady state. These features have not been simultaneously achieved by previous workflows. We demonstrate the advantages and limitations of the workflow by translating the yeast consensus metabolic network into a kinetic model. Despite crudely selected data, the model shows realistic control behaviour, stable dynamics, and a realistic response to perturbations in extracellular glucose concentrations. The paper concludes by outlining how new data can continuously be fed into the workflow and how iterative model building can assist in directing experiments. PMID:24324546
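As one concrete example of the kind of "realistic standard rate law" such workflows assign (a common choice in the kinetic-modeling literature, not necessarily the paper's exact form), the reversible Michaelis-Menten law for a reaction S <-> P, with the Haldane relation enforcing thermodynamic consistency:

```latex
v = \frac{V_f\, s/K_s \;-\; V_r\, p/K_p}{1 + s/K_s + p/K_p},
\qquad
K_{\mathrm{eq}} = \frac{V_f\, K_p}{V_r\, K_s}
```

Setting v = 0 recovers p/s = K_eq, so a rate law constrained this way cannot drive the reaction against its thermodynamic gradient, which is one sense in which the assembled models "adhere to the laws of thermodynamics."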
NASA Astrophysics Data System (ADS)
Park, DaeKil
2018-06-01
The dynamics of entanglement and the uncertainty relation are explored by solving the time-dependent Schrödinger equation for a coupled harmonic oscillator system analytically when the angular frequencies and coupling constant are arbitrarily time dependent. We derive the spectral and Schmidt decompositions for the vacuum solution. Using the decompositions, we derive the analytical expressions for the von Neumann and Rényi entropies. Making use of the Wigner distribution function defined in phase space, we derive the time dependence of the position-momentum uncertainty relations. To show the dynamics of entanglement and the uncertainty relation graphically, we introduce two toy models and one realistic quenched model. While the dynamics can be conjectured by simple considerations in the toy models, the dynamics in the realistic quenched model is somewhat different from that in the toy models. In particular, the dynamics of entanglement exhibits a pattern similar to the dynamics of the uncertainty parameter in the realistic quenched model.
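For orientation (a standard static benchmark, not taken from the paper itself): when two coupled oscillators with normal-mode frequencies omega_+ and omega_- sit in their joint ground state, the reduced state of either oscillator is thermal and the von Neumann entanglement entropy has the well-known closed form

```latex
S(\xi) = -\ln(1-\xi) - \frac{\xi}{1-\xi}\,\ln\xi,
\qquad
\xi = \left(\frac{\sqrt{\omega_+}-\sqrt{\omega_-}}{\sqrt{\omega_+}+\sqrt{\omega_-}}\right)^{2}.
```

The paper's contribution is to generalize this kind of result to arbitrarily time-dependent frequencies and coupling, where the Schmidt coefficients themselves become functions of time.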
NASA Astrophysics Data System (ADS)
Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Biancamaria, S.; Boone, A.; Mognard, N. M.; Rogel, P.
2011-12-01
The Surface Water and Ocean Topography (SWOT) mission is a swath-mapping radar interferometer that will provide global measurements of water surface elevation (WSE). The number of revisits depends upon latitude and varies from two (low latitudes) to ten (high latitudes) per 22-day orbit repeat period. The high resolution and global coverage of the SWOT data open the way for new hydrology studies. Here, the aim is to investigate the use of virtually generated SWOT data to improve discharge simulation using data assimilation techniques. In the framework of the SWOT virtual mission (VM), this study presents the first results of the automatic calibration of a global flow routing (GFR) scheme using SWOT VM measurements for the Amazon basin. The Hydrological Modeling and Analysis Platform (HyMAP) is used along with the MOCOM-UA multi-criteria global optimization algorithm. HyMAP has a 0.25-degree spatial resolution and runs at a daily time step to simulate discharge, water levels and floodplains. The surface runoff and baseflow drainage derived from the Interactions Sol-Biosphère-Atmosphère (ISBA) model are used as inputs for HyMAP. Previous work showed that the use of ENVISAT data enables the reduction of the uncertainty on some of the hydrological model parameters, such as river width and depth, Manning roughness coefficient and groundwater time delay. In the framework of the SWOT preparation work, the automatic calibration procedure was applied using SWOT VM measurements. For this Observing System Experiment (OSE), the synthetic data were obtained by applying an instrument simulator (representing realistic SWOT errors) for one hydrological year to HyMAP-simulated WSE generated with a "true" set of parameters. Only pixels representing rivers wider than 100 meters within the Amazon basin are considered to produce SWOT VM measurements. The automatic calibration procedure leads to the estimation of optimal parameters minimizing objective functions that quantify the difference between SWOT observations and WSE modeled using a perturbed set of parameters. Different formulations of the objective function were used, especially to account for SWOT observation errors, as well as various sets of calibration parameters.
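A minimal single-objective illustration of the calibration loop (the study itself uses the multi-criteria MOCOM-UA algorithm; here a naive random search, a hypothetical `simulate` callable, and per-observation error weighting stand in for it):

```python
import numpy as np

def objective(params, wse_obs, obs_error_std, simulate):
    """Error-weighted RMSE between simulated and (virtual) SWOT WSE.

    params        : candidate model parameters (e.g. Manning n, river depth)
    wse_obs       : virtual SWOT water surface elevations (m)
    obs_error_std : per-observation SWOT error estimate (m)
    simulate      : callable running the routing model -> simulated WSE
    """
    wse_sim = simulate(params)
    residuals = (wse_sim - wse_obs) / obs_error_std  # weight by SWOT errors
    return np.sqrt(np.mean(residuals ** 2))

def calibrate(bounds, wse_obs, obs_error_std, simulate, n=500, seed=1):
    """Naive random search over the parameter box (MOCOM-UA stand-in)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    candidates = lo + (hi - lo) * rng.random((n, len(lo)))
    scores = [objective(p, wse_obs, obs_error_std, simulate)
              for p in candidates]
    return candidates[int(np.argmin(scores))]
```

The essential structure matches the abstract: perturbed parameter sets are scored against SWOT-like observations, and the optimizer returns the set minimizing the objective.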
More physics in the laundromat
NASA Astrophysics Data System (ADS)
Denny, Mark
2010-12-01
The physics of a washing machine spin cycle is extended to include the spin-up and spin-down phases. We show that, for realistic parameters, an adiabatic approximation applies, and thus the familiar forced, damped harmonic oscillator analysis can be applied to these phases.
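For reference, the familiar analysis the abstract appeals to (standard textbook form, with generic symbols rather than the authors' notation) is the forced, damped oscillator

```latex
m\ddot{x} + c\dot{x} + kx = F_0\cos(\omega t),
\qquad
A(\omega) = \frac{F_0}{\sqrt{(k - m\omega^2)^2 + (c\omega)^2}},
```

where the adiabatic approximation treats the slowly varying drum speed during spin-up and spin-down as a locally constant drive frequency omega, so the steady-state amplitude formula can be applied instant by instant.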
Evaluation of trade influence on economic growth rate by computational intelligence approach
NASA Astrophysics Data System (ADS)
Sokolov-Mladenović, Svetlana; Milovančević, Milos; Mladenović, Igor
2017-01-01
This study analyzed the influence of trade parameters on economic growth forecasting accuracy. A computational intelligence method was used for the analysis, since such methods can handle highly nonlinear data. It is known that economic growth can be modeled based on different trade parameters. Five input parameters were considered: trade in services, exports of goods and services, imports of goods and services, trade, and merchandise trade. All of these parameters were expressed as percentages of gross domestic product (GDP). The main goal was to determine which parameters have the greatest impact on economic growth forecasting. GDP was used as the economic growth indicator. The results show that imports of goods and services have the highest influence on economic growth forecasting accuracy.
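The abstract does not name the specific computational intelligence method, so the sketch below illustrates the idea with a small scikit-learn MLP on synthetic stand-in data, ranking the five trade inputs by permutation importance (all data and coefficients here are invented for illustration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

# synthetic stand-in data: 5 trade shares (% of GDP) -> GDP growth (%)
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(300, 5))
# imports (column 2) dominate by construction in this toy target
y = 0.04 * X[:, 2] + 0.01 * X[:, 1] + rng.normal(0, 0.2, 300)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                     random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
names = ["services", "exports", "imports", "trade", "merchandise"]
for name, score in zip(names, imp.importances_mean):
    print(f"{name:12s} {score:.3f}")
```

Permutation importance measures how much the model's score degrades when one input is shuffled, which is one simple way to operationalize "which parameter has the highest influence on forecasting accuracy."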
Using voice input and audio feedback to enhance the reality of a virtual experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miner, N.E.
1994-04-01
Virtual Reality (VR) is a rapidly emerging technology which allows participants to experience a virtual environment through stimulation of the participant's senses. Intuitive and natural interactions with the virtual world help to create a realistic experience. Typically, a participant is immersed in a virtual environment through the use of a 3-D viewer. Realistic, computer-generated environment models and accurate tracking of a participant's view are important factors for adding realism to a virtual experience. Stimulating a participant's sense of sound and providing a natural form of communication for interacting with the virtual world are equally important. This paper discusses the advantages and importance of incorporating voice recognition and audio feedback capabilities into a virtual world experience. Various approaches and levels of complexity are discussed. Examples of the use of voice and sound are presented through the description of a research application developed in the VR laboratory at Sandia National Laboratories.
Comparison of results of an obstacle resolving microscale model with wind tunnel data
NASA Astrophysics Data System (ADS)
Grawe, David; Schlünzen, K. Heinke; Pascheke, Frauke
2013-11-01
The microscale transport and stream model MITRAS has been improved and a new technique has been implemented to enhance numerical stability for complex obstacle configurations. Results of the updated version have been compared with wind tunnel data using an evaluation method that has been established for simple obstacle configurations. MITRAS is a part of the M-SYS model system for the assessment of ambient air quality. A comparison of model results for the flow field against quality-ensured wind tunnel data has been carried out for both idealised and realistic test cases. Results of the comparison show a very good agreement of the wind field for most test cases and identify areas of possible improvement of the model. The evaluated MITRAS results can be used as input data for the M-SYS microscale chemistry model MICTM. This paper describes how such a comparison can be carried out for simple as well as realistic obstacle configurations and what difficulties arise.
Simulation of Combustion Systems with Realistic g-Jitter
NASA Technical Reports Server (NTRS)
Mell, W. E.; McGrattan, K. B.; Nakamura, Y.; Baum, H. R.
2001-01-01
A number of facilities are available for microgravity combustion experiments: aircraft, drop towers, sounding rockets, the space shuttle, and, in the future, the International Space Station (ISS). Acceleration disturbances or g-jitter about the background level of reduced gravity exist in all these microgravity facilities. While g-jitter is routinely measured, a quantitative comparison of the quality of g-jitter among the different microgravity facilities, in terms of its effects on combustion experiments, has not been compiled. Low frequency g-jitter (< 1 Hz) has been repeatedly observed to disturb a number of combustion systems. Guidelines regarding tolerable levels of acceleration disturbances for combustion experiments have been developed for use in the design of ISS experiments. The validity of these guidelines, however, remains unknown. In this project a transient, 3-D numerical model is under development to simulate the effects of realistic g-jitter on a number of combustion systems. The measured acceleration vector or some representation of it can be used as input to the simulation.
Geostatistical Borehole Image-Based Mapping of Karst-Carbonate Aquifer Pores.
Sukop, Michael C; Cunningham, Kevin J
2016-03-01
Quantification of the character and spatial distribution of porosity in carbonate aquifers is important as input into computer models used in the calculation of intrinsic permeability and for next-generation, high-resolution groundwater flow simulations. Digital, optical, borehole-wall image data from three closely spaced boreholes in the karst-carbonate Biscayne aquifer in southeastern Florida are used in geostatistical experiments to assess the capabilities of various methods to create realistic two-dimensional models of vuggy megaporosity and matrix-porosity distribution in the limestone that composes the aquifer. When the borehole image data alone were used as the model training image, multiple-point geostatistics failed to detect the known spatial autocorrelation of vuggy megaporosity and matrix porosity among the three boreholes, which were only 10 m apart. Variogram analysis and subsequent Gaussian simulation produced results that showed a realistic conceptualization of horizontal continuity of strata dominated by vuggy megaporosity and matrix porosity among the three boreholes.
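A minimal 1-D illustration of the variogram analysis step (the classical semivariogram estimator applied to a synthetic down-borehole porosity log; the study works with 2-D image-derived data and subsequent Gaussian simulation, which are not reproduced here):

```python
import numpy as np

def empirical_variogram(depth, phi, lags, tol):
    """Classical semivariogram estimator along a single borehole log:
    gamma(h) = 0.5 * mean[(phi_i - phi_j)^2] over pairs separated by ~h."""
    d = np.abs(depth[:, None] - depth[None, :])
    sq = (phi[:, None] - phi[None, :]) ** 2
    gamma = []
    for h in lags:
        pairs = np.triu(np.abs(d - h) < tol, k=1)  # each pair counted once
        gamma.append(0.5 * sq[pairs].mean() if pairs.any() else np.nan)
    return np.array(gamma)

# synthetic stand-in for a porosity log digitized from borehole-wall images
depth = np.linspace(0.0, 30.0, 301)                # 0.1 m sampling
rng = np.random.default_rng(0)
phi = 0.2 + 0.1 * np.sin(depth / 2.0) + 0.02 * rng.standard_normal(depth.size)
print(empirical_variogram(depth, phi, lags=[0.5, 1.0, 2.0, 4.0, 8.0], tol=0.05))
```

A fitted variogram model of this kind is what drives the Gaussian simulation that reproduced the horizontal continuity of the vuggy strata in the study.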
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces offer a way to move away from single discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automate the workflow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). This strategy is most easily demonstrated for two input parameters, but can be extended arbitrarily, as sketched below. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions for a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist of (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) applying the same strategy to the more complex, dynamic model r.avaflow.
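A schematic sketch of the nested subrange search for the two-parameter case (the `run_model` callable, parameter bounds, and subrange counts are hypothetical stand-ins for r.randomwalk runs; the FoC indicator and deeper nesting levels are omitted):

```python
import itertools
import numpy as np
from sklearn.metrics import roc_auc_score

def subranges(lo, hi, n):
    """Split [lo, hi] into n contiguous subranges."""
    edges = np.linspace(lo, hi, n + 1)
    return list(zip(edges[:-1], edges[1:]))

def score_space(mu_range, md_range, observed_impact, run_model,
                n_samples=50, seed=0):
    """III pattern for one candidate parameter space, scored by AUROC.

    run_model(mu, md) -> boolean impact raster (flattened); a stand-in
    for an r.randomwalk run with basal friction mu and mass-to-drag md.
    """
    rng = np.random.default_rng(seed)
    mus = rng.uniform(*mu_range, n_samples)
    mds = rng.uniform(*md_range, n_samples)
    iii = np.mean([run_model(m, d) for m, d in zip(mus, mds)], axis=0)
    return roc_auc_score(observed_impact, iii)

def best_space(observed_impact, run_model):
    """Exhaustive level-1 pass over all subrange combinations."""
    best = None
    for mu_r, md_r in itertools.product(subranges(0.05, 0.35, 4),
                                        subranges(100, 1000, 4)):
        auc = score_space(mu_r, md_r, observed_impact, run_model)
        if best is None or auc > best[0]:
            best = (auc, mu_r, md_r)
    return best
```

Each candidate space is scored on how well its III pattern discriminates observed impact pixels, which mirrors the AUROC-based ranking described in the abstract.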
NASA Astrophysics Data System (ADS)
Sanford, W. E.
2015-12-01
Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.
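A minimal sketch of the distribution-fitting comparison described above (synthetic stand-in ages; scipy's one-parameter exponential with fixed zero location versus the two-parameter Weibull, compared by log-likelihood; the weighted four-parameter variant is not reproduced here):

```python
import numpy as np
from scipy import stats

# hypothetical base-flow ages (years), standing in for particle-tracking output
rng = np.random.default_rng(0)
ages = rng.weibull(1.6, 5000) * 25.0

# one-parameter exponential (location fixed at 0) vs two-parameter Weibull
loc_e, scale_e = stats.expon.fit(ages, floc=0)
k, loc_w, lam = stats.weibull_min.fit(ages, floc=0)

# compare log-likelihoods: higher means a better fit to the age sample
ll_exp = np.sum(stats.expon.logpdf(ages, loc_e, scale_e))
ll_wei = np.sum(stats.weibull_min.logpdf(ages, k, loc_w, lam))
print(f"exponential ll={ll_exp:.0f}  Weibull ll={ll_wei:.0f}  shape k={k:.2f}")
```

The Weibull shape parameter k plays the role of the extra slope-controlling parameter: k = 1 recovers the exponential distribution, so the likelihood gain directly measures the deviation from the simple exponential-aquifer assumption.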
Realistic and efficient 2D crack simulation
NASA Astrophysics Data System (ADS)
Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek
2010-04-01
Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive and efficient/runtime 2D crack/fracture simulation system by applying the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level-of-detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanism used for mesh element splitting and merging with minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors including impact angle, impact energy, and material properties are all taken into account to produce the criteria of crack initialization, propagation, and termination leading to realistic fractal-like rubble/fragments formation. The aforementioned parameters are used as variables of probabilistic models of cracks/shards formation, making the proposed solution highly adaptive by allowing machine learning mechanisms learn the optimal values for the variables/parameters based on prior benchmark data generated by off-line physics based simulation solutions that produce accurate fractures/shards though at highly non-real time paste. Crack/fracture simulation has been conducted on various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system has the capability to realistically and efficiently simulate 2D crack phenomena (such as window shattering and shards generation) with diverse potentials in military and civil M&S applications such as training and mission planning.
Li, Tingting; Cheng, Zhengguo; Zhang, Le
2017-01-01
Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for ABM to obtain an appropriate estimation of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation by integrating the ABM and regression method under the framework of history matching is developed. A novel parameter estimation method incorporating the experimental data for the simulator ABM during the procedure is proposed. First, we employ ABM as a simulator to simulate the immune system. Then, the dimension-reduced type generalized additive model (GAM) is employed to train a statistical regression model using the input and output data of ABM, playing the role of an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausibility measure to discard the implausible input values. At last, the estimation of model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy but also offers favorable computational efficiency. PMID:29194393
Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces
NASA Astrophysics Data System (ADS)
Rinker, Jennifer M.
2016-09-01
This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of error between the model and the training data and in terms of convergence. The Sobol SIs are calculated using the calibrated response surface, and their convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variance caused by the Kaimal length scale and the nonstationarity parameter is negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity.
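A self-contained sketch of the two-step methodology (a quadratic response surface without cross terms, then Saltelli-style Monte Carlo first-order Sobol indices; the training data here are synthetic stand-ins for the WindPACT load simulations, and the paper's surface may differ in form):

```python
import numpy as np

def fit_quadratic(X, y):
    """Least-squares quadratic response surface in d inputs (cross terms
    omitted for brevity; a full surface would include them)."""
    Phi = np.hstack([np.ones((len(X), 1)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda Z: np.hstack([np.ones((len(Z), 1)), Z, Z ** 2]) @ coef

def sobol_first_order(surface, d, n=100_000, seed=0):
    """Monte Carlo first-order Sobol indices (Saltelli-style estimator)
    for inputs treated as independent U(0,1)."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = surface(A), surface(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                     # swap column i from B into A
        S[i] = np.mean(fB * (surface(ABi) - fA)) / var
    return S

# synthetic stand-in training data for the four turbulence inputs
rng = np.random.default_rng(1)
X = rng.random((500, 4))   # wind speed, TI, length scale, nonstationarity
y = 5 * X[:, 0] + 2 * X[:, 1] + 0.05 * rng.standard_normal(500)
print(sobol_first_order(fit_quadratic(X, y), d=4))
```

Because the surface is cheap to evaluate, the Sobol estimator can use very large sample sizes, which is exactly the computational advantage the abstract claims for the response-surface approach.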
Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2011-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.
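One standard way to construct mutually orthogonal square wave inputs is from Walsh functions, i.e. rows of a Hadamard matrix (a sketch; the surface count, dwell time, and command rate below are hypothetical, and per-surface amplitude scaling as used in a real flight test is omitted):

```python
import numpy as np
from scipy.linalg import hadamard

# Rows of a Hadamard matrix are mutually orthogonal +/-1 sequences
# (Walsh functions); each control surface gets its own row.
H = hadamard(8)                      # 8 orthogonal sequences of length 8
n_surfaces, dwell = 4, 0.5           # hypothetical: 4 surfaces, 0.5 s per step
dt = 0.02                            # hypothetical 50 Hz command rate
samples_per_step = int(dwell / dt)

# hold each +/-1 step for 'dwell' seconds; skip the constant (DC) row 0
inputs = np.repeat(H[1:1 + n_surfaces], samples_per_step, axis=1)

# orthogonality check: off-diagonal inner products are exactly zero
G = inputs @ inputs.T
assert np.allclose(G - np.diag(np.diag(G)), 0)
```

Orthogonality of the commanded time histories is what de-correlates the individual surface responses in the regression, so the effectiveness of each surface can be estimated separately despite all surfaces moving simultaneously.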
The Wind Integration National Dataset (WIND) toolkit (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caroline Draxl: NREL
2014-01-01
Regional wind integration studies require detailed wind power output data at many locations to perform simulations of how the power system will operate under high penetration scenarios. The wind datasets that serve as inputs into the study must realistically reflect the ramping characteristics, spatial and temporal correlations, and capacity factors of the simulated wind plants, as well as being time synchronized with available load profiles. As described in this presentation, the WIND Toolkit fulfills these requirements by providing a state-of-the-art national (US) wind resource, power production and forecast dataset.
Independent Living for the Elderly at Home: What is Required?
Walker, Barbara Prime
1986-01-01
Home care is vital to healthy aging, since living autonomously in the community is the primary interest of the elderly. Breaking the cycle which locks many frail and disabled individuals into a position of dependency requires shared input from a variety of resources in society. Success depends on an accurate assessment of functional ability, knowledge of community support services, and the willingness of health professionals to work co-operatively with others to provide a balance of assistance which is realistic for the local community. PMID:20469462
Direct Inverse Control using an Artificial Neural Network for the Autonomous Hover of a Helicopter
2014-10-05
that if r(t) is the command to hover, the output y(t) will simply track the input command and hold a hover for the helicopter. III. THE X-4P ... being used to test the DIC techniques to simulate realistic flight conditions. The X-4P was instrumented with a number of sensors and a data recording ... research [1] for system identification. The X-4P SISO transfer functions were developed by CIFER in order to compare results against the DIC control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Wei; Chen, Gaoqiang; Chen, Jian
Reduced-activation ferritic/martensitic (RAFM) steels are an important class of structural materials for fusion reactor internals developed in recent years because of their improved irradiation resistance. However, they can suffer from welding-induced property degradation. In this paper, a solid-phase joining technology, friction stir welding (FSW), was adopted to join the RAFM steel Eurofer 97, and different FSW parameters/heat inputs were chosen to produce welds. FSW response parameters, joint microstructures and microhardness were investigated to reveal relationships among welding heat input, weld structure characterization and mechanical properties. In general, FSW heat input results in high hardness inside the stir zone, mostly due to a martensitic transformation. It is possible to produce friction stir welds similar to, but not exactly matching, the base metal hardness when using low power input, because of other hardening mechanisms. Further, post weld heat treatment (PWHT) is a very effective way to reduce FSW stir zone hardness values.
NASA Astrophysics Data System (ADS)
Haller, Julian; Wilkens, Volker
2012-11-01
For power levels up to 200 W and sonication times up to 60 s, the electrical power, the voltage and the electrical impedance (more exactly: the ratio of RMS voltage and RMS current) have been measured for a piezocomposite high intensity therapeutic ultrasound (HITU) transducer with integrated matching network, two piezoceramic HITU transducers with external matching networks and for a passive dummy 50 Ω load. The electrical power and the voltage were measured during high power application with an inline power meter and an RMS voltage meter, respectively, and the complex electrical impedance was indirectly measured with a current probe, a 100:1 voltage probe and a digital scope. The results clearly show that the input RMS voltage and the input RMS power change unequally during the application. Hence, the indication of only the electrical input power or only the voltage as the input parameter may not be sufficient for reliable characterizations of ultrasound transducers for high power applications in some cases.
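A minimal sketch of the quantities being compared (RMS voltage, RMS current, active power, and the RMS-ratio stand-in for impedance magnitude, computed from synthetic sampled waveforms; the 1 MHz drive, 50 MS/s sampling, amplitudes and 30-degree phase lag are hypothetical illustration values, not the paper's measurements):

```python
import numpy as np

def drive_point_quantities(v, i):
    """RMS voltage/current, true (active) electrical power, and the
    V_rms/I_rms ratio reported in place of a full complex impedance."""
    v_rms = np.sqrt(np.mean(v ** 2))
    i_rms = np.sqrt(np.mean(i ** 2))
    p_active = np.mean(v * i)            # time-averaged instantaneous power
    return v_rms, i_rms, p_active, v_rms / i_rms

# hypothetical 1 MHz drive sampled at 50 MS/s with a 30-degree phase lag
fs, f0, phi = 50e6, 1e6, np.pi / 6
t = np.arange(0, 1e-4, 1 / fs)
v = 100 * np.sin(2 * np.pi * f0 * t)
i = 2 * np.sin(2 * np.pi * f0 * t - phi)
v_rms, i_rms, p, z = drive_point_quantities(v, i)
print(f"V_rms={v_rms:.1f} V  I_rms={i_rms:.2f} A  P={p:.1f} W  |Z|={z:.1f} Ohm")
```

With a phase lag present, V_rms * I_rms exceeds the active power (here about 100 W apparent versus 86.6 W active), which illustrates the paper's point that quoting only the electrical power or only the voltage can under-specify the transducer drive conditions.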