Toward Scientific Numerical Modeling
NASA Technical Reports Server (NTRS)
Kleb, Bil
2007-01-01
Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.
Femtosecond soliton source with fast and broad spectral tunability.
Masip, Martin E; Rieznik, A A; König, Pablo G; Grosz, Diego F; Bragas, Andrea V; Martinez, Oscar E
2009-03-15
We present a complete set of measurements and numerical simulations of a femtosecond soliton source with fast and broad spectral tunability and nearly constant pulse width and average power. Solitons generated in a photonic crystal fiber, in the low-power coupling regime, can be tuned over a broad range of wavelengths, from 850 to 1200 nm, using the input power as the control parameter. These solitons keep an almost constant time duration (approximately 40 fs) and spectral width (approximately 20 nm) over the entire measured range regardless of input power. Our numerical simulations agree well with measurements and predict a wide working wavelength range and robustness to input parameters.
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than with the prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
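As a concrete illustration of the Monte Carlo step described above, here is a minimal Python sketch with a cheap stand-in for the Delft3D forward model; the model function, input distributions, and sample size are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(offshore_height, bathy_scale, boundary_flux):
    # Stand-in for a Delft3D run: returns a nearshore wave height.
    return 0.6 * offshore_height * np.exp(-0.1 * bathy_scale) + 0.05 * boundary_flux

n = 10_000  # Monte Carlo sample size (increase until statistics converge)
samples = forward_model(
    rng.normal(2.0, 0.3, n),   # offshore wave height [m], assumed Gaussian
    rng.uniform(0.5, 1.5, n),  # bathymetry scaling factor, assumed uniform
    rng.normal(0.0, 1.0, n),   # lateral boundary flux, assumed Gaussian
)
print(f"mean = {samples.mean():.3f} m, 95% interval = "
      f"({np.percentile(samples, 2.5):.3f}, {np.percentile(samples, 97.5):.3f}) m")
```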
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
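A minimal sketch of an optimal estimator analysis for a single input parameter, using the binned conditional mean (the histogram technique discussed above) on synthetic data; the data-generating model and bin count are illustrative assumptions:

```python
import numpy as np
from scipy.stats import binned_statistic

rng = np.random.default_rng(1)
phi = rng.uniform(0, 1, 50_000)                              # model input parameter
q = np.sin(2 * np.pi * phi) + rng.normal(0, 0.2, phi.size)   # exact unclosed term

# Optimal estimator E[q | phi]: the conditional mean, here approximated by binning.
cond_mean, edges, idx = binned_statistic(phi, q, statistic="mean", bins=64)
q_opt = cond_mean[np.clip(idx - 1, 0, 63)]

irreducible = np.mean((q - q_opt) ** 2)          # error no model in phi can beat
model = 0.9 * np.sin(2 * np.pi * phi)            # some proposed functional form
total = np.mean((q - model) ** 2)
print(f"irreducible = {irreducible:.4f}, total = {total:.4f}, "
      f"functional = {total - irreducible:.4f}")
```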
Zeng, Xiaozheng; McGough, Robert J.
2009-01-01
The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three-dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two-dimensional planar arrays. Furthermore, the root-mean-square error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
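The core of the angular spectrum approach is a Fourier-domain propagator. A minimal sketch for propagating an input pressure plane to a parallel output plane follows; the grid, frequency, and piston-shaped source are illustrative assumptions (a real array simulation would compute the input plane with the fast nearfield method, as the abstract notes):

```python
import numpy as np

f, c = 1e6, 1500.0                  # 1 MHz in water
k = 2 * np.pi * f / c
n, dx = 256, 1.5e-4                 # grid points and spacing [m]

# Input pressure plane: a simple circular piston distribution.
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
p0 = np.where(X**2 + Y**2 < (5e-3) ** 2, 1.0, 0.0)

# Angular spectrum: FFT, multiply by the spectral propagator, inverse FFT.
kx = 2 * np.pi * np.fft.fftfreq(n, dx)
KX, KY = np.meshgrid(kx, kx)
kz = np.sqrt(k**2 - KX**2 - KY**2 + 0j)   # imaginary for evanescent waves -> decay
z = 30e-3                                 # propagation distance [m]
P = np.fft.fft2(p0)
p_z = np.fft.ifft2(P * np.exp(1j * kz * z))
print("peak |p| at z:", np.abs(p_z).max())
```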
The input variables for a numerical model of reactive solute transport in groundwater include both transport parameters, such as hydraulic conductivity and infiltration, and reaction parameters that describe the important chemical and biological processes in the system. These pa...
Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...
2014-12-31
Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and the monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the percent response of the output, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has a larger impact on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We propose this local sensitivity coefficient method and apply it to the FutureGen 2.0 site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs, and three-quarters of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbouring layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
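A minimal sketch of local sensitivity coefficients computed by one-at-a-time finite-difference perturbations around a base case, with a stand-in model; the model, the 1% perturbation, and the use of absolute values in the composite sum are illustrative assumptions:

```python
import numpy as np

def model(p):
    # Stand-in for a reservoir simulation output, e.g. plume area.
    perm, salt, pressure = p
    return perm**0.8 * (1 + 0.3 * salt) / pressure

base = np.array([100.0, 0.1, 20.0])    # base-case inputs
names = ["permeability", "salt", "pressure"]
y0 = model(base)

lsc = {}
for i, name in enumerate(names):
    p = base.copy()
    p[i] *= 1.01                        # perturb one input by 1%
    # LSC: percent change of the output per percent change of the input.
    lsc[name] = 100 * (model(p) - y0) / y0 / 1.0
    print(f"{name:12s} LSC = {lsc[name]:+.3f} % per %")

# Composite sensitivity of the output to a subset of inputs: sum of |LSC|.
print("composite (perm + salt):", abs(lsc["permeability"]) + abs(lsc["salt"]))
```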
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied to an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. We conclude that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
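A minimal sketch of the multiplier idea, with a plain Metropolis sampler standing in for DREAM and a toy linear model standing in for MODFLOW; the priors, noise level, and forward model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Spatially distributed (uncertain) inputs, here 50 cells of a toy aquifer.
recharge = rng.uniform(100, 300, 50)   # reported recharge [mm/yr]
pumping = rng.uniform(20, 80, 50)      # reported pumping  [mm/yr]

# Synthetic truth: the reported inputs are biased by factors 1.2 and 0.8.
heads_true = 0.02 * (1.2 * recharge - 0.8 * pumping) + 5.0
obs = heads_true + rng.normal(0, 0.05, 50)

def log_post(theta):
    mr, mp = theta                     # recharge and pumping multipliers
    if not (0.5 < mr < 2.0 and 0.5 < mp < 2.0):
        return -np.inf                 # uniform priors as hard bounds
    pred = 0.02 * (mr * recharge - mp * pumping) + 5.0  # stand-in for MODFLOW
    return -0.5 * np.sum(((obs - pred) / 0.05) ** 2)

theta, chain = np.array([1.0, 1.0]), []
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.02, 2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta.copy())
chain = np.array(chain[5_000:])        # discard burn-in
print("posterior mean multipliers:", chain.mean(axis=0))  # ~ [1.2, 0.8]
```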
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method for deriving a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
Statistics of optimal information flow in ensembles of regulatory motifs
NASA Astrophysics Data System (ADS)
Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan
2018-02-01
Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
Covey, Curt; Lucas, Donald D.; Tannahill, John; ...
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high-dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM's behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT's ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience.
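A minimal sketch of the Morris one-at-a-time method on a stand-in model; the model, the number of inputs, the grid levels, and the trajectory count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    # Stand-in for a climate model output; x holds inputs scaled to [0, 1].
    return x[0] + 2 * x[1] + 5 * x[2] * x[3]    # x2 and x3 interact nonlinearly

N, levels, r = 4, 4, 40                          # inputs, grid levels, trajectories
delta = levels / (2 * (levels - 1))              # standard Morris step

effects = [[] for _ in range(N)]
for _ in range(r):
    x = rng.integers(0, levels // 2, N) / (levels - 1)  # random grid base point
    y = model(x)
    for i in rng.permutation(N):                 # move one input at a time
        x2 = x.copy()
        x2[i] += delta
        y2 = model(x2)
        effects[i].append((y2 - y) / delta)      # elementary effect of input i
        x, y = x2, y2

for i in range(N):
    ee = np.array(effects[i])
    # mu* flags overall importance; sigma flags nonlinearity/interactions.
    print(f"x{i}: mu* = {np.abs(ee).mean():.2f}, sigma = {ee.std():.2f}")
```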
Buckling analysis of SMA bonded sandwich structure – using FEM
NASA Astrophysics Data System (ADS)
Katariya, Pankaj V.; Das, Arijit; Panda, Subrata K.
2018-03-01
The thermal buckling strength of a smart sandwich composite structure (bonded with shape memory alloy, SMA) is examined numerically via a higher-order finite element model in association with a marching technique. The excess geometrical distortion of the structure under the elevated thermal environment is modeled through Green's strain function, whereas the material nonlinearity is accounted for with the help of the marching method. The system responses are computed numerically by solving the generalized eigenvalue equations via a customized MATLAB code. The comprehensive behaviour of the current finite element solutions (minimum buckling load parameter) is established by solving an adequate number of numerical examples with the given input parameters. The current numerical model is extended further to check the influence of various structural parameters of the sandwich panel on the buckling temperature, including the SMA effect, and the results are reported in detail.
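The eigenvalue step can be illustrated compactly: the minimum buckling load parameter is the smallest eigenvalue λ of K φ = λ K_g φ, with K the stiffness matrix and K_g the geometric stiffness matrix. The 3-DOF matrices below are illustrative assumptions, not the assembled sandwich-panel model:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative 3-DOF stiffness and geometric-stiffness matrices (symmetric,
# K_g positive definite), standing in for assembled FE matrices.
K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  3.0, -1.0],
              [ 0.0, -1.0,  2.0]])
Kg = np.array([[ 1.0,  0.2,  0.0],
               [ 0.2,  1.0,  0.2],
               [ 0.0,  0.2,  1.0]])

# Generalized eigenvalue problem K @ phi = lam * Kg @ phi.
lam, phi = eigh(K, Kg)
print("minimum buckling load parameter:", lam[0])
print("critical mode shape:", phi[:, 0])
```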
NASA Astrophysics Data System (ADS)
Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.
2009-12-01
We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains phenomenological parameterizations based on the modified Fermi gas and superfluid models and microscopic calculations which are based on a realistic microscopic single-particle level scheme. Partial level densities formulae are also recommended. All tabulated total level densities are consistent with both the recommended average neutron resonance parameters and discrete levels. GAMMA contains parameters that quantify giant resonances, experimental gamma-ray strength functions and methods for calculating gamma emission in statistical model codes. The experimental GDR parameters are represented by Lorentzian fits to the photo-absorption cross sections for 102 nuclides ranging from 51V to 239Pu. FISSION includes global prescriptions for fission barriers and nuclear level densities at fission saddle points based on microscopic HFB calculations constrained by experimental fission cross sections.
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-t distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-t distribution can encompass the exact solution.
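One common construction of such an interval is sketched below: repeated CFD outputs under perturbed inputs are summarized with a Student-t interval; the sample values are illustrative assumptions:

```python
import numpy as np
from scipy import stats

# Pressure-drop outputs from n CFD runs with inputs perturbed within their
# uncertainty bands (illustrative values).
y = np.array([101.2, 99.8, 100.5, 101.9, 100.1, 99.5, 100.8, 101.3])
n = y.size
mean, s = y.mean(), y.std(ddof=1)

# 95% interval using the Student-t distribution with n-1 degrees of freedom.
t = stats.t.ppf(0.975, df=n - 1)
half = t * s / np.sqrt(n)
print(f"mean = {mean:.2f} +/- {half:.2f} (95% confidence, t = {t:.3f})")
```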
Stochastic analysis of multiphase flow in porous media: II. Numerical simulations
NASA Astrophysics Data System (ADS)
Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.
1996-08-01
The first paper (Chang et al., 1995b) of this two-part series described the stochastic analysis using a spectral/perturbation approach to analyze steady-state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter, α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and the numerical simulations showed a good agreement between the two methods over a wide range of log k variability with three different combinations of input stochastic processes of log k and the soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.
Computer program for single input-output, single-loop feedback systems
NASA Technical Reports Server (NTRS)
1976-01-01
Additional work is reported on a completely automatic computer program for the design of single input/output, single-loop feedback systems with parameter uncertainty, to satisfy time-domain bounds on the system response to step commands and disturbances. The inputs to the program are basically the specified time-domain response bounds, the form of the constrained plant transfer function, and the ranges of the uncertain parameters of the plant. The program output consists of the transfer functions of the two free compensation networks, in the form of the coefficients of the numerator and denominator polynomials, and the data on the prescribed bounds and the extremes actually obtained for the system response to commands and disturbances.
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.
2017-09-01
A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows joint estimation of unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm to jointly estimate unknown FE model parameters and unknown input excitations.
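A minimal sketch of joint state-parameter estimation with an unscented Kalman filter, using the filterpy library and a single-degree-of-freedom oscillator with unknown natural frequency standing in for the nonlinear FE model; the system, noise levels, and tuning are illustrative assumptions:

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.01
rng = np.random.default_rng(4)

def fx(x, dt):
    # Augmented state [u, v, omega]: oscillator dynamics + constant parameter,
    # integrated with a semi-implicit Euler step for stability.
    u, v, w = x
    v2 = v - dt * w**2 * u
    return np.array([u + dt * v2, v2, w])

def hx(x):
    return x[:1]                               # only displacement is measured

# Simulate "measured" data with true omega = 5 rad/s.
truth = np.array([1.0, 0.0, 5.0])
zs = []
for _ in range(2000):
    truth = fx(truth, dt)
    zs.append(truth[0] + rng.normal(0, 0.01))

points = MerweScaledSigmaPoints(n=3, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=dt, hx=hx, fx=fx, points=points)
ukf.x = np.array([0.8, 0.0, 3.0])              # deliberately wrong initial guess
ukf.P = np.diag([0.1, 0.1, 4.0])
ukf.Q = np.diag([1e-8, 1e-8, 1e-6])            # tiny drift keeps omega adaptable
ukf.R = np.array([[0.01**2]])

for z in zs:
    ukf.predict()
    ukf.update(np.array([z]))
print("estimated omega:", ukf.x[2])            # should approach 5
```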
Rosen, I G; Luczak, Susan E; Weiss, Jordan
2014-03-15
We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
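A minimal sketch of a Hodrick-Prescott filter of the kind mentioned above for separating a slow trend (a drinking episode) from fluctuations in TAC-like data; the signal and the smoothing weight are illustrative assumptions:

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    # Trend tau minimizes ||y - tau||^2 + lam * ||D2 tau||^2, where D2 is the
    # second-difference operator; solve the resulting normal equations.
    n = y.size
    D2 = np.diff(np.eye(n), n=2, axis=0)       # (n-2) x n second differences
    tau = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)
    return tau, y - tau                        # trend and cycle

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 400)
tac = np.exp(-0.5 * (t - 4) ** 2) + 0.05 * rng.normal(size=t.size)  # toy episode
trend, cycle = hp_filter(tac, lam=1e4)
print("max trend value:", trend.max())         # peaks of the trend flag episodes
```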
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic.
Salmanidou, D M; Guillas, S; Georgiopoulou, A; Dias, F
2017-04-01
Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained.
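A minimal sketch of the two-stage idea: fit a Gaussian process emulator to a handful of expensive simulator runs, then propagate the calibrated input distributions through the emulator; the simulator stand-in, design size, and input distributions are illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(6)

def simulator(x):
    # Stand-in for the coupled landslide-tsunami code: maximum free-surface
    # elevation as a function of (slide volume, basal friction).
    return 3.0 * x[:, 0] * np.exp(-2.0 * x[:, 1])

# A small design of "expensive" runs.
X = rng.uniform([0.5, 0.1], [2.0, 0.8], size=(30, 2))
y = simulator(X)

gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.5, 0.5]),
                              normalize_y=True).fit(X, y)

# Propagate calibrated input distributions (the stage-one output) cheaply.
Xs = np.column_stack([rng.normal(1.2, 0.15, 20_000),    # calibrated volume
                      rng.normal(0.4, 0.05, 20_000)])   # calibrated friction
eta = gp.predict(Xs)
print(f"max elevation: mean = {eta.mean():.2f}, "
      f"95th percentile = {np.percentile(eta, 95):.2f}")
```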
Emulation for probabilistic weather forecasting
NASA Astrophysics Data System (ADS)
Cornford, Dan; Barillec, Remi
2010-05-01
Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the 'ensemble runs', which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space, rather it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data assimilation like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
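A minimal sketch of the dimension-reduction idea for emulation: project the ensemble of simulator output fields onto principal components and emulate each retained coefficient with a Gaussian process; the stand-in simulator and sizes are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(7)

def simulator(theta):
    # Stand-in "weather model": a 200-point field driven by 3 inputs.
    s = np.linspace(0, 1, 200)
    return (theta[0] * np.sin(4 * s) + theta[1] * s**2
            + 0.2 * theta[2] * np.cos(8 * s))

X = rng.uniform(-1, 1, size=(60, 3))             # ensemble of input settings
Y = np.array([simulator(t) for t in X])          # ensemble of output fields

pca = PCA(n_components=3).fit(Y)                 # reduce 200-D output to 3 PCs
C = pca.transform(Y)
gps = [GaussianProcessRegressor().fit(X, C[:, j]) for j in range(3)]

# Emulate the full field at a new input by predicting PC coefficients.
x_new = np.array([[0.3, -0.5, 0.8]])
c_new = np.column_stack([gp.predict(x_new) for gp in gps])
field = pca.inverse_transform(c_new)[0]
print("emulated field shape:", field.shape, " first value:", field[0])
```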
Neural Network Machine Learning and Dimension Reduction for Data Visualization
NASA Technical Reports Server (NTRS)
Liles, Charles A.
2014-01-01
Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. It is often difficult to surmise which input parameters have the greatest impact on the prediction of the model, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
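A minimal sketch of the visualization idea: map a multi-dimensional input dataset to two dimensions and color the points by the output, so structure in the coloring hints at which inputs drive the outcome; the dataset is an illustrative assumption:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(8)

# Toy dataset: 6 input variables, but the output depends mainly on two of them.
X = rng.normal(size=(500, 6))
y = X[:, 0] + 2.0 * X[:, 3] + 0.1 * rng.normal(size=500)

# Map the 6-D inputs to 2-D and color by output; smooth gradients in the
# coloring suggest a low-dimensional subset of inputs drives the outcome.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
plt.scatter(emb[:, 0], emb[:, 1], c=y, s=8, cmap="viridis")
plt.colorbar(label="output")
plt.title("2-D embedding of a 6-D input space, colored by output")
plt.savefig("embedding.png", dpi=150)
```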
Parameterizing by the Number of Numbers
NASA Astrophysics Data System (ADS)
Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.
The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
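A minimal sketch of the parameterization on Subset Sum: group the input multiset by distinct values and search over multiplicity vectors, so the work is governed by the number of distinct integers k (at most n^k combinations) rather than by the 2^n subsets; plain brute force stands in here for the Integer Linear Programming Feasibility machinery that yields the actual FPT bound:

```python
from collections import Counter
from itertools import product

def subset_sum_by_distinct(values, target):
    # Group the multiset: k distinct integers, each with a multiplicity.
    groups = sorted(Counter(values).items())
    # Enumerate how many copies of each distinct value to take; the search
    # space is prod(m_i + 1), governed by k, not by len(values).
    for counts in product(*[range(m + 1) for _, m in groups]):
        if sum(c * v for c, (v, _) in zip(counts, groups)) == target:
            return {v: c for c, (v, _) in zip(counts, groups) if c}
    return None

# 75 numbers (2^75 subsets) but only 3 distinct values: ~17k combinations.
multiset = [7] * 30 + [11] * 20 + [13] * 25
print(subset_sum_by_distinct(multiset, 7 * 3 + 11 * 2 + 13 * 5))
```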
A waste characterisation procedure for ADM1 implementation based on degradation kinetics.
Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F
2012-09-01
In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model n°1. The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate to inoculum ratio in batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate to inoculum ratio in the batch experiments and the origin of the inoculum influenced input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes.
Modern control concepts in hydrology
NASA Technical Reports Server (NTRS)
Duong, N.; Johnson, G. R.; Winn, C. B.
1974-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and conform with results from two previous studies: the first used numerical integration of the model equation along with a trial-and-error procedure, and the second used a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.
A Numerical Study on Microwave Coagulation Therapy
2013-01-01
…hepatocellular carcinoma (small-size liver tumor). Through extensive numerical simulations, we reveal the mathematical relationships between some critical parameters in the therapy, including input power, frequency, temperature, and regions of impact. It is shown that these relationships can be approximated using simple polynomial functions. Compared to solutions of partial differential equations, these functions are significantly easier to compute and simpler to analyze for engineering design and clinical…
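A minimal sketch of the polynomial-approximation idea described above; the power-versus-radius data points are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical (input power [W] -> coagulation radius [mm]) pairs that a
# full electromagnetic-thermal simulation might produce.
power = np.array([10, 20, 30, 40, 50, 60], dtype=float)
radius = np.array([4.1, 6.0, 7.4, 8.3, 9.0, 9.5])

# Low-order polynomial surrogate: cheap to evaluate compared with solving
# the underlying partial differential equations.
coeffs = np.polyfit(power, radius, deg=2)
surrogate = np.poly1d(coeffs)
print("predicted radius at 35 W:", round(surrogate(35.0), 2), "mm")
```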
NASA Technical Reports Server (NTRS)
Reichert, R. S.; Biringen, S.; Howard, J. E.
1999-01-01
LINER is a system of Fortran 77 codes which performs a 2D analysis of acoustic wave propagation and noise suppression in a rectangular channel with a continuous liner at the top wall. This new implementation is designed to streamline the usage of the several codes making up LINER, resulting in a useful design tool. Major input parameters are placed in two main data files, input.inc and num.prm. Output data appear in the form of ASCII files as well as a choice of GNUPLOT graphs. Section 2 briefly describes the physical model. Section 3 discusses the numerical methods; Section 4 gives a detailed account of program usage, including input formats and graphical options. A sample run is also provided. Finally, Section 5 briefly describes the individual program files.
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela
2014-01-01
The paper describes our experiment with using Gaussian mixture models (GMM) for the classification of speech uttered by a person wearing orthodontic appliances. For the GMM classification, the input feature vectors comprise the basic and the complementary spectral properties as well as the supra-segmental parameters. The dependence of classification correctness on the number of parameters in the input feature vector and on the computational complexity is also evaluated. In addition, the influence of the initial setting of the parameters for the GMM training process was analyzed. The obtained recognition results are compared visually in the form of graphs as well as numerically in the form of tables and confusion matrices for the tested sentences uttered using three configurations of orthodontic appliances.
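A minimal sketch of GMM-based classification: fit one Gaussian mixture per class on training feature vectors, then assign an utterance to the class with the highest average log-likelihood over its frames; the toy features and class structure are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)

# Toy spectral/supra-segmental feature vectors for two appliance configurations.
train = {
    "no_appliance": rng.normal(0.0, 1.0, size=(300, 8)),
    "appliance":    rng.normal(0.7, 1.2, size=(300, 8)),
}

# One GMM per class, trained on that class's feature vectors.
models = {label: GaussianMixture(n_components=4, random_state=0).fit(X)
          for label, X in train.items()}

def classify(x):
    # The highest average log-likelihood across the utterance's frames wins.
    return max(models, key=lambda lbl: models[lbl].score_samples(x).mean())

test_utterance = rng.normal(0.7, 1.2, size=(50, 8))   # frames of one utterance
print("predicted class:", classify(test_utterance))
```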
Blind identification of the kinetic parameters in three-compartment models
NASA Astrophysics Data System (ADS)
Riabkov, Dmitri Y.; Di Bella, Edward V. R.
2004-03-01
Quantified knowledge of tissue kinetic parameters in regions of the brain and other organs can offer information useful in clinical and research applications. Dynamic medical imaging with injection of a radioactive or paramagnetic tracer can be used for this measurement. The kinetics of some widely used tracers, such as [18F]2-fluoro-2-deoxy-D-glucose, can be described by a three-compartment physiological model. The kinetic parameters of the tissue can be estimated from dynamically acquired images. The feasibility of estimation by blind identification, which does not require knowledge of the blood input, is considered analytically and numerically in this work for the three-compartment type of tissue response. The non-uniqueness of the two-region case for blind identification of kinetic parameters in the three-compartment model is shown; at least three regions are needed for the blind identification to be unique. Numerical results for the accuracy of these blind identification methods under different conditions were considered. Both a separable-variables least-squares (SLS) approach and an eigenvector-based algorithm for the multichannel blind deconvolution approach were used. The latter showed poor accuracy. Modifications for non-uniform time sampling were also developed. In addition, another method, which uses a model for the blood input, was compared. Results for the macroparameter K, which reflects the metabolic rate of glucose usage, using three regions with noise showed comparable accuracy for the separable-variables least-squares method and for the input model-based method, and slightly worse accuracy for SLS with the non-uniform sampling modification.
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
We propose an efficient algorithm to perform dual-input compartment modeling for generating perfusion maps in the liver. We implemented whole field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018.
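A minimal sketch of the whole-field idea for a linearized dual-input single-compartment model, dC/dt = ka·ca(t) + kp·cp(t) − k2·C(t): integrating once makes the model linear in (ka, kp, k2), and the normal equations for all voxels can be solved in one batched call instead of a per-voxel loop; the model form, inputs, and sizes are illustrative assumptions (the published method also compensates for delays):

```python
import numpy as np

rng = np.random.default_rng(10)
t = np.linspace(0, 60, 240)                      # ~4 frames/s over one minute
dt = t[1] - t[0]

def cumint(y):
    # Cumulative trapezoidal integral along the last (time) axis.
    out = np.zeros_like(y)
    out[..., 1:] = np.cumsum(0.5 * (y[..., 1:] + y[..., :-1]), axis=-1) * dt
    return out

ca = np.exp(-((t - 10) / 3) ** 2)                # arterial input (illustrative)
cp = np.exp(-((t - 18) / 6) ** 2)                # portal-venous input (illustrative)

# Synthesize tissue curves for V voxels from known rates, then add noise.
V = 5000
ka = rng.uniform(0.02, 0.10, V)                  # arterial inflow rate [1/s]
kp = rng.uniform(0.05, 0.20, V)                  # portal inflow rate  [1/s]
k2 = rng.uniform(0.01, 0.05, V)                  # outflow rate        [1/s]
C = np.zeros((V, t.size))
for i in range(1, t.size):
    dC = ka * ca[i - 1] + kp * cp[i - 1] - k2 * C[:, i - 1]
    C[:, i] = C[:, i - 1] + dt * dC
C += rng.normal(0, 1e-3, C.shape)

# Linearized model: C(t) = ka*Int(ca) + kp*Int(cp) - k2*Int(C).
X = np.empty((V, t.size, 3))
X[:, :, 0] = cumint(ca)                          # shared regressor
X[:, :, 1] = cumint(cp)                          # shared regressor
X[:, :, 2] = -cumint(C)                          # voxel-specific regressor
AtA = np.einsum('vti,vtj->vij', X, X)            # (V, 3, 3) normal matrices
Atb = np.einsum('vti,vt->vi', X, C)              # (V, 3) right-hand sides
theta = np.linalg.solve(AtA, Atb)                # all voxels in one batched solve
print("mean abs error in ka:", np.abs(theta[:, 0] - ka).mean())
```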
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Yoojin; Doughty, Christine
Input and output files used for fault characterization through numerical simulation using iTOUGH2. The synthetic data for the push period are generated by running a forward simulation (input parameters are provided in iTOUGH2 Brady GF6 Input Parameters.txt [InvExt6i.txt]). In general, the permeability of the fault gouge, damage zone, and matrix are assumed to be unknown. The input and output files are for the inversion scenario where only pressure transients are available at the monitoring well located 200 m above the injection well and only the fault gouge permeability is estimated. The input files are named InvExt6i, INPUT.tpl, FOFT.ins, and CO2TAB, and the output files are InvExt6i.out, pest.fof, and pest.sav (names below are display names). The table graphic in the data files below summarizes the inversion results, and indicates the fault gouge permeability can be estimated even if imperfect guesses are used for matrix and damage zone permeabilities, and permeability anisotropy is not taken into account.
Calibration of discrete element model parameters: soybeans
NASA Astrophysics Data System (ADS)
Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal
2018-05-01
Discrete element method (DEM) simulations are broadly used to get an insight into the flow characteristics of granular materials in complex particulate systems. DEM input parameters for a model are the critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine DEM input parameters for the Hertz-Mindlin model using soybeans as a granular material. To achieve this aim, a widely accepted calibration approach was used with a standard box-type apparatus. Further, qualitative and quantitative findings such as particle profile, height of kernels retained at the acrylic wall, and angle of repose from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes (a) material properties: particle geometric mean diameter (6.24 mm), spherical shape, and particle density (1220 kg m^{-3}); and (b) interaction parameters, particle-particle: coefficient of restitution (0.17), coefficient of static friction (0.26), coefficient of rolling friction (0.08); and particle-wall: coefficient of restitution (0.35), coefficient of static friction (0.30), coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.
Study of eigenfrequencies with the help of Prony's method
NASA Astrophysics Data System (ADS)
Drobakhin, O. O.; Olevskyi, O. V.; Olevskyi, V. I.
2017-10-01
Eigenfrequencies can be crucial in the design of a construction. They determine many limiting parameters of the structure; exceeding these values can lead to the structural failure of an object. This is especially important in the design of structures which support heavy equipment or are subjected to the forces of airflow. One of the most effective ways to acquire the frequency values is computer-based numerical simulation, but the existing methods do not allow one to acquire the whole range of needed parameters. It is well known that Prony's method is highly effective for the investigation of dynamic processes, so it is rational to adapt it for such investigations. Prony's method has an advantage over other numerical schemes in that it can process not only the results of numerical simulation but also real experimental data. The research was carried out for a computer model of a steel plate. The input data were obtained using the Dassault Systems SolidWorks computer package with the Simulation add-on, and were investigated with the help of Prony's method. The result of the numerical experiment shows that Prony's method can be used to investigate mechanical eigenfrequencies with good accuracy. The output of Prony's method contains not only the values of the frequencies themselves but also the amplitudes, initial phases, and decay factors of any given mode of oscillation, which can also be used in engineering.
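As background, a minimal Prony fit can be written in a few lines: a linear-prediction step recovers the characteristic polynomial, its roots give the modal poles, and a second linear fit gives amplitudes and phases. This sketch assumes uniform sampling and a user-chosen model order p; it is illustrative, not the authors' code:

```python
import numpy as np

def prony(y, p, dt):
    """Fit y[n] ~ sum_k h_k z_k^n; return frequencies [Hz], decay rates, amplitudes, phases."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    # 1) linear prediction: y[n] = sum_{j=1..p} a_j y[n-j]
    Y = np.column_stack([y[p - 1 - j: N - 1 - j] for j in range(p)])
    a = np.linalg.lstsq(Y, y[p:], rcond=None)[0]
    # 2) modal poles are roots of the characteristic polynomial
    z = np.roots(np.concatenate(([1.0], -a)))
    # 3) complex amplitudes from a second least-squares fit
    V = z[None, :] ** np.arange(N)[:, None]
    h = np.linalg.lstsq(V, y.astype(complex), rcond=None)[0]
    return np.angle(z) / (2 * np.pi * dt), np.log(np.abs(z)) / dt, np.abs(h), np.angle(h)

t = np.arange(0, 1, 1e-3)
sig = (np.exp(-1.5 * t) * np.cos(2 * np.pi * 40 * t)
       + 0.5 * np.exp(-3.0 * t) * np.cos(2 * np.pi * 95 * t))
print(prony(sig, p=4, dt=1e-3)[0])     # approx. +/-40 Hz and +/-95 Hz
```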
NASA Technical Reports Server (NTRS)
Brendley, K.; Chato, J. C.
1982-01-01
The parameters of the efflux from a helium dewar in space were numerically calculated. The flow was modeled as a one-dimensional compressible ideal gas with variable properties. The primary boundary conditions are flow with friction and flow with heat transfer and friction. Two PASCAL programs were developed to calculate the efflux parameters: EFFLUXD and EFFLUXM. EFFLUXD calculates the minimum mass flow for the given shield temperatures and shield heat inputs. It then calculates the pipe lengths, diameter, and fluid parameters which satisfy all boundary conditions. Since the diameter returned by EFFLUXD is only rarely of nominal size, EFFLUXM calculates the mass flow and shield heat exchange for given pipe lengths, diameter, and shield temperatures.
Influence of speckle image reconstruction on photometric precision for large solar telescopes
NASA Astrophysics Data System (ADS)
Peck, C. L.; Wöger, F.; Marino, J.
2017-11-01
Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction-limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude-reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter-class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with that of the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when that parameter is well estimated.
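A toy version of this sensitivity experiment can be set up with Gaussian stand-ins for the speckle transfer function (real STFs are far more structured; the widths and the "truth" image below are assumptions): blur with the "true" transfer function, deconvolve with a model whose input parameter is slightly wrong, and measure the photometric error.

```python
import numpy as np

def gaussian_otf(shape, width):
    """Gaussian stand-in for a speckle transfer function in the frequency domain."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-(fx**2 + fy**2) / (2 * width**2))

rng = np.random.default_rng(0)
truth = 1.0 + 0.05 * rng.standard_normal((128, 128))       # toy "photosphere"
observed = np.fft.ifft2(np.fft.fft2(truth) * gaussian_otf((128, 128), 0.08)).real
for w_model in (0.075, 0.080, 0.085):                       # under-/well-/over-estimated
    rec = np.fft.ifft2(np.fft.fft2(observed) / gaussian_otf((128, 128), w_model)).real
    err = 100 * np.std((rec - truth) / truth)
    print(f"model width {w_model:.3f}: photometric error {err:.2f}%")
```

Even this crude setup reproduces the qualitative finding: the error is near zero when the model parameter matches the true one and grows quickly as the mismatch increases.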
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sepke, Scott M.
In this note, the laser focal plane intensity profile for a beam modeled using the 3D ray trace package in HYDRA is determined. First, the analytical model is developed, followed by a practical numerical model for evaluating the resulting computationally intensive normalization factor for all possible input parameters.
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
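The core idea of treating an uncertain input as an inferred quantity can be shown with a deliberately tiny stand-in model: a random-walk Metropolis sampler that jointly samples a model parameter and an uncertain input with its own prior. The linear "simulator", priors, and step sizes below are all assumptions for illustration; the paper's actual machinery (DREAM-ZS, kernel-based error models) is far richer:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
K_true, q_true, sigma = 2.0, 1.5, 0.1
y_obs = K_true * x + q_true + sigma * rng.standard_normal(x.size)   # synthetic observations

def log_post(K, q):
    resid = y_obs - (K * x + q)                       # forward model: y = K*x + q
    return (-0.5 * np.sum(resid**2) / sigma**2        # Gaussian likelihood
            - 0.5 * (q - 1.4)**2 / 0.2**2)            # prior knowledge about uncertain input q

theta = np.array([1.0, 1.0])
lp = log_post(*theta)
samples = []
for _ in range(20000):
    prop = theta + 0.05 * rng.standard_normal(2)      # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:           # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
print("posterior mean of (K, q):", np.mean(samples[5000:], axis=0))
```

Because K and q jointly explain the data, ignoring the uncertainty in q would push its misfit into K, which is exactly the parameter-compensation effect the abstract describes.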
Numerical simulation of a sphere moving down an incline with identical spheres placed equally apart
Ling, Chi-Hai; Jan, Chyan-Deng; Chen, Cheng-lung; Shen, Hsieh Wen
1992-01-01
This paper describes a numerical study of an elastic sphere moving down an incline with a string of identical spheres placed equally apart. Two momentum equations and a moment equation formulated for the moving sphere are solved numerically for the instantaneous velocity of the moving sphere on an incline with different angles of inclination. Input parameters for the numerical simulation include the properties of the sphere (the radius, density, Poisson's ratio, and Young's modulus of elasticity), the coefficient of friction between the spheres, and a damping coefficient of the spheres during collision.
Astrobiological complexity with probabilistic cellular automata.
Vukotić, Branislav; Ćirković, Milan M
2012-08-01
The search for extraterrestrial life and intelligence constitutes one of the major endeavors in science, yet it has so far been quantitatively modeled only rarely, and then in a cursory and superficial fashion. We argue that probabilistic cellular automata (PCA) represent the best quantitative framework for modeling the astrobiological history of the Milky Way and its Galactic Habitable Zone. The relevant astrobiological parameters are to be modeled as the elements of the input probability matrix for the PCA kernel. With the underlying simplicity of the cellular automata constructs, this approach enables a quick analysis of the large and ambiguous space of input parameters. We perform a simple clustering analysis of typical astrobiological histories under a "Copernican" choice of input parameters and discuss the relevant boundary conditions of practical importance for planning and guiding empirical astrobiological and SETI projects. In addition to showing how the present framework is adaptable to more complex situations and updated observational databases from current and near-future space missions, we demonstrate how numerical results could offer a cautious rationale for continuation of practical SETI searches.
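A toy probabilistic cellular automaton in this spirit takes only a few lines: each site is empty, hosts simple life, or hosts a technological civilization, and the transition probabilities play the role of the input probability matrix. All probabilities below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
grid = np.zeros((60, 60), dtype=int)   # 0: no life, 1: simple life, 2: technological
p_emerge, p_evolve, p_reset, p_spread = 0.001, 0.005, 0.002, 0.01

for step in range(500):
    # count technological neighbors (von Neumann neighborhood, periodic boundaries)
    tech_nbrs = sum(np.roll(grid == 2, s, axis=a) for a in (0, 1) for s in (-1, 1))
    old = grid
    r = rng.random(old.shape)
    grid = old.copy()
    grid[(old == 0) & (r < p_emerge + p_spread * tech_nbrs)] = 1   # emergence / colonization
    grid[(old == 1) & (r < p_evolve)] = 2                          # evolution to intelligence
    grid[r < p_reset] = 0                                          # resetting events (e.g., GRBs)

print("simple:", (grid == 1).mean(), "technological:", (grid == 2).mean())
```

Sweeping the four probabilities and clustering the resulting occupancy histories is, in miniature, the kind of parameter-space exploration the abstract advocates.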
NONLINEAR AND FIBER OPTICS: Self-similar solution obtained by self-focusing of annular laser beams
NASA Astrophysics Data System (ADS)
Azimov, B. S.; Platonenko, Viktor T.; Sagatov, M. M.
1991-03-01
Numerical modeling of the steady-state self-focusing of an annular beam with thin "walls" is reported. An approximate self-similar solution is found to describe well the relationships observed in the numerical experiment for a special selection of the input parameters of the beam. This solution is used to estimate the focal length. Such self-similar self-focusing is shown to affect the whole power of the beam.
NASA Astrophysics Data System (ADS)
Bella, P.; Buček, P.; Ridzoň, M.; Mojžiš, M.; Parilák, L.'
2017-02-01
Production of multi-rifled seamless steel tubes is quite a new technology in Železiarne Podbrezová. Therefore, many technological questions emerge (process technology, input feedstock dimensions, material flow during drawing, etc.). Pilot experiments to fine-tune the process are costly and time-consuming, so numerical simulation is an alternative route to optimal production parameters: it reduces the number of experiments needed and lowers the overall development costs. However, for the numerical results to be considered relevant, they must be verified against actual plant trials. The main topic of this paper is the search for the optimal input feedstock dimensions for drawing a multi-rifled tube with dimensions Ø28.6 mm × 6.3 mm. As a secondary task, the effective position of the plug-die couple was determined via numerical simulation. Comparing the calculated results with actual numbers from plant trials, good agreement was observed.
NASA Astrophysics Data System (ADS)
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single-plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least-squares-based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. The effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
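The full Kalman filter plus recursive-least-squares input estimator is too involved to sketch here, but the underlying unbalance identification is easy to illustrate with a much simpler stand-in: the synchronous response y = A cos(Ωt) + B sin(Ωt) is linear in (A, B), so amplitude and phase follow from one least-squares solve. The rotor speed, signal, and noise values below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
omega = 120.0                                   # rotor speed, rad/s
amp_true, phase_true = 2.5e-3, 0.7              # unbalance response amplitude (m), phase (rad)
t = np.linspace(0.0, 0.5, 2000)
y = amp_true * np.cos(omega * t - phase_true) + 1e-4 * rng.standard_normal(t.size)

# y = A*cos(wt) + B*sin(wt) with A = amp*cos(phase), B = amp*sin(phase)
H = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
A, B = np.linalg.lstsq(H, y, rcond=None)[0]
print("amplitude:", np.hypot(A, B), "phase:", np.arctan2(B, A))
```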
Evaluation of the MV (CAPON) Coherent Doppler Lidar Velocity Estimator
NASA Technical Reports Server (NTRS)
Lottman, B.; Frehlich, R.
1997-01-01
The performance of the CAPON velocity estimator for coherent Doppler lidar is determined for typical space-based and ground-based parameter regimes. Optimal input parameters for the algorithm were determined for each regime. For weak signals, performance is described by the standard deviation of the good estimates and the fraction of outliers. For strong signals, the fraction of outliers is zero. Numerical effort was also determined.
NASA Astrophysics Data System (ADS)
Mangal, S. K.; Sharma, Vivek
2018-02-01
Magnetorheological (MR) fluids belong to a class of smart materials whose rheological characteristics, such as yield stress and viscosity, change in the presence of an applied magnetic field. In this paper, optimization of the MR fluid constituents is carried out with the on-state yield stress as the response parameter. For this, 18 samples of MR fluid are prepared using an L-18 orthogonal array. These samples are experimentally tested on a developed and fabricated electromagnet setup. It has been found that the yield stress of an MR fluid mainly depends on the volume fraction of the iron particles and the type of carrier fluid used in it. The optimal combination of the input parameters for the fluid is found to be mineral oil with a volume percentage of 67%, iron powder of 300 mesh size with a volume percentage of 32%, oleic acid with a volume percentage of 0.5%, and tetra-methyl-ammonium-hydroxide with a volume percentage of 0.7%. This optimal combination of input parameters gives a numerically obtained on-state yield stress of 48.197 kPa. An experimental confirmation test on the optimized MR fluid sample was then carried out, and the measured response parameter matches the numerically obtained value quite well (less than 1% error).
Behavioral Implications of Piezoelectric Stack Actuators for Control of Micromanipulation
NASA Technical Reports Server (NTRS)
Goldfarb, Michael; Celanovic, Nikola
1996-01-01
A lumped-parameter model of a piezoelectric stack actuator has been developed to describe actuator behavior for purposes of control system analysis and design, and in particular for microrobotic applications requiring accurate position and/or force control. In addition to describing the input-output dynamic behavior, the proposed model explains aspects of non-intuitive behavioral phenomena evinced by piezoelectric actuators, such as the input-output rate-independent hysteresis and the change in mechanical stiffness that results from altering electrical load. The authors incorporate a generalized Maxwell resistive capacitor as a lumped-parameter causal representation of rate-independent hysteresis. Model formulation is validated by comparing results of numerical simulations to experimental data.
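For readers unfamiliar with the generalized Maxwell resistive capacitor, the rate-independent hysteresis it represents can be reproduced with a small Maxwell-slip element bank: parallel springs that slip once their force reaches a break-away threshold. The stiffnesses and slip forces below are illustrative, not identified actuator values:

```python
import numpy as np

def maxwell_slip(x_path, k, f):
    """Displacement input x_path -> hysteretic force; k: stiffnesses, f: slip forces."""
    d = np.zeros_like(k)                 # elastic deflection of each element
    out = []
    x_prev = x_path[0]
    for x in x_path:
        d = d + (x - x_prev)             # elements track the input elastically...
        d = np.clip(d, -f / k, f / k)    # ...until they slip at |k*d| = f
        out.append(np.sum(k * d))
        x_prev = x
    return np.array(out)

k = np.array([5.0, 3.0, 1.0])            # element stiffnesses (illustrative)
f = np.array([0.2, 0.5, 1.0])            # break-away force of each element
x = np.concatenate([np.linspace(0, 1, 100), np.linspace(1, -1, 200), np.linspace(-1, 1, 200)])
y = maxwell_slip(x, k, f)                # plotting y against x traces a hysteresis loop
```

Because the construction involves no time scale at all, the loop shape is independent of how fast x is traversed, which is precisely the rate-independence the abstract refers to.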
NASA Astrophysics Data System (ADS)
Yang, Liu; Huang, Jun; Yi, Mingxu; Zhang, Chaopu; Xiao, Qian
2017-11-01
A numerical study of the aerodynamic noise generation of a high-efficiency propeller is carried out. Based on RANS, a three-dimensional numerical simulation is performed to obtain the aerodynamic performance of the propeller. The result of the aerodynamic analysis is given as input to the acoustic calculation. The sound is calculated using the Farassat 1A formulation, which is derived from the Ffowcs Williams-Hawkings equation, and compared with wind tunnel data. The propeller is then modified for noise reduction by changing its geometrical parameters such as diameter, chord width, and pitch angle. The trends in the aerodynamic analysis data and the acoustic calculation results are compared and discussed for the different modification tasks. Meaningful conclusions are drawn on the noise reduction of the propeller.
Enhancement of CFD validation exercise along the roof profile of a low-rise building
NASA Astrophysics Data System (ADS)
Deraman, S. N. C.; Majid, T. A.; Zaini, S. S.; Yahya, W. N. W.; Abdullah, J.; Ismail, M. A.
2018-04-01
The aim of this study is to enhance the validation of a CFD exercise along the roof profile of a low-rise building. An isolated gabled-roof house having a 26.6° roof pitch was simulated to obtain the pressure coefficients around the house. Validation of a CFD analysis against experimental data requires many input parameters. This study performed the CFD simulation based on the data from a previous study; where the input parameters were not clearly stated, new input parameters were established from the open literature. The numerical simulations were performed in FLUENT 14.0 by applying the Computational Fluid Dynamics (CFD) approach based on the steady RANS equations together with the RNG k-ɛ model. The CFD results were then analysed using quantitative tests (statistical analysis) and compared with the CFD results from the previous study. The statistical analysis results from the ANOVA test and error measures showed that the CFD results from the current study were in good agreement with, and exhibited the smallest errors relative to, the previous study. All the input data used in this study can be extended to other types of CFD simulation involving wind flow over an isolated single-storey house.
On the fusion of tuning parameters of fuzzy rules and neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of several problems. Fuzzy logic offers a simple way to arrive at a definite conclusion based upon vague, ambiguous, imprecise, noisy, or missing input information. Conventional learning algorithms for tuning the parameters of fuzzy rules from training input-output data usually end in a weak firing state; this weakens the fuzzy rule and makes it unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm for tuning the parameters of the fuzzy rules alongside a radial basis function neural network (RBFNN), trained on input-output data using the gradient descent method. With the new learning algorithm, the problem of weak firing under the conventional method is addressed. We illustrate the efficiency of the new learning algorithm by means of numerical examples, using MATLAB R2014(a) software to simulate the results. The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.
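As a rough illustration of gradient-descent tuning of basis-function parameters on input-output data (a toy RBF regression, not the authors' fuzzy-rule algorithm; the target function, learning rate, and network size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.linspace(-3, 3, 200)
Y = np.sin(X)
c = np.linspace(-3, 3, 7)                 # basis centers
s = np.ones(7)                            # basis widths
w = 0.1 * rng.standard_normal(7)          # output weights
lr = 0.05
for epoch in range(2000):
    Phi = np.exp(-(X[:, None] - c) ** 2 / (2 * s**2))      # (200, 7) activations
    err = Phi @ w - Y
    # gradients of the mean squared error w.r.t. weights, centers, and widths
    grad_w = Phi.T @ err / len(X)
    grad_c = (err[:, None] * Phi * w * (X[:, None] - c) / s**2).mean(axis=0)
    grad_s = (err[:, None] * Phi * w * (X[:, None] - c) ** 2 / s**3).mean(axis=0)
    w -= lr * grad_w
    c -= lr * grad_c
    s = np.maximum(s - lr * grad_s, 0.1)   # keep widths positive
Phi = np.exp(-(X[:, None] - c) ** 2 / (2 * s**2))
print("final MSE:", np.mean((Phi @ w - Y) ** 2))
```

Tuning centers and widths jointly with the weights is the analogue of adjusting membership-function parameters rather than only rule consequents.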
NASA Astrophysics Data System (ADS)
Hussain, Kamal; Pratap Singh, Satya; Kumar Datta, Prasanta
2013-11-01
A numerical investigation is presented to show the dependence of the patterning effect (PE) of an amplified signal in a bulk semiconductor optical amplifier (SOA) and an optical bandpass filter based amplifier on various input signal and filter parameters, considering both the cases of including and excluding intraband effects in the SOA model. The simulation shows that the variation of PE with input energy has a characteristic nature which is similar for both cases. However, the variation of PE with pulse width is quite different for the two cases, PE being independent of the pulse width when intraband effects are neglected in the model. We find a simple relationship between the PE and the signal pulse width. Using a simple treatment we study the effect of the amplified spontaneous emission (ASE) on the PE and find that the ASE has almost no effect on the PE in the range of energy considered here. The optimum filter parameters are determined to obtain an acceptable extinction ratio greater than 10 dB and a PE less than 1 dB for the amplified signal over a wide range of input signal energy and bit-rate.
NASA Astrophysics Data System (ADS)
Daneji, A.; Ali, M.; Pervaiz, S.
2018-04-01
Friction stir welding (FSW) is a form of solid-state welding process for joining metals, alloys, and selective composites. Over the years, FSW development has provided an improved way of producing welded joints and has consequently been accepted in numerous industries such as aerospace, automotive, rail, and marine. In FSW, the base metal properties control the material's plastic flow under the influence of a rotating tool, whereas the process and tool parameters play a vital role in the quality of the weld. In the current investigation, an array of square butt joints of 6061 aluminum alloy was welded under varying FSW process and tool-geometry parameters, after which the resulting welds were evaluated for the corresponding mechanical properties and welding defects. The study incorporates FSW process and tool parameters such as welding speed, pin height, and pin thread pitch as input parameters, while the weld-quality-related defects and mechanical properties are treated as output parameters. The experimentation paves the way to investigate the correlation between the inputs and the outputs, which is used as a tool to predict the optimized FSW process and tool parameters for a desired weld output of the base metal under investigation. The study also reflects on the effect of the said parameters on welding defects such as wormholes.
NASA Astrophysics Data System (ADS)
Gaik Tay, Kim; Cheong, Tau Han; Foong Lee, Ming; Kek, Sie Long; Abdul-Kahar, Rosmila
2017-08-01
In the previous work on Euler's spreadsheet calculator for solving an ordinary differential equation, Visual Basic for Applications (VBA) programming was used; however, a graphical user interface was not developed to capture user input. This weakness may confuse users about the input and output, since both are displayed in the same worksheet. Besides, the existing Euler's spreadsheet calculator is not interactive, as there is no prompt message if there is a mistake in inputting the parameters. On top of that, there are no user instructions to guide users in inputting the derivative function. Hence, in this paper, we address these limitations by developing a user-friendly and interactive graphical user interface that captures user input with instructions and interactive error prompts, implemented in VBA. This Euler's graphical-user-interface spreadsheet calculator does not act as a black box, as users can click on any cell in the worksheet to see the formula used to implement the numerical scheme. In this way, it can enhance self-learning and life-long learning in implementing the numerical scheme in a spreadsheet and, later, in any programming language.
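For reference, the numerical scheme the spreadsheet implements is the explicit Euler method, y_{i+1} = y_i + h f(t_i, y_i). A minimal version, shown in Python rather than VBA for brevity (the test equation is an assumption):

```python
def euler(f, t0, y0, h, n):
    """Explicit Euler for y' = f(t, y): returns the table of (t_i, y_i) rows."""
    t, y, rows = t0, y0, [(t0, y0)]
    for _ in range(n):
        y = y + h * f(t, y)        # y_{i+1} = y_i + h * f(t_i, y_i)
        t = t + h
        rows.append((t, y))
    return rows

for t, y in euler(lambda t, y: -2.0 * t * y, 0.0, 1.0, 0.1, 10):
    print(f"{t:4.1f}  {y:.6f}")    # exact solution is exp(-t**2)
```

Each printed row corresponds to one spreadsheet row, which is why exposing the cell formulas makes the scheme transparent to students.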
DOE Office of Scientific and Technical Information (OSTI.GOV)
N.D. Francis
The objective of this calculation is to develop a time-dependent in-drift effective thermal conductivity parameter that will approximate heat conduction, thermal radiation, and natural convection heat transfer using a single mode of heat transfer (heat conduction). In order to reduce the physical and numerical complexity of the heat transfer processes that occur (and must be modeled) as a result of the emplacement of heat-generating wastes, a single parameter will be developed that approximates all forms of heat transfer from the waste package surface to the drift wall (or from one surface exchanging heat with another). Subsequently, with this single parameter, one heat transfer mechanism (e.g., conduction heat transfer) can be used in the models. The resulting parameter is to be used as input in the drift-scale process-level models applied in total system performance assessments for the site recommendation (TSPA-SR). The format of this parameter will be a time-dependent table for direct input into the thermal-hydrologic (TH) and the thermal-hydrologic-chemical (THC) models.
Numerical, mathematical models of water and chemical movement in soils are used as decision aids for determining soil screening levels (SSLs) of radionuclides in the unsaturated zone. Many models require extensive input parameters which include uncertainty due to soil variabil...
NEWTONP - CUMULATIVE BINOMIAL PROGRAMS
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative binomial program, NEWTONP, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), can be used independently of one another. NEWTONP can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. NEWTONP calculates the probability p required to yield a given system reliability V for a k-out-of-n system. It can also be used to determine the Clopper-Pearson confidence limits (either one-sided or two-sided) for the parameter p of a Bernoulli distribution. NEWTONP can determine Bayesian probability limits for a proportion (if the beta prior has positive integer parameters). It can determine the percentiles of incomplete beta distributions with positive integer parameters. It can also determine the percentiles of F distributions and the median plotting positions in probability plotting. NEWTONP is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. NEWTONP is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The NEWTONP program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. NEWTONP was developed in 1988.
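The core computation can be reconstructed from the description (this is a sketch based on the abstract, not the original C source): the k-out-of-n reliability equals the regularized incomplete beta function I_p(k, n-k+1), whose derivative in p has a closed form, so Newton's method applies directly.

```python
from scipy.special import betainc, beta

def newtonp(k, n, V, tol=1e-12, max_iter=100):
    """Solve R(p) = V, where R(p) = P(at least k of n work) = I_p(k, n-k+1)."""
    p = 0.5
    for it in range(max_iter):
        R = betainc(k, n - k + 1, p)                          # system reliability at p
        dR = p**(k - 1) * (1 - p)**(n - k) / beta(k, n - k + 1)
        step = (R - V) / dR                                   # Newton step
        p = min(max(p - step, 1e-12), 1 - 1e-12)              # stay inside (0, 1)
        if abs(step) < tol:
            return p, it + 1                                  # p and iteration count
    return p, max_iter

p, iters = newtonp(k=3, n=5, V=0.95)
print(f"p = {p:.6f} after {iters} Newton iterations")
```

Reporting the iteration count alongside the root mirrors the behavior described for the original program.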
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Rosen, I. G.
1987-01-01
The approximation of optimal discrete-time linear quadratic Gaussian (LQG) compensators for distributed parameter control systems with boundary input and unbounded measurement is considered. The approach applies to a wide range of problems that can be formulated in a state space on which both the discrete-time input and output operators are continuous. Approximating compensators are obtained via application of the LQG theory and associated approximation results for infinite dimensional discrete-time control systems with bounded input and output. Numerical results for spline and modal based approximation schemes used to compute optimal compensators for a one dimensional heat equation with either Neumann or Dirichlet boundary control and pointwise measurement of temperature are presented and discussed.
Influences of system uncertainties on the numerical transfer path analysis of engine systems
NASA Astrophysics Data System (ADS)
Acri, A.; Nijman, E.; Acri, A.; Offner, G.
2017-10-01
Practical mechanical systems operate with some degree of uncertainty. In numerical models, uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs, or from rapidly changing forcing that can be best described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In particular, in this paper Wishart random matrix theory is applied to a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool largely implemented during the design of new engines. In this paper the influence of model parameter variability on the results obtained from the multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and the assessment of the different engine vibration sources. The effects of different levels of uncertainty are illustrated using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and the statistical distribution of results. The derived statistical information can be used to advance the knowledge of the multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.
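The Wishart construction referred to here is easy to demonstrate: a nominal symmetric positive-definite parameter matrix serves as the (scaled) Wishart scale matrix, and the degrees of freedom control how much the random samples disperse around it. The matrix and dof values below are illustrative:

```python
import numpy as np
from scipy.stats import wishart

K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])     # nominal SPD parameter matrix (e.g., stiffness)
dof = 20                                      # larger dof -> smaller dispersion
samples = wishart.rvs(df=dof, scale=K0 / dof, size=1000, random_state=1)
print("sample mean (close to K0):\n", samples.mean(axis=0))   # E[W] = dof * scale = K0
print("std of entry [0, 0]:", samples[:, 0, 0].std())
```

Each sample stays symmetric positive-definite by construction, which is why the Wishart family is a natural choice for randomizing physical property matrices.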
Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert
2016-08-01
Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," in Bulletin of Polish Academy of Science and Technology, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods are dependent on several input parameters, such as embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physics Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of the postural sway data of non-faller subjects to those of fallers. These two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of the selection of appropriate input parameters.
Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
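For the simplest case, a linear response y = a^T x with jointly Gaussian inputs, the analytic result is immediate: Var(y) = a^T Sigma a, and the off-diagonal terms of Sigma are exactly the contribution of the input correlations. A quick check against Monte Carlo (all values assumed for illustration):

```python
import numpy as np

a = np.array([1.0, 2.0])                               # linear response y = a^T x
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])             # correlated input covariance
var_full = a @ Sigma @ a                               # analytic: 1 + 4 + 2*1*2*0.6 = 7.4
var_indep = a @ np.diag(np.diag(Sigma)) @ a            # correlations ignored: 5.0
x = np.random.default_rng(0).multivariate_normal([0.0, 0.0], Sigma, 200_000)
print(var_full, var_indep, (x @ a).var())              # MC estimate matches var_full
```

The gap between var_full and var_indep is the error one commits by assuming independent inputs, which is the effect the paper's analytic method quantifies for general models.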
PAR -- Interface to the ADAM Parameter System
NASA Astrophysics Data System (ADS)
Currie, Malcolm J.; Chipperfield, Alan J.
PAR is a library of Fortran subroutines that provides convenient mechanisms for applications to exchange information with the outside world through input-output channels called parameters. Parameters enable a user to control an application's behaviour. PAR supports numeric, character, and logical parameters, and is currently implemented only on top of the ADAM parameter system. The PAR library permits parameter values to be obtained, with or without a variety of constraints. Results may be put into parameters to be passed on to other applications. Other facilities include setting a prompt string and suggested defaults. This document also introduces a preliminary C interface for the PAR library; this may be subject to change in the light of experience.
Studies of HZE particle interactions and transport for space radiation protection purposes
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Wilson, John W.; Schimmerling, Walter; Wong, Mervyn
1987-01-01
The main emphasis is on developing general methods for accurately predicting high-energy heavy ion (HZE) particle interactions and transport for use by researchers in mission planning studies, in evaluating astronaut self-shielding factors, and in spacecraft shield design and optimization studies. The two research tasks are: (1) to develop computationally fast and accurate solutions to the Boltzmann (transport) equation; and (2) to develop accurate HZE interaction models, from fundamental physical considerations, for use as inputs into these transport codes. Accurate solutions to the HZE transport problem have been formulated through a combination of analytical and numerical techniques. In addition, theoretical models for the input interaction parameters are under development: stopping powers, nuclear absorption cross sections, and fragmentation parameters.
Prediction of Film Cooling Effectiveness on a Gas Turbine Blade Leading Edge Using ANN and CFD
NASA Astrophysics Data System (ADS)
Dávalos, J. O.; García, J. C.; Urquiza, G.; Huicochea, A.; De Santiago, O.
2018-05-01
In this work, the area-averaged film cooling effectiveness (AAFCE) on a gas turbine blade leading edge was predicted by employing an artificial neural network (ANN) using as input variables: hole diameter, injection angle, blowing ratio, and hole and column pitch. The database used to train the network was built using computational fluid dynamics (CFD) based on a two-level full factorial design of experiments. The CFD numerical model was validated with an experimental rig, where a first-stage blade of a gas turbine was represented by a cylindrical specimen. The ANN architecture was composed of three layers with four neurons in the hidden layer, and Levenberg-Marquardt was selected as the ANN optimization algorithm. The AAFCE was successfully predicted by the ANN with a regression coefficient R² > 0.99 and a root mean square error RMSE = 0.0038. The ANN weight coefficients were used to estimate the relative importance of the input parameters. Blowing ratio was the most influential parameter, with a relative importance of 40.36%, followed by hole diameter. Additionally, by using the ANN model, the relationship between input parameters was analyzed.
Fast online generalized multiscale finite element method using constraint energy minimization
NASA Astrophysics Data System (ADS)
Chung, Eric T.; Efendiev, Yalchin; Leung, Wing Tat
2018-02-01
Local multiscale methods often construct multiscale basis functions in the offline stage without taking into account input parameters, such as source terms, boundary conditions, and so on. These basis functions are then used in the online stage with a specific input parameter to solve the global problem at a reduced computational cost. Recently, online approaches have been introduced, where multiscale basis functions are adaptively constructed in some regions to reduce the error significantly. In multiscale methods, it is desired to have only 1-2 iterations to reduce the error to a desired threshold. Using the Generalized Multiscale Finite Element Framework [10], it was shown that by choosing a sufficient number of offline basis functions, the error reduction can be made independent of physical parameters, such as scales and contrast. In this paper, our goal is to improve on this. Using our recently proposed approach [4] and special online basis construction in oversampled regions, we show that the error reduction can be made sufficiently large by appropriately selecting oversampling regions. Our numerical results show that one can achieve a three-order-of-magnitude error reduction, which is better than our previous methods. We also develop an adaptive algorithm and enrich in selected regions with large residuals. For our adaptive method, we show that the convergence rate can be determined by a user-defined parameter, and we confirm this by numerical simulations. The analysis of the method is presented.
Cilla, M; Pérez-Rey, I; Martínez, M A; Peña, Estefania; Martínez, Javier
2018-06-23
Motivated by the search for new strategies for fitting a material model, a new approach is explored in the present work. The use of numerical and complex algorithms based on machine learning techniques, such as support vector machines for regression, bagged decision trees, and artificial neural networks, is proposed for solving the parameter identification of constitutive laws for soft biological tissues. First, the mathematical tools were trained with analytical uniaxial data (circumferential and longitudinal directions) as inputs, and the corresponding material parameters of the Gasser-Ogden-Holzapfel strain energy function (SEF) as outputs. The train and test errors show great efficiency during the training process in finding correlations between inputs and outputs; moreover, the correlation coefficients were very close to 1. Second, the tool was validated with unseen observations of analytical circumferential and longitudinal uniaxial data. The results show an excellent agreement between the predicted material parameters of the SEF and the analytical curves. Finally, data from real circumferential and longitudinal uniaxial tests on different cardiovascular tissues were fitted, and thus the material model of these tissues was predicted. We found that the method was able to consistently identify model parameters, and we believe that the use of these numerical tools could lead to an improvement in the characterization of soft biological tissues. This article is protected by copyright. All rights reserved.
Processor design optimization methodology for synthetic vision systems
NASA Astrophysics Data System (ADS)
Wren, Bill; Tarleton, Norman G.; Symosek, Peter F.
1997-06-01
Architecture optimization requires numerous inputs from hardware to software specifications. The task of varying these input parameters to obtain an optimal system architecture with regard to cost, specified performance and method of upgrade considerably increases the development cost due to the infinitude of events, most of which cannot even be defined by any simple enumeration or set of inequalities. We shall address the use of a PC-based tool using genetic algorithms to optimize the architecture for an avionics synthetic vision system, specifically passive millimeter wave system implementation.
A dimension-wise analysis method for the structural-acoustic system with interval parameters
NASA Astrophysics Data System (ADS)
Xu, Menghui; Du, Jianke; Wang, Chong; Li, Yunlong
2017-04-01
The interval structural-acoustic analysis is mainly accomplished by interval and subinterval perturbation methods. Potential limitations of these intrusive methods include overestimation or the interval translation effect for the former and prohibitive computational cost for the latter. In this paper, a dimension-wise analysis method is thus proposed to overcome these potential limitations. In this method, a sectional curve of the system response surface along each input dimension is first extracted, and its minimal and maximal points are identified based on its Legendre polynomial approximation. Two input vectors, i.e., the minimal and maximal input vectors, are then assembled dimension-wise from the minimal and maximal points of all sectional curves. Finally, the lower and upper bounds of the system response are computed by deterministic finite element analyses at these two input vectors. Two numerical examples are studied to demonstrate the effectiveness of the proposed method and show that, compared to the interval and subinterval perturbation methods, better accuracy is achieved without much compromise on efficiency, especially for nonlinear problems with large interval parameters.
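A compact sketch of the dimension-wise procedure on an invented two-input response function (the Legendre degree, sampling, and toy model are assumptions; the actual method operates on the finite element structural-acoustic model):

```python
import numpy as np
from numpy.polynomial import legendre as L

def response(x):                        # stand-in for the structural-acoustic FE model
    return np.sin(x[0]) + 0.5 * x[0] * x[1] + x[1] ** 2

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])  # interval input parameters
mid = 0.5 * (lo + hi)
x_min, x_max = mid.copy(), mid.copy()
for d in range(len(lo)):
    s = np.linspace(lo[d], hi[d], 21)                  # samples along dimension d
    pts = np.tile(mid, (s.size, 1))
    pts[:, d] = s
    vals = np.array([response(p) for p in pts])        # sectional curve of the response
    coef = L.legfit(s, vals, deg=6)                    # Legendre polynomial approximation
    fine = np.linspace(lo[d], hi[d], 1001)
    fit = L.legval(fine, coef)
    x_min[d], x_max[d] = fine[np.argmin(fit)], fine[np.argmax(fit)]
print("response bounds approx.:", response(x_min), response(x_max))
```

Only two full model evaluations are needed at the end, which is the source of the method's efficiency relative to subinterval schemes.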
Identification of Linear and Nonlinear Sensory Processing Circuits from Spiking Neuron Data.
Florescu, Dorian; Coca, Daniel
2018-03-01
Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings are used to demonstrate the applicability and evaluate the performance of the proposed algorithms.
Combined input shaping and feedback control for double-pendulum systems
NASA Astrophysics Data System (ADS)
Mar, Robert; Goyal, Anurag; Nguyen, Vinh; Yang, Tianle; Singhose, William
2017-02-01
A control system combining input shaping and feedback is developed for double-pendulum systems subjected to external disturbances. The proposed control method achieves fast point-to-point response similar to open-loop input-shaping control. It also minimizes transient deflections during the motion of the system, and disturbance-induced residual swing using the feedback control. Effects of parameter variations such as the mass ratio of the double pendulum, the suspension length ratio, and the move distance were studied via numerical simulation. The most important results were also verified with experiments on a small-scale crane. The controller effectively suppresses the disturbances and is robust to modelling uncertainties and task variations.
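For context, the single-mode zero-vibration (ZV) shaper below shows the mechanics of input shaping: two impulses whose amplitudes and spacing are set by the mode's natural frequency and damping, superposed with the command. The double-pendulum controller in the paper shapes two modes, but the construction is analogous; parameter values here are illustrative:

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration shaper for a mode with frequency wn [rad/s] and damping zeta."""
    wd = wn * np.sqrt(1 - zeta**2)
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)              # impulse amplitudes sum to 1
    times = np.array([0.0, np.pi / wd])                # second impulse at half the damped period
    return amps, times

def shape(command, dt, amps, times):
    out = np.zeros(len(command) + int(round(times[-1] / dt)))
    for a, t in zip(amps, times):
        i = int(round(t / dt))
        out[i:i + len(command)] += a * command          # superpose delayed, scaled copies
    return out

amps, times = zv_shaper(wn=2.0, zeta=0.05)
shaped = shape(np.ones(500), dt=0.01, amps=amps, times=times)   # shaped step command
```

The second impulse cancels the residual oscillation excited by the first, which is why shaped open-loop moves end with little swing; the paper's feedback loop then handles disturbances the shaper cannot anticipate.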
Input Forces Estimation for Nonlinear Systems by Applying a Square-Root Cubature Kalman Filter.
Song, Xuegang; Zhang, Yuexin; Liang, Dakai
2017-10-10
This work presents a novel inverse algorithm to estimate time-varying input forces in nonlinear beam systems. With the system parameters determined, the input forces can be estimated in real time from dynamic responses, which can be used for structural health monitoring. In the process of input force estimation, the Runge-Kutta fourth-order algorithm was employed to discretize the state equations; a square-root cubature Kalman filter (SRCKF) was employed to suppress white noise; and the residual innovation sequences, a priori state estimate, gain matrix, and innovation covariance generated by the SRCKF were employed to estimate the magnitude and location of the input forces using a nonlinear estimator based on the least squares method. Numerical simulations of a large-deflection beam and an experiment on a linear beam constrained by a nonlinear spring were employed. The results demonstrated the accuracy of the nonlinear algorithm.
Thermomechanical conditions and stresses on the friction stir welding tool
NASA Astrophysics Data System (ADS)
Atthipalli, Gowtam
Friction stir welding (FSW) has been commercially used as a joining process for aluminum and other soft materials. However, the use of this process in the joining of hard alloys is still developing, primarily because of the lack of cost-effective, long-lasting tools. Here I have developed numerical models to understand the thermomechanical conditions experienced by the FSW tool and to improve its reusability. A heat transfer and visco-plastic flow model is used to calculate the torque and traverse force on the tool during FSW. The computed values of torque and traverse force are validated using the experimental results for FSW of AA7075, AA2524, AA6061, and Ti-6Al-4V alloys. The computed torque components are used to determine the optimum tool shoulder diameter based on the maximum use of torque and the maximum grip of the tool on the plasticized workpiece material. The estimation of the optimum tool shoulder diameter for FSW of AA6061 and AA7075 was verified with experimental results. The computed values of traverse force and torque are used to calculate the maximum shear stress on the tool pin and to determine the load-bearing ability of the tool pin. The load-bearing ability calculations are used to explain the failure of an H13 steel tool during welding of AA7075 and of a commercially pure tungsten tool during welding of L80 steel. Artificial neural network (ANN) models are developed to predict the important FSW output parameters as functions of selected input parameters. These ANNs consider tool shoulder radius, pin radius, pin length, welding velocity, tool rotational speed, and axial pressure as input parameters. The total torque, sliding torque, sticking torque, peak temperature, traverse force, maximum shear stress, and bending stress are considered as outputs, since they define the thermomechanical conditions around the tool during FSW. The developed ANN models are used to understand the effect of the various input parameters on the total torque and traverse force during FSW of AA7075 and 1018 mild steel, and to determine the tool safety factor for a wide range of input parameters. A numerical model is developed to calculate the strain and strain rates along streamlines during FSW; the strain and strain rate values are calculated for FSW of AA2524. Three simplified models are also developed for quick estimation of output parameters such as the material velocity field, torque, and peak temperature. The material velocity fields are computed by adopting an analytical method for the flow of an incompressible fluid between two discs, one rotating and one stationary. The peak temperature is estimated from a non-dimensional correlation with the dimensionless heat input, which is computed from known welding parameters and material properties. The torque is computed using an analytical function based on the shear strength of the workpiece material. These simplified models are shown to predict the corresponding output parameters successfully.
M-MRAC Backstepping for Systems with Unknown Virtual Control Coefficients
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje
2015-01-01
The paper presents an over-parametrization free certainty equivalence state feedback backstepping adaptive control design method for systems of any relative degree with unmatched uncertainties and unknown virtual control coefficients. It uses a fast prediction model to estimate the unknown parameters, which is independent of the control design. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters. The benefits of the approach are demonstrated in numerical simulations.
Gas Atomization of Molten Metal: Part I. Numerical Modeling Conception
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leon, Genaro Perez-de; Lamberti, Vincent E.; Seals, Roland D.
2016-02-01
This numerical analysis study entails creating and assessing a model that is capable of simulating molten metal droplets and the production of metal powder during the Gas Atomization (GA) method. The essential goal of this research is to gather more information on simulating the process of creating metal powder. The model structure and perspective were built through the application of governing equations and aspects that utilized factors such as gas dynamics, droplet dynamics, energy balance, heat transfer, fluid mechanics, and thermodynamics proposed in previous studies. The model is very simple and can be broken down into a set of inputs producing outputs. The inputs are the processing parameters, such as the initial temperature of the metal alloy, the gas pressure, and the size of the droplets; additional inputs include the selection of the metal alloy and the atomization gas, factoring in their properties. The outputs are the velocity and thermal profiles of the droplet and gas, which illustrate the speed of both as well as the rate of temperature change, or cooling rate, of the droplets. Here, the main focus is the temperature change and finding the right parameters to ensure that the metal powder is efficiently produced. Once the model was conceptualized and finalized, it was employed to verify the results of other previous studies.
Simulation tests of the optimization method of Hopfield and Tank using neural networks
NASA Technical Reports Server (NTRS)
Paielli, Russell A.
1988-01-01
The method proposed by Hopfield and Tank for using the Hopfield neural network with continuous valued neurons to solve the traveling salesman problem is tested by simulation. Several researchers have apparently been unable to successfully repeat the numerical simulation documented by Hopfield and Tank. However, as suggested to the author by Adams, it appears that the reason for those difficulties is that a key parameter value is reported erroneously (by four orders of magnitude) in the original paper. When a reasonable value is used for that parameter, the network performs generally as claimed. Additionally, a new method of using feedback to control the input bias currents to the amplifiers is proposed and successfully tested. This eliminates the need to set the input currents by trial and error.
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2018-06-01
Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach for the joint inference of parameters and inputs.
Imposition of physical parameters in dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Mai-Duy, N.; Phan-Thien, N.; Tran-Cong, T.
2017-12-01
In mesoscale simulations by dissipative particle dynamics (DPD), the motion of a fluid is modelled by a set of particles interacting in a pairwise manner; this motion has been shown to be governed by the Navier-Stokes equation, with the fluid's physical properties, such as viscosity, Schmidt number, isothermal compressibility, and relaxation and inertia time scales, in fact its whole rheology, resulting from the choice of the DPD model parameters. In this work, we explore the response of a DPD fluid with respect to its parameter space, where the model input parameters can be chosen in advance so that (i) the ratio between the relaxation and inertia time scales is fixed; (ii) the isothermal compressibility of water at room temperature is enforced; and (iii) the viscosity and Schmidt number can be specified as inputs. These impositions are possible with some extra degrees of freedom in the weighting functions for the conservative and dissipative forces. Numerical experiments show an improvement in solution quality over conventional DPD parameters/weighting functions, particularly for the number density distribution and computed stresses.
A variational approach to parameter estimation in ordinary differential equations.
Kaschek, Daniel; Timmer, Jens
2012-08-14
Ordinary differential equations are widely used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions, or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.
Song, Qi; Song, Yong-Duan
2011-12-01
This paper investigates the position and velocity tracking control problem of high-speed trains with multiple vehicles connected through couplers. A dynamic model reflecting nonlinear and elastic impacts between adjacent vehicles, as well as traction/braking nonlinearities and actuation faults, is derived. Neuroadaptive fault-tolerant control algorithms are developed to simultaneously account for factors such as input nonlinearities, actuator failures, and the uncertain impacts of in-train forces. The resultant control scheme is essentially independent of the system model and is primarily data-driven: with appropriate input-output data, the proposed control algorithms are capable of automatically generating the intermediate control parameters, neuro-weights, and compensation signals, literally producing the traction/braking force based upon input and response data only; the whole process requires neither precise information on the system model or system parameters nor human intervention. The effectiveness of the proposed approach is also confirmed through numerical simulations.
NASA Astrophysics Data System (ADS)
Dethlefsen, Frank; Tilmann Pfeiffer, Wolf; Schäfer, Dirk
2016-04-01
Numerical simulations of hydraulic, thermal, geomechanical, or geochemical (THMC) processes in the subsurface have been conducted for decades. Often, such simulations begin with a parameter set that is as realistic as possible. A base scenario is then calibrated against field observations. Finally, scenario simulations can be performed, for instance to forecast the system behavior after varying input data. In the context of subsurface energy and mass storage, however, model calibrations based on field data are often not available, as these storage operations have not yet been carried out. Consequently, the numerical models rely solely on the initially selected parameter set, and uncertainties arising from a lack of parameter values or process understanding may be neither perceivable nor quantifiable. Conducting THMC simulations in the context of energy and mass storage therefore deserves a particular review of the model parameterization and its input data, and such a review hardly exists to the required extent so far. Variability, or aleatory uncertainty, exists for geoscientific parameter values in general; parameters for which numerous data points are available, such as aquifer permeabilities, may be described statistically and thereby exhibit statistical uncertainty. In this case, sensitivity analyses can quantify the uncertainty in the simulation that results from varying the parameter. For other parameters, the lack of data quantity and quality implies a fundamental change in the ongoing processes when the parameter value is varied in numerical scenario simulations. As an example of such scenario uncertainty, varying the capillary entry pressure, one of the multiphase flow parameters, can either allow or completely inhibit the penetration of an aquitard by gas. As a last example, the uncertainty of cap-rock fault permeabilities, and consequently of potential leakage rates of stored gases into shallow compartments, is regarded by the authors of this study as recognized ignorance, since no realistic approach exists to determine this parameter and values are best guesses only. In addition to these aleatory uncertainties, an equivalent classification is possible for epistemic uncertainties, which describe the degree of understanding of processes such as the geochemical and hydraulic effects following potential gas intrusions from deeper reservoirs into shallow aquifers. As an outcome of this grouping of uncertainties, prediction errors of scenario simulations can be calculated by sensitivity analyses if the uncertainties are identified as statistical. However, if scenario uncertainties exist, or if recognized ignorance has to be attested to a parameter or process in question, the outcomes of simulations depend mainly on the modeler's decisions in choosing parameter values or interpreting the occurrence of processes. In that case, the informative value of numerical simulations is limited by ambiguous simulation results, which cannot be refined without improving the geoscientific database through longer-term laboratory or field studies, so that the effects of subsurface use may be predicted realistically. This discussion, amended by a compilation of available geoscientific data for parameterizing such simulations, is presented in this study.
NASA Astrophysics Data System (ADS)
Cara, Javier
2016-05-01
Modal parameters comprise natural frequencies, damping ratios, modal vectors and modal masses. In a theoretical framework, these parameters are the basis for solving vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state-space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state-space model when both input and output data are available (as in combined experimental-operational modal analysis) are derived in detail using invariants of the state-space model: the equations needed to compute natural frequencies, damping ratios and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has received little attention in the technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
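A short sketch of the well-known part of this computation: extracting natural frequencies and damping ratios from the eigenvalues of a continuous-time state matrix A. The modal masses, the abstract's main contribution, require the full state-space invariants and are not shown here.

```python
import numpy as np

def modal_parameters(A):
    """Natural frequencies [rad/s] and damping ratios from state matrix A."""
    lam = np.linalg.eigvals(A)
    lam = lam[np.imag(lam) > 0]         # one eigenvalue per conjugate pair
    wn = np.abs(lam)                    # natural frequencies
    zeta = -np.real(lam) / np.abs(lam)  # damping ratios
    return wn, zeta

# Single-DOF example with m = 1, c = 0.4, k = 25: wn = 5 rad/s, zeta = 0.04
A = np.array([[0.0, 1.0],
              [-25.0, -0.4]])
print(modal_parameters(A))
```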
A thermal vacuum test optimization procedure
NASA Technical Reports Server (NTRS)
Kruger, R.; Norris, H. P.
1979-01-01
An analytical model was developed that can be used to establish certain parameters of a thermal vacuum environmental test program based on an optimization of program costs. The model takes the form of a computer program that interacts with the user for the input of certain parameters. The program provides the user with a list of pertinent information regarding an optimized test program and graphs of some of the parameters. The model is a first attempt in this area and includes numerous simplifications. It appears useful as a general guide and provides a way of extrapolating past performance to future missions.
Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite
NASA Astrophysics Data System (ADS)
Gupta, Anand; Soni, P. K.; Krishna, C. M.
2018-04-01
The machining of Al3030-based composites on computer numerical control (CNC) high-speed milling machines has assumed importance because of their wide application in the aerospace, marine and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR) and tool wear rate (TWR), which usually depend on input process parameters, namely cutting speed, feed in mm/min, depth of cut and step-over ratio. Many researchers have worked in this area, but very few have also taken the step-over ratio (radial depth of cut) as an input variable. In this research work, the machining characteristics of Al3030 are studied on a high-speed CNC milling machine over the speed range of 3000 to 5000 rpm. Step-over ratio, depth of cut and feed rate are the other input variables considered. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high-speed CNC milling machine using a flat end mill of diameter 10 mm. Flatness, MRR and TWR are taken as output parameters. Flatness has been measured using a portable coordinate measuring machine (CMM). Linear regression models have been developed using Minitab 18 software, and the results are validated by conducting a selected additional set of experiments. The selection of input process parameters to obtain the best machining outputs is the key contribution of this research work.
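A hedged sketch of the kind of linear regression model the study fits in Minitab: ordinary least squares linking the four process parameters to one response over an L9-style layout. All numbers below are placeholders, not the paper's measurements.

```python
import numpy as np

# L9-style layout: speed [rpm], feed [mm/min], depth of cut [mm],
# step-over ratio. Placeholder values only.
X = np.array([
    [3000, 100, 0.2, 0.4],
    [3000, 150, 0.3, 0.5],
    [3000, 200, 0.4, 0.6],
    [4000, 100, 0.3, 0.6],
    [4000, 150, 0.4, 0.4],
    [4000, 200, 0.2, 0.5],
    [5000, 100, 0.4, 0.5],
    [5000, 150, 0.2, 0.6],
    [5000, 200, 0.3, 0.4],
])
flatness = np.array([0.09, 0.08, 0.07, 0.07, 0.08, 0.05, 0.06, 0.04, 0.05])

# Ordinary least squares: flatness ~ intercept + linear terms
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, flatness, rcond=None)
print(coef)  # intercept followed by one coefficient per parameter
```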
Removing Visual Bias in Filament Identification: A New Goodness-of-fit Measure
NASA Astrophysics Data System (ADS)
Green, C.-E.; Cunningham, M. R.; Dawson, J. R.; Jones, P. A.; Novak, G.; Fissel, L. M.
2017-05-01
Different combinations of input parameters to filament identification algorithms, such as disperse and filfinder, produce numerous different output skeletons. The skeletons are a one-pixel-wide representation of the filamentary structure in the original input image. However, these output skeletons may not necessarily be a good representation of that structure, and a given skeleton may not be as good a representation as another. Previously, there has been no mathematical "goodness-of-fit" measure to compare output skeletons to the input image; this has been assessed visually, introducing visual bias. We propose the application of the mean structural similarity index (MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the MSSIM to find the output skeletons that are the most mathematically similar to the original input image (the optimum, or "best," skeletons) for a given algorithm, and independently of the algorithm. This measure makes possible systematic parameter studies aimed at finding the subset of input parameter values returning optimum skeletons. It can also be applied to the output of non-skeleton-based filament identification algorithms, such as the Hessian matrix method. The MSSIM removes the need to visually examine thousands of output skeletons, and eliminates the visual bias, subjectivity, and limited reproducibility inherent in that process, representing a major improvement upon existing techniques. Importantly, it also allows further automation in the post-processing of output skeletons, which is crucial in this era of "big data."
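A minimal sketch of scoring a skeleton against an input image with the structural similarity index, here via scikit-image; the arrays below stand in for a real map and a disperse/filfinder skeleton, and ranking candidate skeletons then reduces to taking the highest score.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(1)
image = rng.random((128, 128))       # stand-in for the original input map
skeleton = np.zeros_like(image)
skeleton[64, 20:108] = 1.0           # stand-in one-pixel-wide skeleton

score = structural_similarity(image, skeleton,
                              data_range=image.max() - image.min())
print(f"MSSIM = {score:.3f}")        # higher means more similar
```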
Analysis of the connection of the timber-fiber concrete composite structure
NASA Astrophysics Data System (ADS)
Holý, Milan; Vráblík, Lukáš; Petřík, Vojtěch
2017-09-01
This paper deals with the implementation of the material parameters of the connection in complex models for the analysis of timber-fiber concrete composite structures. The aim of this article is to present a possible way of idealizing the continuous contact model that approximates the actual behavior of timber-fiber reinforced concrete structures. The presented model of the connection was derived from push-out shear tests. Nonlinear numerical analysis confirmed that very good agreement between the numerical simulations and the experiments can be achieved by a suitable choice of the material parameters of the continuous contact. Finally, an application for the analytical calculation of timber-fiber concrete composite structures has been developed for practical engineering use. The input material parameters for the analytical model were obtained from experimental data.
NASA Astrophysics Data System (ADS)
Byun, Do-Seong; Hart, Deirdre E.
2017-04-01
Regional and/or coastal ocean models can use tidal current harmonic forcing, together with tidal harmonic forcing along open boundaries, in order to successfully simulate tides and tidal currents. These inputs can be freely generated using online open-access data, but the data produced are not always at the resolution required for regional or coastal models. Subsequent interpolation procedures can produce tidal current forcing data errors for parts of the world's coastal ocean where tidal ellipse inclinations and phases move across the invisible mathematical "boundaries" between 359° and 0° (or 179° and 0°). In nature, such "boundaries" are in fact smooth transitions, but if they are not treated correctly during interpolation, they can produce inaccurate input data and hamper the accurate simulation of tidal currents in regional and coastal ocean models. These avoidable errors arise from procedural shortcomings involving vector embodiment problems (i.e., how a vector is represented mathematically, for example as velocities or as coordinates). Automated solutions for producing correct tidal ellipse parameter input data are possible if a series of steps is followed correctly, including the use of Cartesian coordinates during interpolation. This note comprises the first published description of scenarios in which tidal ellipse parameter interpolation errors can arise, and of a procedure to avoid these errors when generating tidal inputs for regional and/or coastal ocean numerical models. We explain how a straightforward sequence of data production, format conversion, interpolation, and format reconversion steps may be used to check for, and avoid, the potential occurrence of tidal ellipse interpolation and phase errors. This sequence is demonstrated via a case study of the M2 tidal constituent in the seas around Korea but is designed to be universally applicable. We also recommend employing tidal ellipse parameter calculation methods that avoid Foreman's (1978) "northern semi-major axis convention" since, as revealed in our analysis, this commonly used convention can result in inclination interpolation errors even when Cartesian coordinate-based "vector embodiment" solutions are employed.
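A small sketch of the Cartesian-component workaround described above: interpolating an angular parameter through the 359°-to-0° wrap by interpolating its unit-vector components. The period argument covers 180°-periodic quantities such as ellipse inclination; the example values are illustrative.

```python
import numpy as np

def interp_angle_deg(x, xp, ang_deg, period=360.0):
    """Interpolate angles by interpolating their unit-vector components."""
    phase = np.deg2rad(np.asarray(ang_deg) * (360.0 / period))
    c = np.interp(x, xp, np.cos(phase))
    s = np.interp(x, xp, np.sin(phase))
    return (np.rad2deg(np.arctan2(s, c)) * (period / 360.0)) % period

xp = [0.0, 1.0]
print(np.interp(0.5, xp, [358.0, 2.0]))         # naive result: 180.0 (wrong)
print(interp_angle_deg(0.5, xp, [358.0, 2.0]))  # ~0.0 (correct)
```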
Multiclassifier fusion in human brain MR segmentation: modelling convergence.
Heckemann, Rolf A; Hajnal, Joseph V; Aljabar, Paul; Rueckert, Daniel; Hammers, Alexander
2006-01-01
Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. By fusing multiple propagated label volumes, the segmentation can be improved. We developed a model that predicts the improvement of labelling accuracy and precision based on the number of segmentations used as input. Using a cross-validation study on brain image data as well as numerical simulations, we verified the model. Fit parameters of this model are potential indicators of the quality of a given label propagation method or the consistency of the input segmentations used.
Search-based model identification of smart-structure damage
NASA Technical Reports Server (NTRS)
Glass, B. J.; Macalou, A.
1991-01-01
This paper describes the use of a combined model and parameter identification approach, based on modal analysis and artificial intelligence (AI) techniques, for identifying damage or flaws in a rotating truss structure incorporating embedded piezoceramic sensors. This smart structure example is representative of a class of structures commonly found in aerospace systems and next generation space structures. Artificial intelligence techniques of classification, heuristic search, and an object-oriented knowledge base are used in an AI-based model identification approach. A finite model space is classified into a search tree, over which a variant of best-first search is used to identify the model whose stored response most closely matches that of the input. Newly encountered models can be incorporated into the model space. This adaptiveness demonstrates the potential for learning control. Following this output-error model identification, numerical parameter identification is used to further refine the identified model. Given the rotating truss example in this paper, noisy data corresponding to various damage configurations are input to both this approach and a conventional parameter identification method. The combination of AI-based model identification with parameter identification is shown to require smaller parameter corrections than the use of parameter identification alone.
NASA Astrophysics Data System (ADS)
Liu, Yang; Zhang, Jian; Pang, Zhicong; Wu, Weihui
2018-04-01
Selective laser melting (SLM) provides a feasible way to manufacture complex thin-walled parts directly; however, the energy input during the SLM process, derived from the laser power, scanning speed, layer thickness, scanning space, etc., has a great influence on the thin wall's quality. The aim of this work is to relate the thin wall's parameters (responses), namely track width, surface roughness and hardness, to the process parameters considered in this research (laser power, scanning speed and layer thickness) and to find the optimal manufacturing conditions. Design of experiments (DoE) with a central composite design was used to achieve better manufacturing quality. Mathematical models derived from the statistical analysis were used to establish the relationships between the process parameters and the responses, and the effects of the process parameters on each response were determined. A numerical optimization was then performed to find the optimal process set at which the quality features attain their desired values. Based on this study, the relationship between the process parameters and the SLM-built thin-walled structure was revealed, and the corresponding optimal process parameters can be used to manufacture thin-walled parts of high quality.
Jahantigh, Nabi; Keshavarz, Ali; Mirzaei, Masoud
2015-01-01
The aim of this study is to determine the optimum parameters of hybrid heating systems, such as the temperature and surface area of a radiant heater and the vent area, to achieve thermal comfort conditions. A factorial design-of-experiments (DoE) method is used to determine the optimum values of the input parameters. A 3D model of a virtual standing thermal manikin with real dimensions is considered in this study. The continuity, momentum, energy and species equations for turbulent flow, together with a physiological equation for thermal comfort, are numerically solved to study the heat, moisture and flow fields. The RNG k-ε model is used for turbulence modeling and the discrete ordinates (DO) method for radiation effects. The numerical results agree well with experimental data reported in the literature. The effect of various combinations of inlet parameters on thermal comfort is considered. According to the Pareto graph, some of these combinations have a significant effect on thermal comfort while requiring no additional energy, and can thus serve as useful tools. The hybrid system also yields a more symmetrical velocity distribution around the manikin.
Yuan Fang; Ge Sun; Peter Caldwell; Steven G. McNulty; Asko Noormets; Jean-Christophe Domec; John King; Zhiqiang Zhang; Xudong Zhang; Guanghui Lin; Guangsheng Zhou; Jingfeng Xiao; Jiquan Chen
2015-01-01
Evapotranspiration (ET) is arguably the most uncertain ecohydrologic variable for quantifying watershed water budgets. Although numerous ET and hydrological models exist, accurately predicting the effects of global change on water use and availability remains challenging because of model deficiency and/or a lack of input parameters. The objective of this study was to...
Zhang, Hang; Xu, Qingyan; Liu, Baicheng
2014-01-01
The rapid development of numerical modeling techniques has led to more accurate results in modeling metal solidification processes. In this study, the cellular automaton-finite difference (CA-FD) method was used to simulate the directional solidification (DS) process of single crystal (SX) superalloy blade samples. Experiments were carried out to validate the simulation results. Meanwhile, an intelligent model based on fuzzy control theory was built to optimize the complicated DS process. Several key parameters, such as the mushy zone width and the temperature difference at the cast-mold interface, were chosen as input variables. The input variables were processed with a multivariable fuzzy rule to obtain the output adjustment of the withdrawal rate (v), a key technological parameter. The multivariable fuzzy rule was built based on structural features of the casting, such as the relationship between section area and the delay of the temperature response to changes in v, as well as the professional experience of the operator. The fuzzy control model coupled with the CA-FD method could then be used to optimize v in real time during the manufacturing process. The optimized process proved more flexible and adaptive, yielding a steady and stray-grain-free DS process. PMID:28788535
Numerical simulations of flares on M dwarf stars. I - Hydrodynamics and coronal X-ray emission
NASA Technical Reports Server (NTRS)
Cheng, Chung-Chieh; Pallavicini, Roberto
1991-01-01
Flare-loop models are utilized to simulate the time evolution and physical characteristics of stellar X-ray flares by varying the values of flare-energy input and loop parameters. The hydrodynamic evolution is studied in terms of changes in the parameters of the mass, energy, and momentum equations within an area bounded by the chromosphere and the corona. The zone supports a magnetically confined loop for which processes are described including the expansion of heated coronal gas, chromospheric evaporation, and plasma compression at loop footpoints. The intensities, time profiles, and average coronal temperatures of X-ray flares are derived from the simulations and compared to observational evidence. Because the amount of evaporated material does not vary linearly with flare-energy input, large loops are required to produce the energy measured from stellar flares.
Numerical Modeling of Surface and Volumetric Cooling using Optimal T- and Y-shaped Flow Channels
NASA Astrophysics Data System (ADS)
Kosaraju, Srinivas
2017-11-01
The layout of T- and Y-shaped flow channel networks on a surface can be optimized for minimum pressure drop and pumping power. The results of the optimization are in the form of geometric parameters such as the length and diameter ratios of the stem and branch sections. While these flow channels are optimized for minimum pressure drop, they can also be used for surface and volumetric cooling applications such as heat exchangers, air conditioning and electronics cooling. In this paper, an effort has been made to study the heat transfer characteristics of multiple T- and Y-shaped flow channel configurations using numerical simulations. All configurations are subjected to the same input parameters and heat generation constraints. Comparisons are made with similar results published in the literature.
Parametric study of power absorption from electromagnetic waves by small ferrite spheres
NASA Technical Reports Server (NTRS)
Englert, Gerald W.
1989-01-01
Algebraic expressions in terms of elementary mathematical functions are derived for power absorption and dissipation by eddy currents and magnetic hysteresis in ferrite spheres. Skin depth is determined by using a variable inner radius in descriptive integral equations. Numerical results are presented for sphere diameters less than one wavelength. A generalized power absorption parameter for both eddy currents and hysteresis is expressed in terms of the independent parameters involving wave frequency, sphere radius, resistivity, and complex permeability. In general, the hysteresis phenomenon has a greater sensitivity to these independent parameters than do eddy currents over the ranges of independent parameters studied herein. Working curves are presented for obtaining power losses from input to the independent parameters.
AutoBayes Program Synthesis System Users Manual
NASA Technical Reports Server (NTRS)
Schumann, Johann; Jafari, Hamed; Pressburger, Tom; Denney, Ewen; Buntine, Wray; Fischer, Bernd
2008-01-01
Program synthesis is the systematic, automatic construction of efficient executable code from high-level declarative specifications. AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain; in particular, it solves parameter estimation problems. It has seen many successful applications at NASA and is currently being used, for example, to analyze simulation results for Orion. The input to AutoBayes is a concise description of a data analysis problem composed of a parameterized statistical model and a goal that is a probability term involving parameters and input data. The output is optimized and fully documented C/C++ code computing the values for those parameters that maximize the probability term. AutoBayes can solve many subproblems symbolically rather than having to rely on numeric approximation algorithms, thus yielding effective, efficient, and compact code. Statistical analysis is faster and more reliable, because effort can be focused on model development and validation rather than manual development of solution algorithms and code.
NASA Astrophysics Data System (ADS)
Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan
2016-11-01
In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
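A toy ABC rejection sampler illustrating the inference idea behind the framework; the paper's ABC algorithm and continuous-time models are more refined, and the discrete AR(1)-like system below is only a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    """Toy system: AR(1)-like response to a known input, plus noise."""
    u = np.sin(0.1 * np.arange(n))
    y = np.zeros(n)
    for k in range(1, n):
        y[k] = theta * y[k - 1] + u[k - 1]
    return y + 0.05 * rng.normal(size=n)

y_obs = simulate(0.7)  # "observed" data with theta_true = 0.7

# ABC rejection: draw from the prior, keep draws whose simulation is closest.
draws = rng.uniform(0.0, 1.0, size=5000)
dist = np.array([np.mean((simulate(th) - y_obs) ** 2) for th in draws])
posterior = draws[dist < np.quantile(dist, 0.01)]  # accept the closest 1%
print(posterior.mean(), posterior.std())           # posterior summary for theta
```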
NASA Astrophysics Data System (ADS)
Demirci, E.; Baykal, C.; Guler, I.
2016-12-01
In this study, the hydrodynamic conditions due to river discharge, wave action and sea level fluctuations over a seven-month period, and the morphological response of the Manavgat river mouth, are modeled with XBeach, a two-dimensional depth-averaged (2DH) numerical model developed to compute the natural coastal response during time-varying storm and hurricane conditions (Roelvink et al., 2010). The study area shows active nearshore morphology, and two jetties were therefore constructed at the river mouth between 1996 and 2000. Recently, Demirci et al. (2016) studied the impacts of excess river discharge and concurrent wave action and tidal fluctuations on the Manavgat river mouth morphology over 12 days (December 4th to 15th, 1998), while the construction of the jetties was under way. It was concluded that XBeach reproduced the final morphology fairly well with the calibrated set of input parameters. Here, the river mouth is modeled for an earlier period, before the construction of the jetties (August 1st, 1995 to March 8th, 1996), with a similar set of input parameters, to reveal the drastic morphological change near the mouth caused by high river discharge and severe storms over this longer period. The wave climate effect is determined with the wave hindcasting model W61, developed by Middle East Technical University-OERC, using NCEP-CFSR wind data together with sea level data. River discharge, wave and sea level data are introduced as input parameters to the XBeach numerical model, and the final computed morphological change is compared with the final bed level measurements. References: Demirci, E., Baykal, C., Guler, I., Ergin, A., & Sogut, E. (postponed). Numerical Modelling on Hydrodynamic Flow Conditions and Morphological Changes Using XBeach Near Manavgat River Mouth. Accepted as oral presentation at the 35th Int. Conf. on Coastal Eng., Istanbul, Turkey. Guler, I., Ergin, A., Yalçıner, A. C. (2003). Monitoring Sediment Transport Processes at Manavgat River Mouth, Antalya, Turkey. COPEDEC VI, 2003, Colombo, Sri Lanka. Roelvink, D., Reniers, A., van Dongeren, A., van Thiel de Vries, J., Lescinski, J. and McCall, R. (2010). XBeach Model Description and Manual. Unesco-IHE Institute for Water Education, Deltares and Delft Univ. of Technology. Report, June 21, 2010, version 6.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schalk, W.W. III
Early actions of emergency responders during hazardous material releases are intended to assess contamination and potential public exposure. As measurements are collected, an integration of model calculations and measurements can help in understanding the situation. This study applied a high-resolution version of the operational 3-D numerical models used by Lawrence Livermore National Laboratory to a limited meteorological and tracer data set to assist in the interpretation of the dispersion pattern on a 140 km scale. The data set was collected from a tracer release during the morning surface inversion and transition period in the complex terrain of the Snake River Plain near Idaho Falls, Idaho, in November 1993 by the United States Air Force. Sensitivity studies were conducted to determine the model input parameters that best represented the study environment. These studies showed that mixing and boundary layer heights, atmospheric stability, and rawinsonde data are the most important model input parameters affecting wind field generation and tracer dispersion. Numerical models and limited measurement data were used to interpret dispersion patterns through data analysis, model input determination, and sensitivity studies. Comparison of the best-estimate calculation with the measurement data showed that model results compared well with the aircraft data, but had only moderate success with the few surface measurements taken. The moderate success of the surface measurement comparison may be due to limited downward mixing of the tracer as a result of the model resolution, which was determined by the domain size selected to study the overall plume dispersion. 8 refs., 40 figs., 7 tabs.
NASA Astrophysics Data System (ADS)
Erazo, Kalil; Nagarajaiah, Satish
2017-06-01
In this paper an offline approach for output-only Bayesian identification of stochastic nonlinear systems is presented. The approach is based on a re-parameterization of the joint posterior distribution of the parameters that define a postulated state-space stochastic model class. In the re-parameterization the state predictive distribution is included, marginalized, and estimated recursively in a state estimation step using an unscented Kalman filter, bypassing state augmentation as required by existing online methods. In applications expectations of functions of the parameters are of interest, which requires the evaluation of potentially high-dimensional integrals; Markov chain Monte Carlo is adopted to sample the posterior distribution and estimate the expectations. The proposed approach is suitable for nonlinear systems subjected to non-stationary inputs whose realization is unknown, and that are modeled as stochastic processes. Numerical verification and experimental validation examples illustrate the effectiveness and advantages of the approach, including: (i) an increased numerical stability with respect to augmented-state unscented Kalman filtering, avoiding divergence of the estimates when the forcing input is unmeasured; (ii) the ability to handle arbitrary prior and posterior distributions. The experimental validation of the approach is conducted using data from a large-scale structure tested on a shake table. It is shown that the approach is robust to inherent modeling errors in the description of the system and forcing input, providing accurate prediction of the dynamic response when the excitation history is unknown.
Numerical analysis of the heat source characteristics of a two-electrode TIG arc
NASA Astrophysics Data System (ADS)
Ogino, Y.; Hirata, Y.; Nomura, K.
2011-06-01
Various kinds of multi-electrode welding processes are used to ensure high productivity in industrial fields such as shipbuilding, automotive manufacturing and pipe fabrication. However, it is difficult to obtain the optimum welding conditions for a specific product, because there are many operating parameters, and because welding phenomena are very complicated. In the present research, the heat source characteristics of a two-electrode TIG arc were numerically investigated using a 3D arc plasma model with a focus on the distance between the two electrodes. The arc plasma shape changed significantly, depending on the electrode spacing. The heat source characteristics, such as the heat input density and the arc pressure distribution, changed significantly when the electrode separation was varied. The maximum arc pressure of the two-electrode TIG arc was much lower than that of a single-electrode TIG. However, the total heat input of the two-electrode TIG arc was nearly constant and was independent of the electrode spacing. These heat source characteristics of the two-electrode TIG arc are useful for controlling the heat input distribution at a low arc pressure. Therefore, these results indicate the possibility of a heat source based on a two-electrode TIG arc that is capable of high heat input at low pressures.
NASA Astrophysics Data System (ADS)
Saidi, B.; Giraud-Moreau, L.; Cherouat, A.; Nasri, R.
2017-09-01
AISI 304L stainless steel sheets are commonly formed into a variety of shapes for applications in the industrial, architectural, transportation and automotive fields; they are also used for the manufacture of denture bases. In the field of dentistry, there is a need for personalized devices that are custom-made for the patient, and the single point incremental forming process is highly promising for manufacturing denture bases. Single point incremental forming (ISF) is an emerging process based on a spherical tool moved along a CNC-controlled tool path. One of the major advantages of this process is the ability to program several punch trajectories on the same machine in order to obtain different shapes. Several applications of this process exist in the medical field for the manufacture of personalized titanium prostheses (cranial plates, knee prostheses...) due to the need to customize products to each patient. The objective of this paper is to study the incremental forming of AISI 304L stainless steel sheets for future applications in dentistry. Considerable forces can occur during the incremental forming process, and controlling the forming force is particularly important to ensure the safe use of the CNC milling machine and to preserve the tooling and machinery. In this paper, the effect of four different process parameters on the maximum force is studied. The proposed approach uses an experimental design based on experimental results; an analysis of variance (ANOVA) was conducted to find the input parameters that minimize the maximum forming force. A numerical simulation of the incremental forming process is performed with the optimal input process parameters, and the numerical results are compared with the experimental ones.
Non-Gaussian statistics and optical rogue waves in stimulated Raman scattering.
Monfared, Yashar E; Ponomarenko, Sergey A
2017-03-20
We explore theoretically and numerically optical rogue wave formation in stimulated Raman scattering inside a hydrogen filled hollow core photonic crystal fiber. We assume a weak noisy Stokes pulse input and explicitly construct the input Stokes pulse ensemble using the coherent mode representation of optical coherence theory, thereby providing a link between optical coherence and rogue wave theories. We show that the Stokes pulse peak power probability distribution function (PDF) acquires a long tail in the limit of nearly incoherent input Stokes pulses. We demonstrate a clear link between the PDF tail magnitude and the source coherence time. Thus, the latter can serve as a convenient parameter to control the former. We explain our findings qualitatively using the concepts of statistical granularity and global degree of coherence.
Simulation of random road microprofile based on specified correlation function
NASA Astrophysics Data System (ADS)
Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Vlasov, V. G.; Fedotov, K. V.
2018-03-01
The paper aims to develop a numerical simulation method and algorithm for generating a random microprofile of special roads based on a specified correlation function. The paper uses methods of correlation, spectral and numerical analysis. It shows that, given known expressions for the input and output spectral characteristics of the filter, the transfer function of the generating filter can be calculated using a theorem on nonnegative fractional-rational factorization and integral transformation. The model of a random function equivalent to the real road surface microprofile enables assessment of suspension system parameters and identification of their ranges of variation.
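A brief sketch of the generating-filter idea for the common exponential correlation function R(tau) = sigma^2 exp(-alpha |tau|), whose shaping filter reduces to a first-order recursion driven by white noise; the parameter values are illustrative.

```python
import numpy as np

def road_profile(n, dx, sigma=0.01, alpha=0.2, seed=0):
    """Heights with correlation R(tau) = sigma^2 * exp(-alpha*|tau|)."""
    rng = np.random.default_rng(seed)
    a = np.exp(-alpha * dx)          # discrete first-order filter coefficient
    h = np.zeros(n)
    e = rng.normal(size=n)
    for k in range(1, n):
        # variance-preserving shaping-filter recursion driven by white noise
        h[k] = a * h[k - 1] + sigma * np.sqrt(1.0 - a * a) * e[k]
    return h

profile = road_profile(n=2000, dx=0.5)  # one height sample every 0.5 m
print(profile.std())                    # approximately sigma
```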
NASA Technical Reports Server (NTRS)
Reddy C. J.
1998-01-01
Model Based Parameter Estimation (MBPE) is presented in conjunction with the hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique for fast computation of the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FEM/MoM technique is used to form an integro-partial-differential equation to compute the electric field distribution of a cavity-backed aperture antenna. In MBPE, the electric field is expanded as a rational function of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is obtained over a frequency range, and from it the input characteristics of the antenna are obtained over a wide frequency range. Numerical results for an open coaxial line, a probe-fed coaxial cavity and cavity-backed microstrip patch antennas are presented. Good agreement is observed between MBPE and solutions computed at individual frequencies.
Transform methods for precision continuum and control models of flexible space structures
NASA Technical Reports Server (NTRS)
Lupi, Victor D.; Turner, James D.; Chun, Hon M.
1991-01-01
An open loop optimal control algorithm is developed for general flexible structures, based on Laplace transform methods. A distributed parameter model of the structure is first presented, followed by a derivation of the optimal control algorithm. The control inputs are expressed in terms of their Fourier series expansions, so that a numerical solution can be easily obtained. The algorithm deals directly with the transcendental transfer functions from control inputs to outputs of interest, and structural deformation penalties, as well as penalties on control effort, are included in the formulation. The algorithm is applied to several structures of increasing complexity to show its generality.
NASA Astrophysics Data System (ADS)
Wang, Y. M.; Xu, W. C.; Wu, S. Q.; Chai, C. W.; Liu, X.; Wang, S. H.
2018-03-01
Torsional oscillation is the dominant vibration mode of the impression cylinder of a printing machine (printing cylinder for short), directly limiting increases in printing speed and reducing print quality. To reduce this torsional vibration, an active control method for the printing cylinder is developed. Taking the excitation force and moment from the cylinder gap and the gripper-teeth opening and closing cam mechanism as variable parameters, the authors establish a dynamic mathematical model of the torsional vibration of the printing cylinder. The active torsional control method is based on a particle swarm optimization (PSO) algorithm that optimizes the input parameters of the servo motor. Furthermore, the input torque of the printing cylinder is optimized and then compared with the numerical simulation results.
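A minimal particle swarm optimization sketch of the kind used here to tune the servo motor inputs; the placeholder cost function stands in for the torsional vibration amplitude predicted by the cylinder model.

```python
import numpy as np

def cost(x):
    # Placeholder for "torsional vibration amplitude given motor inputs".
    return np.sum((x - np.array([1.5, -0.5])) ** 2, axis=-1)

rng = np.random.default_rng(0)
n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
x = rng.uniform(-5.0, 5.0, (n, dim))
v = np.zeros((n, dim))
pbest, pval = x.copy(), cost(x)
gbest = pbest[np.argmin(pval)]

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = cost(x)
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
    gbest = pbest[np.argmin(pval)]

print(gbest)  # converges near [1.5, -0.5]
```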
NASA Astrophysics Data System (ADS)
Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei
This study analyzes absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. The stability analysis includes the reference input, the actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibria of the error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady-state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability, based on Lur'e systems, is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. Unlike previous works, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant using the parametric robust Popov criterion. Moreover, a fuzzy current-controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is examined in terms of the various equilibrium points in the simulation example. Finally, comparisons are given to show the effectiveness of the analysis method.
Numerical simulation of an oxygen-fed wire-to-cylinder negative corona discharge in the glow regime
NASA Astrophysics Data System (ADS)
Yanallah, K.; Pontiga, F.; Castellanos, A.
2011-02-01
Negative glow corona discharge in flowing oxygen has been numerically simulated for a wire-to-cylinder electrode geometry. The corona discharge is modelled using a fluid approximation. The radial and axial distributions of charged and neutral species are obtained by solving the corresponding continuity equations, which include the relevant plasma-chemical kinetics. Continuity equations are coupled with Poisson's equation and the energy conservation equation, since the reaction rate constants may depend on the electric field and temperature. The experimental values of the current-voltage characteristic are used as input data into the numerical calculations. The role played by different reactions and chemical species is analysed, and the effect of electrical and geometrical parameters on ozone generation is investigated. The reliability of the numerical model is verified by the reasonable agreement between the numerical predictions of ozone concentration and the experimental measurements.
Discrete element weld model, phase 2
NASA Technical Reports Server (NTRS)
Prakash, C.; Samonds, M.; Singhal, A. K.
1987-01-01
A numerical method was developed for analyzing the tungsten inert gas (TIG) welding process. The phenomena being modeled include melting under the arc and the flow in the melt under the action of buoyancy, surface tension, and electromagnetic forces. The latter entails the calculation of the electric potential and the computation of electric current and magnetic field therefrom. Melting may occur at a single temperature or over a temperature range, and the electrical and thermal conductivities can be a function of temperature. Results of sample calculations are presented and discussed at length. A major research contribution has been the development of numerical methodology for the calculation of phase change problems in a fixed grid framework. The model has been implemented on CHAM's general purpose computer code PHOENICS. The inputs to the computer model include: geometric parameters, material properties, and weld process parameters.
Robust recognition of handwritten numerals based on dual cooperative network
NASA Technical Reports Server (NTRS)
Lee, Sukhan; Choi, Yeongwoo
1992-01-01
An approach to robust recognition of handwritten numerals using two networks operating in parallel is presented. The first network uses inputs in Cartesian coordinates, and the second network uses the same inputs transformed into polar coordinates. The paper describes how handling inputs in both Cartesian coordinates and their polar transformation makes the proposed approach robust to local and global variations of the input numerals. The required network structures and their learning scheme are discussed. Experimental results show that by tracking only a small number of distinctive features for each teaching numeral in each coordinate system, the proposed system can provide robust recognition of handwritten numerals.
NASA Astrophysics Data System (ADS)
Rudrapati, R.; Sahoo, P.; Bandyopadhyay, A.
2016-09-01
The main aim of the present work is to analyse the significance of turning parameters for surface roughness in computer numerically controlled (CNC) turning of aluminium alloy. Spindle speed, feed rate and depth of cut have been considered as machining parameters. Experimental runs have been conducted as per the Box-Behnken design method. After experimentation, the surface roughness is measured using a stylus profilometer. Factor effects have been studied through analysis of variance. Mathematical modelling has been done by response surface methodology to establish relationships between the input parameters and the output response. Finally, process optimization has been carried out with the teaching-learning-based optimization (TLBO) algorithm. The predicted turning condition has been validated through a confirmatory experiment.
Recommended Parameter Values for GENII Modeling of Radionuclides in Routine Air and Water Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Sandra F.; Arimescu, Carmen; Napier, Bruce A.
The GENII v2 code is used to estimate dose to individuals or populations from the release of radioactive materials into air or water. Numerous parameter values are required as input to this code. User-defined parameters cover the spectrum of chemical, meteorological, agricultural and behavioral data. This document is a summary of parameter values that reflect conditions in the United States. Reasonable regional and age-dependent data are summarized, although data availability and quality vary. The set of parameters described addresses scenarios for chronic air emissions or chronic releases to public waterways. Considerations for the special tritium and carbon-14 models are briefly addressed. GENII v2.10.0 is the current software version that this document supports.
Comment on "Symmetry and structure of quantized vortices in superfluid ³He-B"
NASA Astrophysics Data System (ADS)
Sauls, J. A.; Serene, J. W.
1985-10-01
Recent theoretical attempts to explain the observed vortex-core phase transition in superfluid ³He-B yield conflicting results. Variational calculations by Fetter and Theodorakis, based on realistic strong-coupling parameters, yield a phase transition in the Ginzburg-Landau region that is in qualitative agreement with the phase diagram. Numerically precise calculations by Salomaa and Volovik (SV), based on the Brinkman-Serene-Anderson (BSA) parameters, do not yield a phase transition between axially symmetric vortices. The ambiguity of these results is in part due to the large differences between the β parameters, which are inputs to the vortex free-energy functional. We comment on the relative merits of the β parameters based on recent improvements in the quasiparticle scattering amplitude and the BSA parameters used by SV.
Wrapping Python around MODFLOW/MT3DMS based groundwater models
NASA Astrophysics Data System (ADS)
Post, V.
2008-12-01
Numerical models that simulate groundwater flow and solute transport require a great amount of input data that is often organized into different files. A large proportion of the input data consists of spatially distributed model parameters. The model output consists of a variety of data such as heads, fluxes and concentrations. Typically, all of these files have different formats. Consequently, preparing input and managing output is a complex and error-prone task. Proprietary software tools are available that facilitate the preparation of input files and the analysis of model outcomes. The use of such software may be limited if it does not support all the features of the groundwater model or when the costs of such tools are prohibitive. Therefore, a Python library was developed that contains routines to generate input files and process output files of MODFLOW/MT3DMS-based models. The library is freely available and has an open structure, so that the routines can be customized and linked into other scripts and libraries. The current set of functions supports the generation of input files for MODFLOW and MT3DMS, including the capability to read spatially distributed input parameters (e.g. hydraulic conductivity) from PNG files. Both ASCII and binary output files can be read efficiently, allowing for visualization of, for example, solute concentration patterns in contour plots with superimposed flow vectors using matplotlib. Series of contour plots are then easily saved as an animation. The subroutines can also be used within scripts to calculate derived quantities such as the mass of a solute within a particular region of the model domain. Using Python as a wrapper around groundwater models provides an efficient and flexible way of processing input and output data that is not constrained by the limitations of third-party products.
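In the spirit of the library described above, a hedged sketch of reading a spatially distributed parameter from a PNG and plotting heads with flow vectors. The file names, the single-band PNG, and the raw float64 head layout are illustrative assumptions, not the actual MODFLOW/MT3DMS file formats.

```python
import numpy as np
import matplotlib.pyplot as plt

# Single-band grayscale PNG -> hydraulic conductivity zones (assumed 2D grid)
kzones = plt.imread("conductivity.png")   # assumed values in [0, 1]
hk = 1e-5 + kzones * 1e-3                 # map gray level to K [m/s]

# Heads assumed stored as a raw little-endian float64 grid (illustrative only)
heads = np.fromfile("heads.bin", dtype="<f8").reshape(kzones.shape)

# Darcy flux direction from the head gradient: q = -K * grad(h)
dhdy, dhdx = np.gradient(heads)
plt.contourf(heads, 20)
plt.quiver(-hk * dhdx, -hk * dhdy)
plt.savefig("heads_and_flow.png")
```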
NASA Technical Reports Server (NTRS)
Burns, R. E.
1973-01-01
The problem of predicting pollutant diffusion from a line source of arbitrary geometry is treated. The concentration at the line source may be varied arbitrarily with time. Special attention is given to the meteorological inputs, which act as boundary conditions for the problem, and a mixing layer of arbitrary depth is assumed. Numerical application of the derived theory indicates the combinations of meteorological parameters that may be expected to result in high pollution concentrations.
Optical sensor in planar configuration based on multimode interference
NASA Astrophysics Data System (ADS)
Blahut, Marek
2017-08-01
In this paper a numerical analysis of optical sensors based on multimode interference in a planar, one-dimensional, step-index configuration is presented. The structure consists of single-mode input and output waveguides and a multimode waveguide that guides only a few modes. The material parameters discussed refer to an SU-8 polymer waveguide on a SiO2 substrate. The optical system described is designed for the analysis of biological substances.
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
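A compact sketch of evaluating an LN cascade: convolve the input with a linear temporal filter, then apply a static nonlinearity to obtain a firing rate. The exponential filter and rectifying nonlinearity are illustrative choices; the paper derives the model-specific forms analytically.

```python
import numpy as np

dt = 0.001                                  # time step [s]
t = np.arange(0.0, 2.0, dt)
rng = np.random.default_rng(0)
stimulus = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.normal(size=t.size)

# Linear stage: causal exponential filter with unit area
tau = 0.02                                  # filter timescale [s]
tf = np.arange(0.0, 10 * tau, dt)
kernel = np.exp(-tf / tau)
kernel /= kernel.sum()
drive = np.convolve(stimulus, kernel)[: t.size]

# Static nonlinearity: rectification mapping drive to a firing rate [Hz]
rate = 20.0 * np.maximum(drive + 0.5, 0.0)
print(rate.mean(), rate.max())
```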
Automated Calibration For Numerical Models Of Riverflow
NASA Astrophysics Data System (ADS)
Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey
2017-04-01
Calibration has been fundamental to all types of hydro-system modeling since its beginnings, approximating the parameters that can mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity, and the uncertainty of the most suitable methods is analyzed. These optimization methods minimize an objective function that comprises synthetic measurements and simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model representing a 180-degree bend channel. The hydro-morphological numerical model exhibits a high level of ill-posedness in the mathematical problem. Minimization of the objective function by the different candidate methods indicates failure of some gradient-based methods, such as Newton conjugate gradient and BFGS. Others show partial convergence, such as Nelder-Mead, Polak-Ribière, L-BFGS-B, truncated Newton conjugate gradient, and trust-region Newton conjugate gradient. Still others, such as Levenberg-Marquardt and LeastSquareRoot, yield parameter solutions outside the physical limits. Moreover, there is a significant computational demand for genetic optimization methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic sequential least squares programming method and the stochastic Bayesian inference approach give the best optimization results. Keywords: automated calibration of a hydro-morphological dynamic numerical model, Bayesian inference theory, deterministic optimization methods.
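A minimal sketch of the calibration loop with SciPy's SLSQP (sequential least squares programming, the deterministic method found most reliable here), using a stand-in objective; in practice the objective would run the hydro-morphological model and compare its output with the synthetic measurements.

```python
import numpy as np
from scipy.optimize import minimize

synthetic_obs = np.array([0.8, 1.9])        # stand-in "measurements"

def objective(theta):
    # Stand-in for running the hydro-morphological model with parameters theta
    simulated = np.array([theta[0] ** 2, theta[0] + theta[1]])
    return np.sum((simulated - synthetic_obs) ** 2)

res = minimize(objective, x0=[0.5, 0.5], method="SLSQP",
               bounds=[(0.0, 2.0), (0.0, 2.0)])  # physical parameter limits
print(res.x, res.fun)
```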
Chemical Transport in a Fissured Rock: Verification of a Numerical Model
NASA Astrophysics Data System (ADS)
Rasmuson, A.; Narasimhan, T. N.; Neretnieks, I.
1982-10-01
Numerical models for simulating chemical transport in fissured rocks constitute powerful tools for evaluating the acceptability of geological nuclear waste repositories. Due to the very long-term, high toxicity of some nuclear waste products, the models are required in certain cases to predict spatial and temporal distributions of chemical concentration below 0.001% of the concentration released from the repository. Whether numerical models can provide such accuracies is a major question addressed in the present work. To this end we have verified a numerical model, TRUMP, which solves the advective diffusion equation in general three dimensions, with or without decay and source terms, using an integrated finite difference approach. The model was verified against known analytic solutions of the one-dimensional advection-diffusion problem, as well as the problem of advection-diffusion in a system of parallel fractures separated by spherical particles. The studies show that as long as the magnitude of advectance is equal to or less than that of conductance for the closed surface bounding any volume element in the region (that is, a numerical Peclet number < 2), the numerical method can indeed match the analytic solution within errors of ±10^-3 % or less. The realistic input parameters used in the sample calculations suggest that such a range of Peclet numbers is indeed likely to characterize deep groundwater systems in granitic and ancient argillaceous formations. Thus TRUMP in its present form provides a viable tool for nuclear waste evaluation studies. A sensitivity analysis based on the analytic solution suggests that the errors introduced by uncertainties in input parameters are likely to be larger than the computational inaccuracies introduced by the numerical model. Currently, a disadvantage of the TRUMP model is that the iterative method of solving the set of simultaneous equations is rather slow when time constants vary widely over the flow region. Although the iterative solution may be very desirable for large three-dimensional problems in order to minimize computer storage, it seems desirable to use a direct solver technique in conjunction with the mixed explicit-implicit approach whenever possible. Work in this direction is in progress.
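A quick check of the grid Peclet criterion quoted above, Pe = v dx / D < 2, with illustrative values:

```python
def grid_peclet(v, dx, D):
    """Cell Peclet number Pe = v * dx / D (advection vs. diffusion per cell)."""
    return v * dx / D

print(grid_peclet(v=1e-6, dx=0.5, D=1e-9))    # 500.0 -> grid too coarse
print(grid_peclet(v=1e-6, dx=0.001, D=1e-9))  # 1.0   -> criterion satisfied
```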
Parameter estimation in spiking neural networks: a reverse-engineering approach.
Rostro-Gonzalez, H; Cessac, B; Vieville, T
2012-04-01
This paper presents a reverse-engineering approach for parameter estimation in spiking neural networks (SNNs). We consider the deterministic evolution of a time-discretized network of spiking neurons with delayed synaptic transmission, modeled as a neural network of the generalized integrate-and-fire type. Our approach aims at bypassing the fact that parameter estimation in SNNs with delays is an NP-hard problem: the estimation task is reformulated as a linear programming (LP) problem so that the solution can be obtained in polynomial time. Moreover, the LP formulation makes explicit the fact that the reverse engineering of a neural network can be performed from the observation of the spike times. Furthermore, we point out how the LP adjustment mechanism is local to each neuron and has the same structure as a 'Hebbian' rule. Finally, we present a generalization of this approach to the design of input-output (I/O) transformations as a practical method to 'program' a spiking network, i.e. find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.
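A minimal sketch of this idea, under simplifying assumptions of our own (a delay-free discrete-time threshold network and a margin variable we add for robustness), might pose the weight-recovery problem as an LP with scipy.optimize.linprog:

```python
# Recovering synaptic weights from an observed spike raster via LP
# (illustrative simplification; not the paper's full delayed model).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, T, theta = 5, 200, 1.0                  # neurons, time steps, firing threshold
w_true = rng.normal(0, 0.6, (N, N))
x = np.zeros((T, N), dtype=int)
x[0] = rng.integers(0, 2, N)
for t in range(1, T):                      # generate the "observed" spikes
    x[t] = (x[t - 1] @ w_true.T >= theta).astype(int)

# For one target neuron i: w.x(t-1) >= theta + m if it spiked,
# and w.x(t-1) <= theta - m if it did not. Variables: [w_i1..w_iN, m].
i = 0
A_ub, b_ub = [], []
for t in range(1, T):
    if x[t, i] == 1:
        A_ub.append(np.append(-x[t - 1], 1.0)); b_ub.append(-theta)
    else:
        A_ub.append(np.append(x[t - 1], 1.0)); b_ub.append(theta)

c = np.zeros(N + 1); c[-1] = -1.0          # maximize the margin m
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(-5, 5)] * N + [(0, 1)])
print("feasible:", res.success, " margin:", res.x[-1])
```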
Research on axisymmetric aspheric surface numerical design and manufacturing technology
NASA Astrophysics Data System (ADS)
Wang, Zhen-zhong; Guo, Yin-biao; Lin, Zheng
2006-02-01
The key technology for aspheric machining is the generation of an exact machining path so that aspheric lenses can be machined with high accuracy and efficiency, even as traditional manual manufacturing has developed into today's numerical control (NC) machining. This paper presents a mathematical model relating a virtual cone to the aspheric surface equations, and discusses techniques for uniform grinding-wheel wear and error compensation in aspheric machining. Finally, based on the above, a software system for high-precision aspheric surface manufacturing is designed and implemented. The system computes the grinding-wheel path from input parameters and generates NC machining programs for aspheric surfaces.
NASA Astrophysics Data System (ADS)
Gazzarri, J. I.; Kesler, O.
In the first part of this two-paper series, we presented a numerical model of the impedance behaviour of a solid oxide fuel cell (SOFC) aimed at simulating the changes in the impedance spectrum induced by contact degradation at the interconnect-electrode and electrode-electrolyte interfaces. The purpose of that investigation was to develop a non-invasive diagnostic technique to identify degradation modes in situ. In the present paper, we appraise the predictive capabilities of the proposed method in terms of its robustness to uncertainties in the input parameters, many of which are very difficult to measure independently. We applied this technique to the degradation modes simulated in Part I, in addition to anode sulfur poisoning. Electrode delamination showed the highest robustness to input parameter variations, followed by interconnect oxidation and interconnect detachment. The most sensitive degradation mode was sulfur poisoning, due to strong parameter interactions. In addition, we simulate several scenarios with two simultaneous degradation modes, assessing the method's capabilities and limitations for predicting the electrochemical behaviour of SOFCs undergoing multiple simultaneous degradation modes.
Stochastic Resonance in an Underdamped System with Pinning Potential for Weak Signal Detection
Zhang, Haibin; He, Qingbo; Kong, Fanrang
2015-01-01
Stochastic resonance (SR) has proven to be an effective approach to weak sensor-signal detection. This study presents a new weak-signal detection method based on SR in an underdamped system incorporating a pinning potential model, first derived from magnetic domain-wall (DW) dynamics in ferromagnetic strips. We analyze the principle of the proposed underdamped pinning SR (UPSR) system, present detailed numerical simulations, and evaluate system performance. We also propose a strategy for selecting the damping factor and other system parameters to match the weak signal and input noise and to maximize the output signal-to-noise ratio (SNR). Finally, we have verified its effectiveness with both simulated and experimental input signals. Results indicate that the UPSR outperforms conventional SR (CSR) in weak-signal detection, with higher output SNR and better noise immunity and frequency-response capability. Moreover, the system can be designed accurately and efficiently owing to the sensitivity of its parameters and the diversity of the potential, features that also relax the small-parameter limitation of SR systems. PMID:26343662
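For readers who want to experiment, here is a hedged numerical sketch of underdamped SR using the standard quartic double well in place of the paper's magnetic-DW pinning potential; all parameter values are illustrative.

```python
# Euler-Maruyama simulation of an underdamped bistable oscillator driven by a
# weak periodic signal plus noise, with a crude output-SNR estimate.
import numpy as np

def simulate_upsr(gamma=0.5, a=1.0, b=1.0, A=0.3, f0=0.01, D=0.5,
                  dt=0.01, n=200_000, seed=0):
    """x'' = -gamma*x' - U'(x) + A*cos(2*pi*f0*t) + noise,
    with U(x) = -a*x^2/2 + b*x^4/4 (generic double well, not the DW model)."""
    rng = np.random.default_rng(seed)
    x, v = 1.0, 0.0
    out = np.empty(n)
    for k in range(n):
        t = k * dt
        force = a * x - b * x**3 + A * np.cos(2 * np.pi * f0 * t)
        v += (-gamma * v + force) * dt + np.sqrt(2 * D * dt) * rng.normal()
        x += v * dt
        out[k] = x
    return out

x = simulate_upsr()
# Output SNR at the drive frequency, from the periodogram:
spec = np.abs(np.fft.rfft(x - x.mean()))**2
freqs = np.fft.rfftfreq(x.size, d=0.01)
k0 = np.argmin(np.abs(freqs - 0.01))
noise_floor = np.median(spec[max(k0 - 50, 1):k0 + 50])
print("output SNR ~", spec[k0] / noise_floor)
```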
Compensator improvement for multivariable control systems
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.; Gresham, L. L.
1977-01-01
A theory and the associated numerical technique are developed for an iterative design improvement of the compensation for linear, time-invariant control systems with multiple inputs and multiple outputs. A strict constraint algorithm is used in obtaining a solution of the specified constraints of the control design. The result of the research effort is the multiple input, multiple output Compensator Improvement Program (CIP). The objective of the Compensator Improvement Program is to modify in an iterative manner the free parameters of the dynamic compensation matrix so that the system satisfies frequency domain specifications. In this exposition, the underlying principles of the multivariable CIP algorithm are presented and the practical utility of the program is illustrated with space vehicle related examples.
Optimizing microwave photodetection: input-output theory
NASA Astrophysics Data System (ADS)
Schöndorf, M.; Govia, L. C. G.; Vavilov, M. G.; McDermott, R.; Wilhelm, F. K.
2018-04-01
High fidelity microwave photon counting is an important tool for various areas from background radiation analysis in astronomy to the implementation of circuit quantum electrodynamic architectures for the realization of a scalable quantum information processor. In this work we describe a microwave photon counter coupled to a semi-infinite transmission line. We employ input-output theory to examine a continuously driven transmission line as well as traveling photon wave packets. Using analytic and numerical methods, we calculate the conditions on the system parameters necessary to optimize measurement and achieve high detection efficiency. With this we can derive a general matching condition depending on the different system rates, under which the measurement process is optimal.
Salomir, Rares; Rata, Mihaela; Cadis, Daniela; Petrusca, Lorena; Auboiroux, Vincent; Cotton, François
2009-10-01
Endocavitary high-intensity contact ultrasound (HICU) may offer interesting therapeutic potential for treating localized cancer of the esophageal or rectal wall. On-line MR guidance of the thermotherapy permits both excellent targeting of the pathological volume and accurate intra-operative monitoring of the temperature elevation. In this article, the authors address the issue of automatic temperature control for endocavitary phased-array HICU and propose a tailor-made thermal model for this specific application. The convergence and stability of the feedback loop were investigated against tuning errors in the controller's parameters and against input noise, through ex vivo experimental studies and through numerical simulations in which the nonlinear response of tissue was considered, as expected in vivo. An MR-compatible, 64-element, cooled-tip, endorectal cylindrical phased-array applicator of contact ultrasound was integrated with fast MR thermometry to provide automatic feedback control of the temperature evolution. An appropriate phase law was applied per set of eight adjacent transducers to generate a quasiplanar wave, or a slightly convergent one (over the circular dimension). A 2D physical model, compatible with on-line numerical implementation, took into account (1) the ultrasound-mediated energy deposition, (2) the heat diffusion in tissue, and (3) the heat-sink effect in the tissue adjacent to the tip-cooling balloon. This linear model was coupled to a PID compensation algorithm to obtain a multi-input single-output static-tuning temperature controller. Either the temperature at one static point in space (situated on the symmetry axis of the beam) or the maximum temperature in a user-defined ROI was tracked according to a predefined target curve. The convergence domain in the space of the controller's parameters was experimentally explored ex vivo. The behavior of the static-tuning PID controller was numerically simulated based on a discrete-time iterative solution of the bioheat transfer equation in 3D, considering temperature-dependent ultrasound absorption and blood perfusion. The intrinsic accuracy of the implemented controller was approximately 1% in ex vivo trials when correct estimates of the energy deposition and heat diffusivity were provided. Moreover, the feedback loop demonstrated excellent convergence and stability over a wide range of the controller's parameters, deliberately set to erroneous values. In the extreme case of strong underestimation of the ultrasound energy deposition in tissue, the temperature tracking curve alone, at the initial stage of the MR-controlled HICU treatment, was not a sufficient indicator of globally stable behavior of the feedback loop. Our simulations predicted that the controller would be able to compensate for tissue perfusion and for temperature-dependent ultrasound absorption, although these effects were not included in the controller's equation. The explicit pattern of the acoustic field was not required as input information for the controller, avoiding time-consuming numerical operations. The study demonstrated the potential advantages of PID-based automatic temperature control adapted to phased-array MR-guided HICU therapy. Further studies will address the integration of this ultrasound device with a miniature RF coil for high-resolution MRI and, subsequently, the experimental behavior of the controller in vivo.
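The static-tuning PID idea can be illustrated on a toy first-order thermal plant. The sketch below is not the authors' calibrated controller; the gains, time constant, and target curve are assumptions chosen only to show the discrete PID structure.

```python
# Discrete PID tracking of a target temperature-elevation curve on a toy
# first-order tissue model (all constants assumed for illustration).
import numpy as np

dt, n = 1.0, 300                      # s per MR thermometry frame, frame count
tau, gain = 60.0, 0.8                 # toy tissue time constant, K per unit power
target = np.minimum(np.arange(n) * dt / 60.0 * 8.0, 8.0)  # ramp to +8 K, hold

Kp, Ki, Kd = 1.2, 0.05, 0.3           # static-tuning PID gains (assumed)
T, integ, prev_err = 0.0, 0.0, 0.0
history = []
for k in range(n):
    err = target[k] - T
    integ += err * dt
    deriv = (err - prev_err) / dt
    power = max(Kp * err + Ki * integ + Kd * deriv, 0.0)  # no negative power
    prev_err = err
    # first-order plant: dT/dt = -T/tau + gain*power
    T += (-T / tau + gain * power) * dt
    history.append(T)

print("final tracking error: %.2f K" % (target[-1] - history[-1]))
```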
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
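As an illustration of technique (i), the sketch below computes a classical least-squares prediction interval for a cubic metamodel of synthetic single-input single-output data; the basis, noise level, and query point are arbitrary choices, not the paper's experiments.

```python
# 95% prediction interval from an ordinary least-squares polynomial fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

X = np.vander(x, 4)                       # cubic basis
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = x.size - X.shape[1]
s2 = np.sum((y - X @ beta) ** 2) / dof    # residual variance
XtX_inv = np.linalg.inv(X.T @ X)

x_new = 0.37
xv = np.vander([x_new], 4)[0]
pred = xv @ beta
half = stats.t.ppf(0.975, dof) * np.sqrt(s2 * (1 + xv @ XtX_inv @ xv))
print(f"95% prediction interval at x={x_new}: {pred:.3f} +/- {half:.3f}")
```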
TAIR- TRANSONIC AIRFOIL ANALYSIS COMPUTER CODE
NASA Technical Reports Server (NTRS)
Dougherty, F. C.
1994-01-01
The Transonic Airfoil analysis computer code, TAIR, was developed to employ a fast, fully implicit algorithm to solve the conservative full-potential equation for the steady transonic flow field about an arbitrary airfoil immersed in a subsonic free stream. The full-potential formulation is considered exact under the assumptions of irrotational, isentropic, and inviscid flow. These assumptions are valid for a wide range of practical transonic flows typical of modern aircraft cruise conditions. The primary features of TAIR include: a new fully implicit iteration scheme which is typically many times faster than classical successive line overrelaxation algorithms; a new, reliable artificial density spatial differencing scheme treating the conservative form of the full-potential equation; and a numerical mapping procedure capable of generating curvilinear, body-fitted finite-difference grids about arbitrary airfoil geometries. Three aspects emphasized during the development of the TAIR code were reliability, simplicity, and speed. The reliability of TAIR comes from two sources: the new algorithm employed and the implementation of effective convergence monitoring logic. TAIR achieves ease of use by employing a "default mode" that greatly simplifies code operation, especially by inexperienced users, and many useful options including: several airfoil-geometry input options, flexible user controls over program output, and a multiple solution capability. The speed of the TAIR code is attributed to the new algorithm and the manner in which it has been implemented. Input to the TAIR program consists of airfoil coordinates, aerodynamic and flow-field convergence parameters, and geometric and grid convergence parameters. The airfoil coordinates for many airfoil shapes can be generated in TAIR from just a few input parameters. Most of the other input parameters have default values which allow the user to run an analysis in the default mode by specifying only a few input parameters. Output from TAIR may include aerodynamic coefficients, the airfoil surface solution, convergence histories, and printer plots of Mach number and density contour maps. The TAIR program is written in FORTRAN IV for batch execution and has been implemented on a CDC 7600 computer with a central memory requirement of approximately 155K (octal) of 60 bit words. The TAIR program was developed in 1981.
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so, since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies, and there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle, and we catalog the factors that are most critical to represent accurately in the model. CD bias values are based on state-of-the-art mask manufacturing data; variations of the other parameters are postulated, highlighting the need for improved metrology and awareness.
Modeling of a ring Rosen-type piezoelectric transformer by Hamilton's principle.
Nadal, Clément; Pigache, Francois; Erhart, Jiří
2015-04-01
This paper deals with the analytical modeling of a ring Rosen-type piezoelectric transformer. The developed model is based on a Hamiltonian approach, enabling the main parameters to be obtained and the performance to be evaluated for the first radial vibratory modes. The methodology is detailed, and the final results, both the input admittance and the electric potential distribution on the surface of the secondary part, are compared with numerical and experimental ones for discussion and validation.
NASA Astrophysics Data System (ADS)
Karmakar, Pralay Kumar
This article describes in detail the equilibrium structure of the solar interior plasma (SIP) and solar wind plasma (SWP) within the framework of the gravito-electrostatic sheath (GES) model. This model gives a precise definition of the solar surface boundary (SSB), the surface-origin mechanism of the subsonic SWP, and its supersonic acceleration. Equilibrium parameters such as plasma potential, self-gravity, population density, flow, their gradients, and all the relevant inhomogeneity scale lengths are numerically calculated and analyzed as an initial value problem. The physical significance of the structure condition for the SSB is discussed. The plasma oscillation and Jeans time scales are also plotted and compared. In addition, different coupling parameters and electric current profiles are numerically studied. The current profiles exhibit an important directional reversibility, i.e., an electrodynamical transition from negative to positive values, occurring a few Jeans lengths beyond the SSB. The virtual spherical surface at the current-reversal point, where the net current vanishes, behaves like the floating surface of a real physical wall. Our investigation indicates that the SWP behaves as an ion-current-carrying plasma system. The basic mechanism behind the GES formation and its distinctions from a conventional plasma sheath are discussed. The electromagnetic properties of the Sun derived from our model with the most accurate available inputs are compared with those of others. These results are useful as input elements for studying the linear and nonlinear dynamics of various solar plasma waves, oscillations, and instabilities.
Study on loading path optimization of internal high pressure forming process
NASA Astrophysics Data System (ADS)
Jiang, Shufeng; Zhu, Hengda; Gao, Fusheng
2017-09-01
In internal high-pressure forming, no closed-form formula relates the process parameters to the forming results. This article uses numerical simulation to obtain several sets of input parameters and the corresponding outputs, trains a BP neural network to capture their mapping relationship, and combines the evaluation parameters by a weighted-sum method into a single formula for forming quality. The trained BP neural network is then embedded in a particle swarm optimization, with the quality formula serving as the fitness function, and the optimization is carried out over the admissible range of each parameter. The results show that the parameters obtained by the combined BP neural network and particle swarm optimization algorithm meet practical requirements, demonstrating that the method can optimize the process parameters of the internal high-pressure forming process.
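A hedged sketch of this surrogate-plus-swarm workflow is given below. The "simulation" is a synthetic stand-in quality function, and the network size and PSO constants are assumptions, not values from the article.

```python
# Fit a small neural-network surrogate to simulated (input, quality) pairs,
# then search it with a hand-rolled global-best particle swarm.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def simulated_quality(p):                 # placeholder for FE simulation output
    return (p[:, 0] - 0.6) ** 2 + 0.5 * (p[:, 1] - 0.3) ** 2

P = rng.uniform(0, 1, (200, 2))           # e.g. scaled pressure and axial feed
Q = simulated_quality(P)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(P, Q)

# Particle swarm minimizing the surrogate:
n_p, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(0, 1, (n_p, 2)); vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), net.predict(pos)
for _ in range(100):
    gbest = pbest[np.argmin(pbest_f)]
    vel = (w * vel + c1 * rng.random((n_p, 1)) * (pbest - pos)
           + c2 * rng.random((n_p, 1)) * (gbest - pos))
    pos = np.clip(pos + vel, 0, 1)
    f = net.predict(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
print("optimal parameters ~", pbest[np.argmin(pbest_f)])
```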
Residents' numeric inputting error in computerized physician order entry prescription.
Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong
2016-04-01
Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods in human-computer interaction (HCI), produce different error rates and types, but this has received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, and to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row of the main keyboard vs. numeric keypad) and urgency level (urgent vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were also measured in calm (baseline) prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row of the main keyboard. After controlling for performance in the calm prescribing situation, the effect of input method disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were of the omission or substitution type, but the proportions of transposition and intrusion error types were significantly higher than in previous research. For the numbers 3, 8, and 9, the less common digits in prescriptions, the error rate was higher, posing a substantial risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. Inputting with the numeric keypad is recommended, as it produced lower error rates in urgent situations. An alternative design could increase the sensitivity of the keys with lower frequency of occurrence and of the decimal point. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from the spatial incidence of errors found in this study.
Charging of the Van Allen Probes: Theory and Simulations
NASA Astrophysics Data System (ADS)
Delzanno, G. L.; Meierbachtol, C.; Svyatskiy, D.; Denton, M.
2017-12-01
The electrical charging of spacecraft has been a known problem since the beginning of the space age. Its consequences can vary from moderate (single event upsets) to catastrophic (total loss of the spacecraft) depending on a variety of causes, some of which could be related to the surrounding plasma environment, including emission processes from the spacecraft surface. Because of its complexity and cost, this problem is typically studied using numerical simulations. However, inherent unknowns in both plasma parameters and spacecraft material properties can lead to inaccurate predictions of overall spacecraft charging levels. The goal of this work is to identify and study the driving causes and necessary parameters for particular spacecraft charging events on the Van Allen Probes (VAP) spacecraft. This is achieved by making use of plasma theory, numerical simulations, and on-board data. First, we present a simple theoretical spacecraft charging model, which assumes a spherical spacecraft geometry and is based upon the classical orbital-motion-limited approximation. Some input parameters to the model (such as the warm plasma distribution function) are taken directly from on-board VAP data, while other parameters are either varied parametrically to assess their impact on the spacecraft potential, or constrained through spacecraft charging data and statistical techniques. Second, a fully self-consistent numerical simulation is performed by supplying these parameters to CPIC, a particle-in-cell code specifically designed for studying plasma-material interactions. CPIC simulations remove some of the assumptions of the theoretical model and also capture the influence of the full geometry of the spacecraft. The CPIC numerical simulation results will be presented and compared with on-board VAP data. This work will set the foundation for our eventual goal of importing the full plasma environment from the LANL-developed SHIELDS framework into CPIC, in order to more accurately predict spacecraft charging.
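A toy version of the first step, a spherical orbital-motion-limited current balance solved for the floating potential, can be written in a few lines. The plasma density and temperatures below are illustrative inputs, not Van Allen Probes data.

```python
# Floating potential of a negatively charged sphere from a simple OML
# electron/ion current balance (Maxwellian plasma, hydrogen ions).
import numpy as np
from scipy.optimize import brentq

e, kB, me, mi = 1.602e-19, 1.381e-23, 9.109e-31, 1.673e-27
n0, Te, Ti = 1e6, 1.0e7, 1.0e7        # m^-3 and Kelvin (assumed hot plasma)

def thermal_current(n, T, m):          # random thermal flux per unit area
    return e * n * np.sqrt(kB * T / (2 * np.pi * m))

def net_current(phi):                  # phi < 0: electrons repelled, ions attracted
    Ie = thermal_current(n0, Te, me) * np.exp(e * phi / (kB * Te))
    Ii = thermal_current(n0, Ti, mi) * (1 - e * phi / (kB * Ti))
    return Ie - Ii

phi_f = brentq(net_current, -5e4, -1e-3)   # floating potential in volts
print(f"floating potential ~ {phi_f:.0f} V")
```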
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated-function-expansion-based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This rests on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs of most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software package. Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
A review of surrogate models and their application to groundwater modeling
NASA Astrophysics Data System (ADS)
Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.
2015-08-01
The spatially and temporally variable parameters and inputs of complex groundwater models typically result in long runtimes, which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model that emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection-based, and hierarchical approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, for example by ignoring certain processes or reducing the numerical resolution. In discussing the application of these methods to groundwater modeling, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks of the methods; only a fraction of the literature focuses on creating surrogates that reproduce the outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods have yet to be fully applied in a groundwater modeling context.
NASA Technical Reports Server (NTRS)
Nesbitt, J. A.
1983-01-01
Degradation of NiCrAlZr overlay coatings on various NiCrAl substrates was examined after cyclic oxidation. Concentration/distance profiles were measured in the coating and substrate after various oxidation exposures at 1150 C. For each substrate, the Al content in the coating decreased rapidly. The concentration/distance profiles, and particularly that for Al, reflected the oxide spalling resistance of each coated substrate. A numerical model was developed to simulate diffusion associated with overlay-coating degradation by oxidation and coating/substrate interdiffusion. Input to the numerical model consists of the Cr and Al content of the coating and substrate, ternary diffusivities, and various oxide spalling parameters. The model predicts the Cr and Al concentrations in the coating and substrate after any number of oxidation/thermal cycles, and also predicts coating failure based on the ability of the coating to supply sufficient Al to the oxide scale. The validity of the model was confirmed by comparison of the predicted and measured concentration/distance profiles. The model was subsequently used to identify the most critical system parameters affecting coating life.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
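A minimal sketch of the parsing idea might look as follows; the field name, file line, and the Gaussian interpretation of the tolerance are assumptions for illustration, not the tool's actual implementation.

```python
# Scan an input-file line for fields of the form "value +/- tol" and draw
# Monte Carlo samples in place (hypothetical field name and format).
import re
import numpy as np

rng = np.random.default_rng(4)
TOL = re.compile(r"([-+]?\d*\.?\d+(?:[eE][-+]?\d+)?)\s*\+/-\s*"
                 r"([-+]?\d*\.?\d+(?:[eE][-+]?\d+)?)")

line = "wall_catalysis_efficiency = 5.25 +/- 0.01"

def sample_line(text, n):
    """Return n copies of `text` with each 'a +/- b' replaced by a random draw."""
    out = []
    for _ in range(n):
        def draw(m):
            a, b = float(m.group(1)), float(m.group(2))
            return f"{rng.normal(a, b):.6g}"      # assume Gaussian tolerance
        out.append(TOL.sub(draw, text))
    return out

for s in sample_line(line, 3):
    print(s)
```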
Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations
NASA Technical Reports Server (NTRS)
Chan, William M.
2004-01-01
Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.
Analysis of Discontinuity Induced Bifurcations in a Dual Input DC-DC Converter
NASA Astrophysics Data System (ADS)
Giaouris, Damian; Banerjee, Soumitro; Mandal, Kuntal; Al-Hindawi, Mohammed M.; Abusorrah, Abdullah; Al-Turki, Yusuf; El Aroudi, Abdelali
DC-DC power converters with multiple inputs and a single output are used in numerous applications where multiple sources, e.g. two or more renewable energy sources and/or a battery, feed a single load. In this work, a classical boost converter topology with two input branches connected to two different sources is chosen, with each branch independently controlled by a separate peak-current-mode controller. We demonstrate for the first time that even though this converter is similar to other well-known topologies that have been studied before, it exhibits many complex nonlinear behaviors that are not found in any other standard PWM-controlled power converter. The system undergoes a period-incrementing cascade as a parameter is varied, with discontinuous hard transitions between consecutive periodicities. We show that the system can be described by a discontinuous map, which explains the observed bifurcation phenomena. The results have been experimentally validated.
NASA Astrophysics Data System (ADS)
Andreev, M. Yu.; Mingaleva, G. I.; Mingalev, V. S.
2007-08-01
A previously developed model of the high-latitude ionosphere is used to calculate the distribution of the ionospheric parameters in the polar region. A specific method for specifying the input parameters of the mathematical model, based on experimental data obtained by satellite radio tomography, is used in this case. The spatial distributions of the ionospheric parameters, characterized by a complex inhomogeneous structure in the high-latitude region and calculated with the help of the mathematical model, are used to simulate HF propagation along meridionally oriented radio paths extending from middle to high latitudes. A method for improving HF communication between a midlatitude transmitter and a polar-cap receiver is proposed.
A Spreadsheet Simulation Tool for Terrestrial and Planetary Balloon Design
NASA Technical Reports Server (NTRS)
Raquea, Steven M.
1999-01-01
During the early stages of new balloon design and development, it is necessary to conduct many trade studies. These trade studies are required to determine the design space and aid significantly in determining overall feasibility. Numerous point designs then need to be generated as details of payloads, materials, mission, and manufacturing are determined. For these numerous designs, transient models are both unnecessary and time-intensive. A steady-state model that uses appropriate design inputs to generate system-level descriptive parameters can be very flexible and fast. Just such a steady-state model has been developed and used during both the MABS 2001 Mars balloon study and the Ultra Long Duration Balloon Project. The model was built using Microsoft Excel's built-in iteration routine, with separate sheets for performance, structural design, materials, and thermal analysis, as well as input and output sheets. As can be seen from figure 1, the model takes basic performance requirements, weight estimates, design parameters, and environmental conditions and generates a system-level balloon design; figure 2 shows a sample output. By changing the inputs and a few of the equations in the model, balloons on Earth or other planets can be modeled. There are currently several variations of the model for terrestrial and Mars balloons, and there are versions that perform crude material design based on strength and weight requirements. To perform trade studies, the Visual Basic language built into Excel was used to create an automated matrix of designs. This trade-study module allows a three-dimensional trade surface to be generated by using a series of values for any two design variables. Once the fixed and variable inputs are defined, the model automatically steps through the input matrix and fills a spreadsheet with the resulting point designs. The proposed paper will describe the model in detail, including current variations. The assumptions, governing equations, and capabilities will be addressed. Detailed examples of the model in practice will also be given.
NASA Astrophysics Data System (ADS)
Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.
2017-12-01
The efficiency of many hydrogeological applications such as reactive transport and contaminant remediation depends strongly on the macroscopic mixing occurring in the aquifer. For remediation activities, it is fundamental to enhance and control this mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction and by the heterogeneity and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters for the mixing process is not well studied, partly because understanding and quantifying mixing requires many runs of high-fidelity numerical simulations across the space of subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousands of processors, so exhaustive studies of the importance and impact of model inputs on mixing may not be feasible. Hence, there is a pressing need for computationally efficient models that accurately predict the desired quantities of interest (QoIs) for remediation and reactive-transport applications. An attractive way to construct such models is reduced-order modeling using machine learning, which can substantially improve our capabilities to model and predict remediation processes. Reduced-Order Models (ROMs) play a role similar to analytical solutions or lookup tables, but are constructed differently. Here, we present a physics-informed machine-learning framework to construct ROMs from high-fidelity numerical simulations. First, random forests, the F-test, and mutual information are used to evaluate the importance of model inputs. Second, support vector machines (SVMs) are used to construct ROMs based on these inputs. The ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for important QoIs such as the degree of mixing and the product yield, and evaluate the dependence of the scaling-law parameters on the model inputs using cluster analysis. We demonstrate the application of the developed method to model analyses of reactive transport and contaminant remediation at the Los Alamos National Laboratory (LANL) chromium contamination sites. The method is directly applicable to analyses of alternative site remediation scenarios.
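A hedged sketch of the two-step construction (input ranking followed by an SVM-based ROM) is shown below using scikit-learn; the input names and the synthetic "mixing" response are placeholders for the high-fidelity simulation data.

```python
# Rank model inputs with a random forest, then fit an SVM regressor ROM
# on the dominant inputs (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
names = ["pumping_rate", "anisotropy", "heterogeneity_var", "porosity"]
X = rng.uniform(0, 1, (500, 4))
y = np.sin(3 * X[:, 0]) + 2.0 * X[:, 1] + 0.1 * rng.normal(size=500)  # "mixing"

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
rank = np.argsort(rf.feature_importances_)[::-1]
print({names[i]: round(float(rf.feature_importances_[i]), 3) for i in rank})

keep = rank[:2]                               # keep the two dominant inputs
Xtr, Xte, ytr, yte = train_test_split(X[:, keep], y, random_state=0)
rom = SVR(C=10.0, gamma="scale").fit(Xtr, ytr)
print("ROM R^2 on held-out runs:", round(rom.score(Xte, yte), 3))
```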
Application of modern radiative transfer tools to model laboratory quartz emissivity
NASA Astrophysics Data System (ADS)
Pitman, Karly M.; Wolff, Michael J.; Clayton, Geoffrey C.
2005-08-01
Planetary remote sensing of regolith surfaces requires use of theoretical models for interpretation of constituent grain physical properties. In this work, we review and critically evaluate past efforts to strengthen numerical radiative transfer (RT) models with comparison to a trusted set of nadir incidence laboratory quartz emissivity spectra. By first establishing a baseline statistical metric to rate successful model-laboratory emissivity spectral fits, we assess the efficacy of hybrid computational solutions (Mie theory + numerically exact RT algorithm) to calculate theoretical emissivity values for micron-sized α-quartz particles in the thermal infrared (2000-200 cm⁻¹) wavenumber range. We show that Mie theory, a widely used but poor approximation to irregular grain shape, fails to produce the single scattering albedo and asymmetry parameter needed to arrive at the desired laboratory emissivity values. Through simple numerical experiments, we show that corrections to single scattering albedo and asymmetry parameter values generated via Mie theory become more necessary with increasing grain size. We directly compare the performance of diffraction subtraction and static structure factor corrections to the single scattering albedo, asymmetry parameter, and emissivity for dense packing of grains. Through these sensitivity studies, we provide evidence that, assuming RT methods work well given sufficiently well-quantified inputs, assumptions about the scatterer itself constitute the most crucial aspect of modeling emissivity values.
Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA
NASA Astrophysics Data System (ADS)
Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.
2018-04-01
Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3⁻) input functions by characterizing unsaturated-zone NO3⁻ transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous "vertical flux method" (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3⁻ source concentration factor (which determines the local NO3⁻ input concentration); unsaturated-zone travel time; NO3⁻ concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3⁻ "extinction depth", the eventual steady-state depth of the NO3⁻ front. The final metamodels were trained to 129 wells within the active numerical flow model area, and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R² values of 0.52-0.86 and 0.22-0.38, respectively, and predictions were compiled as maps of the above response variables. Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. with greater source concentrations and NO3⁻ at the groundwater table in areas of intensive crop use and well-drained soils. Shorter unsaturated-zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine-grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3⁻ extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated-zone NO3⁻ transport processes in a statistical framework based on readily mappable GIS input variables.
The significance of parameter uncertainties for the prediction of offshore pile driving noise.
Lippert, Tristan; von Estorff, Otto
2014-11-01
Due to the construction of offshore wind farms and its potential effect on marine wildlife, the numerical prediction of pile driving noise over long ranges has recently gained importance. In this contribution, a coupled finite element/wavenumber integration model for noise prediction is presented and validated by measurements. The ocean environment, especially the sea bottom, can only be characterized with limited accuracy in terms of input parameters for the numerical model at hand. Therefore the effect of these parameter uncertainties on the prediction of sound pressure levels (SPLs) in the water column is investigated by a probabilistic approach. In fact, a variation of the bottom material parameters by means of Monte-Carlo simulations shows significant effects on the predicted SPLs. A sensitivity analysis of the model with respect to the single quantities is performed, as well as a global variation. Based on the latter, the probability distribution of the SPLs at an exemplary receiver position is evaluated and compared to measurements. The aim of this procedure is to develop a model to reliably predict an interval for the SPLs, by quantifying the degree of uncertainty of the SPLs with the MC simulations.
NASA Astrophysics Data System (ADS)
Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.
2017-10-01
Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been carried out. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers is presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more accurate distribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
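The advantage of Sobol sequences over plain Monte Carlo is easy to reproduce on a toy integrand. The sketch below uses scipy's quasi-Monte Carlo module on a smooth product function with a known integral; it illustrates the integration machinery only, not the Unified Danish Eulerian Model.

```python
# Plain Monte Carlo vs. scrambled Sobol points on a 6-dimensional
# product integrand whose exact integral over [0,1]^6 equals 1.
import numpy as np
from scipy.stats import qmc

d, n = 6, 2**12
f = lambda x: np.prod(1.0 + 0.3 * (x - 0.5), axis=1)

rng = np.random.default_rng(6)
mc = f(rng.uniform(size=(n, d))).mean()

sob = qmc.Sobol(d, scramble=True, seed=6).random(n)
qmc_est = f(sob).mean()

print(f"plain MC error : {abs(mc - 1.0):.2e}")
print(f"Sobol QMC error: {abs(qmc_est - 1.0):.2e}")
```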
Heidari, Mohammad; Heidari, Ali; Homaei, Hadi
2014-01-01
The static pull-in instability of beam-type microelectromechanical systems (MEMS) is theoretically investigated. Two engineering cases, a cantilever and a double cantilever microbeam, are considered. Considering midplane stretching as the source of nonlinearity in the beam behavior, a nonlinear size-dependent Euler-Bernoulli beam model is used based on a modified couple stress theory capable of capturing the size effect. By selecting a range of geometric parameters such as beam length, width, thickness, gap, and size effect, we identify the static pull-in instability voltage. A MAPLE package is employed to solve the nonlinear governing differential equations and obtain the static pull-in instability voltage of the microbeams. A radial basis function artificial neural network with two functions has been used for modeling the static pull-in instability of the microcantilever beam. The network has four inputs (length, width, gap, and the ratio of height to the scale parameter of the beam) as the independent process variables, and the output is the static pull-in voltage of the microbeam. Numerical data were employed for training the network, and the capabilities of the model in predicting the pull-in instability behavior have been verified. The output obtained from the neural network model is compared with the numerical results, and the relative error has been calculated. Based on this verification error, it is shown that the radial basis function neural network has an average error of 4.55% in predicting the pull-in voltage of the cantilever microbeam. Further analysis of the pull-in instability of the beam under different input conditions has been carried out, and comparison of the modeling results with the numerical ones shows good agreement, which also proves the feasibility and effectiveness of the adopted approach. The results reveal significant influences of the size effect and the geometric parameters on the static pull-in instability voltage of MEMS. PMID:24860602
Flexible Environmental Modeling with Python and Open - GIS
NASA Astrophysics Data System (ADS)
Pryet, Alexandre; Atteia, Olivier; Delottier, Hugo; Cousquer, Yohann
2015-04-01
Numerical modeling now represents a prominent task in environmental studies. During the last decades, numerous commercial programs have been made available to environmental modelers. These software applications offer user-friendly graphical user interfaces that allow efficient management of many case studies, but they suffer from a lack of flexibility, and closed-source policies impede source code review and enhancement for original studies. Advanced modeling studies require flexible tools capable of managing thousands of model runs for parameter optimization, uncertainty analysis, and sensitivity analysis. In addition, there is a growing need for coupling various numerical models, associating, for instance, groundwater flow modeling with multi-species geochemical reactions. Researchers have produced hundreds of powerful open-source command-line programs, but a flexible graphical user interface is still needed for efficient processing of the geospatial data that accompanies any environmental study. Here, we present the advantages of using the free and open-source QGIS platform and the Python scripting language for conducting environmental modeling studies. The interactive graphical user interface is first used for the visualization and pre-processing of input geospatial datasets. The Python scripting language is then employed for further input data processing, calls to one or several models, and post-processing of model outputs. Model results are eventually sent back to the GIS program, processed, and visualized. This approach combines the advantages of interactive graphical interfaces with the flexibility of the Python scripting language for data processing and model calls. The numerous Python modules available facilitate geospatial data processing and numerical analysis of model outputs. Once input data have been prepared with the graphical user interface, models may be run thousands of times from the command line with sequential or parallel calls. We illustrate this approach with several case studies in groundwater hydrology and geochemistry and provide links to several Python libraries that facilitate pre- and post-processing operations.
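A minimal sketch of the scripted-batch pattern described above might look like this; the executable name, input file format, and parameter grid are hypothetical.

```python
# Drive many model runs from Python after GIS-based input preparation,
# writing one run directory per parameter combination (hypothetical model).
import itertools
import subprocess
from pathlib import Path

recharge_values = [100, 200, 300]      # mm/yr, assumed
conductivity_values = [1e-5, 1e-4]     # m/s, assumed

results = {}
for rech, cond in itertools.product(recharge_values, conductivity_values):
    run_dir = Path(f"runs/r{rech}_k{cond:g}")
    run_dir.mkdir(parents=True, exist_ok=True)
    (run_dir / "model.in").write_text(f"recharge {rech}\nconductivity {cond}\n")
    # sequential call; swap in multiprocessing.Pool for parallel batches
    subprocess.run(["./groundwater_model", "model.in"], cwd=run_dir, check=True)
    results[(rech, cond)] = (run_dir / "heads.out").read_text()
```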
Parental Numeric Language Input to Mandarin Chinese and English Speaking Preschool Children
ERIC Educational Resources Information Center
Chang, Alicia; Sandhofer, Catherine M.; Adelchanow, Lauren; Rottman, Benjamin
2011-01-01
The present study examined the number-specific parental language input to Mandarin- and English-speaking preschool-aged children. Mandarin and English transcripts from the CHILDES database were examined for amount of numeric speech, specific types of numeric speech and syntactic frames in which numeric speech appeared. The results showed that…
2017 GTO Project review Laboratory Evaluation of EGS Shear Stimulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauer, Stephen J.
The objectives and purpose of this research have been to produce laboratory-based experimental and numerical analyses that provide a physics-based understanding of shear stimulation phenomena (hydroshearing) and their evolution during stimulation. Water was flowed along fractures in hot, stressed fractured rock to promote slip. The controlled laboratory experiments provide a high-resolution, high-quality data resource for evaluating analysis methods developed by DOE to assess EGS “behavior” during this stimulation process. Some segments of the experimental program provide data sets for model input parameters, i.e., material properties, while other segments represent small-scale physical models of an EGS system that may be modeled. The coupled laboratory/analysis project has been a study of the response of a fracture in hot, water-saturated fractured rock subjected to shear stress and experiencing fluid flow. Under this condition, the fracture experiences a combination of potential pore pressure changes and fracture surface cooling, resulting in slip along the fracture. The laboratory work provides a means to assess the role of “hydroshearing” in permeability enhancement during reservoir stimulation. Using the laboratory experiments and results to define boundary and input/output conditions of pore pressure, thermal stress, fracture shear deformation, and fluid flow, models were developed and simulations completed by the University of Oklahoma team. The analysis methods are ones used on field-scale problems, and the sophisticated numerical models developed contain parameters present in the field. The analysis results provide insight into the role of fracture slip in permeability enhancement (“hydroshear”). The work will provide valuable input data to evaluate stimulation models, thus helping design effective EGS.
Direct calculation of modal parameters from matrix orthogonal polynomials
NASA Astrophysics Data System (ADS)
El-Kafafy, Mahmoud; Guillaume, Patrick
2011-10-01
The objective of this paper is to introduce a new technique for deriving the global modal parameters (i.e. system poles) directly from estimated matrix orthogonal polynomials. This contribution generalizes the results given in Rolain et al. (1994) [5] and Rolain et al. (1995) [6] for scalar orthogonal polynomials to multivariable (matrix) orthogonal polynomials for multiple-input multiple-output (MIMO) systems. Using orthogonal polynomials improves the numerical properties of the estimation process. However, deriving the modal parameters from the orthogonal polynomials is in general ill-conditioned if not handled properly, since the transformation of the coefficients from the orthogonal-polynomial basis to the power-polynomial basis is known to be an ill-conditioned transformation. In this paper a new approach is proposed to compute the system poles directly from the multivariable orthogonal polynomials, so that high-order models can be used without numerical problems. The proposed method is compared with existing methods (Van Der Auweraer and Leuridan (1987) [4]; Chen and Xu (2003) [7]) using both simulated and experimental data.
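For context, the sketch below shows the classical final step only: extracting poles as eigenvalues of the block companion matrix of a denominator matrix polynomial already expressed in the power basis. The paper's contribution is precisely to avoid the ill-conditioned change of basis that this naive route requires.

```python
# Poles of a matrix polynomial det(sum_k A_k z^k) = 0 via the block
# companion matrix (classical linearization, power basis assumed).
import numpy as np

def matrix_poly_poles(A):
    """A: list of (m, m) coefficient matrices A_0..A_p, highest order last,
    with A[-1] invertible. Returns the system poles."""
    m, p = A[0].shape[0], len(A) - 1
    Ainv = np.linalg.inv(A[-1])
    B = [Ainv @ Ak for Ak in A[:-1]]          # monic-form coefficients
    C = np.zeros((m * p, m * p))
    C[:-m, m:] = np.eye(m * (p - 1))          # super-diagonal identity blocks
    for k in range(p):                        # last block row: -B_0 .. -B_{p-1}
        C[-m:, k * m:(k + 1) * m] = -B[k]
    return np.linalg.eigvals(C)

# Quick self-check on a scalar case: z^2 - 3z + 2 has roots 1 and 2.
print(matrix_poly_poles([np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]])]))
```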
Characterisation of the Hamamatsu photomultipliers for the KM3NeT Neutrino Telescope
NASA Astrophysics Data System (ADS)
Aiello, S.; Akrame, S. E.; Ameli, F.; Anassontzis, E. G.; Andre, M.; Androulakis, G.; Anghinolfi, M.; Anton, G.; Ardid, M.; Aublin, J.; Avgitas, T.; Baars, M.; Bagatelas, C.; Barbarino, G.; Baret, B.; Barrios-Martí, J.; Belias, A.; Berbee, E.; van den Berg, A.; Bertin, V.; Biagi, S.; Biagioni, A.; Biernoth, C.; Bormuth, R.; Boumaaza, J.; Bourret, S.; Bouwhuis, M.; Bozza, C.; Brânzaş, H.; Briukhanova, N.; Bruijn, R.; Brunner, J.; Buis, E.; Buompane, R.; Busto, J.; Calvo, D.; Capone, A.; Caramete, L.; Celli, S.; Chabab, M.; Cherubini, S.; Chiarella, V.; Chiarusi, T.; Circella, M.; Cocimano, R.; Coelho, J. A. B.; Coleiro, A.; Colomer Molla, M.; Coniglione, R.; Coyle, P.; Creusot, A.; Cuttone, G.; D'Onofrio, A.; Dallier, R.; De Sio, C.; Di Palma, I.; Díaz, A. F.; Distefano, C.; Domi, A.; Donà, R.; Donzaud, C.; Dornic, D.; Dörr, M.; Durocher, M.; Eberl, T.; van Eijk, D.; El Bojaddaini, I.; Elsaesser, D.; Enzenhöfer, A.; Ferrara, G.; Fusco, L. A.; Gal, T.; Garufi, F.; Gauchery, S.; Geißelsöder, S.; Gialanella, L.; Giorgio, E.; Giuliante, A.; Gozzini, S. R.; Ruiz, R. Gracia; Graf, K.; Grasso, D.; Grégoire, T.; Grella, G.; Hallmann, S.; van Haren, H.; Heid, T.; Heijboer, A.; Hekalo, A.; Hernández-Rey, J. J.; Hofestädt, J.; Illuminati, G.; James, C. W.; Jongen, M.; Jongewaard, B.; de Jong, M.; de Jong, P.; Kadler, M.; Kalaczyński, P.; Kalekin, O.; Katz, U. F.; Chowdhury, N. R. Khan; Kieft, G.; Kießling, D.; Koffeman, E. N.; Kooijman, P.; Kouchner, A.; Kreter, M.; Kulikovskiy, V.; Lahmann, R.; Le Breton, R.; Leone, F.; Leonora, E.; Levi, G.; Lincetto, M.; Lonardo, A.; Longhitano, F.; Lotze, M.; Loucatos, S.; Maggi, G.; Mańczak, J.; Mannheim, K.; Margiotta, A.; Marinelli, A.; Markou, C.; Martin, L.; Martínez-Mora, J. A.; Martini, A.; Marzaioli, F.; Mele, R.; Melis, K. W.; Migliozzi, P.; Migneco, E.; Mijakowski, P.; Miranda, L. S.; Mollo, C. M.; Morganti, M.; Moser, M.; Moussa, A.; Muller, R.; Musumeci, M.; Nauta, L.; Navas, S.; Nicolau, C. A.; Nielsen, C.; Organokov, M.; Orlando, A.; Panagopoulos, V.; Papalashvili, G.; Papaleo, R.; Păvălaş, G. E.; Pellegrini, G.; Pellegrino, C.; Pérez Romero, J.; Perrin-Terrin, M.; Piattelli, P.; Pikounis, K.; Pisanti, O.; Poirè, C.; Polydefki, G.; Poma, G. E.; Popa, V.; Post, M.; Pradier, T.; Pühlhofer, G.; Pulvirenti, S.; Quinn, L.; Raffaelli, F.; Randazzo, N.; Razzaque, S.; Real, D.; Resvanis, L.; Reubelt, J.; Riccobene, G.; Richer, M.; Rovelli, A.; Salvadori, I.; Samtleben, D. F. E.; Sánchez Losa, A.; Sanguineti, M.; Santangelo, A.; Sapienza, P.; Schermer, B.; Sciacca, V.; Seneca, J.; Sgura, I.; Shanidze, R.; Sharma, A.; Simeone, F.; Sinopoulou, A.; Spisso, B.; Spurio, M.; Stavropoulos, D.; Steijger, J.; Stellacci, S. M.; Strandberg, B.; Stransky, D.; Stüven, T.; Taiuti, M.; Tatone, F.; Tayalati, Y.; Tenllado, E.; Thakore, T.; Timmer, P.; Trovato, A.; Tsagkli, S.; Tzamariudaki, E.; Tzanetatos, D.; Valieri, C.; Vallage, B.; Van Elewyck, V.; Versari, F.; Viola, S.; Vivolo, D.; Volkert, M.; de Waardt, L.; Wilms, J.; de Wolf, E.; Zaborov, D.; Zornoza, J. D.; Zúñiga, J.
2018-05-01
The Hamamatsu R12199-02 3-inch photomultiplier tube is the photodetector chosen for the first phase of the KM3NeT neutrino telescope. About 7000 photomultipliers have been characterised for dark count rate, timing spread and spurious pulses. The quantum efficiency, the gain and the peak-to-valley ratio have also been measured for a sub-sample in order to determine parameter values needed as input to numerical simulations of the detector.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Weizhao; Ren, Huaqing; Lu, Jie
This paper reports several methods for characterizing the properties of uncured woven prepreg during the preforming process. Uniaxial tension, bias-extension, and bending tests are conducted to measure the deformation properties of the material, and friction tests are utilized to characterize the prepreg-prepreg and prepreg-forming tool interactions. All these tests are performed within the temperature range of the real manufacturing process. The results serve as inputs to numerical simulations for product prediction and preforming process parameter optimization.
Simple analysis of scattering data with the Ornstein-Zernike equation
NASA Astrophysics Data System (ADS)
Kats, E. I.; Muratov, A. R.
2018-01-01
In this paper we propose and explore a method for analysing scattering data from uniform liquid-like systems. In our pragmatic approach we do not introduce an artificial small parameter by hand in order to build a perturbation theory around known results, e.g., for hard spheres or sticky hard spheres (all the more so since, in agreement with the well-known Landau statement, there is no physical small parameter for liquids). Instead, guided by the experimental data, we solve the Ornstein-Zernike equation with a trial (variational) form of the interparticle interaction potential. To find all the needed correlation functions, this variational input is iterated numerically until it satisfies the Ornstein-Zernike equation supplemented by a closure relation. Our method is developed, and our numerical code written, for spherically symmetric scattering objects; it can, however, be extended (at the expense of more involved computations and a larger amount of required experimental input information) to nonspherical particles. What is important for our approach is that it is sufficient to know experimental data in a relatively narrow range of scattering wave vectors (q) to compute the static structure factor in a much broader range of q. We illustrate how the approach works with a few model examples and with real x-ray and neutron scattering data.
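The iteration described above can be sketched in a few lines for the simplest case. The following minimal Python sketch solves the Ornstein-Zernike equation under a Percus-Yevick closure for hard spheres of unit diameter, a stand-in for the paper's variational potential input; the density, grid, and mixing values are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dst

# Picard iteration for the Ornstein-Zernike (OZ) equation with the
# Percus-Yevick (PY) closure, hard spheres of diameter 1 (illustrative).
N, dr, rho = 4096, 0.01, 0.6
r = dr * np.arange(1, N + 1)
dk = np.pi / ((N + 1) * dr)
k = dk * np.arange(1, N + 1)
f = np.where(r < 1.0, -1.0, 0.0)           # Mayer function e^{-beta u} - 1

def ft3(g):    # radial 3-D Fourier transform via a type-I sine transform
    return 2.0 * np.pi * dr / k * dst(r * g, type=1)

def ift3(G):   # corresponding inverse transform
    return dk / (4.0 * np.pi**2 * r) * dst(k * G, type=1)

gamma = np.zeros(N)                        # indirect correlation, h - c
for _ in range(2000):
    c = f * (1.0 + gamma)                  # PY closure
    C = ft3(c)
    gamma_new = ift3(rho * C**2 / (1.0 - rho * C))   # OZ relation in k-space
    if np.max(np.abs(gamma_new - gamma)) < 1e-9:
        break
    gamma = 0.8 * gamma + 0.2 * gamma_new  # damped mixing for stability

S = 1.0 / (1.0 - rho * ft3(f * (1.0 + gamma)))       # static structure factor
```

Replacing the hard-sphere Mayer function with a parametrized trial potential, and wrapping the loop in a fit against measured S(q) over a narrow q window, gives the shape of the variational procedure the paper describes.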
Steiner, Malte; Claes, Lutz; Ignatius, Anita; Niemeyer, Frank; Simon, Ulrich; Wehner, Tim
2013-09-06
Numerical models of secondary fracture healing are based on mechanoregulatory algorithms that use distortional strain alone or in combination with either dilatational strain or fluid velocity as determining stimuli for tissue differentiation and development. Comparison of these algorithms has previously suggested that healing processes under torsional rotational loading can only be properly simulated by considering fluid velocity and deviatoric strain as the regulatory stimuli. We hypothesize that sufficient calibration of uncertain input parameters will enable our existing model, which uses distortional and dilatational strains as determining stimuli, to properly simulate fracture healing under various loading conditions, including torsional rotation. Therefore, we minimized the difference between numerically simulated and experimentally measured courses of interfragmentary movements for two axial compressive cases and two shear load cases (torsional and translational) by varying several input parameter values within their predefined bounds. The calibrated model was then qualitatively evaluated on its ability to predict physiological changes of spatial and temporal tissue distributions, based on respective in vivo data. Finally, we corroborated the model on five additional axial compressive and one asymmetrical bending load case. We conclude that our model, using distortional and dilatational strains as determining stimuli, is able to simulate fracture-healing processes not only under axial compression and torsional rotation but also under translational shear and asymmetrical bending loading conditions.
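The calibration step has the generic shape of a bounded least-squares fit between simulated and measured interfragmentary-movement courses. In the sketch below, the forward model is a hypothetical placeholder, not the healing simulation; only the structure (parameters varied within predefined bounds to minimize the misfit) follows the paper.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 8.0, 40)                 # healing time, weeks (toy axis)
ifm_measured = 0.9 * np.exp(-t / 3.0) + 0.05  # toy "measured" movement course

def simulate_ifm(params, t):
    """Hypothetical stand-in for the fracture-healing simulation:
    returns an interfragmentary-movement course for given inputs."""
    amplitude, rate, offset = params
    return amplitude * np.exp(-t / rate) + offset

def residuals(params):
    return simulate_ifm(params, t) - ifm_measured

# Vary the uncertain inputs within predefined bounds, as in the paper's setup.
fit = least_squares(residuals, x0=[1.0, 2.0, 0.0],
                    bounds=([0.1, 0.5, 0.0], [2.0, 10.0, 0.5]))
print(fit.x)   # calibrated parameter values
```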
NASA Astrophysics Data System (ADS)
Potters, M. G.; Bombois, X.; Mansoori, M.; Hof, Paul M. J. Van den
2016-08-01
Estimation of physical parameters in dynamical systems driven by linear partial differential equations is an important problem. In this paper, we introduce the least costly experiment design framework for these systems. It enables parameter estimation with an accuracy that is specified by the experimenter prior to the identification experiment, while at the same time minimising the cost of the experiment. We show how to adapt the classical framework for these systems and take into account scaling and stability issues. We also introduce a progressive subdivision algorithm that further generalises the experiment design framework in the sense that it returns the lowest cost by finding the optimal input signal, and optimal sensor and actuator locations. Our methodology is then applied to a relevant problem in heat transfer studies: estimation of conductivity and diffusivity parameters in front-face experiments. We find good correspondence between numerical and theoretical results.
NASA Astrophysics Data System (ADS)
Becker, M. W.; Bursik, M. I.; Schuetz, J. W.
2001-05-01
The Hubbard Brook Experimental Forest (HBEF) of Central New Hampshire has been a focal point for collaborative hydrologic research for over 40 years. A tremendous amount of data from this area is available through the internet and other sources, but it is not organized in a manner that facilitates teaching of hydrologic concepts. The Mirror Lake Watershed Interactive Teaching Database makes hydrologic data from the HBEF, together with associated interactive problem sets, available to upper-level and post-graduate university students through a web-based resource. Hydrologic data are offered via a three-dimensional VRML (Virtual Reality Modeling Language) interface that facilitates viewing and retrieval in a spatially meaningful manner. Available data are mapped onto a topographic base, and hot spots representing data collection points (e.g. weirs) lead to time-series displays (e.g. hydrographs) that provide a temporal link to the spatially organized data. Associated instructional exercises are designed to increase understanding of both hydrologic data and hydrologic methods. A pedagogical module concerning numerical ground-water modeling will be presented as an example. Numerical modeling of ground-water flow involves choosing the combination of hydrogeologic parameters (e.g. hydraulic conductivity, recharge) that causes model-predicted heads to best match measured heads in the aquifer. Choosing the right combination of parameters requires careful judgment based upon knowledge of the hydrogeologic system and the physics of ground-water flow. Unfortunately, students often get caught up in the technical aspects and lose sight of the fundamentals when working with real ground-water software. This module provides exercises in which a student chooses model parameters and immediately sees the predicted results as a 3-D VRML object. The VRML objects are based upon actual Modflow model results corresponding to the range of model input parameters available to the student. This way, the student can have hands-on experience with a numerical model without getting bogged down in the details. Connecting model input directly to 3-D model output better allows students to test their intuition about ground-water behavior in an interactive and entertaining way.
NASA Astrophysics Data System (ADS)
Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang
2016-06-01
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
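The third part can be illustrated with a self-contained sketch: given a cheap surrogate (a toy polynomial below, standing in for the drilling metamodel), first-order and total-effect Sobol indices follow from the usual Saltelli/Jansen pick-and-freeze estimators. All model details here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def metamodel(x):
    """Toy surrogate standing in for the laser-drilling metamodel:
    three inputs on [0, 1], with an interaction between x0 and x1."""
    return 2.0 * x[:, 0] + x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 1] + 0.1 * x[:, 2]

n, d = 100_000, 3
A, B = rng.random((n, d)), rng.random((n, d))
fA, fB = metamodel(A), metamodel(B)
var = np.var(np.r_[fA, fB])

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # swap column i only
    fABi = metamodel(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var         # first-order Sobol index
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var   # total-effect index (Jansen)
    print(f"x{i}: S1 = {S1:.3f}, ST = {ST:.3f}")
```

Because each evaluation hits the surrogate rather than the physical simulation, the many thousands of samples the estimators need become affordable, which is exactly the motivation for the metamodel step.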
NASA Astrophysics Data System (ADS)
Termini, Donatella
2013-04-01
Recent catastrophic events due to intense rainfall have mobilized large amounts of sediment, causing extensive damage over vast areas. These events have highlighted how debris-flow runout estimates are of crucial importance for delineating potentially hazardous areas and for making reliable assessments of the level of risk of a territory. Several predictive models have been proposed in recent years, but existing runout estimation methods need input parameters that can be difficult to estimate. Recent experimental research has also advanced the understanding of the physics of debris flows, although most experimental studies analyze the basic kinematic conditions that determine the evolution of the phenomenon. An experimental program has recently been conducted at the Hydraulics Laboratory of the Department of Civil, Environmental, Aerospace and Materials Engineering (DICAM), University of Palermo (Italy). The experiments, carried out in a purpose-built laboratory flume, were planned to evaluate the influence of different geometrical parameters (such as the slope and the geometrical characteristics of the confluences to the main channel) on the propagation and deposition of the debris flow. The aim of the present work is thus to contribute to defining the input parameters for runout estimation by numerical modeling. The propagation phenomenon is analyzed for different concentrations of solid material. Particular attention is devoted to the identification of the stopping distance of the debris flow and of the parameters involved (volume, angle of deposition, type of material) in the empirical predictive equations available in the literature (Rickenmann, 1999; Bathurst et al., 1997). Bathurst J.C., Burton A., Ward T.J. 1997. Debris flow run-out and landslide sediment delivery model tests. Journal of Hydraulic Engineering, ASCE, 123(5), 419-429. Rickenmann D. 1999. Empirical relationships for debris flows. Natural Hazards, 19, 47-77.
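For context, one of the empirical runout relations referred to above (Rickenmann, 1999) has a simple power-law form in event volume and elevation drop. The sketch below encodes it with coefficients quoted from memory; they should be verified against the original paper before any use.

```python
def rickenmann_runout(volume_m3, elevation_drop_m):
    """Empirical total travel distance for debris flows, of the form reported
    in Rickenmann (1999): L = 1.9 * V**0.16 * H**0.83 (V in m^3, H and L in m).
    Coefficients quoted from memory; verify against the original reference."""
    return 1.9 * volume_m3 ** 0.16 * elevation_drop_m ** 0.83

# Example: a 10,000 m^3 event with 500 m of elevation drop.
print(f"estimated runout: {rickenmann_runout(1e4, 500.0):.0f} m")
```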
Uncertainties in Galactic Chemical Evolution Models
Cote, Benoit; Ritter, Christian; Oshea, Brian W.; ...
2016-06-15
Here we use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.
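In outline, the Monte Carlo procedure reduces to sampling the input parameters from their distributions, running the model, and extracting percentile bands. The sketch below uses a hypothetical toy stand-in for the one-zone model and assumed parameter distributions, purely to show the structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs = 500

def toy_one_zone(imf_slope, n_snia_per_msun, ages):
    """Hypothetical stand-in for the one-zone chemical-evolution model:
    returns an [Fe/H]-like abundance track versus galactic age (Gyr)."""
    return (-1.5 + 0.9 * np.log10(1.0 + ages)
            + 0.2 * (imf_slope + 2.35)
            + 200.0 * (n_snia_per_msun - 1.0e-3))

ages = np.linspace(0.1, 13.0, 100)
runs = np.empty((n_runs, ages.size))
for i in range(n_runs):
    slope = rng.normal(-2.35, 0.2)         # high-mass IMF slope (assumed PDF)
    n_ia = rng.normal(1.0e-3, 0.3e-3)      # SNe Ia per solar mass (assumed PDF)
    runs[i] = toy_one_zone(slope, n_ia, ages)

lo68, hi68 = np.percentile(runs, [16, 84], axis=0)     # 68% confidence band
lo95, hi95 = np.percentile(runs, [2.5, 97.5], axis=0)  # 95% confidence band
print(np.median(hi68 - lo68), np.median(hi95 - lo95))  # band thicknesses, dex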
B-spline Method in Fluid Dynamics
NASA Technical Reports Server (NTRS)
Botella, Olivier; Shariff, Karim; Mansour, Nagi N. (Technical Monitor)
2001-01-01
B-spline functions are bases for piecewise polynomials that possess attractive properties for complex flow simulations: they have compact support, provide straightforward handling of boundary conditions and grid nonuniformities, and yield numerical schemes with high resolving power, where the order of accuracy is a mere input parameter. This paper reviews the progress made on the development and application of B-spline numerical methods to computational fluid dynamics problems. Basic B-spline approximation properties are investigated, and their relationship with conventional numerical methods is reviewed. Some fundamental developments towards efficient complex-geometry spline methods are covered, such as local interpolation methods, fast solution algorithms on Cartesian grids, non-conformal block-structured discretizations, formulation of spline bases of higher continuity over triangulations, and treatment of pressure oscillations in the Navier-Stokes equations. Application of some of these techniques to the computation of viscous incompressible flows is presented.
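The compact support and order-as-input-parameter properties both follow from the Cox-de Boor recursion, sketched here for a cubic basis function on a nonuniform knot vector (illustrative knots, not from the paper; SciPy's scipy.interpolate.BSpline offers an equivalent library route).

```python
import numpy as np

def bspline_basis(i, p, knots, x):
    """Cox-de Boor recursion: value of the i-th B-spline of degree p at x."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((x - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, knots, x))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - x) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, knots, x))
    return left + right

# Cubic basis on a nonuniform, clamped knot vector; the order of accuracy of
# a B-spline scheme is set simply by choosing the degree p.
knots = [0, 0, 0, 0, 0.3, 0.7, 1, 1, 1, 1]
print([bspline_basis(2, 3, knots, x) for x in np.linspace(0.0, 0.999, 5)])
```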
Dragna, Didier; Blanc-Benon, Philippe; Poisson, Franck
2014-03-01
Results from outdoor acoustic measurements performed at a railway site near Reims, France, in May 2010 are compared to those obtained from a finite-difference time-domain solver of the linearized Euler equations. During the experiments, the ground profile and the different ground surface impedances were determined. Meteorological measurements were also performed to deduce mean vertical profiles of wind and temperature. An alarm pistol was used as a source of impulse signals, and three microphones were located along a propagation path. The various measured parameters are introduced as input data into the numerical solver. In the frequency domain, the numerical results are in good agreement with the measurements up to a frequency of 2 kHz. In the time domain, apart from a time shift, the predicted waveforms match the measured waveforms closely.
NASA Astrophysics Data System (ADS)
Snow, Michael G.; Bajaj, Anil K.
2015-08-01
This work presents an uncertainty quantification (UQ) analysis of a comprehensive model for an electrostatically actuated microelectromechanical system (MEMS) switch. The goal is to elucidate the effects of parameter variations on certain key performance characteristics of the switch. A sufficiently detailed model of the electrostatically actuated switch in the basic configuration of a clamped-clamped beam is developed. This multi-physics model accounts for various physical effects, including the electrostatic fringing field, finite length of electrodes, squeeze film damping, and contact between the beam and the dielectric layer. The performance characteristics of immediate interest are the static and dynamic pull-in voltages for the switch. Numerical approaches for evaluating these characteristics are developed and described. Using Latin Hypercube Sampling and other sampling methods, the model is evaluated to find these performance characteristics when variability in the model's geometric and physical parameters is specified. Response surfaces of these results are constructed via a Multivariate Adaptive Regression Splines (MARS) technique. Using a Direct Simulation Monte Carlo (DSMC) technique on these response surfaces gives smooth probability density functions (PDFs) of the outputs characteristics when input probability characteristics are specified. The relative variation in the two pull-in voltages due to each of the input parameters is used to determine the critical parameters.
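The sampling step can be sketched with SciPy's quasi-Monte Carlo module: Latin Hypercube samples of a few uncertain switch parameters are pushed through a toy pull-in-voltage surrogate. Parameter names, ranges, and the surrogate itself are illustrative placeholders for the full multi-physics beam model.

```python
import numpy as np
from scipy.stats import qmc

# Latin Hypercube samples of uncertain switch parameters (assumed ranges).
sampler = qmc.LatinHypercube(d=3, seed=42)
unit = sampler.random(n=500)
lo = [250e-6, 2.0e-6, 150e9]        # beam length (m), gap (m), modulus (Pa)
hi = [350e-6, 3.0e-6, 190e9]
samples = qmc.scale(unit, lo, hi)

def pull_in_voltage(L, g, E):
    """Toy monotone surrogate for static pull-in voltage (arbitrary scaling;
    placeholder for the full electrostatic-structural model)."""
    return 2.0e-3 * g ** 1.5 * np.sqrt(E) / L ** 2

V = pull_in_voltage(samples[:, 0], samples[:, 1], samples[:, 2])
print(V.mean(), V.std())            # feeds the response-surface / PDF steps
```

In the paper's workflow, these sampled evaluations are what the MARS response surfaces are fit to, after which the cheap surfaces can be resampled massively to obtain smooth output PDFs.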
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to the input form. These consistencies were compared by using the ensemble variance and the estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive, but the sequence of rudder input followed by aileron input, or aileron followed by rudder, gave more consistent estimates than did rudder or aileron inputs individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), and a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters, and a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processing unit determines the unknown value of the at least one motor parameter from the first and second inputs and determines a motor management strategy for the electric motor based thereon.
NASA Astrophysics Data System (ADS)
Oral, Elif; Gélis, Céline; Bonilla, Luis Fabián; Delavaud, Elise
2017-12-01
Numerical modelling of seismic wave propagation, considering soil nonlinearity, has become a major topic in seismic hazard studies when strong shaking is involved under particular soil conditions. Indeed, when strong ground motion propagates in saturated soils, pore pressure is another important parameter to take into account when successive phases of contractive and dilatant soil behaviour are expected. Here, we model 1-D seismic wave propagation in linear and nonlinear media using the spectral element numerical method. The study uses a three-component (3C) nonlinear rheology and includes pore-pressure excess. The 1-D-3C model is used to study the 1987 Superstition Hills earthquake (ML 6.6), which was recorded at the Wildlife Refuge Liquefaction Array, USA. The data of this event present strong soil nonlinearity involving pore-pressure effects. The ground motion is numerically modelled for different assumptions on soil rheology and input motion (1C versus 3C), using the recorded borehole signals as input motion. The computed acceleration-time histories show low-frequency amplification and strong high-frequency damping due to the development of pore pressure in one of the soil layers. Furthermore, the soil is found to be more nonlinear and more dilatant under triaxial loading compared to the classical 1C analysis, and significant differences in surface displacements are observed between the 1C and 3C approaches. This study contributes to identify and understand the dominant phenomena occurring in superficial layers, depending on local soil properties and input motions, conditions relevant for site-specific studies.
Active vibration and noise control of vibro-acoustic system by using PID controller
NASA Astrophysics Data System (ADS)
Li, Yunlong; Wang, Xiaojun; Huang, Ren; Qiu, Zhiping
2015-07-01
Active control simulation of the acoustic and vibration response of a vibro-acoustic cavity of an airplane based on a PID controller is presented. A full numerical vibro-acoustic model is developed by using an Eulerian model, which is a coupled model based on the finite element formulation. The reduced-order model, which is used to design the closed-loop control system, is obtained by the combination of modal expansion and variable substitution. Physical experiments are performed to validate and update the full-order and the reduced-order numerical models. Optimization of the actuator placement is employed to obtain an effective closed-loop control system. For the controller design, an iterative method is used to determine the optimal parameters of the PID controller. The process is illustrated by the design of an active noise and vibration control system for a cavity structure. The numerical and experimental results show that a PID-based active control system can effectively suppress the noise inside the cavity using a sound pressure signal as the controller input. It is also possible to control the noise by suppressing the vibration of the structure using the structural displacement signal as the controller input. For an airplane cavity structure, considering the issue of space-saving, the latter is more suitable.
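The controller itself is the textbook discrete PID law; a minimal sketch follows, with illustrative gains standing in for the paper's iteratively tuned values.

```python
class PID:
    """Minimal discrete PID controller (textbook form); a sketch, not the
    paper's tuned implementation."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement           # e.g. sound-pressure error
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=2.0, ki=0.5, kd=0.05, dt=1e-3)    # illustrative gains
u = controller.update(setpoint=0.0, measurement=0.2)  # drive noise toward zero
```

Feeding the measured sound pressure (or, per the paper's space-saving variant, the structural displacement) into `update` and applying the returned actuation closes the loop.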
Direct computation of stochastic flow in reservoirs with uncertain parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dainton, M.P.; Nichols, N.K.; Goldwater, M.H.
1997-01-15
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one-phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations and contrasts with the more usual Monte Carlo approach, where many such solutions are required. The procedure is applied easily to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data.
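The contrast with Monte Carlo can be made concrete with one line of first-order (perturbation) propagation: expand the output about the mean input and push the input variance through the sensitivity. The toy Darcy-like pressure model below is an assumption for illustration, not the reservoir equations.

```python
import numpy as np

# First-order (perturbation) propagation versus Monte Carlo for a toy
# Darcy-like pressure response p(k) = q*L/k (arbitrary units, illustrative).
q, L = 1.0, 1.0
k_mean, k_std = 1.0, 0.1              # permeability statistics (assumed)

p = lambda k: q * L / k
dpdk = -q * L / k_mean**2             # analytic sensitivity at the mean

p_mean_direct = p(k_mean)             # a single model solve, as in the paper
p_std_direct = abs(dpdk) * k_std      # first-order variance propagation

rng = np.random.default_rng(3)        # Monte Carlo reference: many solves
samples = p(rng.normal(k_mean, k_std, 200_000))
print(p_mean_direct, p_std_direct)    # 1.0, 0.1
print(samples.mean(), samples.std())  # ~1.01, ~0.10 (nonlinearity shows up)
```

The direct method trades the many Monte Carlo solves for one more elaborate solve plus sensitivity equations, at the cost of a first-order (small-perturbation) approximation.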
Modeling of transport phenomena in tokamak plasmas with neural networks
Meneghini, Orso; Luna, Christopher J.; Smith, Sterling P.; ...
2014-06-23
A new transport model that uses neural networks (NNs) to yield electron and ion heat flux profiles has been developed. Given a set of local dimensionless plasma parameters similar to the ones that the highest-fidelity models use, the NN model is able to efficiently and accurately predict the ion and electron heat transport profiles. As a benchmark, a NN was built, trained, and tested on data from the 2012 and 2013 DIII-D experimental campaigns. It is found that the NN can capture the experimental behavior over the majority of the plasma radius and across a broad range of plasma regimes. Although each radial location is calculated independently from the others, the heat flux profiles are smooth, suggesting that the solution found by the NN is a smooth function of the local input parameters. This result supports the evidence of a well-defined, non-stochastic relationship between the input parameters and the experimentally measured transport fluxes. Finally, the numerical efficiency of this method, requiring only a few CPU-μs per data point, makes it ideal for scenario development simulations and real-time plasma control.
NASA Astrophysics Data System (ADS)
Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi
2017-07-01
We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local to regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer for the absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations, such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documentation in a public repository.
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
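For reference, the conventional 3-2-1-1 input used for comparison is simply a sequence of alternating pulses lasting 3, 2, 1, and 1 time units; a minimal sketch with assumed amplitude and timing values follows.

```python
import numpy as np

def multistep_3211(amplitude, dt_unit, dt_sample=0.01):
    """Build a conventional 3-2-1-1 control-surface input: alternating
    pulses lasting 3, 2, 1 and 1 time units (a common comparison input)."""
    segments = []
    for units, sign in zip((3, 2, 1, 1), (1, -1, 1, -1)):
        n = int(round(units * dt_unit / dt_sample))
        segments.append(sign * amplitude * np.ones(n))
    return np.concatenate(segments)

u = multistep_3211(amplitude=2.0, dt_unit=0.5)   # deg, illustrative values
t = 0.01 * np.arange(u.size)                     # matching time vector, s
```

An optimal input, by contrast, is shaped (frequency content, amplitude, duration) to maximize the information about the target derivatives, which is what yielded the accuracy gains reported above.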
Sleeter, Rachel; Acevedo, William; Soulard, Christopher E.; Sleeter, Benjamin M.
2015-01-01
Spatially-explicit state-and-transition simulation models of land use and land cover (LULC) increase our ability to assess regional landscape characteristics and associated carbon dynamics across multiple scenarios. By characterizing appropriate spatial attributes such as forest age and land-use distribution, a state-and-transition model can more effectively simulate the pattern and spread of LULC changes. This manuscript describes the methods and input parameters of the Land Use and Carbon Scenario Simulator (LUCAS), a customized state-and-transition simulation model utilized to assess the relative impacts of LULC on carbon stocks for the conterminous U.S. The methods and input parameters are spatially explicit and describe initial conditions (strata, state classes and forest age), spatial multipliers, and carbon stock density. Initial conditions were derived from harmonization of multi-temporal data characterizing changes in land use as well as land cover. Harmonization combines numerous national-level datasets through a cell-based data fusion process to generate maps of primary LULC categories. Forest age was parameterized using data from the North American Carbon Program and spatially-explicit maps showing the locations of past disturbances (i.e. wildfire and harvest). Spatial multipliers were developed to spatially constrain the location of future LULC transitions. Based on distance-decay theory, maps were generated to guide the placement of changes related to forest harvest, agricultural intensification/extensification, and urbanization. We analyze the spatially-explicit input parameters with a sensitivity analysis, by showing how LUCAS responds to variations in the model input. This manuscript uses Mediterranean California as a regional subset to highlight local to regional aspects of land change, which demonstrates the utility of LUCAS at many scales and applications.
Chemical transport in a fissured rock: Verification of a numerical model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rasmuson, A.; Narasimhan, T. N.; Neretnieks, I.
1982-10-01
Numerical models for simulating chemical transport in fissured rocks constitute powerful tools for evaluating the acceptability of geological nuclear waste repositories. Due to the very long-term, high toxicity of some nuclear waste products, the models are required to predict, in certain cases, the spatial and temporal distribution of chemical concentrations at less than 0.001% of the concentration released from the repository. Whether numerical models can provide such accuracies is a major question addressed in the present work. To this end, we have verified a numerical model, TRUMP, which solves the advection-diffusion equation in general three dimensions, with or without decay and source terms. The method is based on an integrated finite-difference approach. The model was verified against the known analytic solution of the one-dimensional advection-diffusion problem as well as the problem of advection-diffusion in a system of parallel fractures separated by spherical particles. The studies show that as long as the magnitude of advectance is equal to or less than that of conductance for the closed surface bounding any volume element in the region (that is, numerical Peclet number < 2), the numerical method can indeed match the analytic solution within errors of ±10^-3% or less. The realistic input parameters used in the sample calculations suggest that such a range of Peclet numbers is indeed likely to characterize deep groundwater systems in granitic and ancient argillaceous systems. Thus TRUMP in its present form does provide a viable tool for use in nuclear waste evaluation studies. A sensitivity analysis based on the analytic solution suggests that the errors in prediction introduced by uncertainties in input parameters are likely to be larger than the computational inaccuracies introduced by the numerical model. Currently, a disadvantage of the TRUMP model is that the iterative method of solving the set of simultaneous equations is rather slow when time constants vary widely over the flow region. Although the iterative solution may be very desirable for large three-dimensional problems in order to minimize computer storage, it seems desirable to use a direct solver in conjunction with the mixed explicit-implicit approach whenever possible. Work in this direction is in progress.
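The one-dimensional analytic benchmark is the classical step-input advection-diffusion (Ogata-Banks) solution, and the advectance/conductance condition above is the grid Peclet criterion. A sketch with assumed parameter values:

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """Analytic 1-D advection-diffusion solution for a continuous step input,
    the kind of benchmark used to verify codes such as TRUMP."""
    a = np.sqrt(4.0 * D * t)
    return 0.5 * c0 * (erfc((x - v * t) / a)
                       + np.exp(v * x / D) * erfc((x + v * t) / a))

v, D, dx = 1.0e-7, 1.0e-9, 0.01        # m/s, m^2/s, m (illustrative values)
peclet = v * dx / D                    # grid Peclet number
assert peclet <= 2.0, "advectance exceeds conductance: refine the grid"
print(ogata_banks(x=0.5, t=5.0e6, v=v, D=D))
```

Comparing a numerical solution against this profile at several times, while keeping the grid Peclet number below 2, reproduces the type of verification exercise reported above.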
Development of a hydraulic model of the human systemic circulation
NASA Technical Reports Server (NTRS)
Sharp, M. K.; Dharmalingham, R. K.
1999-01-01
Physical and numerical models of the human circulation are constructed for a number of objectives, including studies and training in physiologic control, interpretation of clinical observations, and testing of prosthetic cardiovascular devices. For many of these purposes it is important to quantitatively validate the dynamic response of the models in terms of the input impedance (Z = oscillatory pressure/oscillatory flow). To address this need, the authors developed an improved physical model. Using a computer study, the authors first identified the configuration of lumped-parameter elements in a model of the systemic circulation that gave a good match with human aortic input impedance using a minimum number of elements. Design, construction, and testing of a hydraulic model analogous to the computer model followed. Numerical results showed that a three-element model with two resistors and one compliance produced reasonable matching without undue complication. The subsequent analogous hydraulic model included adjustable resistors incorporating a sliding plate to vary the flow area through a porous material, and an adjustable compliance consisting of a variable-volume air chamber. The response of the hydraulic model compared favorably with other circulation models.
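The identified three-element (windkessel) configuration has a closed-form input impedance, Z(ω) = Rc + Rp/(1 + jωRpC): a characteristic resistor in series with a parallel resistor-compliance pair. The sketch below uses illustrative systemic values, not the paper's fitted ones.

```python
import numpy as np

def windkessel3_impedance(freq_hz, Rc, Rp, C):
    """Input impedance of a three-element windkessel: characteristic
    resistor Rc in series with a parallel Rp-C pair."""
    w = 2.0 * np.pi * freq_hz
    return Rc + Rp / (1.0 + 1j * w * Rp * C)

# Illustrative human systemic values (mmHg.s/mL and mL/mmHg), assumed here.
f = np.linspace(0.01, 10.0, 200)
Z = windkessel3_impedance(f, Rc=0.05, Rp=1.0, C=1.5)
print(abs(Z[0]), abs(Z[-1]))   # ~Rc+Rp at low frequency, ~Rc at high frequency
```

Matching measured |Z| and phase against this expression over the heart-rate harmonics is the quantitative validation the paper performs on both the computer and hydraulic models.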
Poles of the Zagreb analysis partial-wave T matrices
NASA Astrophysics Data System (ADS)
Batinić, M.; Ceci, S.; Švarc, A.; Zauner, B.
2010-09-01
The Zagreb analysis partial-wave T matrices included in the Review of Particle Physics [by the Particle Data Group (PDG)] contain Breit-Wigner parameters only. As the advantages of pole over Breit-Wigner parameters in quantifying scattering-matrix resonant states are becoming indisputable, we supplement the original solution with the pole parameters. Because of an already reported numerical error in the S11 analytic continuation [Batinić et al., Phys. Rev. C 57, 1004(E) (1997); arXiv:nucl-th/9703023], we declare the old BATINIC 95 solution, presently included by the PDG, invalid. Instead, we offer two new solutions: (A) a corrected BATINIC 95 and (B) a new solution with an improved S11 πN elastic input. We endorse solution (B).
Robust fixed-time synchronization of delayed Cohen-Grossberg neural networks.
Wan, Ying; Cao, Jinde; Wen, Guanghui; Yu, Wenwu
2016-01-01
The fixed-time master-slave synchronization of Cohen-Grossberg neural networks with parameter uncertainties and time-varying delays is investigated. Compared with finite-time synchronization, where the convergence time relies on the initial synchronization errors, the settling time of fixed-time synchronization can be adjusted to desired values regardless of initial conditions. A novel synchronization control strategy for the slave neural network is proposed. By utilizing Filippov discontinuity theory and Lyapunov stability theory, sufficient schemes are provided for selecting the control parameters to ensure synchronization within the required convergence time and in the presence of parameter uncertainties. Corresponding criteria for tuning control inputs are also derived for finite-time synchronization. Finally, two numerical examples are given to illustrate the validity of the theoretical results.
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2012-12-01
In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches focused mostly on quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural, and calibration data errors. In this talk I will highlight some of our recent work involving the theory, concepts, and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) simulation will be presented, with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology, and geophysics.
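At the core of the MCMC machinery mentioned above sits the random-walk Metropolis update; a minimal single-parameter sketch on synthetic data (proposal scale and priors are assumed values):

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(2.0, 1.0, 50)          # synthetic observations

def log_posterior(theta):
    """Flat prior plus unit-variance Gaussian likelihood for a single mean."""
    return -0.5 * np.sum((data - theta) ** 2)

chain = np.empty(20_000)
theta, lp = 0.0, log_posterior(0.0)
for i in range(chain.size):
    prop = theta + 0.3 * rng.standard_normal()   # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[i] = theta
print(chain[5000:].mean(), chain[5000:].std())   # posterior mean and spread
```

The parallel and adaptive samplers discussed in the talk elaborate on exactly this accept/reject kernel, running many interacting chains across distributed hardware.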
Belkić, Dzevad
2006-12-21
This study deals with the most challenging numerical aspect of solving the quantification problem in magnetic resonance spectroscopy (MRS). The primary goal is to investigate whether it is feasible to carry out a rigorous computation within finite arithmetic to reconstruct exactly all the machine-accurate input spectral parameters of every resonance from a synthesized noiseless time signal. We also consider simulated time signals embedded in random Gaussian-distributed noise at a level comparable to the weakest resonances in the corresponding spectrum. The present choice for this high-resolution task in MRS is the fast Padé transform (FPT). All the sought spectral parameters (complex frequencies and amplitudes) can unequivocally be reconstructed from a given input time signal by using the FPT. Moreover, the present computations demonstrate that the FPT can achieve spectral convergence, which represents an exponential convergence rate as a function of the signal length for a fixed bandwidth. Such an extraordinary feature equips the FPT with exemplary high-resolution capabilities that are, in fact, theoretically unlimited. This is illustrated in the present study by the exact reconstruction (within machine accuracy) of all the spectral parameters from an input time signal comprised of 25 harmonics, i.e. complex damped exponentials, including those for tightly overlapped and nearly degenerate resonances whose chemical shifts differ by an exceedingly small fraction of only 10^-11 ppm. Moreover, without exhausting even a quarter of the full signal length, the FPT is shown to retrieve exactly all the input spectral parameters defined with 12 digits of accuracy. Specifically, we demonstrate that when the FPT is close to the convergence region, an unprecedented phase transition occurs, since literally a few additional signal points are sufficient to reach the full 12-digit accuracy with an exponentially fast rate of convergence. This is the critical proof-of-principle for the high-resolution power of the FPT for machine-accurate input data. Furthermore, it is proven that the FPT is also a highly reliable method for quantifying noise-corrupted time signals reminiscent of those encoded via MRS in clinical neuro-diagnostics.
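The harmonic-inversion task itself (recovering the complex frequencies and amplitudes of damped exponentials from a time signal) can be sketched with a generic Prony-type linear-prediction solve. This is a stand-in for illustration, not the FPT algorithm, and all signal parameters below are assumed; note that it, too, resolves two lines only 1 Hz apart in the noiseless case.

```python
import numpy as np

# Synthesize a noiseless signal of K damped complex exponentials, then
# recover the spectral parameters by linear prediction (a Prony-type sketch
# of the harmonic-inversion task the FPT solves; not the FPT itself).
K, N, dt = 3, 256, 1.0e-3
freqs = np.array([50.0, 120.0, 121.0])         # Hz; last two nearly degenerate
damps = np.array([5.0, 8.0, 6.0])              # 1/s
z_true = np.exp((1j * 2.0 * np.pi * freqs - damps) * dt)
d_true = np.array([1.0, 0.5, 0.7 + 0.2j])      # complex amplitudes
n = np.arange(N)
c = (d_true[None, :] * z_true[None, :] ** n[:, None]).sum(axis=1)

# Linear-prediction solve for the characteristic polynomial coefficients.
M = 6                                          # model order >= K
H = np.column_stack([c[i:N - M + i] for i in range(M)])
a = np.linalg.lstsq(H, -c[M:], rcond=None)[0]
z_est = np.roots(np.r_[1.0, a[::-1]])          # signal poles + spurious roots
z_est = z_est[np.abs(z_est) <= 1.0]            # keep physical (damped) poles

# Amplitudes by least squares; spurious poles get negligible amplitudes.
V = z_est[None, :] ** n[:, None]
d_est = np.linalg.lstsq(V, c, rcond=None)[0]
print(np.sort_complex(z_est[np.abs(d_est) > 1e-6]))   # recovered ~ z_true
```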
Bayesian analysis of input uncertainty in hydrological modeling: 2. Application
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Kuczera, George; Franks, Stewart W.
2006-03-01
The Bayesian total error analysis (BATEA) methodology directly addresses both input and output errors in hydrological modeling, requiring the modeler to make explicit, rather than implicit, assumptions about the likely extent of data uncertainty. This study considers a BATEA assessment of two North American catchments: (1) the French Broad River and (2) the Potomac basins. It assesses the performance of the conceptual Variable Infiltration Capacity (VIC) model with and without accounting for input (precipitation) uncertainty. The results show the considerable effects of precipitation errors on the predicted hydrographs (especially the prediction limits) and on the calibrated parameters. In addition, the performance of BATEA in the presence of severe model errors is analyzed. While BATEA allows a very direct treatment of input uncertainty and yields some limited insight into model errors, it requires the specification of valid error models, which are currently poorly understood and require further work. Moreover, it leads to computationally challenging, high-dimensional problems. For some types of models, including the VIC implemented using robust numerical methods, the computational cost of BATEA can be reduced using Newton-type methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Jing-Jy; Flood, Paul E.; LePoire, David
In this report, the results generated by RESRAD-RDD version 2.01 are compared with those produced by RESRAD-RDD version 1.7 for different scenarios with different sets of input parameters. RESRAD-RDD version 1.7 is spreadsheet-driven, performing calculations with Microsoft Excel spreadsheets. RESRAD-RDD version 2.01 revamped version 1.7 by using command-driven programs designed with Visual Basic.NET to direct calculations with data saved in a Microsoft Access database, and by re-facing the graphical user interface (GUI) to provide more flexibility and choices in guideline derivation. Because version 1.7 and version 2.01 perform the same calculations, the comparison of their results serves as verification of both versions. The verification covered calculation results for 11 radionuclides included in both versions: Am-241, Cf-252, Cm-244, Co-60, Cs-137, Ir-192, Po-210, Pu-238, Pu-239, Ra-226, and Sr-90. At first, all nuclide-specific data used in both versions were compared to ensure that they are identical. Then generic operational guidelines and measurement-based radiation doses or stay times associated with a specific operational guideline group were calculated with both versions using different sets of input parameters, and the results obtained with the same set of input parameters were compared. A total of 12 sets of input parameters were used for the verification, and the comparison was performed for each operational guideline group, from A to G, sequentially. The verification shows that RESRAD-RDD version 1.7 and RESRAD-RDD version 2.01 generate almost identical results; the slight differences could be attributed to differences in numerical precision between Microsoft Excel and Visual Basic.NET. RESRAD-RDD version 2.01 allows the selection of different units for use in reporting calculation results. The results in SI units were obtained and compared with the base results (in traditional units) used for comparison with version 1.7. The comparison shows that RESRAD-RDD version 2.01 correctly reports calculation results in the unit specified in the GUI.
Raj, Retheep; Sivanandan, K S
2017-01-01
Estimation of elbow dynamics has been the object of numerous investigations. In this work, a solution is proposed for estimating elbow movement velocity and elbow joint angle from surface electromyography (SEMG) signals acquired from the biceps brachii muscle of the human arm. Two time-domain parameters, Integrated EMG (IEMG) and Zero Crossing (ZC), are extracted from the SEMG signal. The relationships between these parameters and elbow angular displacement and elbow angular velocity during extension and flexion of the elbow are studied. A multiple-input multiple-output model is derived for identifying the kinematics of the elbow, and a Nonlinear Auto-Regressive with eXogenous inputs (NARX) structure based on a multilayer perceptron neural network (MLPNN) is proposed for the estimation of elbow joint angle and elbow angular velocity. The proposed NARX MLPNN model is trained using a Levenberg-Marquardt-based algorithm and estimates the elbow joint angle and elbow angular velocity with appreciable accuracy. The model is validated using the regression coefficient value (R): the average R obtained for elbow angular displacement prediction is 0.9641, and for elbow angular velocity prediction it is 0.9347. The NARX-MLPNN model can thus be used for the estimation of angular displacement and angular velocity of the elbow with good accuracy.
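The two time-domain features are simple to compute. A sketch of sliding-window IEMG and zero-crossing extraction over a toy signal follows; window length and deadband are assumed values, and the random signal stands in for recorded SEMG.

```python
import numpy as np

def iemg(window):
    """Integrated EMG: sum of absolute amplitudes over the window."""
    return np.sum(np.abs(window))

def zero_crossings(window, deadband=0.01):
    """Zero-crossing count with a small deadband to reject low-level noise."""
    s = window[np.abs(window) > deadband]
    return np.count_nonzero(np.signbit(s[:-1]) != np.signbit(s[1:]))

# Sliding-window feature extraction over a raw SEMG record (toy signal here);
# feature pairs of this kind feed the NARX-MLPNN inputs in the study.
fs, win = 1000, 200                       # Hz, samples per window (assumed)
emg = np.random.default_rng(5).normal(0.0, 0.1, 5 * fs)
features = np.array([[iemg(emg[i:i + win]), zero_crossings(emg[i:i + win])]
                     for i in range(0, emg.size - win, win)])
print(features.shape)                     # (windows, 2): IEMG and ZC per window
```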
Nonlinear interferometry approach to photonic sequential logic
NASA Astrophysics Data System (ADS)
Mabuchi, Hideo
2011-10-01
Motivated by rapidly advancing capabilities for extensive nanoscale patterning of optical materials, I propose an approach to implementing photonic sequential logic that exploits circuit-scale phase coherence for efficient realizations of fundamental components such as a NAND-gate-with-fanout and a bistable latch. Kerr-nonlinear optical resonators are utilized in combination with interference effects to drive the binary logic. Quantum-optical input-output models are characterized numerically using design parameters that yield attojoule-scale energy separation between the latch states.
1987-11-01
[Figure 5. Geoacoustic input parameters for the SAFARI model: compressional sound speeds Cp from roughly 1500 to 4300 m/s, with associated attenuations and densities (about 1.8-2.4) for the layered seabed, plotted against range (km).]
Taking the Measure of the Universe: Precision Astrometry with SIM Planetquest (Preprint)
2006-10-09
[Figure caption fragments: orbits of nearby galaxies and groups out to the distance of the Virgo Cluster, plotted in comoving coordinates; a single solution from a set of several solutions using present 3-D positions as inputs; the four massive objects (Virgo Cluster, Coma Group, CenA Group, and ...); orbits toward the Virgo Cluster from a Numerical Action Method calculation with parameters M/L = 90 for spirals and 155 for ellipticals, Ωm = 0.24, ΩΛ = 0.76.]
Computer program for flat sector thrust bearing performance
NASA Technical Reports Server (NTRS)
Presler, A. F.; Etsion, I.
1977-01-01
A versatile computer program is presented which achieves a rapid, numerical solution of the Reynolds equation for a flat sector thrust pad bearing with either compressible or liquid lubricants. Program input includes a range in values of the geometric and operating parameters of the sector bearing. Performance characteristics are obtained from the calculated bearing pressure distribution. These are the load capacity, center of pressure coordinates, frictional energy dissipation, and flow rates of liquid lubricant across the bearing edges. Two sample problems are described.
Critical current and linewidth reduction in spin-torque nano-oscillators by delayed self-injection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalsa, Guru, E-mail: guru.khalsa@nist.gov; Stiles, M. D.; Grollier, J.
2015-06-15
Based on theoretical models, the dynamics of spin-torque nano-oscillators can be substantially modified by re-injecting the emitted signal to the input of the oscillator after some delay. Numerical simulations for vortex magnetic tunnel junctions show that with reasonable parameters this approach can decrease critical currents as much as 25% and linewidths by a factor of 4. Analytical calculations, which agree well with simulations, demonstrate that these results can be generalized to any kind of spin-torque oscillator.
Three dimensional flow computations in a turbine scroll
NASA Technical Reports Server (NTRS)
Hamed, A.; Ghantous, C. A.
1982-01-01
The compressible three dimensional inviscid flow in the scroll and vaneless nozzle of radial inflow turbines is analyzed. A FORTRAN computer program for the numerical solution of this complex flow field using the finite element method is presented. The program input consists of the mass flow rate and stagnation conditions at the scroll inlet and of the finite element discretization parameters and nodal coordinates. The output includes the pressure, Mach number and velocity magnitude and direction at all the nodal points.
UQTk Version 3.0.3 User Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sargsyan, Khachik; Safta, Cosmin; Chowdhary, Kamaljit Singh
2017-05-01
The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.0.3 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.
Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.
Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin
2018-01-01
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.
Design and optimization of input shapers for liquid slosh suppression
NASA Astrophysics Data System (ADS)
Aboel-Hassan, Ameen; Arafa, Mustafa; Nassef, Ashraf
2009-02-01
The need for fast maneuvering and accurate positioning of flexible structures poses a control challenge. The inherent flexibility in these lightly damped systems creates large undesirable residual vibrations in response to rapid excitations. Several control approaches have been proposed to tackle this class of problems, of which the input shaping technique is appealing in many aspects. While input shaping has been widely investigated to attenuate residual vibrations in flexible structures, less attention was granted to expand its viability in further applications. The aim of this work is to develop a methodology for applying input shaping techniques to suppress sloshing effects in open moving containers to facilitate safe and fast point-to-point movements. The liquid behavior is modeled using finite element analysis. The input shaper parameters are optimized to find the commands that would result in minimum residual vibration. Other objectives, such as improved robustness, and motion constraints such as deflection limiting are also addressed in the optimization scheme. Numerical results are verified on an experimental setup consisting of a small motor-driven water tank undergoing rectilinear motion, while measuring both the tank motion and free surface displacement of the water. The results obtained suggest that input shaping is an effective method for liquid slosh suppression.
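The simplest member of the shaper family being optimized is the two-impulse zero-vibration (ZV) shaper, whose impulse amplitudes and times follow in closed form from the modal frequency and damping; the slosh-mode values below are illustrative, not the paper's.

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration (ZV) shaper for a mode with natural
    frequency wn (rad/s) and damping ratio zeta, the simplest member of
    the input-shaper family."""
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    wd = wn * np.sqrt(1.0 - zeta ** 2)          # damped natural frequency
    amplitudes = np.array([1.0, K]) / (1.0 + K)
    times = np.array([0.0, np.pi / wd])         # second impulse at half period
    return amplitudes, times

# Example: first slosh mode at 1.2 Hz with 2% damping (assumed values).
A, t = zv_shaper(wn=2.0 * np.pi * 1.2, zeta=0.02)
# Convolving the motion command with these impulses suppresses the
# residual slosh at the design frequency.
```

The paper's optimization generalizes this idea: impulse amplitudes and times become design variables tuned (against the finite element slosh model) for robustness and deflection limits rather than fixed by the closed-form ZV solution.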
NASA Astrophysics Data System (ADS)
Karandish, Fatemeh; Šimůnek, Jiří
2016-12-01
Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture since it provides information to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortages. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regressions (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growth degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWCs in the machine-learning models. Having Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for the SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next, with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to the non-linear changes of SWCs under the irrigation process. The results demonstrated that despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data. However, process-based numerical models are undoubtedly a better choice for predicting SWCs with lower uncertainties when required data are available, and thus for designing water saving strategies for agriculture and for other environmental applications requiring estimates of SWCs.
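A minimal sketch of one of the machine-learning configurations, an SVM regressor mapping the four crop/stress predictors (cGDD, Kc, WD, In) to SWC, evaluated by cross-validated RMSE; the data here are synthetic stand-ins for the TDR measurements:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical predictor matrix: columns cGDD, Kc, WD, In (one row per day);
# y is the soil water content. Real inputs would come from the field data.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = 20 + 5 * X[:, 1] - 3 * X[:, 2] + rng.normal(0, 0.5, 200)  # synthetic SWC

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
rmse = -cross_val_score(model, X, y, cv=5,
                        scoring="neg_root_mean_squared_error").mean()
print(f"cross-validated RMSE: {rmse:.2f} (same units as y)")
```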
Delta-Isobar Production in the Hard Photodisintegration of a Deuteron
NASA Astrophysics Data System (ADS)
Granados, Carlos; Sargsian, Misak
2010-02-01
Hard photodisintegration of the deuteron in Δ-isobar production channels is proposed as a useful process in identifying the quark structure of hadrons and of hadronic interactions at large momentum and energy transfer. The reactions are modeled using the hard rescattering model (HRM), following previous works on the hard breakup of a nucleon-nucleon (NN) system in light nuclei. Here, quantitative predictions through the HRM require the numerical input of fits to experimental NN hard elastic scattering cross sections. Because of the lack of data on hard NN scattering into Δ-isobar channels, the cross section of the corresponding photodisintegration processes cannot be predicted in the same way. Instead, the corresponding NN scattering process is modeled through the quark interchange mechanism (QIM), leaving an unknown normalization parameter. The observables of interest are ratios of differential cross sections of Δ-isobar production channels to NN breakup in deuteron photodisintegration. Both entries in these ratios are derived through the HRM and QIM so that normalization parameters cancel out and numerical predictions can be obtained.
Tidally induced residual current over the Malin Sea continental slope
NASA Astrophysics Data System (ADS)
Stashchuk, Nataliya; Vlasenko, Vasiliy; Hosegood, Phil; Nimmo-Smith, W. Alex M.
2017-05-01
Tidally induced residual currents generated over shelf-slope topography are investigated analytically and numerically using the Massachusetts Institute of Technology general circulation model. Observational support for the presence of such a slope current was recorded over the Malin Sea continental slope during the 88th cruise of the RRS James Cook in July 2013. A simple analytical formula developed here in the framework of time-averaged shallow water equations has been validated against a fully nonlinear nonhydrostatic numerical solution. A good agreement between analytical and numerical solutions is found for a wide range of input parameters of the tidal flow and bottom topography. In application to the Malin Shelf area, both the numerical model and the analytical solution predicted a northward moving current confined to the slope, with its core located above the 400 m isobath and with vertically averaged maximum velocities up to 8 cm s⁻¹, which is consistent with the in-situ data recorded at three moorings and along cross-slope transects.
Numerical investigation of the staged gasification of wet wood
NASA Astrophysics Data System (ADS)
Donskoi, I. G.; Kozlov, A. N.; Svishchev, D. A.; Shamanskii, V. A.
2017-04-01
Gasification of wooden biomass makes it possible to utilize forestry wastes and agricultural residues for generation of heat and power in isolated small-scale power systems. In spite of the availability of a huge amount of cheap biomass, the implementation of the gasification process is impeded by the formation of tar products and the poor thermal stability of the process. These factors reduce the competitiveness of gasification as compared with alternative technologies. The use of staged technologies enables certain disadvantages of conventional processes to be avoided. One of the previously proposed staged processes is investigated in this paper. For this purpose, mathematical models were developed for individual stages of the process, such as pyrolysis, pyrolysis gas combustion, and semicoke gasification. The effect of controlling parameters on the efficiency of fuel conversion into combustible gases is studied numerically using these models. The selected controlling parameters are the heat input into the pyrolysis reactor, the oxidizer excess during gas combustion, and the wood moisture content. The process efficiency criterion is the gasification chemical efficiency accounting for the input of external heat (used for fuel drying and pyrolysis). The generated regime diagrams represent the gasification efficiency as a function of the controlling parameters. Modeling results demonstrate that an increase in the fraction of heat supplied from an external source can result in an adequate efficiency of wood gasification through the use of steam generated during drying. There are regions where it is feasible to perform incomplete combustion of the pyrolysis gas prior to gasification. The calculated chemical efficiency of the staged gasification is as high as 80-85%, which is 10-20% higher than in conventional single-stage processes.
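A hedged reading of the efficiency criterion described above (not the authors' exact formula) is a cold-gas efficiency that charges the external heat to the input side:

```python
def chemical_efficiency(gas_yield, lhv_gas, lhv_fuel, q_external):
    """Cold-gas (chemical) efficiency with external heat charged to the
    denominator; a sketch of the criterion described above, with assumed
    units:
      gas_yield  - produced gas per kg fuel [Nm3/kg]
      lhv_gas    - lower heating value of the gas [MJ/Nm3]
      lhv_fuel   - lower heating value of the wet fuel [MJ/kg]
      q_external - external heat input per kg fuel [MJ/kg]
    """
    return gas_yield * lhv_gas / (lhv_fuel + q_external)

# e.g. 2.2 Nm3/kg of 5.5 MJ/Nm3 gas from 12 MJ/kg wood plus 1.5 MJ/kg heat
print(f"{chemical_efficiency(2.2, 5.5, 12.0, 1.5):.2f}")  # -> 0.90
```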
Optimization of porthole die geometrical variables by Taguchi method
NASA Astrophysics Data System (ADS)
Gagliardi, F.; Ciancio, C.; Ambrogio, G.; Filice, L.
2017-10-01
Porthole die extrusion is commonly used to manufacture hollow profiles made of lightweight alloys for numerous industrial applications. The reliability of extruded parts is strongly affected by the quality of the longitudinal and transversal seam welds. Accordingly, the die geometry must be designed correctly and the process parameters must be selected properly to achieve the desired product quality. In this study, numerical 3D simulations have been created and run to investigate the role of various geometrical variables on punch load and maximum pressure inside the welding chamber. These are important outputs to take into account, affecting, respectively, the necessary capacity of the extrusion press and the quality of the welding lines. The Taguchi technique has been used to reduce the number of numerical simulations required to consider the influence of twelve different geometric variables. Moreover, analysis of variance (ANOVA) has been implemented to individually analyze the effect of each input parameter on the two responses. The methodology has then been utilized to determine the optimal process configuration, individually optimizing the two investigated process outputs. Finally, the responses at the optimized parameter settings have been verified through finite element simulations, which closely matched the predicted values. This study shows the feasibility of the Taguchi technique for predicting performance and optimization, and therefore for improving the design of a porthole extrusion process.
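A sketch of the Taguchi-style main-effects analysis: a two-level design matrix, responses that would come from the finite element runs (synthetic here), and a per-variable effect estimate whose magnitude ranks the variables, ANOVA-fashion. Four hypothetical variables stand in for the twelve geometric variables of the study:

```python
import numpy as np
from itertools import product

# A full factorial over 4 hypothetical die variables stands in for the
# orthogonal array of the 12-variable study.
rng = np.random.default_rng(0)
levels = np.array(list(product([0, 1], repeat=4)))     # design matrix
response = (2.0 * levels[:, 0] - 1.2 * levels[:, 2]    # synthetic punch load
            + rng.normal(0, 0.1, len(levels)))

for j in range(levels.shape[1]):
    lo = response[levels[:, j] == 0].mean()
    hi = response[levels[:, j] == 1].mean()
    print(f"variable {j}: main effect = {hi - lo:+.2f}")
# ANOVA-style ranking: larger |effect| -> larger contribution to variance.
```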
NASA Astrophysics Data System (ADS)
Ma, Junhai; Li, Ting; Ren, Wenbo
2017-06-01
This paper examines the optimal decisions of a dual-channel game model considering the inputs of retailing service. We analyze how the adjustment speed of service inputs affects the system complexity and market performance, and explore the stability of the equilibrium points by parameter basin diagrams. Chaos control is realized by the variable feedback method. The numerical simulation shows that complex behavior, such as period-doubling bifurcation and chaos, would trigger the system to become unstable. We measure the performance of the model in different periods by analyzing the variation of an average profit index. The theoretical results show that the percentage share of the demand and the cross-service coefficients have an important influence on the stability of the system and its feasible basin of attraction.
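A generic gradient-adjustment duopoly map (hypothetical linear demand and cost, not the paper's model) reproduces the qualitative finding: raising the adjustment speed v drives the fixed point through period doubling toward chaos:

```python
import numpy as np

a, b, c = 10.0, 1.0, 2.0                   # assumed demand/cost coefficients

def step(q1, q2, v):
    """One round of gradient adjustment: each player moves its decision
    variable in proportion to its marginal profit."""
    q1n = q1 + v * q1 * (a - c - 2 * b * q1 - b * q2)
    q2n = q2 + v * q2 * (a - c - 2 * b * q2 - b * q1)
    return q1n, q2n

for v in (0.20, 0.28, 0.34):               # increasing adjustment speed
    q1 = q2 = 1.0
    for _ in range(500):                   # discard the transient
        q1, q2 = step(q1, q2, v)
    orbit = set()
    for _ in range(16):
        q1, q2 = step(q1, q2, v)
        orbit.add(round(q1, 6))
    print(f"v={v}: {len(orbit)} distinct late-orbit values")
# Expected: 1 value (stable), 2 (period-2), many (chaotic band).
```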
Dual control and prevention of the turn-off phenomenon in a class of MIMO systems
NASA Technical Reports Server (NTRS)
Mookerjee, P.; Bar-Shalom, Y.; Molusis, J. A.
1985-01-01
A recently developed methodology of adaptive dual control based upon sensitivity functions is applied here to a multivariable input-output model. The plant has constant but unknown parameters. It represents a simplified linear version of the relationship between the vibration output and the higher harmonic control input for a helicopter. The cautious and the new dual controller are examined. In many instances, the cautious controller is seen to turn off. The new dual controller modifies the cautious control design by numerator and denominator correction terms which depend upon the sensitivity functions of the expected future cost, and it avoids the turn-off and burst phenomena. Monte Carlo simulations and statistical tests of significance indicate the superiority of the dual controller over the cautious and the heuristic certainty-equivalence controllers.
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
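A plain adaptive Metropolis sampler conveys the core of the MCMC machinery referenced above; DRAM adds delayed rejection and DREAM uses differential-evolution proposals, neither of which is sketched here. The target is a stand-in two-parameter Gaussian posterior:

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=20000, adapt_start=1000):
    """Adaptive Metropolis (Haario et al. style): the proposal covariance
    is tuned from the chain history. A minimal stand-in for DRAM/DREAM."""
    d = len(x0)
    sd = 2.4**2 / d                           # standard scaling factor
    chain = np.empty((n_iter, d))
    x, lp = np.asarray(x0, float), log_post(x0)
    cov = np.eye(d) * 1e-2                    # initial proposal covariance
    rng = np.random.default_rng(0)
    for i in range(n_iter):
        if i > adapt_start:                   # adapt from past samples
            cov = sd * np.cov(chain[:i].T) + sd * 1e-8 * np.eye(d)
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop             # accept
        chain[i] = x
    return chain

# e.g. a correlated Gaussian posterior as a stand-in for the heat/HIV models
chain = adaptive_metropolis(
    lambda t: -0.5 * (t[0]**2 + (t[1] - t[0])**2), x0=[0.0, 0.0])
print(chain[2000:].mean(axis=0), chain[2000:].std(axis=0))
```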
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
NASA Astrophysics Data System (ADS)
Haas, Edwin; Santabarbara, Ignacio; Kiese, Ralf; Butterbach-Bahl, Klaus
2017-04-01
Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional / national scales and are outlined as the most advanced methodology (Tier 3) in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycles of terrestrial ecosystems and are thus thought to be widely applicable at various conditions and spatial scales. Process-based modelling requires high spatial resolution input data on soil properties, climate drivers and management information. The acceptance of model-based inventory calculations depends on the assessment of the inventory's uncertainty (model-, input data- and parameter-induced uncertainties). In this study we fully quantify the uncertainty in modelling soil N2O and NO emissions from arable, grassland and forest soils using the biogeochemical model LandscapeDNDC. We address model-induced uncertainty (MU) by contrasting two different soil biogeochemistry modules within LandscapeDNDC. The parameter-induced uncertainty (PU) was assessed by using joint parameter distributions for key parameters describing microbial C and N turnover processes as obtained by different Bayesian calibration studies for each model configuration. Input data-induced uncertainty (DU) was addressed by Bayesian calibration of soil properties, climate drivers and agricultural management practices data. For the MU, DU and PU we performed several hundred simulations each to contribute to the individual uncertainty assessment. For the overall uncertainty quantification we assessed the model prediction probability, based on sampled sets of input data and parameter distributions. Statistical analysis of the simulation results has been used to quantify the overall full uncertainty of the modelling approach. With this study we can contrast the variation in model results with the different sources of uncertainty for each ecosystem. Furthermore, we have been able to perform a full uncertainty analysis for modelling N2O and NO emissions from arable, grassland and forest soils, which is necessary for the comprehensibility of modelling results. We have applied the methodology to a regional inventory to assess the overall modelling uncertainty for a regional N2O and NO emissions inventory for the state of Saxony, Germany.
Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA
Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.
2018-01-01
Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3−) input functions by characterizing unsaturated zone NO3− transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous “vertical flux method” (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3− source concentration factor (which determines the local NO3− input concentration); unsaturated zone travel time; NO3− concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3− “extinction depth”, the eventual steady state depth of the NO3− front. The final metamodels were trained to 129 wells within the active numerical flow model area, and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R² values of 0.52–0.86 and 0.22–0.38, respectively, and predictions were compiled as maps of the above response variables. Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g., with greater source concentrations and NO3− at the groundwater table in areas of intensive crop use and well drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3− extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3− transport processes in a statistical framework based on readily mappable GIS input variables.
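A minimal BRT metamodel sketch in the same spirit, with synthetic stand-ins for the GIS predictors and a VFM response variable:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Boosted-regression-tree metamodel: mappable predictors in X, one VFM
# response (e.g. unsaturated-zone travel time) in y. Data are synthetic.
rng = np.random.default_rng(0)
X = rng.random((129, 10))                     # 129 training wells (as above)
y = 5 * X[:, 0] + np.sin(3 * X[:, 1]) + rng.normal(0, 0.2, 129)

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.02,
                                max_depth=3, subsample=0.7, random_state=0)
print("CV R^2:", cross_val_score(brt, X, y, cv=5, scoring="r2").mean())
brt.fit(X, y)
print("top predictors:", np.argsort(brt.feature_importances_)[::-1][:3])
```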
[Application of numerical convolution in in vivo/in vitro correlation research].
Yue, Peng
2009-01-01
This paper introduces the conception and principle of in vivo/in vitro correlation (IVIVC) and convolution/deconvolution methods, and elucidates in detail the convolution strategy and method for calculating the in vivo absorption performance of pharmaceutics from their pharmacokinetic data in Excel, then applies the results to IVIVC research. Firstly, the pharmacokinetic data were fitted by mathematical software to fill in the missing points. Secondly, the parameters of the optimal fitted input function were determined by a trial-and-error method according to the convolution principle in Excel, under the hypothesis that all the input functions follow Weibull functions. Finally, the IVIVC between the in vivo input function and the in vitro dissolution was studied. In the examples, the application of this method was demonstrated in detail, and its simplicity and effectiveness were shown by comparison with the compartment model method and the deconvolution method. It proved to be a powerful tool for IVIVC research.
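The convolution strategy itself is compact: a Weibull cumulative input gives an in vivo absorption rate, which is convolved with a unit impulse response to predict plasma concentrations. A sketch with hypothetical one-compartment parameters (the paper carries out the equivalent computation in Excel):

```python
import numpy as np

dt = 0.05                                   # time step [h]
t = np.arange(0, 24, dt)
Td, beta = 2.0, 1.5                         # Weibull scale/shape (assumed)
F = 1.0 - np.exp(-(t / Td) ** beta)         # cumulative fraction absorbed
rate = np.gradient(F, dt)                   # in vivo input rate [1/h]
ke, V = 0.25, 30.0                          # elimination, volume (assumed)
uir = np.exp(-ke * t) / V                   # unit impulse response
conc = np.convolve(rate, uir)[: len(t)] * dt
print(f"Cmax = {conc.max():.4f} (dose-normalized), "
      f"Tmax = {t[conc.argmax()]:.1f} h")
```

In the trial-and-error step, Td and beta would be adjusted until the convolved profile matches the observed concentrations; the fitted F(t) is then correlated with the in vitro dissolution curve.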
Spatiotemporal Airy Ince-Gaussian wave packets in strongly nonlocal nonlinear media.
Peng, Xi; Zhuang, Jingli; Peng, Yulian; Li, DongDong; Zhang, Liping; Chen, Xingyu; Zhao, Fang; Deng, Dongmei
2018-03-08
The self-accelerating Airy Ince-Gaussian (AiIG) and Airy helical Ince-Gaussian (AihIG) wave packets in strongly nonlocal nonlinear media (SNNM) are obtained by solving the strongly nonlocal nonlinear Schrödinger equation. For the first time, the propagation properties of three-dimensional localized AiIG and AihIG breathers and solitons in the SNNM are demonstrated; these spatiotemporal wave packets maintain their self-accelerating and approximately non-dispersive properties in the temporal dimension, while periodically oscillating (breather state) or remaining steady (soliton state) in the spatial dimension. In particular, numerical experiments on the spatial intensity distribution, numerical simulations of the spatiotemporal distribution, as well as the transverse energy flow and the angular momentum in the SNNM are presented. Typical examples of the obtained solutions are based on the ratio between the input power and the critical power, the ellipticity, and the strong nonlocality parameter. The comparisons of analytical solutions with numerical simulations and numerical experiments of the AiIG and AihIG optical solitons show that the numerical results agree well with the analytical solutions in the case of strong nonlocality.
Assessing mental stress from the photoplethysmogram: a numerical study
Charlton, Peter H; Celka, Patrick; Farukh, Bushra; Chowienczyk, Phil; Alastruey, Jordi
2018-01-01
Objective: Mental stress is detrimental to cardiovascular health, being a risk factor for coronary heart disease and a trigger for cardiac events. However, it is not currently routinely assessed. The aim of this study was to identify features of the photoplethysmogram (PPG) pulse wave which are indicative of mental stress. Approach: A numerical model of pulse wave propagation was used to simulate blood pressure signals, from which simulated PPG pulse waves were estimated using a transfer function. Pulse waves were simulated at six levels of stress by changing the model input parameters both simultaneously and individually, in accordance with haemodynamic changes associated with stress. Thirty-two feature measurements were extracted from pulse waves at three measurement sites: the brachial, radial and temporal arteries. Features which changed significantly with stress were identified using the Mann–Kendall monotonic trend test. Main results: Seventeen features exhibited significant trends with stress in measurements from at least one site. Three features showed significant trends at all three sites: the time from pulse onset to peak, the time from the dicrotic notch to pulse end, and the pulse rate. More features showed significant trends at the radial artery (15) than the brachial (8) or temporal (7) arteries. Most features were influenced by multiple input parameters. Significance: The features identified in this study could be used to monitor stress in healthcare and consumer devices. Measurements at the radial artery may provide better performance than those at the brachial or temporal arteries. In vivo studies are required to confirm these observations. PMID:29658894
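For reference, a minimal implementation of the Mann–Kendall monotonic trend test (without tie correction) as it would be applied to a feature measured across the six stress levels:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic and a two-sided
    p-value from the normal approximation (no tie correction)."""
    x = np.asarray(x, float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, 2 * (1 - norm.cdf(abs(z)))

# e.g. a hypothetical feature value at each of the six stress levels
feature = [0.31, 0.30, 0.28, 0.27, 0.25, 0.24]
print(mann_kendall(feature))              # strongly decreasing trend
```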
Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...
2014-01-01
This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameter sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution's sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.
Sustainability of transport structures - some aspects of the nonlinear reliability assessment
NASA Astrophysics Data System (ADS)
Pukl, Radomír; Sajdlová, Tereza; Strauss, Alfred; Lehký, David; Novák, Drahomír
2017-09-01
Efficient techniques for both nonlinear numerical analysis of concrete structures and advanced stochastic simulation have been combined to offer an advanced tool for the realistic assessment of the behaviour, failure and safety of transport structures. The utilized approach is based on randomization of the nonlinear finite element analysis of the structural models. Degradation aspects such as carbonation of concrete can be accounted for in order to predict the durability of the investigated structure and its sustainability. Results can serve as a rational basis for performance and sustainability assessment based on advanced nonlinear computer analysis of structures of transport infrastructure such as bridges or tunnels. In the stochastic simulation, the input material parameters obtained from material tests, including their randomness and uncertainty, are represented as random variables or fields. Appropriate identification of material parameters is crucial for the virtual failure modelling of structures and structural elements. An inverse analysis approach using artificial neural networks and virtual stochastic simulations is applied to determine the fracture-mechanical parameters of the structural material and its numerical model. Structural response, reliability and sustainability have been investigated on different types of transport structures made from various materials using the above-mentioned methodology and tools.
Uncertainty Quantification and Assessment of CO2 Leakage in Groundwater Aquifers
NASA Astrophysics Data System (ADS)
Carroll, S.; Mansoor, K.; Sun, Y.; Jones, E.
2011-12-01
Complexity of subsurface aquifers and the geochemical reactions that control drinking water compositions complicate our ability to estimate the impact of leaking CO2 on groundwater quality. We combined lithologic field data from the High Plains Aquifer, numerical simulations, and uncertainty quantification analysis to assess the role of aquifer heterogeneity and physical transport on the extent of the CO2-impacted plume over a 100-year period. The High Plains aquifer is a major aquifer over much of the central United States where CO2 may be sequestered in depleted oil and gas reservoirs or deep saline formations. Input parameters considered included aquifer heterogeneity, permeability, porosity, regional groundwater flow, CO2 and TDS leakage rates over time, and the number of leakage source points. Sensitivity analyses suggest that variations in sand and clay permeability, correlation lengths, van Genuchten parameters, and CO2 leakage rate have the greatest impact on the impacted volume or the maximum distance from the leak source. A key finding is that the relative sensitivity of the parameters changes over the 100-year period. Reduced order models developed from regression of the numerical simulations show that the volume of the CO2-impacted aquifer increases over time with two orders of magnitude of variance.
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther
2015-01-01
Several key capabilities have been identified by the aerospace community as lacking in the material models for composite materials currently available within commercial transient dynamic finite element codes such as LS-DYNA. Some of the specific desired features that have been identified include the incorporation of both plasticity and damage within the material model, the capability of using the material model to analyze the response of both three-dimensional solid elements and two-dimensional shell elements, and the ability to simulate the response of composites with a variety of architectures, including laminates, weaves and braids. In addition, a need has been expressed to have a material model that utilizes tabulated, experimentally based input to define the evolution of plasticity and damage, as opposed to utilizing discrete input parameters (such as modulus and strength) and analytical functions based on curve fitting. To begin to address these needs, an orthotropic macroscopic plasticity based model suitable for implementation within LS-DYNA has been developed. Specifically, the Tsai-Wu composite failure model has been generalized and extended to a strain-hardening based orthotropic plasticity model with a non-associative flow rule. The coefficients in the yield function are determined based on tabulated stress-strain curves in the various normal and shear directions, along with selected off-axis curves. Incorporating rate dependence into the yield function is achieved by using a series of tabulated input curves, each at a different constant strain rate. The non-associative flow rule is used to compute the evolution of the effective plastic strain. Systematic procedures have been developed to determine the values of the various coefficients in the yield function and the flow rule based on the tabulated input data. An algorithm based on the radial return method has been developed to facilitate the numerical implementation of the material model. This paper presents in detail the development of the orthotropic plasticity model and the procedures used to obtain the required material parameters. Methods in which actual testing and selective numerical testing can be combined to yield the appropriate input data for the model are described. A specific laminated polymer matrix composite is examined to demonstrate the application of the model.
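For orientation, a textbook radial-return update for 1D rate-independent plasticity with linear isotropic hardening; the actual model is orthotropic, uses tabulated hardening curves and a non-associative flow rule, so this is only the structural skeleton of the algorithm:

```python
import numpy as np

def radial_return_1d(strain_path, E=70e3, sy0=300.0, H=1e3):
    """Return-mapping update for 1D plasticity with linear isotropic
    hardening (units MPa). Elastic predictor, then plastic corrector
    enforcing consistency with the yield surface."""
    eps_p, alpha, out = 0.0, 0.0, []
    for eps in strain_path:
        sig = E * (eps - eps_p)                       # elastic predictor
        f = abs(sig) - (sy0 + H * alpha)              # yield function
        if f > 0.0:                                   # plastic corrector
            dgam = f / (E + H)                        # consistency condition
            eps_p += dgam * np.sign(sig)
            alpha += dgam                             # effective plastic strain
            sig -= E * dgam * np.sign(sig)            # return to yield surface
        out.append(sig)
    return np.array(out)

stress = radial_return_1d(np.linspace(0.0, 0.01, 200))
print(f"final stress: {stress[-1]:.1f} MPa")   # follows the hardening branch
```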
NASA Astrophysics Data System (ADS)
Mastrolorenzo, G.; Pappalardo, L.; Troise, C.; Panizza, A.; de Natale, G.
2005-05-01
Integrated volcanological-probabilistic approaches have been used to simulate pyroclastic density currents and fallout and to produce hazard maps for the Campi Flegrei and Somma Vesuvius areas. On the basis of analyses of all types of pyroclastic flows, surges, secondary pyroclastic density currents and fallout events that occurred in the volcanological history of the two volcanic areas, and the evaluation of the probability of each type of event, matrices of input parameters for numerical simulation have been compiled. The multi-dimensional input matrices include the main controlling parameters of pyroclast transport, deposition and dispersion, as well as the set of possible eruptive vents used in the simulation program. Probabilistic hazard maps provide, for each point of the Campanian area, the yearly probability of being affected by a given event of a given intensity and the resulting damage. Probabilities of a few events per thousand years are typical of most areas within a range of ca. 10 km around the volcanoes, including Naples. Results provide constraints for the emergency plans in the Neapolitan area.
NASA Astrophysics Data System (ADS)
Constantine, P. G.; Emory, M.; Larsson, J.; Iaccarino, G.
2015-12-01
We present a computational analysis of the reactive flow in a hypersonic scramjet engine with focus on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace identifies one-dimensional structure in the map from simulation inputs to quantity of interest that allows us to reparameterize the operating conditions; instead of seven physical parameters, we can use a single derived active variable. This dimension reduction enables otherwise infeasible uncertainty quantification, considering the simulation cost of roughly 9500 CPU-hours per run. For two values of the fuel injection rate, we use a total of 68 simulations to (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) estimate upper and lower bounds on the quantity of interest, (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest, and (iv) estimate a cumulative distribution function for the quantity of interest.
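A minimal active-subspace sketch: sample gradients of the quantity of interest, form their average outer product, and read the dominant direction from an SVD. A toy ridge function stands in for the scramjet simulations:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 7, 68                                  # 7 inputs, 68 samples (as above)
w_true = rng.standard_normal(m)               # hidden dominant direction

def f(x):                                     # toy quantity of interest
    return np.sin(w_true @ x)

def grad(x, h=1e-6):                          # finite-difference gradient
    e = np.eye(m)
    return np.array([(f(x + h * e[i]) - f(x - h * e[i])) / (2 * h)
                     for i in range(m)])

G = np.array([grad(x) for x in rng.uniform(-1, 1, (n, m))])
_, s, Vt = np.linalg.svd(G / np.sqrt(n))      # C = E[grad grad^T] = V S^2 V^T
print("eigenvalue decay:", np.round(s**2, 3))
print("alignment of active direction with w:",
      abs(Vt[0] @ w_true / np.linalg.norm(w_true)).round(3))
```

A sharp drop after the first eigenvalue is what licenses replacing the seven physical parameters with a single derived active variable.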
iGeoT v1.0: Automatic Parameter Estimation for Multicomponent Geothermometry, User's Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spycher, Nicolas; Finsterle, Stefan
GeoT implements the multicomponent geothermometry method developed by Reed and Spycher [1984] into a stand-alone computer program to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions, and geothermometry computations, are all implemented into the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss. This manual contains installation instructions for iGeoT, and briefly describes the input formats needed to run iGeoT in Automatic or Expert Mode. An example is also provided to demonstrate the use of iGeoT.
A Numerical Estimate of the Impact of the Saharan Dust on the Mediterranean Trophic Web
NASA Astrophysics Data System (ADS)
Crise, A.; Crispi, G.
A first estimate of the importance of Saharan dust as an input of macronutrients to the phytoplankton standing crop concentration and primary production at basin scale is presented here using a three-dimensional numerical model of the Mediterranean Sea. The numerical scheme adopted is a 1/4 degree resolution, 31 level, MOM-based eco-hydrodynamical model with climatological ('perpetual year') forcings, coupled on-line with a structure including multi-nutrient, size-fractionated phytoplankton functional groups, herbivores and a parametrized recycling detritus submodel, so as to (explicitly or implicitly) include the major energy pathways of the upper-layer Mediterranean ecosystem. This model takes into account as potential limiting factors, among others, nitrogen (in its oxidized and reduced forms) and phosphorus. A gridded data set of (wet and dry) dust deposition over the Mediterranean derived from the SKIRON operational model is used to identify statistically the areas and the duration/intensity of the events. Starting from this averaging process, experiments are carried out to study the dust-induced episodes of release of bioavailable phosphorus, which is supposed to be the limiting factor in the oligotrophic surface waters of the Mediterranean Sea. The metrics for the evaluation of the impact of deposition have been identified as the phytoplankton standing crop, primary and export production, and switching in the food web functioning. These global parameters, even if they cannot exhaust the wealth of information provided by the model, can help discriminate the sensitivity of the food web to the nutrient pulses induced by the deposition. First results of a scenario analysis of typical atmospheric input events provide evidence of the response of the upper-layer ecosystem and assess the sensitivity of the model predictions to the integrated intensity of the external input.
NASA Astrophysics Data System (ADS)
Shaw, Jeremy A.; Daescu, Dacian N.
2017-08-01
This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.
Parameter reduction in nonlinear state-space identification of hysteresis
NASA Astrophysics Data System (ADS)
Fakhrizadeh Esfahani, Alireza; Dreesen, Philippe; Tiels, Koen; Noël, Jean-Philippe; Schoukens, Johan
2018-05-01
Recent work on black-box polynomial nonlinear state-space modeling for hysteresis identification has provided promising results, but struggles with a large number of parameters due to the use of multivariate polynomials. This drawback is tackled in the current paper by applying a decoupling approach that results in a more parsimonious representation involving univariate polynomials. This work is carried out numerically on input-output data generated by a Bouc-Wen hysteretic model and follows up on earlier work of the authors. The current article discusses the polynomial decoupling approach and explores the selection of the number of univariate polynomials together with the polynomial degree. We have found that the presented decoupling approach is able to reduce the number of parameters of the full nonlinear model by up to about 50%, while maintaining a comparable output error level.
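For context, the Bouc-Wen model that generates the identification data has the standard form below (hypothetical coefficients, explicit Euler integration):

```python
import numpy as np

# m x'' + c x' + k x + z = u(t),
# z' = alpha x' - beta |x'| |z|^(nu-1) z - gamma x' |z|^nu
m, c, k = 1.0, 0.6, 50.0                       # assumed mechanical parameters
alpha, beta, gamma, nu = 40.0, 10.0, 8.0, 1.0  # assumed hysteresis parameters
dt, T = 1e-3, 10.0
t = np.arange(0.0, T, dt)
u = 5.0 * np.sin(2 * np.pi * 1.0 * t)          # excitation input
x = v = z = 0.0
xs = np.empty(len(t))
for i in range(len(t)):
    zdot = (alpha * v - beta * abs(v) * abs(z) ** (nu - 1) * z
            - gamma * v * abs(z) ** nu)
    a = (u[i] - c * v - k * x - z) / m
    x, v, z = x + dt * v, v + dt * a, z + dt * zdot
    xs[i] = x
print(f"displacement range: [{xs.min():.4f}, {xs.max():.4f}]")
```

The internal state z is what the black-box state-space model must capture implicitly, which is why multivariate polynomial terms (and hence many parameters) appear in the full model before decoupling.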
NASA Astrophysics Data System (ADS)
Majd, Nayereh; Ghasemi, Zahra
2016-10-01
We have investigated a TPTQ state as an input state of non-ideal ferromagnetic detectors. The minimal spin polarization required to demonstrate spin entanglement according to the entanglement witness and the CHSH inequality with respect to (w.r.t.) their two free parameters has been found, and we have numerically shown that the entanglement witness is less stringent than direct tests of Bell's inequality in the form of CHSH in the entangled limits of its free parameters. In addition, the lower limits of spin detection efficiency fulfilling a secure cryptographic key against eavesdropping have been derived. Finally, we have considered the TPTQ state as an output of a spin decoherence channel, and the region of ballistic transmission time w.r.t. spin relaxation time and spin dephasing time has been found.
NASA Astrophysics Data System (ADS)
Doummar, Joanna; Kassem, Assaad
2017-04-01
In the framework of a three-year PEER (USAID/NSF) funded project, flow in a karst system in Lebanon (Assal) dominated by snow and semi-arid conditions was simulated and successfully calibrated using an integrated numerical model (MIKE-She 2016) based on high resolution input data and detailed catchment characterization. Point source infiltration and fast flow pathways were simulated by a bypass function and a highly conductive lens, respectively. The approach consisted of identifying all the factors used in qualitative vulnerability methods (COP, EPIK, PI, DRASTIC, GOD) applied in karst systems and assessing their influence on recharge signals in the different hydrological karst compartments (atmosphere, unsaturated zone and saturated zone) based on the integrated numerical model. These parameters are usually attributed different weights according to their estimated impact on groundwater vulnerability. The aim of this work is to quantify the importance of each of these parameters and to outline parameters that are not accounted for in standard methods, but that might play a role in the vulnerability of a system. The spatial distribution of the detailed evapotranspiration, infiltration, and recharge signals from atmosphere to unsaturated zone to saturated zone was compared and contrasted among different surface settings and under varying flow conditions (e.g., varying slopes, land cover, precipitation intensity, and soil properties, as well as point source infiltration). Furthermore, a sensitivity analysis of individual or coupled major parameters allows quantifying their impact on recharge and, indirectly, on vulnerability. The preliminary analysis yields a new methodology that accounts for most of the factors influencing vulnerability while refining the weights attributed to each one of them, based on a quantitative approach.
PLS Road surface temperature forecast for susceptibility of ice occurrence
NASA Astrophysics Data System (ADS)
Marchetti, Mario; Khalifa, Abderrhamen; Bues, Michel
2014-05-01
Winter maintenance relies on many operational tools that consist of monitoring atmospheric and pavement physical parameters. Among them, road weather information systems (RWIS) and thermal mapping are mostly used by services in charge of managing infrastructure networks. Data from RWIS and thermal mapping are used as inputs to physical numerical forecast models, commonly in place since the 80s. These numerical models need an accurate description of the infrastructure, such as pavement layers and sub-layers, along with many meteorological parameters, such as air temperature and global and infrared radiation. The description is sometimes only partially known, and meteorological data are only monitored at specific spots. On the other hand, thermal mapping is now an easy, reliable and cost-effective way to monitor road surface temperature (RST) and many meteorological parameters all along the routes of infrastructure networks, including with a whole fleet of vehicles in the specific cases of roads or airports. The technique uses infrared thermometry to measure RST, and atmospheric probes for air temperature, relative humidity, wind speed and global radiation, both at a high resolution interval, to identify sections of the road network prone to ice occurrence. However, measurements are time-consuming, and the data from thermal mapping are only one input among others to establish the forecast. The idea was to build a reliable forecast on thermal mapping data alone. Previous work has established the interest of using principal component analysis (PCA) on the basis of a reduced number of thermal fingerprints. The work presented here focuses on the use of partial least-squares regression (PLS) to build an RST forecast with air temperature measurements. Roads with various environments, weather conditions (mainly clear and cloudy) and seasons were monitored over several months to generate an appropriate number of samples. The study was conducted to determine the minimum number of samples needed for a reliable forecast, considering that inputs for numerical models do not exceed five thermal fingerprints. Results have shown that the PLS model could reach an R² of 0.9562, an RMSEP of 1.34 and a bias of -0.66. The same model applied to establish a forecast for a past event indicates an average difference between measurements and forecasts of 0.20 °C. The advantage of such an approach is its potential application not only to winter events, but also to extreme summer ones such as urban heat islands.
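A minimal PLS sketch of the forecasting step, with synthetic stand-ins for the thermal fingerprints and air temperature, reporting the same R²/RMSEP metrics used above:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Predict road surface temperature (RST) along a route from a few thermal
# fingerprints plus air temperature. All data here are synthetic; real
# inputs would be the thermal-mapping runs described above.
rng = np.random.default_rng(0)
n_pts, n_runs = 300, 5                       # route points, fingerprints
X = rng.normal(0, 1, (n_pts, n_runs + 1))    # fingerprints + air temperature
y = (X[:, :n_runs].mean(axis=1) * 4 + X[:, n_runs] * 2
     + rng.normal(0, 0.5, n_pts))            # synthetic RST

pls = PLSRegression(n_components=3)
y_hat = cross_val_predict(pls, X, y, cv=10).ravel()
ss_res = ((y - y_hat) ** 2).sum()
r2 = 1 - ss_res / ((y - y.mean()) ** 2).sum()
print(f"R2 = {r2:.4f}, RMSEP = {np.sqrt(ss_res / n_pts):.2f} degC")
```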
NASA Technical Reports Server (NTRS)
Hadass, Z.
1974-01-01
The design procedure of feedback controllers was described and the considerations for the selection of the design parameters were given. The frequency domain properties of single-input single-output systems using state feedback controllers are analyzed, and desirable phase and gain margin properties are demonstrated. Special consideration is given to the design of controllers for tracking systems, especially those designed to track polynomial commands. As an example, a controller was designed for a tracking telescope with a polynomial tracking requirement and some special features such as actuator saturation and multiple measurements, one of which is sampled. The resulting system has a tracking performance comparing favorably with a much more complicated digital aided tracker. Parameter sensitivity reduction was treated by considering the variable parameters as random variables. A performance index is defined as a weighted sum of the state and control covariances that stem from both the random system disturbances and the parameter uncertainties, and is minimized numerically by adjusting a set of free parameters.
NASA Astrophysics Data System (ADS)
Dang, Van Tuan; Lafon, Pascal; Labergere, Carl
2017-10-01
In this work, a combination of Proper Orthogonal Decomposition (POD) and Radial Basis Functions (RBF) is proposed to build a surrogate model based on the Benchmark Springback 3D bending problem from the Numisheet2011 congress. The influence of two design parameters, the geometrical parameter of the die radius and the process parameter of the blank holder force, on the springback of the sheet after a stamping operation is analyzed. A classical Design of Experiments (DoE) full factorial approach is used to sample the parameter space, and the sample points serve as input data for finite element method (FEM) numerical simulations of the sheet metal stamping process. The basic idea is to consider the design parameters as additional dimensions for the solution of the displacement fields. The order of the resulting high-fidelity model is reduced through the POD method, which performs model space reduction and yields the basis functions of the low-order model. Specifically, the snapshot method is used in our work, in which the basis functions are derived from the snapshot deviations of the matrix of the final displacement fields of the FEM numerical simulations. The obtained basis functions are then used to determine the POD coefficients, and RBF is used for the interpolation of these POD coefficients over the parameter space. Finally, the presented POD-RBF approach enables shape optimization to be performed with high accuracy.
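The POD-RBF construction can be sketched compactly: an SVD of the snapshot-deviation matrix yields the basis, and an RBF interpolant maps the two design parameters to POD coefficients. Snapshots here are synthetic stand-ins for the FEM displacement fields:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# DoE samples over (die radius [mm], blank holder force [kN]) - assumed ranges
params = rng.uniform([5.0, 10.0], [15.0, 60.0], (30, 2))
fields = np.array([p[0] * np.linspace(0, 1, 500) ** 2     # fake snapshots
                   + 0.01 * p[1] for p in params])

mean = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
r = 3                                            # retained POD modes
coeffs = U[:, :r] * S[:r]                        # POD coefficients per snapshot
rbf = RBFInterpolator(params, coeffs)            # parameters -> coefficients

p_new = np.array([[10.0, 35.0]])                 # unseen design point
field_new = mean + rbf(p_new)[0] @ Vt[:r]        # reconstructed field
print(field_new.shape, field_new.max())
```

Evaluating the surrogate at a new design point then costs only an RBF evaluation and a small matrix product, which is what makes it usable inside a shape-optimization loop.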
NASA Astrophysics Data System (ADS)
Sellami, Takwa; Jelassi, Sana; Darcherif, Abdel Moumen; Berriri, Hanen; Mimouni, Med Faouzi
2018-04-01
With the advancement of wind turbines towards complex structures, the need for trustworthy structural models has become more apparent. Hence, the vibration characteristics of the wind turbine components, like the blades and the tower, have to be extracted under vibration constraints. Although extracting the modal properties of blades is a simple task, calculating precise modal data for the whole wind turbine coupled to its tower/foundation is still a perplexing task. In this framework, this paper focuses on the investigation of the structural modeling approach of modern commercial micro-turbines. Thus, the structural model of a wind turbine with a complex design, the Rutland 504, is established based on both experimental and numerical methods. A three-dimensional (3-D) numerical model of the structure was set up based on the finite volume method (FVM) using the academic finite element analysis software ANSYS. To validate the created model, experimental vibration tests were carried out using the vibration test system of the TREVISE platform at ECAM-EPMI. The tests were based on the experimental modal analysis (EMA) technique, which is one of the most efficient techniques for identifying structural parameters. Indeed, the poles and residues of the frequency response functions (FRF), between input and output spectra, were calculated to extract the mode shapes and the natural frequencies of the structure. Based on the obtained modal parameters, the numerical model was updated.
A Computational Methodology for Simulating Thermal Loss Testing of the Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Reid, Terry V.; Wilson, Scott D.; Schifer, Nicholas A.; Briggs, Maxwell H.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including the use of multidimensional numerical models. Validation test hardware has also been used to provide a direct comparison of numerical results and validate the multi-dimensional numerical models used to predict convertor net heat input and efficiency. These validation tests were designed to simulate the temperature profile of an operating Stirling convertor and resulted in a measured net heat input of 244.4 W. The methodology was applied to the multi-dimensional numerical model which resulted in a net heat input of 240.3 W. The computational methodology resulted in a value of net heat input that was 1.7 percent less than that measured during laboratory testing. The resulting computational methodology and results are discussed.
A mathematical method for quantifying in vivo mechanical behaviour of heel pad under dynamic load.
Naemi, Roozbeh; Chatzistergos, Panagiotis E; Chockalingam, Nachiappan
2016-03-01
Mechanical behaviour of the heel pad, as a shock attenuating interface during a foot strike, determines the loading on the musculoskeletal system during walking. The mathematical models that describe the force-deformation relationship of the heel pad structure can determine the mechanical behaviour of the heel pad under load. Hence, the purpose of this study was to propose a method of quantifying the heel pad stress-strain relationship using force-deformation data from an indentation test. The energy input and energy returned densities were calculated by numerically integrating the area below the stress-strain curve during loading and unloading, respectively. Elastic energy and energy absorbed densities were calculated as the sum of and the difference between the energy input and energy returned densities, respectively. By fitting the energy function, derived from a nonlinear viscoelastic model, to the energy density-strain data, the elastic and viscous model parameters were quantified. The viscous and elastic exponent model parameters were significantly correlated with maximum strain, indicating the need to perform indentation tests at realistic maximum strains relevant to walking. The proposed method was shown to be able to differentiate between the elastic and viscous components of the heel pad response to loading and to allow quantification of the corresponding stress-strain model parameters.
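The energy quantities defined above reduce to trapezoidal integration of the measured loop; a sketch with a hypothetical loading-unloading cycle (the elastic/absorbed definitions follow the text above):

```python
import numpy as np

def energy_densities(strain, stress, i_peak):
    """Energy input / returned / elastic / absorbed densities from one
    indentation cycle by trapezoidal integration of the stress-strain
    curve (loading: samples up to i_peak; unloading: the remainder)."""
    e_in = np.trapz(stress[: i_peak + 1], strain[: i_peak + 1])
    e_ret = -np.trapz(stress[i_peak:], strain[i_peak:])  # strain decreasing
    return {"input": e_in, "returned": e_ret,
            "elastic": e_in + e_ret, "absorbed": e_in - e_ret}

# Hypothetical loading-unloading loop for illustration
strain = np.concatenate([np.linspace(0, 0.4, 50), np.linspace(0.4, 0, 50)])
stress = np.where(np.arange(100) < 50, 80 * strain**2, 60 * strain**2)
print(energy_densities(strain, stress, 49))
```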
NASA Astrophysics Data System (ADS)
Kougioumtzoglou, Ioannis A.; dos Santos, Ketson R. M.; Comerford, Liam
2017-09-01
Various system identification techniques exist in the literature that can handle non-stationary measured time-histories, or cases of incomplete data, or address systems following a fractional calculus modeling. However, there are not many (if any) techniques that can address all three aforementioned challenges simultaneously in a consistent manner. In this paper, a novel multiple-input/single-output (MISO) system identification technique is developed for parameter identification of nonlinear and time-variant oscillators with fractional derivative terms subject to incomplete non-stationary data. The technique utilizes a representation of the nonlinear restoring forces as a set of parallel linear sub-systems. In this regard, the oscillator is transformed into an equivalent MISO system in the wavelet domain. Next, a recently developed L1-norm minimization procedure based on compressive sensing theory is applied for determining the wavelet coefficients of the available incomplete non-stationary input-output (excitation-response) data. Finally, these wavelet coefficients are utilized to determine appropriately defined time- and frequency-dependent wavelet based frequency response functions and related oscillator parameters. Several linear and nonlinear time-variant systems with fractional derivative elements are used as numerical examples to demonstrate the reliability of the technique even in cases of noise corrupted and incomplete data.
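The compressive-sensing step can be conveyed by a generic iterative soft-thresholding (ISTA) recovery of sparse coefficients from incomplete measurements; a random sensing matrix stands in for the wavelet machinery of the actual procedure:

```python
import numpy as np

# Recover sparse transform coefficients from incomplete data via
# L1-regularized least squares (ISTA iteration).
rng = np.random.default_rng(0)
n, m, k = 256, 90, 8                       # signal length, samples, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))  # incomplete-data measurement map
y = A @ x_true

lam = 0.01
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(3000):                      # gradient step + soft threshold
    g = x + A.T @ (y - A @ x) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```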
A reporting protocol for thermochronologic modeling illustrated with data from the Grand Canyon
NASA Astrophysics Data System (ADS)
Flowers, Rebecca M.; Farley, Kenneth A.; Ketcham, Richard A.
2015-12-01
Apatite (U-Th)/He and fission-track dates, as well as 4He/3He and fission-track length data, provide rich thermal history information. However, numerous choices and assumptions are required on the long road from raw data and observations to potentially complex geologic interpretations. This paper outlines a conceptual framework for this path, with the aim of promoting a broader understanding of how thermochronologic conclusions are derived. The tiered structure consists of thermal history model inputs at Level 1, thermal history model outputs at Level 2, and geologic interpretations at Level 3. Because inverse thermal history modeling is at the heart of converting thermochronologic data to interpretation, for others to evaluate and reproduce conclusions derived from thermochronologic results it is necessary to publish all data required for modeling, report all model inputs, and clearly and completely depict model outputs. Here we suggest a generalized template for a model input table with which to arrange, report and explain the choice of inputs to thermal history models. Model inputs include the thermochronologic data, additional geologic information, and system- and model-specific parameters. As an example we show how the origin of discrepant thermochronologic interpretations in the Grand Canyon can be better understood by using this disciplined approach.
Unthank, Michael D.
2013-01-01
The Ohio River alluvial aquifer near Carrollton, Ky., is an important water resource for the cities of Carrollton and Ghent, as well as for several industries in the area. The groundwater of the aquifer is the primary source of drinking water in the region and a highly valued natural resource that attracts various water-dependent industries because of its quantity and quality. This report evaluates the performance of a numerical model of the groundwater-flow system in the Ohio River alluvial aquifer near Carrollton, Ky., published by the U.S. Geological Survey in 1999. The original model simulated conditions in November 1995 and was updated to simulate groundwater conditions estimated for September 2010. The files from the calibrated steady-state model of November 1995 conditions were imported into MODFLOW-2005 to update the model to conditions in September 2010. The model input files modified as part of this update were the well and recharge files. The design of the updated model and other input files are the same as in the original model. The ability of the updated model to match hydrologic conditions for September 2010 was evaluated by comparing water levels measured in wells to those computed by the model. Water-level measurements were available for 48 wells in September 2010. Overall, the updated model underestimated the water levels at 36 of the 48 measured wells. The average difference between measured water levels and model-computed water levels was 3.4 feet and the maximum difference was 10.9 feet. The root-mean-square error of the simulation was 4.45 feet for all 48 measured water levels. The updated steady-state model could be improved by introducing more accurate and site-specific estimates of selected field parameters, refined model geometry, and additional numerical methods. Collection of field data to better estimate hydraulic parameters, together with continued review of available data and information from area well operators, could provide the model with revised estimates of conductance values for the riverbed and valley wall, hydraulic conductivities for the model layer, and target water levels for future simulations. Additional model layers, a redesigned model grid, and revised boundary conditions could provide a better framework for more accurate simulations. Additional numerical methods would identify possible parameter estimates and determine parameter sensitivities.
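The fit statistics quoted above are straightforward to compute; a minimal sketch, assuming arrays of measured and simulated heads in feet (illustrative only, not code from the USGS report):

```python
import numpy as np

def calibration_stats(measured_ft, simulated_ft):
    """Summary statistics used to judge a model update (values in feet)."""
    resid = np.asarray(measured_ft) - np.asarray(simulated_ft)
    return {
        "n_wells": resid.size,
        "n_underestimated": int(np.sum(resid > 0)),   # model below measurement
        "mean_abs_error": float(np.mean(np.abs(resid))),
        "max_abs_error": float(np.max(np.abs(resid))),
        "rmse": float(np.sqrt(np.mean(resid ** 2))),
    }
```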
Mansuori, M; Zareei, G H; Hashemi, H
2015-10-01
We present a numerical method for generating optical pulse-width modulation (PWM) based on a tunable reflective interface formed by a microfluidic droplet. We demonstrate a single-layer, planar, optofluidic PWM switch that is driven by excited alternating microbubbles. The main parameters of the generated PWM, such as switching frequency and speed, can be controlled by the mass flow rates of the input fluids and the shape of the plug or droplet. Advantages of this design are its reconfigurability and the easy control of the switching parameters. The validation of the proposed design is carried out by employing the finite element method (FEM) for the mechanical simulation and the finite-difference time-domain (FDTD) method for the optical simulation.
Model and Data Reduction for Control, Identification and Compressed Sensing
NASA Astrophysics Data System (ADS)
Kramer, Boris
This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods that solve real problems more efficiently, together with providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations, which arise in optimal control, filtering and model reduction. We present a projection-based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solving AREs. Numerical examples for n ≥ 10^6 unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time-invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples: a mass-spring-damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter-dependent dynamical systems. We address this by using local parametric reduced-order models, which can be used online. Data available from simulations of the system at various configurations (parameters, boundary conditions) is used to extract a sparse basis to represent the dynamics (via dynamic mode decomposition). Subsequently, a new, compressed-sensing-based classification algorithm is developed which incorporates the extracted dynamic information into the sensing basis. We show that this augmented classification basis makes the method more robust to noise and results in superior identification of the correct parameter. Numerical examples consist of a Navier-Stokes as well as a Boussinesq flow application.
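The tangential projection of Chapter 4 is specific to the dissertation, but the eigensystem realization algorithm it feeds into is compact enough to sketch. A standard ERA form, assuming a sequence of impulse-response (Markov) parameters is available (at least rows + cols of them):

```python
import numpy as np

def era(markov, n_states, rows=10, cols=10):
    """Eigensystem Realization Algorithm (basic form).

    markov: list of p x m impulse-response (Markov) parameters h_1, h_2, ...
            (len(markov) >= rows + cols).
    Returns a discrete-time realization (A, B, C) of order n_states.
    """
    p, m = markov[0].shape
    H0 = np.block([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.block([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    U, s, Vt = U[:, :n_states], s[:n_states], Vt[:n_states, :]
    S_half, S_inv_half = np.diag(s ** 0.5), np.diag(s ** -0.5)
    A = S_inv_half @ U.T @ H1 @ Vt.T @ S_inv_half   # shifted-Hankel realization
    B = (S_half @ Vt)[:, :m]
    C = (U @ S_half)[:p, :]
    return A, B, C
```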
Soliton propagation in tapered silicon core fibers.
Peacock, Anna C
2010-11-01
Numerical simulations are used to investigate soliton-like propagation in tapered silicon core optical fibers. The simulations are based on a realistic tapered structure with nanoscale core dimensions and a decreasing anomalous dispersion profile to compensate for the effects of linear and nonlinear loss. An intensity misfit parameter is used to establish the optimum taper dimensions that preserve the pulse shape while reducing temporal broadening. Soliton formation from Gaussian input pulses is also observed, providing further evidence of the potential for tapered silicon fibers to find use in a range of signal processing applications.
Optimization of an integrated wavelength monitor device
NASA Astrophysics Data System (ADS)
Wang, Pengfei; Brambilla, Gilberto; Semenova, Yuliya; Wu, Qiang; Farrell, Gerald
2011-05-01
In this paper an edge filter based on multimode interference in an integrated waveguide is optimized for a wavelength monitoring application. This can also be used as a demodulation element in a fibre Bragg grating sensing system. A global optimization algorithm is presented for the optimum design of the multimode interference device, including a range of parameters of the multimode waveguide, such as length, width and position of the input and output waveguides. The designed structure demonstrates the desired spectral response for wavelength measurements. Fabrication tolerance is also analysed numerically for this structure.
Spheromak reactor-design study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Les, J.M.
1981-06-30
A general overview of spheromak reactor characteristics, such as MHD stability, start-up, and plasma geometry, is presented. In addition, comparisons are made between spheromaks, tokamaks and field-reversed mirrors. The computer code Sphero is also discussed. Sphero is a zero-dimensional, time-independent transport code that uses particle confinement times and profile parameters as input, since they are not known with certainty at the present time. More specifically, Sphero numerically solves a given set of transport equations whose solutions include such variables as fuel ion (deuterium and tritium) density, electron density, alpha particle density, and ion and electron temperatures.
Optimization Under Uncertainty for Electronics Cooling Design
NASA Astrophysics Data System (ADS)
Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.
Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that will result in satisfactory thermal solutions. Apart from providing basic statistical information such as the mean and standard deviation of the output quantities, auxiliary data from an uncertainty-based optimization, such as local and global sensitivities, help the designer decide which input parameter(s) the output quantity of interest is most sensitive to. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem: finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
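As a hedged illustration of the basic forward-propagation step (not of any particular solver), a sampling loop can return the output mean and standard deviation plus a crude global sensitivity ranking; the toy thermal model and all parameter values here are invented for the example:

```python
import numpy as np

def propagate(model, nominal, stdev, n_samples=10000, seed=0):
    """Forward uncertainty propagation for a scalar model output.

    nominal, stdev: dicts of input means and (assumed Gaussian) uncertainties.
    """
    rng = np.random.default_rng(seed)
    names = list(nominal)
    X = np.column_stack([rng.normal(nominal[k], stdev[k], n_samples) for k in names])
    y = np.array([model(dict(zip(names, row))) for row in X])
    # correlation of each input with the output as a global sensitivity proxy
    sens = {k: abs(np.corrcoef(X[:, i], y)[0, 1]) for i, k in enumerate(names)}
    return y.mean(), y.std(), sens

# toy model: junction temperature rise = heat load x thermal resistance
mean, std, sens = propagate(lambda p: p["q"] * p["r_th"],
                            nominal={"q": 10.0, "r_th": 2.0},   # W, K/W
                            stdev={"q": 1.0, "r_th": 0.2})
```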
An enhanced velocity-based algorithm for safe implementations of gain-scheduled controllers
NASA Astrophysics Data System (ADS)
Lhachemi, H.; Saussié, D.; Zhu, G.
2017-09-01
This paper presents an enhanced velocity-based algorithm to implement gain-scheduled controllers for nonlinear and parameter-dependent systems. A new scheme including pre- and post-filtering is proposed with the assumption that the time-derivative of the controller inputs is not available for feedback control. It is shown that the proposed control structure can preserve the input-output properties of the linearised closed-loop system in the neighbourhood of each equilibrium point, avoiding the emergence of the so-called hidden coupling terms. Moreover, it is guaranteed that this implementation will not introduce unobservable or uncontrollable unstable modes, and hence the internal stability will not be affected. A case study dealing with the design of a pitch-axis missile autopilot is carried out and the numerical simulation results confirm the validity of the proposed approach.
Propagation of hypergeometric Gaussian beams in strongly nonlocal nonlinear media
NASA Astrophysics Data System (ADS)
Tang, Bin; Bian, Lirong; Zhou, Xin; Chen, Kai
2018-01-01
Optical vortex beams have attracted much interest due to their potential applications in image processing, optical trapping, optical communications, etc. In this work, we theoretically and numerically investigated the propagation properties of hypergeometric Gaussian (HyGG) beams in strongly nonlocal nonlinear media. Based on the Snyder-Mitchell model, analytical expressions for the propagation of HyGG beams in strongly nonlocal nonlinear media were obtained. The influence of the input power and optical parameters on the evolution of the beam width and radius of curvature is illustrated. The results show that the beam width and radius of curvature of the HyGG beams remain invariant, like a soliton, when the input power is equal to the critical power. Otherwise, they vary periodically like a breather, which is the result of competition between beam diffraction and the nonlinearity of the medium.
A Simple and Accurate Rate-Driven Infiltration Model
NASA Astrophysics Data System (ADS)
Cui, G.; Zhu, J.
2017-12-01
In this study, we develop a novel Rate-Driven Infiltration Model (RDIMOD) for simulating infiltration into soils. Unlike traditional methods, RDIMOD avoids numerically solving the highly non-linear Richards equation or simply modeling with empirical parameters. RDIMOD employs the infiltration rate as model input to simulate the one-dimensional infiltration process by solving an ordinary differential equation. The model can simulate the evolution of the wetting front, infiltration rate, and cumulative infiltration on any surface slope, including vertical and horizontal directions. Compared to the results from the Richards equation for both vertical and horizontal infiltration, RDIMOD simply and accurately predicts infiltration processes for any type of soil and soil hydraulic model without numerical difficulty. Taking into account its accuracy, capability, and computational effectiveness and stability, RDIMOD can be used in large-scale hydrologic and land-atmosphere modeling.
Eigenvalue assignment by minimal state-feedback gain in LTI multivariable systems
NASA Astrophysics Data System (ADS)
Ataei, Mohammad; Enshaee, Ali
2011-12-01
In this article, an improved method for eigenvalue assignment via state feedback in linear time-invariant multivariable systems is proposed. This method is based on elementary similarity operations, involves mainly the utilisation of vector companion forms, and thus is very simple and easy to implement on a digital computer. In addition to controllable systems, the proposed method can be applied to stabilisable ones and also to systems with linearly dependent inputs. Moreover, two types of state-feedback gain matrices can be achieved by this method: (1) the numerical one, which is unique, and (2) the parametric one, whose parameters are determined in order to achieve a gain matrix with minimum Frobenius norm. Numerical examples are presented to demonstrate the advantages of the proposed method.
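For reference, the basic assignment step (not the paper's minimal-Frobenius-norm parametric construction) can be reproduced with SciPy's pole-placement routine; the system matrices below are invented for the demonstration:

```python
import numpy as np
from scipy.signal import place_poles

# Eigenvalue assignment by state feedback u = -K x for a two-input system.
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.eye(2)                                  # two independent inputs
res = place_poles(A, B, poles=[-2.0, -3.0])
K = res.gain_matrix
print(np.linalg.eigvals(A - B @ K))            # -> approximately [-2, -3]
```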
XUV coherent diffraction imaging in reflection geometry with low numerical aperture.
Zürch, Michael; Kern, Christian; Spielmann, Christian
2013-09-09
We present an experimental realization of coherent diffraction imaging in reflection geometry, illuminating the sample with a laser-driven high harmonic generation (HHG) based XUV source. After recording the diffraction pattern in reflection geometry, the data must be corrected before the image can be reconstructed with a hybrid input-output (HIO) algorithm. In this paper we present a detailed investigation of effects that spoil the reconstructed image: the nonlinear momentum transfer, errors in estimating the angle of incidence on the sample, and distortions caused by placing the image off-center in the computation grid. Finally, we provide guidelines for the parameters necessary to realize a satisfactory reconstruction with a spatial resolution in the range of one micron for an imaging scheme with a numerical aperture NA < 0.03.
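The HIO reconstruction step is a standard iteration (Fienup); a minimal sketch, assuming the measured Fourier magnitude has already been corrected for the reflection geometry and that a support mask is known:

```python
import numpy as np

def hio(magnitude, support, beta=0.9, n_iter=200, seed=0):
    """Hybrid input-output phase retrieval from a diffraction magnitude.

    magnitude: measured Fourier-amplitude array (geometry-corrected).
    support:   boolean array, True where the object may be nonzero.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2 * np.pi, magnitude.shape)   # random start
    g = np.real(np.fft.ifft2(magnitude * np.exp(1j * phase)))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = magnitude * np.exp(1j * np.angle(G))           # impose measured modulus
        g_prime = np.real(np.fft.ifft2(G))
        violating = ~support | (g_prime < 0)               # support + positivity
        g = np.where(violating, g - beta * g_prime, g_prime)
    return g
```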
Some issues and subtleties in numerical simulation of X-ray FEL's
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fawley, William M.
Part of the overall design effort for x-ray FELs such as the LCLS and TESLA projects has involved extensive use of particle simulation codes to predict their output performance and underlying sensitivity to various input parameters (e.g., electron beam emittance). This paper discusses some of the numerical issues that must be addressed by simulation codes in this regime. We first give a brief overview of the standard approximations and simulation methods adopted by time-dependent (i.e., polychromatic) codes such as GINGER, GENESIS, and FAST3D, including the effects of temporal discretization and the resultant limited spectral bandpass, and then discuss the accuracies and inaccuracies of these codes in predicting incoherent spontaneous emission (i.e., the extremely low gain regime).
Robust fault-tolerant tracking control design for spacecraft under control input saturation.
Bustan, Danyal; Pariz, Naser; Sani, Seyyed Kamal Hosseini
2014-07-01
In this paper, a continuous globally stable tracking control algorithm is proposed for a spacecraft in the presence of unknown actuator failure, control input saturation, uncertainty in the inertia matrix and external disturbances. The design method is based on variable structure control and has the following properties: (1) fast and accurate response in the presence of bounded disturbances; (2) robustness to the partial loss of actuator effectiveness; (3) explicit consideration of control input saturation; and (4) robustness to uncertainty in the inertia matrix. In contrast to traditional fault-tolerant control methods, the proposed controller does not require knowledge of the actuator faults and is implemented without explicit fault detection and isolation processes. In the proposed controller, a single parameter is adjusted dynamically in such a way that it is possible to prove that both attitude and angular velocity errors tend to zero asymptotically. The stability proof is based on a Lyapunov analysis and the properties of the singularity-free quaternion representation of spacecraft dynamics. Results of numerical simulations show that the proposed controller is successful in achieving high attitude performance in the presence of external disturbances, actuator failures, and control input saturation.
NASA Astrophysics Data System (ADS)
Nguyen, Sy Dzung; Nguyen, Quoc Hung; Choi, Seung-Bok
2015-01-01
This paper presents a new algorithm, called B-ANFIS, for building an adaptive neuro-fuzzy inference system (ANFIS) from a training data set. In order to increase the accuracy of the model, the following steps are executed. Firstly, a data merging rule is proposed to build and perform a data-clustering strategy. Subsequently, a combination of clustering processes in the input data space and in the joint input-output data space is presented. The crucial reason for this task is to overcome problems related to initialization and contradictory fuzzy rules, which usually arise when building an ANFIS. The clustering process in the input data space is accomplished based on a proposed merging-possibilistic clustering (MPC) algorithm. The effectiveness of this process is evaluated before resuming the clustering process in the joint input-output data space. The optimal parameters obtained after completion of the clustering process are used to build the ANFIS. Simulations based on a numerical data set, 'Daily Data of Stock A', and measured data sets of a smart damper are performed to analyze and estimate accuracy. In addition, the convergence and robustness of the proposed algorithm are investigated based on both theoretical and testing approaches.
Feedback topology and XOR-dynamics in Boolean networks with varying input structure
NASA Astrophysics Data System (ADS)
Ciandrini, L.; Maffi, C.; Motta, A.; Bassetti, B.; Cosentino Lagomarsino, M.
2009-08-01
We analyze a model of fixed in-degree random Boolean networks in which the fraction of input-receiving nodes is controlled by the parameter γ. We investigate analytically and numerically the dynamics of graphs under a parallel XOR updating scheme. This scheme is interesting because it is accessible analytically and its phenomenology is at the same time under control and as rich as the one of general Boolean networks. We give analytical formulas for the dynamics on general graphs, showing that with a XOR-type evolution rule, dynamic features are direct consequences of the topological feedback structure, in analogy with the role of relevant components in Kauffman networks. Considering graphs with fixed in-degree, we characterize analytically and numerically the feedback regions using graph decimation algorithms (Leaf Removal). With varying γ, this graph ensemble shows a phase transition that separates a treelike graph region from one in which feedback components emerge. Networks near the transition point have feedback components made of disjoint loops, in which each node has exactly one incoming and one outgoing link. Using this fact, we provide analytical estimates of the maximum period starting from topological considerations.
Nevers, M.B.; Whitman, R.L.; Frick, W.E.; Ge, Z.
2007-01-01
The impact of river outfalls on beach water quality depends on numerous interacting factors. The delivery of contaminants by multiple creeks greatly complicates understanding of the source contributions, especially when pollution might originate up- or down-coast of beaches. We studied two beaches along Lake Michigan that are located between two creek outfalls to determine the hydrometeorologic factors influencing near-shore microbiologic water quality and the relative impact of the creeks. The creeks continuously delivered water with high concentrations of Escherichia coli to Lake Michigan, and the direction of transport of these bacteria was affected by current direction. Current direction reversals were associated with elevated E. coli concentrations at Central Avenue beach. Rainfall, barometric pressure, wave height, wave period, and creek specific conductance were significantly related to E. coli concentration at the beaches and were the parameters used in predictive models that best described E. coli variation at the two beaches. Multiple inputs to numerous beaches complicate the analysis and understanding of the relative relationship of sources but afford opportunities for showing how these complex creek inputs might interact to yield collective or individual effects on beach water quality.
Hayashi, Ryusuke; Watanabe, Osamu; Yokoyama, Hiroki; Nishida, Shin'ya
2017-06-01
Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses, are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms, and its calculation costs become very expensive when estimating the nonlinear parameters of a large-scale system using large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system that neither relies on iterative calculations nor requires any specific stimulus distribution. We demonstrate the results of numerical simulations, showing that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional, spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in assessments of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.
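The GLM baseline the paper compares against can be illustrated generically (the paper's own noniterative estimator is not shown here); note that the non-Gaussian stimulus poses no problem for the GLM, and that the fit itself is iterative (IRLS). All data below are simulated:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (5000, 3))                  # stimuli need not be Gaussian
rate = np.exp(0.5 + X @ np.array([1.0, -2.0, 0.5]))
y = rng.poisson(rate)                              # simulated spike counts

glm = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
print(glm.params)              # approximately recovers [0.5, 1.0, -2.0, 0.5]
```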
NASA Astrophysics Data System (ADS)
Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.
2000-12-01
Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of hundreds of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field-scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at the Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements or taken from the literature prior to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field-scale degradation, provided all controlling factors are incorporated in the field-scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.
1991-01-01
A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
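A generic sketch of GA-based input selection over bitmasks follows; the operators (truncation selection, single-point crossover, bit-flip mutation) are textbook choices, not necessarily those of the NASA study, and the fitness callback is assumed to wrap the user's network training:

```python
import numpy as np

def ga_select_inputs(fitness, n_inputs, n_pop=30, n_gen=50, p_mut=0.02, seed=0):
    """Genetic algorithm over bitmasks that choose the approximator's input set.

    fitness(mask) -> score to maximize (e.g., negative validation error of a
    network trained only on the inputs where mask is True). n_pop must be even.
    """
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, (n_pop, n_inputs), dtype=bool)
    for _ in range(n_gen):
        scores = np.array([fitness(m) if m.any() else -np.inf for m in pop])
        parents = pop[np.argsort(scores)[::-1][:n_pop // 2]]   # truncation selection
        pairs = parents[rng.integers(0, len(parents), (n_pop - len(parents), 2))]
        cuts = rng.integers(1, n_inputs, len(pairs))
        children = np.array([np.concatenate((a[:c], b[c:]))    # 1-point crossover
                             for (a, b), c in zip(pairs, cuts)])
        children ^= rng.random(children.shape) < p_mut         # bit-flip mutation
        pop = np.vstack([parents, children])                   # elitist replacement
    scores = np.array([fitness(m) if m.any() else -np.inf for m in pop])
    return pop[np.argmax(scores)]
```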
NASA Astrophysics Data System (ADS)
Djoko, Martin; Kofane, T. C.
2018-06-01
We investigate the propagation characteristics and stabilization of a generalized-Gaussian pulse in highly nonlinear homogeneous media with higher-order dispersion terms. The optical pulse propagation has been modeled by the higher-order (3+1)-dimensional cubic-quintic-septic complex Ginzburg-Landau [(3+1)D CQS-CGL] equation. We have used the variational method to find a set of differential equations characterizing the variation of the pulse parameters in fiber-optic links. The variational equations we obtained have been integrated numerically by means of the fourth-order Runge-Kutta (RK4) method, which also allows us to investigate the evolution of the generalized-Gaussian beam and the pulse evolution along a doped optical fiber. Then, we have solved the original nonlinear (3+1)D CQS-CGL equation with the split-step Fourier method (SSFM) and compared the results with those obtained using the variational approach. A good agreement between analytical and numerical methods is observed. The evolution of the generalized-Gaussian beam has shown oscillatory propagation, and bell-shaped dissipative optical bullets have been obtained under certain parameter values in both anomalous and normal chromatic dispersion regimes. Using the natural control parameter of the solution as it evolves, named the total energy Q, our numerical simulations reveal the existence of 3D stable vortex dissipative light bullets, 3D stable spatiotemporal optical solitons, and stationary and pulsating optical bullets, depending on the initial input condition used (symmetric or elliptic).
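The SSFM itself is standard; a minimal 1D sketch for the cubic NLSE (a greatly reduced stand-in for the paper's (3+1)D CQS-CGL equation, with no quintic/septic or gain/loss terms) shows the symmetric dispersion-nonlinearity splitting:

```python
import numpy as np

def ssfm_1d(u0, z_max, dz, dt, beta2=-1.0, gamma=1.0):
    """Split-step Fourier propagation of du/dz = -i*beta2/2 u_tt + i*gamma*|u|^2 u."""
    n = u0.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)          # angular frequencies
    half_disp = np.exp(0.25j * beta2 * w**2 * dz)    # half dispersion step
    u = u0.astype(complex)
    for _ in range(int(z_max / dz)):
        u = np.fft.ifft(half_disp * np.fft.fft(u))
        u *= np.exp(1j * gamma * np.abs(u)**2 * dz)  # full nonlinear step
        u = np.fft.ifft(half_disp * np.fft.fft(u))
    return u

# fundamental-soliton check: a sech input should propagate essentially unchanged
t = np.linspace(-20, 20, 1024)
u_out = ssfm_1d(1 / np.cosh(t), z_max=5.0, dz=0.01, dt=t[1] - t[0])
```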
NASA Astrophysics Data System (ADS)
Gaponenko, A. M.; Kagramanova, A. A.
2017-11-01
The possibility of using a Stirling engine with non-conventional and renewable energy sources is considered, along with the advantages of such use. An expression for the thermal efficiency of the Stirling engine is derived. It is shown that the work per cycle is proportional to the quantity of matter, and hence to the pressure of the working fluid and to the temperature difference, and depends to a lesser extent on the expansion coefficient; the efficiency of the ideal Stirling cycle coincides with that of an ideal engine working on the Carnot cycle, which distinguishes the Stirling cycle from the Otto and Diesel cycles underlying internal combustion engines. It is established that, of the four input parameters, the only one which can easily be changed during operation, and which effectively affects the operation of the engine, is the phase difference. The dependence of the work per cycle on the phase difference, called the phase characteristic, visually illustrates the mode of operation of a Stirling engine. A mathematical model of the Schmidt cycle is constructed, and the operation of the Stirling engine in the Schmidt approximation is analyzed numerically. A program was written in MATLAB to conduct the numerical experiments; the results are illustrated by graphical charts.
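The phase characteristic can be reproduced with a simple isothermal (Schmidt-type) model; the sketch below, with purely illustrative parameter values, integrates p dV over one revolution for a range of phase differences (a generic textbook model, not the authors' MATLAB program):

```python
import numpy as np

def work_per_cycle(phase_deg, m_gas=1e-3, R=287.0, t_hot=900.0, t_cold=300.0,
                   v_swept=1e-4, v_dead=2e-5, n=3600):
    """Indicated work per cycle of an isothermal Stirling model vs phase angle.

    Sinusoidal expansion/compression volumes, ideal gas, perfect regenerator;
    the regenerator temperature is taken as the arithmetic mean for simplicity.
    """
    th = np.linspace(0.0, 2 * np.pi, n)
    phi = np.radians(phase_deg)
    v_e = 0.5 * v_swept * (1 + np.cos(th))           # expansion space (hot)
    v_c = 0.5 * v_swept * (1 + np.cos(th - phi))     # compression space (cold)
    t_reg = 0.5 * (t_hot + t_cold)
    p = m_gas * R / (v_e / t_hot + v_c / t_cold + v_dead / t_reg)
    dv = np.gradient(v_e + v_c, th)                  # dV/dtheta
    return np.trapz(p * dv, th)                      # closed-contour integral of p dV

phases = np.arange(0, 181, 10)
w = [work_per_cycle(a) for a in phases]              # peaks near ~90 degrees
```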
Numerical modeling of thermal regime in inland water bodies with field measurement data
NASA Astrophysics Data System (ADS)
Gladskikh, D.; Sergeev, D.; Baydakov, G.; Soustova, I.; Troitskaya, Yu.
2018-01-01
A modification of the LAKE program package, which is intended to compute the thermal regimes of inland water bodies, and the results of its validation against the parameters of the lake part of the Gorky water reservoir are reviewed in this research. The modification involved changing the procedure for assigning the input temperature profile and the parameterization of surface stress at the air-water boundary to account for the influence of wind on the mixing process. A further innovation is the combined use of meteorological parameters from global meteorological reanalysis files and data from a hydrometeorological station. Temperature profiles obtained with a CTD probe during expeditions in the period 2014-2017 were used for validation of the model. The comparison between the measured data and the numerical results is assessed using time and temperature dependences at control points, the correspondence of the profile shapes, and the standard deviation over all performed realizations. It is demonstrated that the model reproduces the field measurement data for all observed conditions and seasons. The numerical results for regimes with strong mixing are in the best quantitative and qualitative agreement with the measured profiles. The accuracy of the forecast for regimes with strong stratification near the surface is lower, but all specific features of the profile shapes are correctly reproduced.
Artificial neural networks in knee injury risk evaluation among professional football players
NASA Astrophysics Data System (ADS)
Martyna, Michałowska; Tomasz, Walczak; Krzysztof, Grabski Jakub; Monika, Grygorowicz
2018-01-01
A lower-limb injury risk assessment is proposed, based on the isokinetic examination that is part of an athlete's standard biomechanical evaluation, performed typically twice a year. Information about non-contact knee injuries (or their absence) sustained within twelve months after the isokinetic test, confirmed by ultrasound (USG), was verified. The three most common types of football injuries were taken into consideration: anterior cruciate ligament (ACL) rupture, and hamstring and quadriceps muscle injuries. Twenty-two parameters obtained from the isokinetic tests were divided into four groups and used as input parameters of five feedforward artificial neural networks (ANNs); the fifth group consisted of all considered parameters. The networks were trained with the Levenberg-Marquardt backpropagation algorithm to return a value close to 1 for parameter sets corresponding to an injury event and close to 0 for parameters with no injury recorded within 6-12 months after the isokinetic test. The results of this study show that ANNs might be useful tools that simplify the simultaneous interpretation of many numerical parameters, but the most important factor significantly influencing the results is the database used for ANN training.
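A minimal sketch of one such injury-risk network follows; scikit-learn has no Levenberg-Marquardt solver, so 'lbfgs' stands in for the training algorithm used in the study, and the feature/label names are hypothetical:

```python
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def make_injury_net():
    """Isokinetic parameters in, injury probability out (illustrative only)."""
    return make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=2000),
    )

# net = make_injury_net().fit(X_isokinetic, injured_within_12_months)
# risk = net.predict_proba(X_new)[:, 1]    # value near 1 flags elevated risk
```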
Identification of observer/Kalman filter Markov parameters: Theory and experiments
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh; Horta, Lucas G.; Longman, Richard W.
1991-01-01
An algorithm to compute Markov parameters of an observer or Kalman filter from experimental input and output data is discussed. The Markov parameters can then be used for identification of a state space representation, with associated Kalman gain or observer gain, for the purpose of controller design. The algorithm is a non-recursive matrix version of two recursive algorithms developed in previous works for different purposes. The relationship between these algorithms is developed. The new matrix formulation here gives insight into the existence and uniqueness of solutions of certain equations and gives bounds on the proper choice of observer order. It is shown that if one uses data containing noise, and seeks the fastest possible deterministic observer, the deadbeat observer, one instead obtains the Kalman filter, which is the fastest possible observer in the stochastic environment. Results are demonstrated in numerical studies and in experiments on a ten-bay truss structure.
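The non-recursive matrix formulation amounts to a single least-squares solve; a compact sketch of the batch estimation of observer Markov parameters from input-output data (a simplified form of the OKID regression, with all variable names illustrative):

```python
import numpy as np

def observer_markov_parameters(u, y, q):
    """Least-squares estimate of observer Markov parameters from I/O data.

    u: (N, m) inputs, y: (N, p) outputs, q: data-window (observer-order) length.
    Returns the p x (m + q*(m+p)) matrix [D, Ybar_1, ..., Ybar_q] mapping the
    current input and the past q inputs/outputs to the current output.
    """
    N, m = u.shape
    rows = [u[q:].T]                        # current input -> feedthrough term
    for i in range(1, q + 1):               # past inputs and outputs
        rows.append(u[q - i:N - i].T)
        rows.append(y[q - i:N - i].T)
    V = np.vstack(rows)                     # regressor matrix
    return y[q:].T @ np.linalg.pinv(V)      # one matrix solve, no recursion
```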
Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1993-01-01
The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single-pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming, with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple-input design capability, with optional inclusion of a constraint that only one control moves at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open-loop model parameters from closed-loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications of the new technique which demonstrate the quality and expanded capabilities of the input designs produced. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.
KEWPIE2: A cascade code for the study of dynamical decay of excited nuclei
NASA Astrophysics Data System (ADS)
Lü, Hongliang; Marchix, Anthony; Abe, Yasuhisa; Boilley, David
2016-03-01
KEWPIE, a cascade code devoted to investigating the dynamical decay of excited nuclei, specially designed for treating very low probability events related to the synthesis of super-heavy nuclei formed in fusion-evaporation reactions, has been improved and rewritten in the C++ programming language to become KEWPIE2. The current version of the code comprises various nuclear models concerning light-particle emission, the fission process and the statistical properties of excited nuclei. General features of the code, such as the numerical scheme and the main physical ingredients, are described in detail. Typical calculations performed in the present paper clearly show that theoretical predictions are generally in accordance with experimental data. Furthermore, since the values of some input parameters cannot be determined either theoretically or experimentally, a sensitivity analysis is presented. To this end, we systematically investigate the effects of using different parameter values and reaction models on the final results. As expected, in the case of heavy nuclei, the fission process plays the most crucial role in theoretical predictions. This work would be essential for the numerical modeling of fusion-evaporation reactions.
NASA Astrophysics Data System (ADS)
Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Franzen, P.; Fantz, U.; Minea, T.
2014-02-01
Decreasing the co-extracted electron current while simultaneously keeping the negative ion (NI) current sufficiently high is a crucial issue in the development of the plasma source system for the ITER Neutral Beam Injector. To support finding the best extraction conditions, the 3D Particle-in-Cell Monte Carlo Collision electrostatic code ONIX (Orsay Negative Ion eXtraction) has been developed. Close collaboration with experiments and other numerical models allows performing realistic simulations with relevant input parameters: plasma properties, geometry of the extraction aperture, full 3D magnetic field map, etc. For the first time, ONIX has been benchmarked against the commercial positive-ion tracing code KOBRA3D. A very good agreement in terms of the meniscus position and depth has been found. Simulations of NI extraction with different e/NI ratios in the bulk plasma show the high relevance of direct extraction of surface-produced NIs for obtaining extracted NI currents comparable to the experimental results from the BATMAN testbed.
Predictive model for convective flows induced by surface reactivity contrast
NASA Astrophysics Data System (ADS)
Davidson, Scott M.; Lammertink, Rob G. H.; Mani, Ali
2018-05-01
Concentration gradients in a fluid adjacent to a reactive surface due to contrast in surface reactivity generate convective flows. These flows result from contributions by electro- and diffusio-osmotic phenomena. In this study, we have analyzed reactive patterns that release and consume protons, analogous to bimetallic catalytic conversion of peroxide. Similar systems have typically been studied using either scaling analysis to predict trends or costly numerical simulation. Here, we present a simple analytical model, bridging the gap in quantitative understanding between scaling relations and simulations, to predict the induced potentials and consequent velocities in such systems without the use of any fitting parameters. Our model is tested against direct numerical solutions to the coupled Poisson, Nernst-Planck, and Stokes equations. Predicted slip velocities from the model and simulations agree to within a factor of ≈2 over a multiple order-of-magnitude change in the input parameters. Our analysis can be used to predict enhancement of mass transport and the resulting impact on overall catalytic conversion, and is also applicable to predicting the speed of catalytic nanomotors.
Tolerance and UQ4SIM: Nimble Uncertainty Documentation and Analysis Software
NASA Technical Reports Server (NTRS)
Kleb, Bil
2008-01-01
Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and variabilities is a necessary first step toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. The basic premise of uncertainty markup is to craft a tolerance and tagging mini-language that offers a natural, unobtrusive presentation and does not depend on parsing each type of input file format. Each file is marked up with tolerances and, optionally, associated tags that serve to label the parameters and their uncertainties. The evolution of such a language, often called a Domain Specific Language or DSL, is given in [1], but in final form it parallels tolerances specified on an engineering drawing, e.g., 1 +/- 0.5, 5 +/- 10%, 2 +/- 1o, where % signifies percent and o signifies order of magnitude. Tags, necessary for error propagation, can be added by placing a quotation-mark-delimited tag after the tolerance, e.g., 0.7 +/- 20% 'T_effective'. In addition, tolerances might have different underlying distributions, e.g., Uniform, Normal, or Triangular, or the tolerances may merely be intervals due to lack of knowledge (uncertainty). Finally, to address pragmatic considerations such as older models that require specific number-field formats, C-style format specifiers can be appended to the tolerance like so, 1.35 +/- 10U_3.2f. As an example of use, consider figure 1, where a chemical reaction input file has been marked up to include tolerances and tags per table 1. Not only does the technique provide a natural method of specifying tolerances, but it also serves as in situ documentation of model uncertainties. This tolerance language comes with a utility to strip the tolerances (and tags), to provide a path to the nominal model parameter file. And, as shown in [1], having the ability to quickly mark and identify model parameter uncertainties facilitates error propagation, which in turn yields output uncertainties.
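A toy parser conveys the flavor of such a markup language; this sketch handles only the absolute- and percent-tolerance forms with an optional tag (order-of-magnitude tolerances, distributions, and format specifiers are omitted), and is not the UQ4SIM implementation:

```python
import re

# matches e.g.  1 +/- 0.5   5 +/- 10%   0.7 +/- 20% 'T_effective'
TOL = re.compile(r"""(?P<value>[-+0-9.eE]+)\s*
                     \+/-\s*
                     (?P<tol>[0-9.eE]+)(?P<pct>%?)
                     (?:\s*'(?P<tag>[^']*)')?""", re.VERBOSE)

def parse_tolerances(text):
    """Return (value, absolute_tolerance, tag) triples found in a marked-up file."""
    out = []
    for m in TOL.finditer(text):
        value, tol = float(m["value"]), float(m["tol"])
        if m["pct"]:                           # percent tolerance -> absolute
            tol = abs(value) * tol / 100.0
        out.append((value, tol, m["tag"]))
    return out

def strip_tolerances(text):
    """Recover the nominal parameter file, as the stripping utility does."""
    return TOL.sub(lambda m: m["value"], text)

print(parse_tolerances("0.7 +/- 20% 'T_effective'"))  # [(0.7, 0.14, 'T_effective')]
```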
Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks
Pena, Rodrigo F. O.; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C.; Lindner, Benjamin
2018-01-01
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.
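A skeleton of the iterative self-consistency loop described above might look as follows; the plug-in neuron simulator, the flat initial guess, and the spectral normalization are all assumptions of this sketch (Hermitian symmetry of the surrogate noise is glossed over for brevity):

```python
import numpy as np

def self_consistent_spectrum(simulate_neuron, n_iter=20, n_bins=2**14, mix=0.5):
    """Iterate: drive a single neuron with Gaussian noise whose power spectrum
    is the current network estimate; the output spike-train spectrum updates
    that estimate (the averaged update mirrors the stabilizing procedure above).

    simulate_neuron(noise) -> binned spike train of the same length; any
    single-neuron model (e.g. an integrate-and-fire simulator) can be used.
    """
    rng = np.random.default_rng(0)
    S = np.ones(n_bins)                              # flat initial guess
    for _ in range(n_iter):
        # surrogate Gaussian noise with spectrum S (random Fourier phases)
        phases = np.exp(2j * np.pi * rng.random(n_bins))
        noise = np.real(np.fft.ifft(np.sqrt(S) * phases)) * np.sqrt(n_bins)
        spikes = simulate_neuron(noise)
        S_out = np.abs(np.fft.fft(spikes - spikes.mean())) ** 2 / n_bins
        S = (1 - mix) * S + mix * S_out              # averaged update (stability)
    return S
```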
Uncertainty Analysis of Decomposing Polyurethane Foam
NASA Technical Reports Server (NTRS)
Hobbs, Michael L.; Romero, Vicente J.
2000-01-01
Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
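The constrained-LHS-plus-surrogate idea generalizes readily; a minimal sketch using SciPy's Latin Hypercube sampler and a linear (LIN-style) response surface, with the model callback and bounds standing in for the foam decomposition model and its 25 parameters:

```python
import numpy as np
from scipy.stats import qmc

def lhs_uncertainty(model, lo, hi, n_samples=200, seed=0):
    """LHS of the input space plus a linear surrogate response surface.

    model maps an input vector to the response (e.g., decomposition front
    velocity); lo/hi bound the uncertain inputs. Illustrative only.
    """
    d = len(lo)
    X = qmc.scale(qmc.LatinHypercube(d, seed=seed).random(n_samples), lo, hi)
    y = np.array([model(x) for x in X])
    # linear surrogate y ~ b0 + b.x; the slopes act as sensitivity estimates
    A = np.hstack([np.ones((n_samples, 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y.mean(), y.std(ddof=1), coef[1:]
```

A quadratic (QUAD-style) surface follows the same pattern with squared and cross terms appended to the regressor matrix.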
An approach to achieve progress in spacecraft shielding
NASA Astrophysics Data System (ADS)
Thoma, K.; Schäfer, F.; Hiermaier, S.; Schneider, E.
2004-01-01
Progress in shield design against space debris can be achieved only when a combined approach based on several tools is used. This approach depends on the combined application of advanced numerical methods, specific material models and experimental determination of input parameters for these models. Examples of experimental methods for material characterization are given, covering the range from quasi-static to very high strain rates for materials like Nextel and carbon fiber-reinforced materials. Mesh-free numerical methods have extraordinary capabilities in the simulation of extreme material behaviour, including complete failure with phase changes, combined with shock wave phenomena and the interaction with structural components. In this paper the benefits from combining numerical methods, material modelling and detailed experimental studies for shield design are demonstrated. The following examples are given: (1) Development of a material model for Nextel and Kevlar-Epoxy to enable numerical simulation of hypervelocity impacts on complex heavy protection shields for the International Space Station. (2) The influence of projectile shape on protection performance of Whipple Shields and how experimental problems in accelerating such shapes can be overcome by systematic numerical simulation. (3) The benefits of using metallic foams in "sandwich bumper shields" for spacecraft and how to approach systematic characterization of such materials.
The Use of the Nelder-Mead Method in Determining Projection Parameters for Globe Photographs
NASA Astrophysics Data System (ADS)
Gede, M.
2009-04-01
A photo of a terrestrial or celestial globe can be handled as a map. The only hard issue is its projection: the so-called Tilted Perspective Projection which, if the optical axis of the photo intersects the globe's centre, is simplified to the Vertical Near-Side Perspective Projection. When georeferencing such a photo, the exact parameters of the projection are also needed. These parameters depend on the position of the viewpoint of the camera. Several hundred globe photos had to be georeferenced during the Virtual Globes Museum project, which made it necessary to automate the calculation of the projection parameters. The author developed a program for this task which uses the Nelder-Mead method in order to find the optimum parameters when a set of control points is given as input. The Nelder-Mead method is a numerical algorithm for minimizing a function in a many-dimensional space. The function in the present application is the average error of the control points calculated from the actual values of the parameters. The parameters are the geographical coordinates of the projection centre, the image coordinates of the same point, the rotation of the projection, the height of the perspective point and the scale of the photo (calculated in pixels/km). The program reads Global Mapper's Ground Control Point (.GCP) file format as input and creates projection description files (.PRJ) for the same software. The initial values of the geographical coordinates of the projection centre are calculated as the average of the control points, while the other parameters are set to experimental values which represent the most common circumstances of taking a globe photograph. The algorithm runs until the change of the parameters sinks below a pre-defined limit. The minimum search can be refined by using the previous result parameter set as new initial values. This paper introduces the calculation mechanism and examples of its usage. Other possible usages of the method are also discussed.
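The same seven-parameter fit can be sketched with SciPy's Nelder-Mead implementation and the standard (Snyder) Vertical Near-Side Perspective formulas; this is an independent reconstruction, not the author's program, and the rotation sign and image-axis orientation are conventions assumed here:

```python
import numpy as np
from scipy.optimize import minimize

def project(params, lat, lon, R=6371.0):
    """Vertical near-side perspective projection -> pixel coordinates.

    params: lat0, lon0 (projection centre, deg), u0, v0 (its pixel position),
            rot (rotation, rad), P (perspective-point distance from the globe
            centre in radii, P > 1), s (scale, pixels/km).
    """
    lat0, lon0, u0, v0, rot, P, s = params
    f, f0, dl = np.radians(lat), np.radians(lat0), np.radians(lon - lon0)
    cosc = np.sin(f0) * np.sin(f) + np.cos(f0) * np.cos(f) * np.cos(dl)
    k = (P - 1.0) / (P - cosc)
    x = R * k * np.cos(f) * np.sin(dl)
    y = R * k * (np.cos(f0) * np.sin(f) - np.sin(f0) * np.cos(f) * np.cos(dl))
    u = u0 + s * (x * np.cos(rot) - y * np.sin(rot))
    v = v0 - s * (x * np.sin(rot) + y * np.cos(rot))   # image v axis points down
    return u, v

def fit_projection(gcp_lat, gcp_lon, gcp_u, gcp_v, p0):
    """Minimize the average control-point error with Nelder-Mead."""
    def mean_error(params):
        u, v = project(params, gcp_lat, gcp_lon)
        return np.mean(np.hypot(u - gcp_u, v - gcp_v))
    return minimize(mean_error, p0, method="Nelder-Mead",
                    options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 20000})
```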
High dimensional model representation method for fuzzy structural dynamics
NASA Astrophysics Data System (ADS)
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
NASA Astrophysics Data System (ADS)
Bhattachryya, Arunava; Kumar Gayen, Dilip; Chattopadhyay, Tanay
2013-04-01
An all-optical 4-bit binary to binary-coded-decimal (BCD) converter, based on semiconductor optical amplifier (SOA)-assisted Sagnac interferometric switches, is proposed and described in this manuscript. The paper describes an all-optical conversion scheme using a set of all-optical switches. BCD is common in computer systems that display numeric values, especially in those consisting solely of digital logic with no microprocessor. In many personal computers, the basic input/output system (BIOS) keeps the date and time in BCD format. The operations of the circuit are studied theoretically and analyzed through numerical simulations. The model accounts for the SOA small-signal gain, linewidth enhancement factor and carrier lifetime, the switching pulse energy and width, and the Sagnac loop asymmetry. By undertaking a detailed numerical simulation, the influence of these key parameters on the metrics that determine the quality of switching is thoroughly investigated.
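For reference, the logic function that the optical circuit implements is the ordinary binary-to-BCD mapping of a 4-bit value; a two-line electronic sanity check (plain Python, unrelated to the optical implementation):

```python
def binary_to_bcd(n):
    """A 4-bit binary value (0-15) becomes two BCD digits (tens, units)."""
    assert 0 <= n <= 15
    return divmod(n, 10)          # e.g. 13 -> (1, 3): BCD 0001 0011

for n in range(16):
    tens, units = binary_to_bcd(n)
    print(f"{n:04b} -> {tens:04b} {units:04b}")
```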
Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators
NASA Technical Reports Server (NTRS)
Fantini, Jay A.
1998-01-01
Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
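The report's scheme is FORTRAN, but the idea translates directly; a minimal sketch, assuming a 10-bit count range and illustrative calibration coefficients:

```python
import numpy as np

def eu_to_counts(eu, coeffs, count_min=0, count_max=1023, n_newton=10):
    """Invert a counts->EU calibration polynomial for one EU value.

    coeffs: coefficients c0..cn so that EU(x) = sum(c_k * x**k).
    A reverse linear interpolation between the EU values at the count limits
    gives the initial guess; Newton-Raphson then polishes the root of EU(x) - eu.
    """
    p = np.polynomial.Polynomial(coeffs)
    dp = p.deriv()
    eu_lo, eu_hi = p(count_min), p(count_max)
    x = count_min + (eu - eu_lo) * (count_max - count_min) / (eu_hi - eu_lo)
    for _ in range(n_newton):
        x -= (p(x) - eu) / dp(x)
    return int(round(min(max(x, count_min), count_max)))

# example: nearly linear quadratic calibration, recover the count for EU = 42.0
counts = eu_to_counts(42.0, coeffs=[-5.0, 0.1, 1e-5])
```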
NASA Astrophysics Data System (ADS)
Luminari, Nicola; Airiau, Christophe; Bottaro, Alessandro
2017-11-01
In the description of the homogenized flow through a porous medium saturated by a fluid, the apparent permeability tensor is one of the most important parameters to evaluate. In this work we compute numerically the apparent permeability tensor for a 3D porous medium constituted by rigid cylinders using the VANS (Volume-Averaged Navier-Stokes) theory. Such a tensor varies with the Reynolds number, the mean pressure gradient orientation and the porosity. A database is created exploring the space of the above parameters. Including the two Euler angles that define the mean pressure gradient is extremely important to capture well possible 3D effects. Based on the database, a kriging interpolation metamodel is used to obtain an estimate of all the tensor components for any input parameters. Preliminary results of the flow in a porous channel based on the metamodel and the VANS closure are shown; the use of such a reduced-order model together with a numerical code based on the equations at the macroscopic scale keeps the computational times within reasonable levels. The authors acknowledge the IDEX Foundation of the University of Toulouse for the financial support granted to the last author under the project Attractivity Chairs.
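A kriging metamodel of this kind can be stood up with a Gaussian-process regressor; this sketch assumes a database with one row per (Reynolds number, porosity, two Euler angles) sample and one permeability-tensor component per output column, which is an illustrative layout rather than the authors' actual database:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def fit_permeability_model(X, Y):
    """Kriging-style interpolation of apparent-permeability components.

    X: (n, 4) inputs (Re, porosity, Euler angle 1, Euler angle 2); Y: components.
    """
    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(X.shape[1]))
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                                  n_restarts_optimizer=5)
    return gp.fit(X, Y)

# query any parameter combination inside the sampled space:
# k_pred, k_std = model.predict(np.array([[50.0, 0.7, 0.3, 1.1]]), return_std=True)
```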
Experimental Optimization of a Free-to-Rotate Wing for Small UAS
NASA Technical Reports Server (NTRS)
Logan, Michael J.; DeLoach, Richard; Copeland, Tiwana; Vo, Steven
2014-01-01
This paper discusses an experimental investigation conducted to optimize a free-to-rotate wing for use on a small unmanned aircraft system (UAS). Although free-to-rotate wings have been used for decades on various small UAS and small manned aircraft, little is known about how to optimize these unusual wings for a specific application. The paper discusses some of the design rationale of the basic wing. In addition, three main parameters were selected for "optimization": wing camber, wing pivot location, and wing center of gravity (c.g.) location. A small apparatus was constructed to enable some simple experimental analysis of these parameters. A design-of-experiments series of tests was first conducted to discern which of the main optimization parameters were most likely to have the greatest impact on the outputs of interest, namely, some measure of "stability", some measure of the lift being generated at the neutral position, and how quickly the wing "recovers" from an upset. A second set of tests was conducted to develop a response-surface numerical representation of these outputs as functions of the three primary inputs. The response-surface numerical representations were then used to develop an "optimum" within the trade space investigated. The results of the optimization were then tested experimentally to validate the predictions.
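The second step, fitting a quadratic response surface over the three inputs and locating an optimum inside the tested bounds, can be sketched as below; the data files, response metric and variable names are invented, and the fit is a generic quadratic surface rather than the authors' exact model:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# X columns: camber, pivot location, c.g. location; y: stability metric.
X = np.load("wing_runs.npy"); y = np.load("stability.npy")   # hypothetical data
poly = PolynomialFeatures(degree=2)
surface = LinearRegression().fit(poly.fit_transform(X), y)

def neg_response(x):                       # maximize response = minimize negative
    return -surface.predict(poly.transform(x.reshape(1, -1)))[0]

opt = minimize(neg_response, x0=X.mean(axis=0),
               bounds=list(zip(X.min(0), X.max(0))))   # stay in the trade space
print("optimum inputs within trade space:", opt.x)
```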
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K
2015-01-01
Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution, where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics, based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse-grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similarly to a design-of-experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space is discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
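The sample-then-interpolate workflow can be sketched as follows. SciPy's Sobol' sampler and radial-basis-function interpolator stand in for the paper's sparse-grid rule, the parameter ranges are invented, and `run_single_cycle` is a hypothetical wrapper around one CFD cycle:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.stats import qmc

# Sparsely sample (EGR fraction, intake temperature, equivalence ratio).
lo, hi = np.array([0.15, 290.0, 0.9]), np.array([0.30, 330.0, 1.1])
X = qmc.scale(qmc.Sobol(d=3, seed=0).random_base2(m=6), lo, hi)  # 64 samples
Y = np.array([run_single_cycle(x) for x in X])  # hypothetical CFD wrapper

metamodel = RBFInterpolator(X, Y)               # interpolate anywhere inside
heat_release = metamodel(np.array([[0.22, 310.0, 1.0]]))
```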
Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine
2016-01-01
A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼ 82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of +/- 20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of the APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement.
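A Morris screening of this kind can be sketched with the SALib package (assuming SALib is available; its API is believed to be as shown). The three-parameter problem definition and the `apsim_yield` wrapper are toy stand-ins for the 90 APSIM-Wheat traits:

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

problem = {
    "num_vars": 3,
    "names": ["phenology_rate", "rue", "root_depth"],  # stand-ins for traits
    "bounds": [[0.8, 1.2]] * 3,                        # +/- 20% of reference
}
X = morris_sample.sample(problem, N=50, num_levels=4)
Y = np.array([apsim_yield(x) for x in X])              # hypothetical model call
Si = morris_analyze.analyze(problem, X, Y, num_levels=4)
print(Si["mu_star"], Si["sigma"])  # main effect vs. non-linearity/interaction
```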
Testing model for prediction system of 1-AU arrival times of CME-associated interplanetary shocks
NASA Astrophysics Data System (ADS)
Ogawa, Tomoya; den, Mitsue; Tanaka, Takashi; Sugihara, Kohta; Takei, Toshifumi; Amo, Hiroyoshi; Watari, Shinichi
We test a model to predict arrival times of interplanetary shock waves associated with coronal mass ejections (CMEs) using a three-dimensional adaptive mesh refinement (AMR) code. The model is used in the prediction system we are developing, which has a Web-based user interface and is aimed at people who are not familiar with the operation of computers and numerical simulations, or who are not researchers. We apply the model to interplanetary CME events. We first choose coronal parameters so that the properties of the background solar wind observed by the ACE spacecraft are reproduced. Then we input CME parameters observed by SOHO/LASCO. Finally, we compare the predicted arrival times with the observed ones. We describe the results of the test and discuss the tendencies of the model.
Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions
NASA Astrophysics Data System (ADS)
Tsaur, Ruey-Chyn
2015-02-01
In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint on the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed on the results.
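The possibilistic moments that such models build on can be sketched for triangular fuzzy return rates. The formulas below follow what I take to be the Carlsson-Fullér definitions for a triangular fuzzy number (lower, modal, upper), and all portfolio data and the weighted risk proxy are invented for illustration:

```python
import numpy as np

def possibilistic_mean(l, m, u):
    """Assumed possibilistic mean of a triangular fuzzy number (l, m, u)."""
    return (l + 4 * m + u) / 6.0

def possibilistic_std(l, m, u):
    """Assumed possibilistic standard deviation of (l, m, u)."""
    return (u - l) / np.sqrt(24.0)

# Fuzzy return rates (lower, modal, upper) for three securities.
returns = np.array([[0.01, 0.03, 0.06], [0.00, 0.02, 0.05], [0.02, 0.04, 0.09]])
w = np.array([0.4, 0.3, 0.3])   # crisp stand-in for the fuzzy proportions
mu = np.array([possibilistic_mean(*r) for r in returns])
sd = np.array([possibilistic_std(*r) for r in returns])
print("portfolio possibilistic mean:", w @ mu, " simple risk proxy:", w @ sd)
```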
Geochemical Data Package for Performance Assessment Calculations Related to the Savannah River Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplan, Daniel I.
The Savannah River Site (SRS) disposes of low-level radioactive waste (LLW) and stabilizes high-level radioactive waste (HLW) tanks in the subsurface environment. Calculations used to establish the radiological limits of these facilities are referred to as Performance Assessments (PA), Special Analyses (SA), and Composite Analyses (CA). The objective of this document is to revise existing geochemical input values used for these calculations. This work builds on earlier compilations of geochemical data (2007, 2010), referred to as geochemical data packages. It is being conducted as part of the on-going maintenance program of the SRS PA programs, which periodically updates calculations and data packages when new information becomes available. Because application of values without full understanding of their original purpose may lead to misuse, this document also provides the geochemical conceptual model, the approach used for selecting the values, the justification for selecting data, and the assumptions made to assure that the conceptual and numerical geochemical models are reasonably conservative (i.e., bias the recommended input values to reflect conditions that will tend to predict the maximum risk to the hypothetical recipient). This document provides 1088 input parameters for geochemical parameters describing transport processes for 64 elements (>740 radioisotopes) potentially occurring within eight subsurface disposal or tank closure areas: Slit Trenches (ST), Engineered Trenches (ET), Low Activity Waste Vault (LAWV), Intermediate Level Vaults (ILV), Naval Reactor Component Disposal Areas (NRCDA), Components-in-Grout (CIG) Trenches, the Saltstone Facility, and Closed Liquid Waste Tanks. The geochemical parameters described here are the distribution coefficient (Kd value), the apparent solubility concentration (ks value), and the cementitious leachate impact factor.
Using Natural Language to Enhance Mission Effectiveness
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Meszaros, Erica
2016-01-01
The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for professional-related activities. The driving function of this research is allowing a non-UAV pilot, an operator, to define and manage a mission. This paper describes the preliminary usability measures of an interface that allows an operator to define the mission using speech to make inputs. An experiment was conducted to begin to enumerate the efficacy and user acceptance of using voice commands to define a multi-UAV mission and to provide high-level vehicle control commands such as "takeoff." The primary independent variable was input type: voice or mouse. The primary dependent variables consisted of the correctness of the mission parameter inputs and the time needed to make all inputs. Other dependent variables included NASA-TLX workload ratings and subjective ratings on a final questionnaire. The experiment required each subject to fill in an online form that contained comparable required information that would be needed for a package dispatcher to deliver packages. For each run, subjects typed in a simple numeric code for the package code. They then defined the initial starting position, the delivery location, and the return location using either pull-down menus or voice input. Voice input was accomplished using CMU Sphinx4-5prealpha for speech recognition. They then entered the length of the package; these were the optional fields. The subject had the system "Calculate Trajectory" and then "Takeoff" once the trajectory was calculated. Later, the subject used "Land" to finish the run. After the voice and mouse input blocked runs, subjects completed a NASA-TLX. At the conclusion of all runs, subjects completed a questionnaire asking them about their experience in entering the mission parameters, and starting and stopping the mission, using mouse and voice input. In general, the usability of voice commands is acceptable. With a relatively well-defined and simple vocabulary, the operator can input the vast majority of the mission parameters using simple, intuitive voice commands. However, voice input may be more applicable to initial mission specification rather than for critical commands such as the need to land immediately, due to time and feedback constraints. It would also be convenient to retrieve relevant mission information using voice input. Therefore, further on-going research is looking at using intent from operator utterances to provide the relevant mission information to the operator. The information displayed will be inferred from the operator's utterances just before key phrases are spoken. Linguistic analysis of the context of verbal communication provides insight into the intended meaning of commonly heard phrases such as "What's it doing now?" Analyzing the semantic sphere surrounding these common phrases enables us to predict the operator's intent and supply the operator's desired information to the interface. This paper also describes preliminary investigations into the generation of the semantic space of UAV operation and the success at providing information to the interface based on the operator's utterances.
Adaptive MPC based on MIMO ARX-Laguerre model.
Ben Abdelwahed, Imen; Mbarek, Abdelkader; Bouzrara, Kais
2017-03-01
This paper proposes a method for synthesizing an adaptive predictive controller using a reduced-complexity model, obtained by projecting the ARX model onto Laguerre bases. The resulting model, entitled MIMO ARX-Laguerre, is characterized by an easy recursive representation. The adaptive predictive control law is computed based on multi-step-ahead finite-element predictors, identified directly from experimental input/output data. The model is tuned at each iteration by an online identification algorithm for both the model parameters and the Laguerre poles. The proposed approach avoids the time-consuming numerical optimization algorithms associated with most common linear predictive control strategies, which makes it suitable for real-time implementation. The method is used to synthesize and test, in numerical simulations, adaptive predictive controllers for the CSTR process benchmark. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
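The online identification step can be illustrated with a standard recursive least-squares (RLS) update for ARX-type parameters. This is a generic sketch, not the authors' exact algorithm; the regressor layout and the `stream_of_samples` data source are assumed:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step: y ~ phi @ theta, forgetting factor lam."""
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + k * (y - phi @ theta)    # correct parameters with new sample
    P = (P - np.outer(k, phi @ P)) / lam     # update inverse covariance
    return theta, P

n = 4                                        # e.g. 2 past outputs + 2 past inputs
theta, P = np.zeros(n), 1e3 * np.eye(n)
for y_t, phi_t in stream_of_samples():       # hypothetical measurement stream
    theta, P = rls_update(theta, P, phi_t, y_t)
```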
Pretest Predictions for Ventilation Tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. Sun; H. Yang; H.N. Kalia
The objective of this calculation is to predict the temperatures of the ventilating air, waste package surface, concrete pipe walls, and insulation that will develop during the ventilation tests involving various test conditions. The results will be used as input to the following three areas: (1) decisions regarding testing set-up and performance; (2) assessing how best to scale the test phenomena measured; and (3) validating the numerical approach for modeling continuous ventilation. The scope of the calculation is to identify the physical mechanisms and parameters related to thermal response in the ventilation tests, and to develop and describe numerical methods that can be used to calculate the effects of continuous ventilation. Sensitivity studies to assess the impact of variation of linear power densities (linear heat loads) and ventilation air flow rates are included. The calculation is limited to thermal effects only.
Bar piezoelectric ceramic transformers.
Erhart, Jiří; Pulpan, Půlpán; Rusin, Luboš
2013-07-01
Bar-shaped piezoelectric ceramic transformers (PTs) working in the longitudinal vibration mode (k31 mode) were studied. Two types of transformer were designed: one with the electrode divided into two segments of different length, and one with the electrodes divided into three symmetrical segments. Parameters of the studied transformers, such as efficiency, transformation ratio, and input and output impedances, were measured. An analytical model was developed for PT parameter calculation for both two- and three-segment PTs. Neither type of bar PT exhibited very high efficiency (maximum 72% for the three-segment PT design) at a relatively high transformation ratio (4 for the two-segment PT and 2 for the three-segment PT at the fundamental resonance mode). The optimum resistive loads were 20 and 10 kΩ for the two- and three-segment PT designs at the fundamental resonance, respectively, and about one order of magnitude smaller for the higher overtone (i.e., 2 kΩ and 500 Ω, respectively). The no-load transformation ratio was less than 27 (maximum for the two-segment electrode PT design). The optimum input electrode aspect ratios (0.48 for the three-segment PT and 0.63 for the two-segment PT) were calculated numerically under no-load conditions.
Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Nikbay, Melike; Heeg, Jennifer
2017-01-01
This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel of the NATO Science and Technology Organization, Task Group AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
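A minimal sketch of sampling-based uncertainty propagation of the kind described is below; a cheap closed-form surrogate stands in for the aeroelastic solver, and the distributions, parameter names and `flutter_margin` function are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Assumed input distributions: structural stiffness and dynamic pressure scale.
stiffness = rng.normal(1.0, 0.05, n)
q_dyn = rng.uniform(0.9, 1.1, n)

def flutter_margin(k, q):   # hypothetical stand-in for the aeroelastic analysis
    return k / q - 0.8

samples = flutter_margin(stiffness, q_dyn)
print("mean margin:", samples.mean(),
      " 95% interval:", np.percentile(samples, [2.5, 97.5]))
```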
Study on a high capacity two-stage free piston Stirling cryocooler working around 30 K
NASA Astrophysics Data System (ADS)
Wang, Xiaotao; Zhu, Jian; Chen, Shuai; Dai, Wei; Li, Ke; Pang, Xiaomin; Yu, Guoyao; Luo, Ercang
2016-12-01
This paper presents a two-stage, high-capacity free-piston Stirling cryocooler driven by a linear compressor to meet the requirements of high-temperature superconductor (HTS) motor applications. The cryocooler system comprises a single-piston linear compressor, a two-stage free-piston Stirling cryocooler and a passive oscillator. A single stepped-displacer configuration was adopted. A numerical model based on thermoacoustic theory was used to optimize the system operating and structural parameters. Distributions of the pressure wave, the phase differences between the pressure wave and the volume flow rate, and the different energy flows are presented for a better understanding of the system. Some characteristic experimental results are presented. Thus far, the cryocooler has reached a lowest cold-head temperature of 27.6 K and achieved a cooling power of 78 W at 40 K with an input electric power of 3.2 kW, which indicates a relative Carnot efficiency of 14.8%. When the cold-head temperature increased to 77 K, the cooling power reached 284 W with a relative Carnot efficiency of 25.9%. The influences of different parameters such as mean pressure, input electric power and cold-head temperature are also investigated.
A numerical solution for the diffusion equation in hydrogeologic systems
Ishii, A.L.; Healy, R.W.; Striegl, Robert G.
1989-01-01
The documentation of a computer code for the numerical solution of the linear diffusion equation in one or two dimensions in Cartesian or cylindrical coordinates is presented. Applications of the program include molecular diffusion, heat conduction, and fluid flow in confined systems. The flow media may be anisotropic and heterogeneous. The model is formulated by replacing the continuous linear diffusion equation by discrete finite-difference approximations at each node in a block-centered grid. The resulting matrix equation is solved by the method of preconditioned conjugate gradients. The conjugate gradient method does not require the estimation of iteration parameters and is guaranteed convergent in the absence of rounding error. The matrices are preconditioned to decrease the number of steps to convergence. The model allows the specification of any number of boundary conditions for any number of stress periods, and the output of a summary table for selected nodes showing flux and the concentration of the flux quantity for each time step. The model is written in a modular format for ease of modification. The model was verified by comparison of numerical and analytical solutions for cases of molecular diffusion, two-dimensional heat transfer, and axisymmetric radial saturated fluid flow. Application of the model to a hypothetical two-dimensional field situation of gas diffusion in the unsaturated zone is demonstrated. The input and output files are included as a check on program installation. The definition of variables, input requirements, flow chart, and program listing are included in the attachments. (USGS)
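The solution strategy (a finite-difference diffusion operator solved by Jacobi-preconditioned conjugate gradients) can be sketched compactly with SciPy; the grid size and unit source are illustrative, and the original code is of course not SciPy-based:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 50                                           # 50 x 50 block-centered grid
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()      # 2-D diffusion operator
b = np.zeros(n * n); b[0] = 1.0                  # unit source at one node

# Jacobi preconditioner: divide by the diagonal of A.
d_inv = 1.0 / A.diagonal()
M = LinearOperator(A.shape, matvec=lambda x: d_inv * x)
u, info = cg(A, b, M=M)                          # info == 0 means converged
```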
Application of lab derived kinetic biodegradation parameters at the field scale
NASA Astrophysics Data System (ADS)
Schirmer, M.; Barker, J. F.; Butler, B. J.; Frind, E. O.
2003-04-01
Estimating the intrinsic remediation potential of an aquifer typically requires the accurate assessment of the biodegradation kinetics, the level of available electron acceptors and the flow field. Zero- and first-order degradation rates derived at the laboratory scale generally overpredict the rate of biodegradation when applied to the field scale, because limited electron acceptor availability and microbial growth are typically not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for forecasting plume development because they may oversimplify the processes at the field scale and ignore several key processes, phenomena and characteristics of the aquifer. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at Canadian Forces Base (CFB) Borden. All additional input parameters were derived from laboratory and field measurements or taken from the literature. The simulated results match the experimental results reasonably well without the model having to be calibrated. An extensive sensitivity analysis was performed to estimate the influence of the most uncertain input parameters and to define the key controlling factors at the field scale. It is shown that the most uncertain input parameters have only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available and the Monod kinetic parameters have a significant influence on the simulated results. Under the field conditions modelled and the assumptions made for the simulations, it can be concluded that laboratory-derived Monod kinetic parameters can adequately describe field-scale degradation processes, provided that all controlling factors not necessarily observed at the lab scale are incorporated in the field-scale modelling. In this way, no separate scale relationships linking the laboratory and field scales need to be found: accurately incorporating at the larger scale the additional processes, phenomena and characteristics, such as (a) advective and dispersive transport of one or more contaminants, (b) advective and dispersive transport and availability of electron acceptors, (c) mass transfer limitations and (d) spatial heterogeneities, and then applying well-defined lab-scale parameters, should accurately describe field-scale processes.
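A minimal sketch of Monod-kinetic degradation coupled to microbial growth, the kind of formulation meant here, follows; it assumes SciPy, and all parameter values and units are invented for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Y, b = 2.0, 0.5, 0.4, 0.05   # 1/d, mg/L, g biomass/g substrate, 1/d

def monod(t, z):
    S, X = z                              # substrate and biomass concentrations
    growth = mu_max * S / (Ks + S) * X    # Monod-limited microbial growth
    return [-growth / Y, growth - b * X]  # substrate use; growth minus decay

sol = solve_ivp(monod, (0.0, 30.0), [10.0, 0.1], dense_output=True)
print("substrate after 30 d:", sol.y[0, -1])
```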
Parental numeric language input to Mandarin Chinese and English speaking preschool children.
Chang, Alicia; Sandhofer, Catherine M; Adelchanow, Lauren; Rottman, Benjamin
2011-03-01
The present study examined the number-specific parental language input to Mandarin- and English-speaking preschool-aged children. Mandarin and English transcripts from the CHILDES database were examined for the amount of numeric speech, the specific types of numeric speech, and the syntactic frames in which numeric speech appeared. The results showed that Mandarin-speaking parents talked about number more frequently than English-speaking parents. Further, the ways in which parents talked about number terms in the two languages were more supportive of a cardinal interpretation in Mandarin than in English. We discuss these results in terms of their implications for numerical understanding and later mathematical performance.
NASA Astrophysics Data System (ADS)
Srivastava, Y.; Srivastava, S.; Boriwal, L.
2016-09-01
Mechanical alloying is a novel solid-state process that has received considerable attention due to its many advantages over other conventional processes. In the present work, Co2FeAl Heusler alloy powder was prepared successfully from a premix of basic powders of cobalt (Co), iron (Fe) and aluminum (Al) in the stoichiometry 60Co-26Fe-14Al (weight %) by a novel mechano-chemical route. Magnetic properties of the mechanically alloyed powders were characterized by vibrating sample magnetometer (VSM). A two-factor, five-level design matrix was applied to the experimental process, and the experimental results were used for response surface methodology. The interaction between the input process parameters and the response was established with the help of regression analysis. Analysis of variance was further applied to check the adequacy of the developed model and the significance of the process parameters. A test case study was performed with parameters that were not selected for the main experimentation but lay within the same range. In response surface methodology the process parameters must be optimized to obtain improved magnetic properties; optimum process parameters were identified using numerical and graphical optimization techniques.
Zhu, Feng; Kalra, Anil; Saif, Tal; Yang, Zaihan; Yang, King H; King, Albert I
2016-01-01
Traumatic brain injury due to primary blast loading has become a signature injury in recent military conflicts and terrorist activities. Extensive experimental and computational investigations have been conducted to study the interrelationships between the intracranial pressure response and intrinsic or 'input' parameters such as the head geometry and loading conditions. However, these relationships are very complicated and are usually implicit and 'hidden' in a large amount of simulation/test data. In this study, a data mining method is proposed to extract such underlying information from the numerical simulation results. The heads of different species are described by a highly simplified two-part (skull and brain) finite element model with varying geometric parameters. The parameters considered include peak incident pressure, skull thickness, brain radius and snout length. Their interrelationships and coupling effects are discovered by developing a decision tree based on the large simulation data-set. The results show that the proposed data-driven method is superior to the conventional linear regression method and is comparable to the nonlinear regression method. Considering its capability of exploring implicit information and the relatively simple relationships it yields between response and input variables, the data mining method is considered to be a good tool for an in-depth understanding of the mechanisms of blast-induced brain injury. As a general method, this approach can also be applied to other nonlinear complex biomechanical systems.
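The decision-tree step can be sketched with scikit-learn as a generic stand-in for the authors' data mining tool; the data files are hypothetical, and only the feature names come from the abstract:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Columns: peak incident pressure, skull thickness, brain radius, snout length.
X = np.load("blast_inputs.npy")      # hypothetical simulation database
y = np.load("peak_icp.npy")          # peak intracranial pressure per run

tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=10).fit(X, y)
print(export_text(tree, feature_names=[
    "incident_pressure", "skull_thickness", "brain_radius", "snout_length"]))
```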
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, W.
Building something which could be called "virtual reality" (VR) is something of a challenge, particularly when nobody really seems to agree on a definition of VR. The author wanted to combine scientific visualization with VR, resulting in an environment useful for assisting scientific research. He demonstrates the combination of VR and scientific visualization in a prototype application. The VR application constructed consists of a dataflow-based system for performing scientific visualization (AVS), extensions to the system to support VR input devices, and a numerical simulation ported into the dataflow environment. The VR system includes two inexpensive, off-the-shelf VR devices and some custom code. A working system was assembled with about two man-months of effort. The system allows the user to specify parameters for a chemical flooding simulation, as well as some viewing parameters, using VR input devices, and to view the output using VR output devices. In chemical flooding, there is a subsurface region that contains chemicals which are to be removed. Secondary oil recovery and environmental remediation are typical applications of chemical flooding. The process assumes one or more injection wells, and one or more production wells. Chemicals or water are pumped into the ground, mobilizing and displacing hydrocarbons or contaminants. The placement of the production and injection wells, and other parameters of the wells, are the most important variables in the simulation.
Methods for Combining Payload Parameter Variations with Input Environment
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Straayer, J. W.
1975-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
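The extreme-value step can be sketched as fitting a Gumbel distribution to per-mission maximum loads and reading off a design limit load at a chosen exceedance probability; this assumes SciPy, and the synthetic data and the 1% level are illustrative, not the paper's criteria:

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(1)
# Largest load per simulated mission (stand-in for time-domain results).
mission_maxima = rng.gumbel(loc=100.0, scale=8.0, size=500)

loc, scale = gumbel_r.fit(mission_maxima)
design_limit_load = gumbel_r.ppf(0.99, loc, scale)  # 1% exceedance per mission
print(f"design limit load: {design_limit_load:.1f}")
```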
Atom based grain extraction and measurement of geometric properties
NASA Astrophysics Data System (ADS)
Martine La Boissonière, Gabriel; Choksi, Rustum
2018-04-01
We introduce an accurate, self-contained and automatic atom-based numerical algorithm to characterize grain distributions in two-dimensional Phase Field Crystal (PFC) simulations. We compare the method with hand-segmented and known test grain distributions to show that the algorithm is able to extract grains and measure their area, perimeter and other geometric properties with high accuracy. Four input parameters must be set by the user, and their influence on the results is described. The method is currently tuned to extract data from PFC simulations in the hexagonal lattice regime, but the framework may be extended to more general problems.
Extension, validation and application of the NASCAP code
NASA Technical Reports Server (NTRS)
Katz, I.; Cassidy, J. J., III; Mandell, M. J.; Schnuelle, G. W.; Steen, P. G.; Parks, D. E.; Rotenberg, M.; Alexander, J. H.
1979-01-01
Numerous extensions were made to the NASCAP code. They fall into three categories: a greater range of definable objects, a more sophisticated computational model, and simplified code structure and usage. An important validation of NASCAP was performed using a new two-dimensional computer code (TWOD). An interactive code (MATCHG) was written to compare material parameter inputs with charging results. The first major application of NASCAP was to the SCATHA satellite, for which shadowing and charging calculations were completed. NASCAP was installed at the Air Force Geophysics Laboratory, where researchers plan to use it to interpret SCATHA data.
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches for solving the flow problem on a coarse grid while obtaining a velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
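The proper-orthogonal-decomposition step can be sketched with an SVD of snapshot solutions. This generic POD sketch assumes NumPy and an invented snapshot file; it illustrates the principle rather than the authors' GMsFE construction:

```python
import numpy as np

# Snapshot matrix: each column is a fine-scale solution for one parameter sample.
S = np.load("snapshots.npy")                  # (n_dof, n_samples), hypothetical

U, sv, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
r = int(np.searchsorted(energy, 0.9999)) + 1  # keep 99.99% of snapshot energy
basis = U[:, :r]                              # reduced basis, r << n_samples

# Any new solution is approximated in the span of the reduced basis:
coeffs = basis.T @ S[:, 0]
recon = basis @ coeffs
```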
Numerical and experimental results on the spectral wave transfer in finite depth
NASA Astrophysics Data System (ADS)
Benassai, Guido
2016-04-01
Determination of the form of the one-dimensional surface gravity wave spectrum in water of finite depth is important for many scientific and engineering applications. Spectral parameters of deep-water and intermediate-depth waves serve as input data for the design of all coastal structures and for the description of many coastal processes. Moreover, wave spectra are an input for the response and seakeeping calculations of high-speed vessels in extreme sea conditions and for reliable calculations of the amount of energy to be extracted by wave energy converters (WEC). Available data on the finite-depth spectral form are generally extrapolated from parametric forms applicable in deep water (e.g., JONSWAP) [Hasselmann et al., 1973; Mitsuyasu et al., 1980; Kahma, 1981; Donelan et al., 1992; Zakharov, 2005]. The present paper contributes to this field through the validation of the transfer of offshore energy spectra from given spectral forms against measured inshore wave heights and spectra. The deep-water wave spectra were recorded offshore Ponza by the Wave Measurement Network (Piscopia et al., 2002). Field regressions of the spectral parameters, fp and the nondimensional energy, against fetch length were evaluated for fetch-limited sea conditions. These regressions gave the values of the spectral parameters for the site of interest. The offshore wave spectra were transferred from the measurement station offshore Ponza to a site located offshore the Gulf of Salerno. The local offshore wave spectra so obtained were transferred to the coastline with the TMA model (Bouws et al., 1985). Finally, the numerical results, in terms of significant wave heights, were compared with the wave data recorded in 9 m depth by a meteo-oceanographic station, owned by the Naples Hydrographic Office, on the coastline of Salerno. Some considerations were made about the wave energy that could potentially be extracted by wave energy converters, and the results were discussed.
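The spectral forms involved can be sketched as the JONSWAP spectrum multiplied by the Kitaigorodskii depth factor of the TMA model; the depth factor below uses a common piecewise approximation (believed to be the Thompson-Vincent form), and all parameter values are illustrative:

```python
import numpy as np

g = 9.81

def jonswap(f, fp, alpha=0.0081, gamma=3.3):
    """JONSWAP frequency spectrum S(f) in m^2/Hz."""
    sigma = np.where(f <= fp, 0.07, 0.09)
    r = np.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
    return (alpha * g**2 * (2 * np.pi) ** -4 * f**-5
            * np.exp(-1.25 * (fp / f) ** 4) * gamma**r)

def kitaigorodskii(f, h):
    """Approximate TMA depth factor (assumed piecewise approximation)."""
    wh = 2 * np.pi * f * np.sqrt(h / g)
    return np.where(wh <= 1, 0.5 * wh**2,
                    np.where(wh < 2, 1 - 0.5 * (2 - wh) ** 2, 1.0))

f = np.linspace(0.05, 0.5, 200)
S_deep = jonswap(f, fp=0.1)
S_9m = S_deep * kitaigorodskii(f, h=9.0)   # depth at the Salerno gauge
```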
On numerical reconstructions of lithographic masks in DUV scatterometry
NASA Astrophysics Data System (ADS)
Henn, M.-A.; Model, R.; Bär, M.; Wurm, M.; Bodermann, B.; Rathsfeld, A.; Gross, H.
2009-06-01
The solution of the inverse problem in scatterometry employing deep ultraviolet (DUV) light is discussed, i.e. we consider the determination of periodic surface structures from light diffraction patterns. With decreasing dimensions of the structures on photolithography masks and wafers, increasing demands on the required metrology techniques arise. Scatterometry, as a non-imaging indirect optical method, is applied to periodic line structures in order to determine the sidewall angles, heights, and critical dimensions (CD), i.e., the top and bottom widths. The latter quantities are typically in the range of tens of nanometers. All these angles, heights, and CDs are the fundamental figures for evaluating the quality of the manufacturing process. To measure these quantities a DUV scatterometer is used, which typically operates at a wavelength of 193 nm. The diffraction of light by periodic 2D structures can be simulated using the finite element method for the Helmholtz equation. The corresponding inverse problem seeks to reconstruct the grating geometry from measured diffraction patterns. Fixing the class of gratings and the set of measurements, this inverse problem reduces to a finite-dimensional nonlinear operator equation. Reformulating the problem as an optimization problem, a vast number of numerical schemes can be applied. Our tool is a sequential quadratic programming (SQP) variant of the Gauss-Newton iteration. In a first step, using a simulated data set, we investigate how accurately the geometrical parameters of an EUV mask can be reconstructed using light in the DUV range. We then determine the expected uncertainties of the geometric parameters by reconstructing from simulated input data perturbed by noise representing the estimated uncertainties of the input data. In the last step, we use the measurement data obtained from the new DUV scatterometer at PTB to determine the geometrical parameters of a typical EUV mask with our reconstruction algorithm. The results are compared to the outcome of investigations with two alternative methods, namely EUV scatterometry and SEM measurements.
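The optimization core can be sketched as a plain Gauss-Newton iteration for the nonlinear least-squares problem, without the SQP safeguards of the paper's variant; the residual and Jacobian functions and the starting point are hypothetical wrappers around the FEM Helmholtz solver:

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, n_iter=20, tol=1e-10):
    """Minimize ||residual(p)||^2 by Gauss-Newton steps."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        step = np.linalg.lstsq(J, -r, rcond=None)[0]  # solve J step = -r
        p = p + step
        if np.linalg.norm(step) < tol:
            break
    return p

# p = (height, sidewall angle, CD); residual = simulated minus measured
# diffraction efficiencies; both callables are hypothetical solver wrappers.
p_hat = gauss_newton(residual_fn, jacobian_fn, p0=[60.0, 88.0, 140.0])
```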
Reconstruction of nonlinear wave propagation
Fleischer, Jason W; Barsi, Christopher; Wan, Wenjie
2013-04-23
Disclosed are systems and methods for characterizing a nonlinear propagation environment by numerically propagating a measured output waveform resulting from a known input waveform. The numerical propagation reconstructs the input waveform, and in the process, the nonlinear environment is characterized. In certain embodiments, knowledge of the characterized nonlinear environment facilitates determination of an unknown input based on a measured output. Similarly, knowledge of the characterized nonlinear environment also facilitates formation of a desired output based on a configurable input. In both situations, the input thus characterized and the output thus obtained include features that would normally be lost in linear propagation. Such features can include evanescent waves and peripheral waves, such that an image thus obtained is an inherently wide-angle, far-field form of microscopy.
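Numerical back-propagation through a nonlinear medium can be sketched with a split-step Fourier scheme run with a negated step. This is a generic 1-D nonlinear-Schrödinger sketch with invented parameters, not the patented method itself:

```python
import numpy as np

def ssf_propagate(E, dx, dz, n_steps, k0, n2k):
    """Split-step Fourier propagation of a 1-D beam; negate dz to run backward."""
    kx = 2 * np.pi * np.fft.fftfreq(E.size, d=dx)
    lin = np.exp(-1j * kx**2 / (2 * k0) * dz)          # diffraction half-step
    for _ in range(n_steps):
        E = np.fft.ifft(np.fft.fft(E) * lin)
        E = E * np.exp(1j * n2k * np.abs(E) ** 2 * dz) # Kerr nonlinearity
    return E

x = np.linspace(-50, 50, 1024)
E_out = np.exp(-x**2 / 25)                             # stand-in measured output
E_in = ssf_propagate(E_out, dx=x[1] - x[0], dz=-0.5,   # negative dz: backward
                     n_steps=200, k0=10.0, n2k=0.01)
```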
Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy
2017-08-01
Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid-compatible Simplex variant, which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and deployed in three case studies whose search spaces comprise both categorical and numerical inputs, a situation intractable for traditional Simplex methods. The first study employs in silico data and lays out the dummy-variable methodology. The latter two employ experimental data from chromatography-based studies performed with the filter-plate and miniature-column high-throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody, whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the stranding of the Simplex method at local optima due to arbitrary handling of the categorical inputs, and allowed the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method combined with dummy variables was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. Such an approach failed, however, both to capture trends and to identify optima, and led to poor operating conditions. It is suggested that the Simplex variant is suited to development activities involving numerical and categorical inputs in early bioprocess development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Wilson, Scott D.; Reid, Terry V.; Schifer, Nicholas A.; Briggs, Maxwell H.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. Microporous bulk insulation is used in the ground support test hardware to minimize the loss of thermal energy from the electric heat source to the environment. The insulation package is characterized before operation to predict how much heat will be absorbed by the convertor and how much will be lost to the environment during operation. In an effort to validate these predictions, numerous tasks have been performed, which provided a more accurate value for net heat input into the ASCs. This test and modeling effort included: (a) making thermophysical property measurements of test setup materials to provide inputs to the numerical models, (b) acquiring additional test data that was collected during convertor tests to provide numerical models with temperature profiles of the test setup via thermocouple and infrared measurements, (c) using multidimensional numerical models (computational fluid dynamics code) to predict net heat input of an operating convertor, and (d) using validation test hardware to provide direct comparison of numerical results and validate the multidimensional numerical models used to predict convertor net heat input. This effort produced high fidelity ASC net heat input predictions, which were successfully validated using specially designed test hardware enabling measurement of heat transferred through a simulated Stirling cycle. The overall effort and results are discussed.
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a mathematical description of neuron activity, which helps to better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that, under three different frequencies of acupuncture stimulation, the estimated input parameters differ markedly: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
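The response-system half can be sketched with a standard leaky integrate-and-fire neuron; this generic LIF simulation, with invented constants and a stand-in stimulus, is not the authors' calibrated model:

```python
import numpy as np

def lif_spike_times(I, dt=1e-4, tau=0.02, R=1e7, v_th=0.015, v_reset=0.0):
    """Leaky integrate-and-fire: dv/dt = (-v + R*I)/tau; spike when v >= v_th."""
    v, spikes = 0.0, []
    for i, current in enumerate(I):
        v += dt * (-v + R * current) / tau
        if v >= v_th:
            spikes.append(i * dt)
            v = v_reset
    return spikes

t = np.arange(0, 1.0, 1e-4)
I = 2e-9 * (1 + 0.5 * np.sin(2 * np.pi * 2 * t))  # stand-in stimulus-driven input
print(len(lif_spike_times(I)), "spikes in 1 s")
```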
Empirical and numerical investigation of mass movements - data fusion and analysis
NASA Astrophysics Data System (ADS)
Schmalz, Thilo; Eichhorn, Andreas; Buhl, Volker; Tinkhof, Kurt Mair Am; Preh, Alexander; Tentschert, Ewald-Hans; Zangerl, Christian
2010-05-01
Increasing settlement activity in mountainous regions and the appearance of extreme climatic conditions motivate the investigation of landslides. Within the last few years a significant rise in disastrous slides has been registered, which has generated broad public interest and requests for security measures. The FWF (Austrian Science Fund) funded project 'KASIP' (Knowledge-based Alarm System with Identified Deformation Predictor) deals with the development of a new type of alarm system based on calibrated numerical slope models for the realistic calculation of failure scenarios. In KASIP, calibration is the optimal adaptation of a numerical model to available monitoring data by least-squares techniques (e.g. adaptive Kalman filtering). Adaptation means the determination of a priori uncertain physical parameters such as the strength of the geological structure. The object of our studies in KASIP is the landslide 'Steinlehnen' near Innsbruck (Northern Tyrol, Austria). The first part of the presentation focuses on the determination of geometrical surface information. This also includes the description of the monitoring system for the collection of the displacement data and of filter approaches for the estimation of the slope's kinematic behaviour. The necessity of continuous monitoring and the effect of data gaps on reliable filter results and on the prediction of the future state are discussed. The second part of the presentation focuses on the numerical modelling of the slope by finite-difference (FD) methods and on the development of the adaptive Kalman filter. The numerical slope model is implemented in FLAC3D (software company HCItasca Ltd.). The model contains different geomechanical approaches (such as Mohr-Coulomb) and enables the calculation of large deformations and of the failure of the slope. Stability parameters (such as the factor of safety, FS) allow the evaluation of the current state of the slope. Until now, the adaptation of the relevant material parameters has often been performed by trial-and-error methods. This common method shall be improved by adaptive Kalman filtering which, in contrast to trial and error, also considers the stochastic information of the input data. In particular, the estimation of strength parameters (cohesion c, angle of internal friction phi) in a dynamic consideration of the slope is discussed. Problems with the conditioning and numerical stability of the filter matrices, memory overflow and computing time are outlined. It is shown that the Kalman filter is in principle suitable for a semi-automated adaptation process and obtains realistic values for the unknown material parameters.
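Parameter adaptation by Kalman filtering can be illustrated with a scalar toy example in which an uncertain strength parameter is estimated from noisy displacement observations; the linear forward model, noise levels and friction-angle values are all invented, and this only sketches the principle, not the FLAC3D-coupled filter:

```python
import numpy as np

rng = np.random.default_rng(2)
phi_true = 32.0                      # "true" friction angle driving displacements
x, P, Q, R = 25.0, 25.0, 1e-4, 0.5   # estimate, variance, process & obs. noise

def displacement_rate(phi):          # hypothetical forward model of the slope
    return 50.0 - phi

for _ in range(100):
    z = displacement_rate(phi_true) + rng.normal(0.0, np.sqrt(R))  # observation
    P = P + Q                        # predict (parameter nearly constant)
    H = -1.0                         # d(observation)/d(parameter)
    K = P * H / (H * P * H + R)      # Kalman gain
    x = x + K * (z - displacement_rate(x))
    P = (1 - K * H) * P
print("estimated friction angle:", round(x, 2))
```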
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225) and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
Machine Learning and Deep Learning Models to Predict Runoff Water Quantity and Quality
NASA Astrophysics Data System (ADS)
Bradford, S. A.; Liang, J.; Li, W.; Murata, T.; Simunek, J.
2017-12-01
Contaminants can be rapidly transported at the soil surface by runoff to surface water bodies. Physically-based models, which rest on mathematical descriptions of the main hydrological processes, are key tools for predicting surface water impairment. Alongside physically-based models, data-driven models are becoming increasingly popular for describing the behavior of hydrological and water resources systems, since they can complement or even replace physically-based models. In this presentation we propose a new data-driven model as an alternative to a physically-based overland flow and transport model. First, we developed a physically-based numerical model to simulate overland flow and contaminant transport (the HYDRUS-1D overland flow module). A large number of numerical simulations were carried out to develop a database containing information about the impact of various input parameters (weather patterns, surface topography, vegetation, soil conditions, contaminants, and best management practices) on runoff water quantity and quality outputs. This database was used to train data-driven models. Three different methods (Neural Networks, Support Vector Machines, and Recurrent Neural Networks) were explored to prepare input-output functional relations. Results demonstrate the ability and limitations of machine learning and deep learning models to predict runoff water quantity and quality.
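Training one of the data-driven candidates on the simulation database can be sketched with a scikit-learn support-vector regressor; the file names, feature layout and hyperparameters are invented, and the pipeline is generic rather than the authors' configuration:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Rows: one HYDRUS-1D run; columns: rainfall, slope, roughness, soil and
# contaminant parameters. Target: runoff contaminant load for that run.
X = np.load("hydrus_inputs.npy"); y = np.load("runoff_load.npy")  # hypothetical
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```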
Approximate Bayesian evaluations of measurement uncertainty
NASA Astrophysics Data System (ADS)
Possolo, Antonio; Bodnar, Olha
2018-04-01
The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
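The spirit of the method can be sketched as a Laplace-type approximation: find the posterior mode by numerical optimization and take the standard uncertainty from the curvature of the negative log-posterior at the mode. The measurand here is a simple mean with a Gaussian prior, and all numbers are invented, not taken from the paper's examples:

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([10.12, 10.07, 10.15, 10.09, 10.11])  # observed input values
sigma = 0.05                                        # known measurement sd
mu0, tau = 10.0, 0.2                                # prior mean and sd

def neg_log_post(theta):
    mu = theta[0]
    return (0.5 * np.sum((x - mu) ** 2) / sigma**2   # likelihood term
            + 0.5 * (mu - mu0) ** 2 / tau**2)        # prior term

res = minimize(neg_log_post, x0=[np.mean(x)])
# Curvature of the negative log-posterior at the mode gives the
# approximate posterior variance (available analytically here).
hess = len(x) / sigma**2 + 1.0 / tau**2
print(f"estimate {res.x[0]:.4f}, standard uncertainty {1/np.sqrt(hess):.4f}")
```

Only optimization and simple algebra are involved, which is the accessibility argument the abstract makes.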
Menegaldo, Luciano Luporini; de Oliveira, Liliam Fernandes; Minato, Kin K
2014-04-04
This paper describes the "EMG Driven Force Estimator (EMGD-FE)", a Matlab® graphical user interface (GUI) application that estimates skeletal muscle forces from electromyography (EMG) signals. Muscle forces are obtained by numerically integrating a system of ordinary differential equations (ODEs) that simulates Hill-type muscle dynamics and that utilises EMG signals as input. In the current version, the GUI can estimate the forces of lower limb muscles executing isometric contractions. Muscles from other parts of the body can be tested as well, although no default values for model parameters are provided. To achieve accurate evaluations, EMG collection is performed simultaneously with torque measurement from a dynamometer. The computer application guides the user, step-by-step, to pre-process the raw EMG signals, create inputs for the muscle model, numerically integrate the ODEs and analyse the results. An example of the application's functions is presented using the quadriceps femoris muscle. Individual muscle force estimations for the four components as well the knee isometric torque are shown. The proposed GUI can estimate individual muscle forces from EMG signals of skeletal muscles. The estimation accuracy depends on several factors, including signal collection and modelling hypothesis issues.
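For orientation, a stripped-down Hill-type computation of the kind EMGD-FE performs might look like the following, where a processed EMG envelope drives first-order activation dynamics and isometric force follows from a force-length factor; the envelope, time constants and muscle values are illustrative guesses, not the application's defaults:

```python
import numpy as np
from scipy.integrate import solve_ivp

F_max, l_opt, l_ce = 3000.0, 0.09, 0.085   # hypothetical quadriceps values
tau_act, tau_deact = 0.015, 0.060          # activation time constants [s]

def u(t):                                  # toy EMG envelope, range 0..1
    return 0.8 * (t > 0.1) * (t < 0.9)

def activation(t, y):                      # first-order activation dynamics
    a = y[0]
    tau = tau_act if u(t) > a else tau_deact
    return [(u(t) - a) / tau]

sol = solve_ivp(activation, [0.0, 1.2], [0.0], max_step=1e-3)
fl = np.exp(-((l_ce / l_opt - 1.0) / 0.45) ** 2)   # force-length factor
force = sol.y[0] * F_max * fl                      # isometric: no f-v term
print(f"peak estimated muscle force: {force.max():.0f} N")
```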
DIATOM (Data Initialization and Modification) Library Version 7.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, David A.; Schmitt, Robert G.; Hensinger, David M.
DIATOM is a library that provides numerical simulation software with a computational geometry front end that can be used to build up complex problem geometries from collections of simpler shapes. The library provides a parser which allows for application-independent geometry descriptions to be embedded in simulation software input decks. Descriptions take the form of collections of primitive shapes and/or CAD input files and material properties that can be used to describe complex spatial and temporal distributions of numerical quantities (often called “database variables” or “fields”) to help define starting conditions for numerical simulations. The capability is designed to be general purpose, robust and computationally efficient. By using a combination of computational geometry and recursive divide-and-conquer approximation techniques, a wide range of primitive shapes are supported to arbitrary degrees of fidelity, controllable through user input and limited only by machine resources. Through the use of call-back functions, numerical simulation software can request the value of a field at any time or location in the problem domain. Typically, this is used only for defining initial conditions, but the capability is not limited to just that use. The most recent version of DIATOM provides the ability to import the solution field from one numerical solution as input for another.
Regan, R.S.; Schaffranek, R.W.; Baltzer, R.A.
1996-01-01
A system of functional utilities and computer routines, collectively identified as the Time-Dependent Data System (TDDS), has been developed and documented by the U.S. Geological Survey. The TDDS is designed for processing time sequences of discrete, fixed-interval, time-varying geophysical data--in particular, hydrologic data. Such data include various dependent variables and related parameters typically needed as input for execution of one-, two-, and three-dimensional hydrodynamic/transport and associated water-quality simulation models. Such data can also include time sequences of results generated by numerical simulation models. Specifically, TDDS provides the functional capabilities to process, store, retrieve, and compile data in a Time-Dependent Data Base (TDDB) in response to interactive user commands or pre-programmed directives. Thus, the TDDS, in conjunction with a companion TDDB, provides a ready means for processing, preparation, and assembly of time sequences of data for input to models; collection, categorization, and storage of simulation results from models; and intercomparison of field data and simulation results. The TDDS can be used to edit and verify prototype time-dependent data to affirm that selected sequences of data are accurate, contiguous, and appropriate for numerical simulation modeling. It can be used to prepare time-varying data in a variety of formats, such as tabular lists, sequential files, arrays, and graphical displays, as well as line-printer plots of single or multiparameter data sets. The TDDB is organized and maintained as a direct-access data base by the TDDS, thus providing simple, yet efficient, data management and access. A single, easily used program interface that provides all access to and from a particular TDDB is available for use directly within models, other user-provided programs, and other data systems. This interface, together with each major functional utility of the TDDS, is described and documented in this report.
Numerical modelling of gravel unconstrained flow experiments with the DAN3D and RASH3D codes
NASA Astrophysics Data System (ADS)
Sauthier, Claire; Pirulli, Marina; Pisani, Gabriele; Scavia, Claudio; Labiouse, Vincent
2015-12-01
Landslide continuum dynamic models have improved considerably in the last years, but a consensus on the best method of calibrating the input resistance parameter values for predictive analyses has not yet emerged. In the present paper, numerical simulations of a series of laboratory experiments performed at the Laboratory for Rock Mechanics of the EPF Lausanne were undertaken with the RASH3D and DAN3D numerical codes. They aimed at analysing the possibility of using calibrated ranges of parameters (1) in a code different from the one with which they were obtained and (2) to simulate potential events made of a material with the same characteristics as back-analysed past events, but involving a different volume and propagation path. For this purpose, one of the four benchmark laboratory tests was used as the past event to calibrate the dynamic basal friction angle, assuming a Coulomb-type behaviour of the sliding mass, and this back-analysed value was then used to simulate the three other experiments, treated as potential events. The computational findings show good correspondence with experimental results in terms of the characteristics of the final deposits (i.e., runout, length and width). Furthermore, the obtained best-fit values of the dynamic basal friction angle for the two codes turn out to be close to each other and within the range of values measured with pseudo-dynamic tilting tests.
Numerical simulations of continuum-driven winds of super-Eddington stars
NASA Astrophysics Data System (ADS)
van Marle, A. J.; Owocki, S. P.; Shaviv, N. J.
2008-09-01
We present the results of numerical simulations of continuum-driven winds of stars that exceed the Eddington limit and compare these against predictions from earlier analytical solutions. Our models are based on the assumption that the stellar atmosphere consists of clumped matter, where the individual clumps have a much larger optical thickness than the matter between the clumps. This `porosity' of the stellar atmosphere reduces the coupling between radiation and matter, since photons tend to escape through the more tenuous gas between the clumps. This allows a star that formally exceeds the Eddington limit to remain stable, yet produce a steady outflow from the region where the clumps become optically thin. We have made a parameter study of wind models for a variety of input conditions in order to explore the properties of continuum-driven winds. The results show that the numerical simulations reproduce quite closely the analytical scalings. The mass-loss rates produced in our models are much larger than can be achieved by line driving. This makes continuum driving a good mechanism to explain the large mass-loss and flow speeds of giant outbursts, as observed in η Carinae and other luminous blue variable stars. Continuum driving may also be important in population III stars, since line driving becomes ineffective at low metallicities. We also explore the effect of photon tiring and the limits it places on the wind parameters.
Towards a comprehensive city emission function (CCEF)
NASA Astrophysics Data System (ADS)
Kocifaj, Miroslav
2018-01-01
The comprehensive city emission function (CCEF) is developed for heterogeneous light-emitting or light-blocking urban environments, embracing any combination of input parameters that characterize linear dimensions in the system (size of and distances between buildings or luminaires), properties of light-emitting elements (such as luminous building façades and street lighting), ground reflectance and total uplight fraction, all of these defined for an arbitrarily sized 2D area. The analytical formula obtained is not restricted to a single model class, as it can capture any specific light-emission feature for a wide range of cities. The CCEF method is numerically fast, in contrast to what can be expected of other probabilistic approaches that rely on repeated random sampling. Hence the present solution has great potential in light-pollution modeling and can be included in larger numerical models. Our theoretical findings promise great progress in light-pollution modeling, as this is the first time an analytical solution to the city emission function (CEF) has been developed that depends on the statistical mean size and height of city buildings, inter-building separation, prevailing heights of light fixtures, lighting density, and other factors such as luminaire light output and light distribution, including the amount of uplight, and representative city size. The model is validated for sensitivity and specificity pertinent to combinations of input parameters in order to test its behavior under various conditions, including those that can occur in complex urban environments. It is demonstrated that the solution model succeeds in reproducing a light emission peak at some elevated zenith angles and is consistent with reduced rather than enhanced emission in directions nearly parallel to the ground.
NASA Astrophysics Data System (ADS)
Arason, P.; Barsotti, S.; De'Michieli Vitturi, M.; Jónsson, S.; Arngrímsson, H.; Bergsson, B.; Pfeffer, M. A.; Petersen, G. N.; Bjornsson, H.
2016-12-01
Plume height and mass eruption rate are the principal scale parameters of explosive volcanic eruptions. Weather radars are important instruments for estimating plume height, due to their independence of daylight, weather and visibility. The Icelandic Meteorological Office (IMO) operates two fixed-position C-band weather radars and two mobile X-band radars. All volcanoes in Iceland can be monitored by IMO's radar network, and during the initial phases of an eruption all available radars will be set to a more detailed volcano scan. When the radar volume data are retrieved at IMO headquarters in Reykjavík, an automatic analysis is performed on the radar data above the proximity of the volcano. The plume height is automatically estimated taking into account the radar scanning strategy, beam width, and a likely reflectivity gradient at the plume top. This analysis provides a distribution of the likely plume height. The automatically determined plume height estimates from the radar data are used as input to a numerical suite that calculates the eruptive source parameters through an inversion algorithm. This is done by using the coupled system DAKOTA-PlumeMoM, which solves the 1D plume model equations iteratively by varying the input values of vent radius and vertical velocity. The model accounts for the effect of wind on the plume dynamics, using atmospheric vertical profiles extracted from the ECMWF numerical weather prediction model. Finally, the resulting estimates of mass eruption rate are used to initialize the dispersal model VOL-CALPUFF to assess the hazard due to tephra fallout, and are communicated to the London VAAC to support their modelling activity for aviation safety purposes.
NASA Astrophysics Data System (ADS)
Casado-Pascual, Jesús; Denk, Claus; Gómez-Ordóñez, José; Morillo, Manuel; Hänggi, Peter
2003-03-01
In the context of the phenomenon of stochastic resonance (SR), we study the correlation function, the signal-to-noise ratio (SNR), and the ratio of output over input SNR, i.e., the gain, associated with the nonlinear response of a bistable system driven by time-periodic forces and white Gaussian noise. These quantifiers for SR are evaluated using the techniques of linear response theory (LRT) beyond the usually employed two-mode approximation scheme. We analytically demonstrate within such an extended LRT description that the gain can indeed not exceed unity. We implement an efficient algorithm, based on work by Greenside and Helfand (detailed in the Appendix), to integrate the driven Langevin equation, and the predictions of LRT are carefully tested against the results obtained from numerical solutions of the corresponding Langevin equation over a wide range of parameter values. We further present an accurate procedure to evaluate the distinct contributions of the coherent and incoherent parts of the correlation function to the SNR and the gain. As a main result we show for subthreshold driving that both the correlation function and the SNR can deviate substantially from the predictions of LRT, and yet the gain can be either larger or smaller than unity. In particular, we find that the gain can exceed unity in the strongly nonlinear regime, which is characterized by weak noise and very slow multifrequency subthreshold input signals with a small duty cycle. This latter result is in agreement with recent analog simulation results by Gingl et al. [ICNF 2001, edited by G. Bosman (World Scientific, Singapore, 2002), pp. 545-548; Fluct. Noise Lett. 1, L181 (2001)].
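A plain Euler-Maruyama integration of the driven bistable Langevin equation conveys the numerical side of such a study; the authors use a higher-order Greenside-Helfand scheme, and the parameter values below are arbitrary:

```python
import numpy as np

# Euler-Maruyama integration of the periodically driven bistable system
#   dx/dt = x - x^3 + A cos(Omega t) + sqrt(2 D) xi(t).
A, Omega, D = 0.25, 0.1, 0.12      # subthreshold drive and noise strength
dt, n_steps = 1e-2, 200_000
rng = np.random.default_rng(42)

x = np.empty(n_steps)
x[0] = 1.0
for i in range(1, n_steps):
    t = i * dt
    drift = x[i-1] - x[i-1]**3 + A * np.cos(Omega * t)
    x[i] = x[i-1] + drift * dt + np.sqrt(2 * D * dt) * rng.normal()

# Fourier amplitude of the response at the driving frequency
t = np.arange(n_steps) * dt
resp = 2.0 * np.abs(np.mean(x * np.exp(-1j * Omega * t)))
print(f"response amplitude at the drive frequency: {resp:.3f}")
```

Extracting the coherent part at the drive frequency, as in the last two lines, is the starting point for the SNR and gain estimates discussed in the abstract.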
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
Bizios, Dimitrios; Heijl, Anders; Hougaard, Jesper Leth; Bengtsson, Boel
2010-02-01
To compare the performance of two machine learning classifiers (MLCs), artificial neural networks (ANNs) and support vector machines (SVMs), with input based on retinal nerve fibre layer thickness (RNFLT) measurements by optical coherence tomography (OCT), on the diagnosis of glaucoma, and to assess the effects of different input parameters. We analysed Stratus OCT data from 90 healthy persons and 62 glaucoma patients. Performance of MLCs was compared using conventional OCT RNFLT parameters plus novel parameters such as minimum RNFLT values, 10th and 90th percentiles of measured RNFLT, and transformations of A-scan measurements. For each input parameter and MLC, the area under the receiver operating characteristic curve (AROC) was calculated. There were no statistically significant differences between ANNs and SVMs. The best AROCs for both ANN (0.982, 95% CI: 0.966-0.999) and SVM (0.989, 95% CI: 0.979-1.0) were based on input of transformed A-scan measurements. Our SVM trained on this input performed better than ANNs or SVMs trained on any of the single RNFLT parameters (p ≤ 0.038). The performance of ANNs and SVMs trained on minimum thickness values and the 10th and 90th percentiles was at least as good as that of ANNs and SVMs with input based on the conventional RNFLT parameters. No differences between ANN and SVM were observed in this study. Both MLCs performed very well, with similar diagnostic performance. Input parameters have a larger impact on diagnostic performance than the type of machine classifier. Our results suggest that parameters based on transformed A-scan thickness measurements of the RNFL processed by machine classifiers can improve OCT-based glaucoma diagnosis.
Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.
2000-01-01
This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
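The core of the Parameter-Estimation Process, a (here lightly damped) Gauss-Newton iteration on a weighted least-squares objective with numerically computed sensitivities, can be sketched in a few lines; the two-parameter "model" below is a toy stand-in for a ground-water flow simulation, not MODFLOW itself:

```python
import numpy as np

def sim(b, x):
    """Toy 'model': drawdown-like curve with two parameters."""
    return b[0] * np.log1p(x / b[1])

x = np.linspace(1.0, 10.0, 8)
obs = sim([2.0, 3.0], x) + np.random.default_rng(0).normal(0, 0.01, 8)
W = np.eye(len(x)) / 0.01**2        # weights = 1 / observation variance

b = np.array([1.5, 2.0])
for _ in range(10):
    r = obs - sim(b, x)             # residuals of the weighted objective
    # numerical sensitivity (Jacobian) matrix, one column per parameter
    J = np.column_stack([(sim(b + dp, x) - sim(b, x)) / 1e-6
                         for dp in 1e-6 * np.eye(2)])
    step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    b = b + 0.7 * step              # damped step (full GN would use 1.0)
print("estimated parameters:", np.round(b, 3))
```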
Scaling water and energy fluxes in climate systems - Three land-atmospheric modeling experiments
NASA Technical Reports Server (NTRS)
Wood, Eric F.; Lakshmi, Venkataraman
1993-01-01
Three numerical experiments that investigate the scaling of land-surface processes - either of the inputs or parameters - are reported, and the aggregated processes are compared to the spatially variable case. The first is the aggregation of the hydrologic response in a catchment due to rainfall during a storm event and due to evaporative demands during interstorm periods. The second is the spatial and temporal aggregation of latent heat fluxes, as calculated from SiB. The third is the aggregation of remotely sensed land vegetation and latent and sensible heat fluxes using TM data from the FIFE experiment of 1987 in Kansas. In all three experiments it was found that the surface fluxes and land characteristics can be scaled, and that macroscale models based on effective parameters are sufficient to account for the small-scale heterogeneities investigated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spycher, Nicolas; Peiffer, Loic; Finsterle, Stefan
GeoT implements the multicomponent geothermometry method developed by Reed and Spycher (1984, Geochim. Cosmochim. Acta 46 513–528) into a stand-alone computer program, to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions and the geothermometry computations are all implemented in the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization using existing parameter estimation software, such as iTOUGH2, PEST, or UCODE. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss.
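The essence of the approach, choosing the temperature at which the mineral saturation indices cluster around zero, can be illustrated with a toy optimization; the linear SI curves below are fabricated for illustration, not real thermodynamic data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Each mineral has a saturation index SI_i(T) that crosses zero at
# equilibrium; the reservoir temperature estimate minimizes the spread
# of the indices around zero.
minerals = {"quartz":    lambda T: 0.010 * (T - 182.0),
            "calcite":   lambda T: -0.008 * (T - 175.0),
            "kaolinite": lambda T: 0.012 * (T - 185.0)}

def si_rms(T):
    si = np.array([f(T) for f in minerals.values()])
    return np.sqrt(np.mean(si ** 2))

res = minimize_scalar(si_rms, bounds=(50.0, 300.0), method="bounded")
print(f"estimated reservoir temperature: {res.x:.0f} C")
```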
Improved importance sampling technique for efficient simulation of digital communication systems
NASA Technical Reports Server (NTRS)
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evaluations of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evaluations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and of IIS over CIS for simulations of digital communications systems.
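A minimal example of the translation-type biasing idea: estimating a small Gaussian tail probability by mean-shifting the sampling density and reweighting, which plain Monte Carlo cannot resolve at the same sample size. The threshold and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, n = 4.0, 100_000                     # true P(N > 4) ~ 3.17e-5

# Plain Monte Carlo: almost no hits at this sample size
mc = np.mean(rng.normal(size=n) > gamma)

# IS: sample from Normal(gamma, 1) and reweight by the likelihood ratio
z = rng.normal(loc=gamma, size=n)
w = np.exp(-gamma * z + 0.5 * gamma**2)     # phi(z) / phi(z - gamma)
is_est = np.mean((z > gamma) * w)

print(f"MC estimate: {mc:.2e}   IS estimate: {is_est:.2e}")
```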
NASA Astrophysics Data System (ADS)
Zi, Bin; Zhou, Bin
2016-07-01
For the prediction of the dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with known probability distributions are modeled as random variables, whereas the parameters with lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in the input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, based on the random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series, the dynamic response expression of the LSOAAC is developed. Moreover, the mathematical characteristics of the extrema of the bounds of the dynamic response are determined by the random interval moment method and a monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and the interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated in depth, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q. In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L.
Yu, Manzhu; Yang, Chaowei
2016-01-01
Dust storms are devastating natural disasters that cost billions of dollars and many human lives every year. Using the Non-Hydrostatic Mesoscale Dust Model (NMM-dust), this research studies how different spatiotemporal resolutions of two input parameters (soil moisture and greenness vegetation fraction) impact the sensitivity and accuracy of a dust model. Experiments are conducted by simulating dust concentration during July 1-7, 2014, for a target area covering part of Arizona and California (31, 37, -118, -112), with a resolution of ~3 km. Using ground-based and satellite observations, this research validates the temporal evolution and spatial distribution of dust storm output from the NMM-dust, and quantifies model error using four evaluation metrics (mean bias error, root mean square error, correlation coefficient and fractional gross error). Results showed that the default configuration of NMM-dust (with a low spatiotemporal resolution of both input parameters) generates an overestimation of Aerosol Optical Depth (AOD). Although it is able to qualitatively reproduce the temporal trend of the dust event, the default configuration of NMM-dust cannot fully capture its actual spatial distribution. Adjusting the spatiotemporal resolution of the soil moisture and vegetation cover datasets showed that the model is sensitive to both parameters. Increasing the spatiotemporal resolution of soil moisture effectively reduces the model's overestimation of AOD, while increasing the spatiotemporal resolution of vegetation cover changes the spatial distribution of the reproduced dust storm. The adjustment of both parameters enables NMM-dust to capture the spatial distribution of dust storms, as well as to reproduce more accurate dust concentrations.
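For reference, the four evaluation metrics are straightforward to compute for paired model/observation series; the AOD numbers below are synthetic, purely for illustration:

```python
import numpy as np

model = np.array([0.42, 0.55, 0.61, 0.48, 0.37])
obs   = np.array([0.35, 0.49, 0.52, 0.45, 0.30])

mbe  = np.mean(model - obs)                          # mean bias error
rmse = np.sqrt(np.mean((model - obs) ** 2))          # root mean square error
r    = np.corrcoef(model, obs)[0, 1]                 # correlation coefficient
fge  = 2.0 * np.mean(np.abs((model - obs) / (model + obs)))  # fractional gross error

print(f"MBE={mbe:.3f} RMSE={rmse:.3f} r={r:.3f} FGE={fge:.3f}")
```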
NASA Astrophysics Data System (ADS)
Bertin, Daniel
2017-02-01
An innovative 3-D numerical model for the dynamics of volcanic ballistic projectiles is presented here. The model focuses on ellipsoidal particles and improves previous approaches by considering a horizontal wind field, virtual mass forces, and drag forces subject to variable shape-dependent drag coefficients. Modeling suggests that the projectile's launch velocity and ejection angle are first-order parameters influencing ballistic trajectories. The projectile's density and minor radius are second-order factors, whereas both the intermediate and major radii of the projectile are of third order. Comparing output parameters, assuming different input data, highlights the importance of considering a horizontal wind field and variable shape-dependent drag coefficients in ballistic modeling, which suggests that they should be included in every ballistic model. On the other hand, virtual mass forces can be discarded, since they contribute almost nothing to ballistic trajectories. Simulation results were used to constrain some crucial input parameters (launch velocity, ejection angle, wind speed, and wind azimuth) of the block that formed the biggest and most distal ballistic impact crater during the 1984-1993 eruptive cycle of Lascar volcano, Northern Chile. Subsequently, up to 10^6 simulations were performed, with nine ejection parameters defined by a Latin hypercube sampling approach. Simulation results were summarized as a quantitative probabilistic hazard map for ballistic projectiles. Transects were also made in order to depict aerial hazard zones based on the same probabilistic procedure. Both maps combined can be used as a hazard prevention tool for ground and aerial transit near unresting volcanoes.
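A point-mass version of such a ballistic model, with quadratic drag and a uniform horizontal wind, is sketched below; the paper's model additionally treats ellipsoidal particles with shape-dependent drag coefficients and virtual mass terms, and all values here are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

rho_air, g = 1.0, 9.81                 # air density [kg/m^3], gravity
m, r, Cd = 15.0, 0.12, 1.0             # block mass, radius, drag coefficient
A = np.pi * r**2                       # cross-sectional area
wind = np.array([10.0, 0.0, 0.0])      # uniform horizontal wind [m/s]

def rhs(t, y):
    v = y[3:]
    v_rel = v - wind                   # velocity relative to the air
    drag = -0.5 * rho_air * Cd * A * np.linalg.norm(v_rel) * v_rel / m
    return np.hstack([v, drag + np.array([0.0, 0.0, -g])])

v0, ang = 150.0, np.deg2rad(45.0)      # launch speed and ejection angle
y0 = [0.0, 0.0, 0.0, v0 * np.cos(ang), 0.0, v0 * np.sin(ang)]
hit = lambda t, y: y[2]                # stop when the block returns to z = 0
hit.terminal, hit.direction = True, -1
sol = solve_ivp(rhs, [0, 120], y0, events=hit, max_step=0.05)
print(f"range: {sol.y[0, -1]:.0f} m  flight time: {sol.t[-1]:.1f} s")
```

Wrapping this integration in a Latin hypercube sweep over launch velocity, ejection angle, wind speed and wind azimuth reproduces the probabilistic-hazard workflow described in the abstract.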
GeV-scale hot sterile neutrino oscillations: a numerical solution
NASA Astrophysics Data System (ADS)
Ghiglieri, J.; Laine, M.
2018-02-01
The scenario of baryogenesis through GeV-scale sterile neutrino oscillations is governed by non-linear differential equations for the time evolution of a sterile neutrino density matrix and Standard Model lepton and baryon asymmetries. By employing up-to-date rate coefficients and a non-perturbatively estimated Chern-Simons diffusion rate, we present a numerical solution of this system, incorporating the full momentum and helicity dependences of the density matrix. The density matrix deviates significantly from kinetic equilibrium, with the IR modes equilibrating much faster than the UV modes. For equivalent input parameters, our final results differ moderately (˜50%) from recent benchmarks in the literature. The possibility of producing an observable baryon asymmetry is nevertheless confirmed. We illustrate the dependence of the baryon asymmetry on the sterile neutrino mass splitting and on the CP-violating phase measurable in active neutrino oscillation experiments.
NASA Astrophysics Data System (ADS)
Tokarczyk, Jarosław
2016-12-01
A method for identifying the effects of dynamic overload on people, which may occur in an emergency state of a suspended monorail, is presented in the paper. The braking curve was determined using MBS (Multi-Body System) simulation. For this purpose, a computational MBS model of the suspended monorail was developed and two different variants of numerical calculations were carried out. An algorithm for conducting numerical simulations to assess the effects of dynamic overload acting on suspended monorail users is also presented. An example of a computational FEM (Finite Element Method) model, composed of the technical means and the anthropometric ATB (Articulated Total Body) model, is shown. The simulation results are presented: a graph of the HIC (Head Injury Criterion) parameter and successive phases of displacement of the ATB model. A generator of computational models for the safety criterion, which enables preparation of input data and remote starting of simulations, is proposed.
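For concreteness, the HIC is a windowed functional of the resultant head acceleration a(t) in units of g, HIC = max over [t1, t2] of [(1/(t2-t1)) ∫ a dt]^2.5 (t2-t1), usually with the window capped at 15 ms (HIC15). The sketch below evaluates it for a synthetic half-sine pulse standing in for ATB output:

```python
import numpy as np

dt, t_max, window = 1e-4, 0.1, 0.015
t = np.arange(0.0, t_max, dt)
acc = 60.0 * np.sin(np.pi * t / 0.02) * (t < 0.02)   # toy 20 ms pulse [g]

cum = np.concatenate([[0.0], np.cumsum(acc) * dt])   # running integral of a(t)
w_max = int(round(window / dt))                      # window cap in samples
hic = 0.0
for i in range(len(t)):
    for j in range(i + 1, min(i + w_max, len(t)) + 1):
        T = (j - i) * dt
        a_mean = (cum[j] - cum[i]) / T               # windowed mean acceleration
        hic = max(hic, a_mean ** 2.5 * T)
print(f"HIC15 = {hic:.0f}")
```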
NASA Astrophysics Data System (ADS)
Davis, L. J.; Boggess, M.; Kodpuak, E.; Deutsch, M.
2012-11-01
We report on a model for the deposition of three dimensional, aggregated nanocrystalline silver films, and an efficient numerical simulation method developed for visualizing such structures. We compare our results to a model system comprising chemically deposited silver films with morphologies ranging from dilute, uniform distributions of nanoparticles to highly porous aggregated networks. Disordered silver films grown in solution on silica substrates are characterized using digital image analysis of high resolution scanning electron micrographs. While the latter technique provides little volume information, plane-projected (two dimensional) island structure and surface coverage may be reliably determined. Three parameters governing film growth are evaluated using these data and used as inputs for the deposition model, greatly reducing computing requirements while still providing direct access to the complete (bulk) structure of the films throughout the growth process. We also show how valuable three dimensional characteristics of the deposited materials can be extracted using the simulated structures.
A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves
NASA Astrophysics Data System (ADS)
Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.
2012-04-01
The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which are the input data that most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually conducted with a Monte Carlo approach which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using a development of the model outputs in a neighborhood of the reference parameter values with a second-order approximation. The comparison of the three sensitivity indices proved that the approximation of the non-linear model with a second-order expansion is sufficient to show some differences between the local and the global indices. As a general result, the sensitivity analysis showed that most of the model outcomes are mainly sensitive to the present-day surface temperature and accumulation, which, in principle, can be measured more easily (e.g., with remote sensing techniques) than the other input parameters considered. On the other hand, the parameters to which the model resulted less sensitive are the basal sliding coefficient and the mean ice shelves viscosity.
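A local sensitivity index of the kind compared in the study can be computed by central finite differences around the reference parameter set, normalized so indices are comparable across parameters; the scalar model and parameter names below are invented placeholders for the ice-sheet code:

```python
import numpy as np

def model(p):
    """Toy scalar output standing in for, e.g., ice-sheet volume."""
    accumulation, temperature, sliding = p
    return accumulation ** 1.2 * np.exp(-0.05 * temperature) / (1 + sliding)

p0 = np.array([0.3, -30.0, 0.5])     # reference parameter values
Y0 = model(p0)
for i, name in enumerate(["accumulation", "temperature", "sliding"]):
    dp = 1e-6 * max(abs(p0[i]), 1.0)
    pp, pm = p0.copy(), p0.copy()
    pp[i] += dp
    pm[i] -= dp
    # normalized local index S_i = (p_i / Y) * dY/dp_i
    S = p0[i] / Y0 * (model(pp) - model(pm)) / (2 * dp)
    print(f"S_{name} = {S:+.2f}")
```

The global variance-based index discussed in the abstract replaces these derivatives with a variance decomposition over the full parameter ranges, which is what the second-order expansion approximates cheaply.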
Steiner, Malte; Claes, Lutz; Ignatius, Anita; Simon, Ulrich; Wehner, Tim
2014-07-01
The outcome of secondary fracture healing processes is strongly influenced by interfragmentary motion. Shear movement is assumed to be more disadvantageous than axial movement; however, experimental results are contradictory. Numerical fracture healing models allow simulation of the fracture healing process with variation of single input parameters and under comparable, normalized mechanical conditions. Thus, a comparison of the influence of different loading directions on the healing process is possible. In this study we simulated fracture healing under several axial compressive, translational shear and torsional shear movement scenarios, and compared their respective healing times, using a calibrated numerical model for fracture healing in sheep. Numerous variations of movement amplitudes and musculoskeletal loads were simulated for the three loading directions. Our results show that isolated axial compression was more beneficial for fracture healing success than both isolated shearing conditions, for load and displacement magnitudes that were identical as well as physiologically different, and even for strain-based, normalized comparable conditions. Additionally, torsional shear movements had less impeding effects than translational shear movements. Therefore, our findings suggest that osteosynthesis implants can be optimized, in particular, to limit translational interfragmentary shear under musculoskeletal loading.
NASA Astrophysics Data System (ADS)
Abramov, G. V.; Gavrilov, A. N.
2018-03-01
The article deals with the numerical solution of a mathematical model of particle motion and interaction in a multicomponent plasma, using the electric-arc synthesis of carbon nanostructures as an example. The large number of particles and of their interactions requires significant machine resources and computation time. Application of the large-particle (macro-particle) method makes it possible to reduce the amount of computation and the hardware requirements without affecting the accuracy of the numerical calculations. The use of GPGPU parallel computing with the Nvidia CUDA technology allows all general-purpose computation to be organised on the graphics processor. A comparative analysis of different approaches to parallelising the computations was performed to speed up the calculations, and an algorithm using shared memory was chosen to preserve the accuracy of the solution. A numerical study of the influence of the particle density within a macro-particle on the motion parameters and on the total number of particle collisions in the plasma was carried out for different synthesis modes. A rational range for the coherence coefficient of particles in the macro-particle was computed.
Which insect species numerically respond to allochthonous inputs?
NASA Astrophysics Data System (ADS)
Sugiura, Shinji; Ikeda, Hiroshi
2013-08-01
Herons (Ardeidae) frequently breed in inland forests and provide organic material in the form of carcasses of prey (that they drop) and chicks (that die) to the forest floor. Such allochthonous inputs of organic materials are known to increase arthropod populations in forests. However, exactly which species show numerical responses to allochthonous inputs in heron breeding colonies remains unclear. Very few studies have clarified which factors determine numerical responses in individual species. We used pitfall and baited traps to compare the densities of arthropods between forest patches in heron breeding colonies (five sites) and areas outside of colonies (five sites) in central Japan. The density of all arthropods was not significantly different between colonies and non-colony areas. However, significant differences between colonies and non-colony areas were found in four arthropod groups. Earwigs (Dermaptera: Anisolabididae), hister beetles (Coleoptera: Histeridae), and carrion beetles (Coleoptera: Silphidae) were more abundant in colonies, while ants (Hymenoptera: Formicidae) were less abundant in colonies. We detected numerical responses to heron breeding in two earwig, one histerid, five silphid, and one ant species. Chick and prey carcasses from herons may have directly led to increases in consumer populations such as earwigs, histerids, and silphids in colonies, while microenvironmental changes caused by heron breeding may have reduced ant abundance. In the Silphidae, five species showed numerical responses to allochthonous inputs, and the other two species did not. Numerical responses in individual species may have been determined by life history traits such as reproductive behaviour.
Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun
2016-04-07
To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. The well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was also applied to observe the correlations among all equivalent parameters. Tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (Fp = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively). All equivalent pharmacokinetic parameters, except for ve, were correlated between the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual-input extended Tofts model, ve was significantly less than in the dual-input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model or a dual-input 2CXM) can be used in assessing microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.
Hybrid rocket engine, theoretical model and experiment
NASA Astrophysics Data System (ADS)
Chelaru, Teodor-Viorel; Mingireanu, Florin
2011-06-01
The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work approaches the main problems of the hybrid motor: scalability, stability/controllability of the operating parameters, and increasing the solid fuel regression rate. First, we focus on theoretical models for the hybrid rocket motor and compare the results with already available experimental data from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors of different scales and compare them with experimental measurements of those hybrid rocket motors. Next the paper focuses on the tribrid rocket motor concept, in which supplementary liquid fuel injection can improve thrust controllability. A complementary computation model is also presented to estimate the regression rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Liapunov theory. The stability coefficients obtained depend on the burning parameters, and the stability and command matrices are identified. The paper presents thoroughly the input data of the model, which ensures the reproducibility of the numerical results by independent researchers.
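One basic ingredient of such hybrid-motor models is the empirical regression-rate law rdot = a * Gox^n coupled to the widening fuel port. A minimal sketch with generic coefficients (not the paper's values) is:

```python
import numpy as np

a, n = 2.0e-5, 0.65          # regression coefficients (SI, illustrative)
mdot_ox = 0.5                # oxidizer mass flow [kg/s]
R, dt = 0.02, 0.01           # initial port radius [m], time step [s]

for step in range(int(5.0 / dt)):            # 5 s burn
    Gox = mdot_ox / (np.pi * R ** 2)         # oxidizer mass flux [kg/m^2/s]
    R += a * Gox ** n * dt                   # port radius grows as fuel burns
print(f"final port radius: {R * 1000:.1f} mm")
```

Because Gox falls as the port opens, the regression rate (and hence thrust) decays during the burn, which is one reason supplementary liquid fuel injection is attractive for thrust control.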
Thermal conductivity model for nanoporous thin films
NASA Astrophysics Data System (ADS)
Huang, Congliang; Zhao, Xinpeng; Regner, Keith; Yang, Ronggui
2018-03-01
Nanoporous thin films have attracted great interest because of their extremely low thermal conductivity and potential applications in thin thermal insulators and thermoelectrics. Although there are some numerical and experimental studies about the thermal conductivity of nanoporous thin films, a simplified model is still needed to provide a straightforward prediction. In this paper, by including the phonon scattering lifetimes due to film thickness boundary scattering, nanopore scattering and the frequency-dependent intrinsic phonon-phonon scattering, a fitting-parameter-free model based on the kinetic theory of phonon transport is developed to predict both the in-plane and the cross-plane thermal conductivities of nanoporous thin films. With input parameters such as the lattice constants, thermal conductivity, and the group velocity of acoustic phonons of bulk silicon, our model shows a good agreement with available experimental and numerical results of nanoporous silicon thin films. It illustrates that the size effect of film thickness boundary scattering not only depends on the film thickness but also on the size of nanopores, and a larger nanopore leads to a stronger size effect of the film thickness. Our model also reveals that there are different optimal structures for getting the lowest in-plane and cross-plane thermal conductivities.
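In outline, the model evaluates k = (1/3) ∫ C(ω) v² τ(ω) dω with the three scattering rates combined by Matthiessen's rule. The sketch below uses a single Debye branch and placeholder scattering constants rather than the paper's fitted silicon inputs:

```python
import numpy as np

kB, hbar = 1.380649e-23, 1.054571817e-34
v, T = 6000.0, 300.0                  # group velocity [m/s], temperature [K]
omega_D = 7.0e13                      # Debye cutoff [rad/s]
t_film, d_pore, phi = 100e-9, 20e-9, 0.1   # thickness, pore size, porosity

omega = np.linspace(1e11, omega_D, 2000)
x = hbar * omega / (kB * T)
# spectral heat capacity per unit volume, Debye density of states
C = (3 * kB * omega ** 2 / (2 * np.pi ** 2 * v ** 3)
     * x ** 2 * np.exp(x) / np.expm1(x) ** 2)

inv_tau = (1e-19 * omega ** 2 * T    # intrinsic phonon-phonon (placeholder)
           + v / t_film              # film thickness boundary scattering
           + v * phi / d_pore)       # nanopore scattering, Matthiessen's rule
k = np.trapz(C * v ** 2 / (3 * inv_tau), omega)
print(f"effective thermal conductivity: {k:.1f} W/m/K")
```

The competition between the v/t_film and v*phi/d_pore terms is what produces the coupling between film thickness and pore size that the abstract highlights.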
Probing the tides in interacting galaxy pairs
NASA Technical Reports Server (NTRS)
Borne, Kirk D.
1990-01-01
Detailed spectroscopic and imaging observations of colliding elliptical galaxies have revealed unmistakable diagnostic signatures of tidal interactions. It is possible to compare both the distorted luminosity distributions and the disturbed internal rotation profiles with numerical simulations in order to model the strength of the tidal gravitational field acting within a given pair of galaxies. Using the best-fit numerical model, one can then measure directly the mass of a specific interacting binary system. This technique applies to individual pairs and therefore complements the classical methods of measuring the masses of galaxy pairs in well-defined statistical samples. The 'personalized' modeling of galaxy pairs also permits the derivation of each binary's orbit, spatial orientation, and interaction timescale. Similarly, one can probe the tides in less-detailed observations of disturbed galaxies in order to estimate some of the physical parameters for larger samples of interacting galaxy pairs. These parameters are useful inputs to the more universal problems of (1) the galaxy merger rate, (2) the strength and duration of the driving forces behind tidally stimulated phenomena (e.g., starbursts and maybe quasi-stellar objects), and (3) the identification of long-lived signatures of interaction/merger events.
Pirozzi, Enrica
2018-04-01
High variability in the neuronal response to stimulations and the adaptation phenomenon cannot be explained by the standard stochastic leaky integrate-and-fire model. The main reason is that the uncorrelated inputs involved in the model are not realistic: there exists some form of dependency between the inputs, which can be interpreted as a memory effect. In order to include these physiological features in the standard model, we reconsider it with time-dependent coefficients and correlated inputs. Because the resulting model is difficult to treat mathematically, we investigate its output by extensive simulation. A Gauss-Markov process is constructed to approximate its non-Markovian dynamics. The first-passage-time probability density of such a process can be numerically evaluated, and it can be used to fit the histograms of simulated firing times. Some estimates of the moments of the firing times are also provided. The effect of the correlation time of the inputs on firing densities and on firing rates is shown. An exponential probability density of the first firing time is estimated for low values of the input current and high values of the correlation time. For comparison, a simulation-based investigation is also carried out for a fractional stochastic model that preserves the memory of the time evolution of the neuronal membrane potential. In this case, the memory parameter that affects the firing activity is the fractional derivative order. In both models an adaptation level of the spike frequency is attained, even if along different modalities. Comparisons and a discussion of the obtained results are provided.
NASA Technical Reports Server (NTRS)
Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.
2016-01-01
The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. The FEM study found that intraocular pressure and intracranial pressure had dominant impact on the peak strains in the ONH and retro-laminar optic nerve, respectively; optic nerve and lamina cribrosa stiffness were also important. This investigation illustrates the ability of LHSPRCC to identify the most influential physiological parameters, which must therefore be well-characterized to produce the most accurate numerical results.
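A compact sketch of the LHSPRCC machinery, Latin hypercube sampling followed by partial rank correlation of each input with the output, is given below for an invented three-parameter toy model standing in for the LPM/FEM; the parameter names and ranges are illustrative only:

```python
import numpy as np
from scipy.stats import qmc, rankdata

rng = np.random.default_rng(0)
names = ["IOP", "ICP", "stiffness"]
lower, upper = np.array([10.0, 5.0, 0.1]), np.array([25.0, 20.0, 1.0])

# Latin hypercube sample of the parameter space, scaled to the ranges
X = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(500), lower, upper)
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 2.0 / X[:, 2] + rng.normal(0, 0.5, 500)

Xr = np.apply_along_axis(rankdata, 0, X)          # rank-transform inputs
yr = rankdata(y)                                  # rank-transform output
for i, name in enumerate(names):
    # regress out the (rank) effects of all other inputs, then correlate
    A = np.column_stack([np.delete(Xr, i, axis=1), np.ones(len(yr))])
    res_x = Xr[:, i] - A @ np.linalg.lstsq(A, Xr[:, i], rcond=None)[0]
    res_y = yr - A @ np.linalg.lstsq(A, yr, rcond=None)[0]
    print(f"PRCC({name}) = {np.corrcoef(res_x, res_y)[0, 1]:+.2f}")
```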
Branch Input Resistance and Steady Attenuation for Input to One Branch of a Dendritic Neuron Model
Rall, Wilfrid; Rinzel, John
1973-01-01
Mathematical solutions and numerical illustrations are presented for the steady-state distribution of membrane potential in an extensively branched neuron model, when steady electric current is injected into only one dendritic branch. Explicit expressions are obtained for input resistance at the branch input site and for voltage attenuation from the input site to the soma; expressions for AC steady-state input impedance and attenuation are also presented. The theoretical model assumes passive membrane properties and the equivalent cylinder constraint on branch diameters. Numerical examples illustrate how branch input resistance and steady attenuation depend upon the following: the number of dendritic trees, the orders of dendritic branching, the electrotonic length of the dendritic trees, the location of the dendritic input site, and the input resistance at the soma. The application to cat spinal motoneurons, and to other neuron types, is discussed. The effect of a large dendritic input resistance upon the amount of local membrane depolarization at the synaptic site, and upon the amount of depolarization reaching the soma, is illustrated and discussed; simple proportionality with input resistance does not hold, in general. Also, branch input resistance is shown to exceed the input resistance at the soma by an amount that is always less than the sum of core resistances along the path from the input site to the soma. PMID:4715583
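The passive cable relations underlying such computations are compact; for a sealed-end equivalent cylinder of electrotonic length L, the input resistance and steady attenuation follow from standard expressions. The values below are illustrative, and the branch-specific results of the paper require the full branching formulas:

```python
import numpy as np

Rm, Ri = 2000.0, 70.0          # membrane [ohm cm^2], axial [ohm cm] resistivity
d = 4e-4                        # cylinder diameter [cm] (4 um)
L = 1.0                         # electrotonic length (physical length / lambda)

lam = np.sqrt(Rm * d / (4 * Ri))                    # length constant [cm]
R_inf = 2 * np.sqrt(Rm * Ri) / (np.pi * d**1.5)     # semi-infinite cylinder
R_in = R_inf / np.tanh(L)                           # sealed-end finite cylinder
atten = 1.0 / np.cosh(L)                            # V(L)/V(0), sealed end

print(f"lambda = {lam*1e4:.0f} um, R_in = {R_in/1e6:.1f} Mohm, "
      f"attenuation to far end = {atten:.2f}")
```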
Non-steady state simulation of BOM removal in drinking water biofilters: model development.
Hozalski, R M; Bouwer, E J
2001-01-01
A numerical model was developed to simulate the non-steady-state behavior of biologically-active filters used for drinking water treatment. The biofilter simulation model called "BIOFILT" simulates the substrate (biodegradable organic matter or BOM) and biomass (both attached and suspended) profiles in a biofilter as a function of time. One of the innovative features of BIOFILT compared to previous biofilm models is the ability to simulate the effects of a sudden loss in attached biomass or biofilm due to filter backwash on substrate removal performance. A sensitivity analysis of the model input parameters indicated that the model simulations were most sensitive to the values of parameters that controlled substrate degradation and biofilm growth and accumulation including the substrate diffusion coefficient, the maximum rate of substrate degradation, the microbial yield coefficient, and a dimensionless shear loss coefficient. Variation of the hydraulic loading rate or other parameters that controlled the deposition of biomass via filtration did not significantly impact the simulation results.
Optimal Design of Calibration Signals in Space-Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Ferroni, Valerio;
2016-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterisation of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals, in terms of minimum parameter uncertainty, to be injected into these instruments during their calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
Optimal Design of Calibration Signals in Space Borne Gravitational Wave Detectors
NASA Technical Reports Server (NTRS)
Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Thorpe, James I.
2014-01-01
Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterization of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals, in terms of minimum parameter uncertainty, to be injected into these instruments during their calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
Parametric analysis of parameters for electrical-load forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael
1997-04-01
Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
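The study's procedure, training a forecaster on a reference input set and measuring the change in accuracy as candidate inputs are added, can be sketched with any regression tool. The example below is a hypothetical reconstruction with synthetic data and scikit-learn in place of the paper's network; the feature names mirror candidate inputs listed in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000

# Candidate inputs from the study: temperature, dew point, day of week, prior load
temp = rng.uniform(-5, 35, n)
dew = temp - rng.uniform(0, 10, n)
dow = rng.integers(0, 7, n)
prior_load = 900 + 12 * np.abs(temp - 18) + rng.normal(0, 30, n)

# Synthetic "true" load: heating/cooling response plus a weekday effect
load = 1000 + 15 * np.abs(temp - 18) + 2 * dew + 40 * (dow < 5) + rng.normal(0, 25, n)

X = np.column_stack([temp, dew, dow, prior_load])
Xtr, Xte, ytr, yte = train_test_split(X, load, random_state=0)

scaler = StandardScaler().fit(Xtr)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(scaler.transform(Xtr), ytr)
print("R^2 on held-out data:", model.score(scaler.transform(Xte), yte))
```

Dropping or adding a column of X and refitting gives the input-by-input comparison the parametric study performs.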
Impregnation of Composite Materials: a Numerical Study
NASA Astrophysics Data System (ADS)
Baché, Elliott; Dupleix-Couderc, Chloé; Arquis, Eric; Berdoyes, Isabelle
2017-12-01
Oxide ceramic matrix composites are currently being developed for aerospace applications such as the exhaust, where the parts are subject to moderately high temperatures (≈ 700 °C) and oxidation. These composite materials are normally formed by, among other steps, impregnating a ceramic fabric with a slurry of ceramic particles. This impregnation process can be complex, with voids possibly forming in the fabric depending on the process parameters and material properties. Unwanted voids or macroporosity within the fabric can decrease the mechanical properties of the parts. Numerical simulation can aid in designing both an efficient manufacturing process that impregnates the fabric well and the slurry itself. In this context, a tool is created for modeling different processes. Thétis, which solves the Navier-Stokes-Darcy-Brinkman equation using finite volumes, is expanded to take into account capillary pressures at the mesoscale. This formulation is more representative than Darcy's-law (homogeneous preform) simulations while avoiding the prohibitive simulation times of fully discretizing the constituent fibers at the representative elementary volume scale. The resulting tool is first used to investigate the effect of varying the slurry parameters on impregnation evolution. Two different processes, open bath impregnation and wet lay-up, are then studied with emphasis on varying their input parameters (e.g. inlet velocity).
Regional scale groundwater modelling study for Ganga River basin
NASA Astrophysics Data System (ADS)
Maheswaran, R.; Khosa, R.; Gosain, A. K.; Lahari, S.; Sinha, S. K.; Chahar, B. R.; Dhanya, C. T.
2016-10-01
Subsurface movement of water within the alluvial formations of the Ganga Basin System of North and East India, extending over an area of 1 million km2, was simulated using a Visual MODFLOW-based transient numerical model. The study incorporates historical groundwater development as recorded by the various concerned agencies and also accommodates the role of some of the major tributaries of the River Ganga as geo-hydrological boundaries. Geo-stratigraphic structures, along with the corresponding hydrological parameters, were obtained from the Central Groundwater Board, India, and used in the study, which was carried out over a time horizon of 4.5 years. The model parameters were fine-tuned for calibration using Parameter Estimation (PEST) simulations. Analysis of the stream-aquifer interaction using Zone Budget has allowed demarcation of the losing and gaining stretches along the main stem of the River Ganga as well as some of its principal tributaries. From a management perspective, and entirely consistent with general understanding, it is seen that unabated long-term groundwater extraction within the study basin has induced a sharp decrease in critical dry-weather base flow contributions. In view of a surge in demand for dry-season irrigation water for agriculture in the area, numerical models can be a useful tool not only to generate an understanding of the underlying groundwater system but also to facilitate the development of basin-wide detailed impact scenarios as inputs for management and policy action.
Direct statistical modeling and its implications for predictive mapping in mining exploration
NASA Astrophysics Data System (ADS)
Sterligov, Boris; Gumiaux, Charles; Barbanson, Luc; Chen, Yan; Cassard, Daniel; Cherkasov, Sergey; Zolotaya, Ludmila
2010-05-01
Recent advances in geosciences are making more and more multidisciplinary data available for mining exploration. This has allowed the development of methodologies for computing forecast ore maps from the statistical combination of such different input parameters, all based on inverse problem theory. Numerous statistical methods (e.g. the algebraic method, weight of evidence, the Siris method, etc.), with varying degrees of complexity in their development and implementation, have been proposed and/or adapted for ore geology purposes. In the literature, such approaches are often presented through applications to natural examples, and the results obtained can show specificities due to local characteristics. Moreover, though crucial for statistical computations, the "minimum requirements" for the input parameters (minimum number of data points, spatial distribution of objects, etc.) are often only poorly expressed. As a result, problems often arise when one has to choose between one method and another for a specific question. In this study, a direct statistical modeling approach is developed in order to i) evaluate the constraints on the input parameters and ii) test the validity of different existing inversion methods. The approach focuses particularly on the analysis of spatial relationships between the locations of points and various objects (e.g. polygons and/or polylines), which is particularly well adapted to constraining the influence of intrusive bodies, such as a granite, and of faults or ductile shear zones on the spatial location of ore deposits (point objects). The method is designed to ensure non-dimensionality with respect to scale. In this approach, both the spatial distribution and the topology of objects (polygons and polylines) can be parametrized by the user (e.g. density of objects, length, surface, orientation, clustering). Then, the distance of points with respect to a given type of object (polygons or polylines) is given by a probability distribution. The location of points is computed assuming either independence or different grades of dependency between the two probability distributions. The results show that i) the mean polygon surface, the mean polyline length, the number of objects and their clustering are critical, and ii) the validity of the different tested inversion methods strongly depends on the relative importance of, and the dependency between, the parameters used. In addition, this combined approach of direct and inverse modeling offers an opportunity to test the robustness of the inferred point-distribution laws with respect to the quality of the input data set.
Active remote sensing of snow using NMM3D/DMRT and comparison with CLPX II airborne data
Xu, X.; Liang, D.; Tsang, L.; Andreadis, K.M.; Josberger, E.G.; Lettenmaier, D.P.; Cline, D.W.; Yueh, S.H.
2010-01-01
We applied the Numerical Maxwell Model of three-dimensional simulations (NMM3D) in the Dense Media Radiative Theory (DMRT) to calculate backscattering coefficients. The particles' positions are computer-generated and the resulting Foldy-Lax equations are solved numerically. The phase matrix in NMM3D has significant cross-polarization, particularly when the particles are densely packed. The NMM3D model is combined with DMRT in calculating the microwave scattering by dry snow. The NMM3D/DMRT equations are solved by an iterative solution up to the second order in the case of small to moderate optical thickness. The numerical results of NMM3D/DMRT are illustrated and compared with QCA/DMRT. The QCA/DMRT and NMM3D/DMRT results are also compared with data from two specific datasets from the second Cold Land Processes Experiment (CLPX II) in Alaska and Colorado. The data were obtained from Ku-band (13.95 GHz) observations using the airborne imaging polarimetric scatterometer (POLSCAT). It is shown that the model predictions agree with the field measurements for both co-polarization and cross-polarization. For the Alaska region, the average snow depth and snow density are used as the inputs for DMRT. The grain size is used as a best-fit parameter, selected from within the range of the ground measurements. For the Colorado region, we use the Variable Infiltration Capacity Model (VIC) to obtain the input snow profiles for NMM3D/DMRT. © 2010 IEEE.
EnviroNET: An on-line environment data base for LDEF data
NASA Technical Reports Server (NTRS)
Lauriente, Michael
1992-01-01
EnviroNET is an on-line, free-form database intended to provide a centralized repository for a wide range of technical information on environmentally induced interactions, of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on globally connected networks and is available twenty-four hours a day, every day. The information, updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, USAF, other government facilities, industry, universities, and ESA. The models accept parameter input from the user and calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, the magnetic field, and the ionosphere. A user-friendly, informative interface is standard for all the models, with pop-up help windows giving information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intense graphical applications to run 'what if' scenarios. A proposed plan for developing a repository of LDEF information for a user group concludes the presentation.
A user-friendly software package to ease the use of VIC hydrologic model for practitioners
NASA Astrophysics Data System (ADS)
Wi, S.; Ray, P.; Brown, C.
2016-12-01
The VIC (Variable Infiltration Capacity) hydrologic and river routing model simulates the water and energy fluxes that occur near the land surface and provides users with useful information regarding the quantity and timing of available water at points of interest within a basin. However, despite its popularity (as evidenced by numerous applications in the literature), its wider adoption is hampered by the considerable effort required to prepare model inputs, e.g., input files storing spatial information related to watershed topography, soil properties, and land cover. This study presents a user-friendly software package (named VIC Setup Toolkit) developed within the MATLAB (matrix laboratory) framework and accessible through an intuitive graphical user interface. The VIC Setup Toolkit enables users to navigate the model-building process confidently through prompts and automation, with the intention of promoting the use of the model for both practical and academic purposes. The automated processes include watershed delineation, climate and geographical input set-up, model parameter calibration, graph generation and output evaluation. We demonstrate the package's usefulness in case studies of the American River, Oklahoma River, Feather River and Zambezi River basins.
Elçi, A; Karadaş, D; Fistikoğlu, O
2010-01-01
A numerical modeling case study of groundwater flow in a diffuse-pollution-prone area is presented. The study area is located within the metropolitan borders of the city of Izmir, Turkey. The application of this groundwater flow model was unconventional in that the groundwater recharge parameter was estimated using a lumped, transient, water-budget-based precipitation-runoff model executed independently of the groundwater flow model. The recharge rate obtained from the calibrated precipitation-runoff model was used as input to the groundwater flow model, which was eventually calibrated to measured water table elevations. Overall, the flow model results were consistent with field observations and the model statistics were satisfactory. Water budget results of the model revealed that groundwater recharge comprised about 20% of the total water input for the entire study area. Recharge was the second largest component in the budget after leakage from streams into the subsurface. It was concluded that the modeling results can be further used as input for contaminant transport modeling studies in order to evaluate the vulnerability of the study area's water resources to diffuse pollution.
NASA Astrophysics Data System (ADS)
Yahya, W. N. W.; Zaini, S. S.; Ismail, M. A.; Majid, T. A.; Deraman, S. N. C.; Abdullah, J.
2018-04-01
Damage from wind-related disasters is increasing with global climate change. Many studies have examined the wind effects surrounding low-rise buildings using wind tunnel tests or numerical simulations. Numerical simulation is relatively cheap but requires good command of the software, correct input parameters, and an optimum grid or mesh. Before a study can be conducted, a grid sensitivity test must therefore be carried out to determine a suitable cell count for the final mesh, ensuring accurate results with less computing time. This study demonstrates the numerical procedure for conducting a grid sensitivity analysis using five models with different grid schemes. The pressure coefficients (CP) were observed along the wall and roof profile and compared between the models. The results showed that the medium grid scheme can be used and produces results whose accuracy is comparable to the finer grid scheme, as the difference in the CP values was found to be insignificant.
Numerical and experimental study of the dynamics of a superheated jet
NASA Astrophysics Data System (ADS)
Sinha, Avick; Gopalakrishnan, Shivasubramanian; Balasubramanian, Sridhar
2015-11-01
Flash-boiling is a phenomenon where a liquid experiences low pressures in a system, leaving it superheated. The sudden pressure drop results in accelerated expansion and violent vapour formation. Understanding the physics behind the jet disintegration and the flash-boiling phenomenon is still an open problem, with applications in automotive and aerospace combustors. The behaviour of a flash-boiling jet is highly dependent on the input parameters, inlet temperature and pressure. In the present study, the external (outside-nozzle) and internal (inside-nozzle) flow characteristics of the two-phase flow have been studied numerically and experimentally. The phase change from liquid to vapour takes place over a finite period of time, modeled using the Homogeneous Relaxation Model (HRM). In order to validate the numerical results, controlled experiments were performed. Optical diagnostic techniques such as Particle Image Velocimetry (PIV) and shadowgraphy were used to study the flow characteristics. Spray angle, penetration depth, and droplet spectra were obtained, which provide a better understanding of the break-up mechanism. Linear stability analysis is performed to study the stability characteristics of the jet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belov, A. S., E-mail: alexis-belov@yandex.ru
2015-10-15
Results of numerical simulations of the near-Earth plasma perturbations induced by powerful HF radio waves from the SURA heating facility are presented. The simulations were performed using a modified version of the SAMI2 ionospheric model for the input parameters corresponding to the series of in-situ SURA–DEMETER experiments. The spatial structure and developmental dynamics of large-scale plasma temperature and density perturbations have been investigated. The characteristic formation and relaxation times of the induced large-scale plasma perturbations at the altitudes of the Earth’s outer ionosphere have been determined.
Arteaga-Sierra, F R; Milián, C; Torres-Gómez, I; Torres-Cisneros, M; Moltó, G; Ferrando, A
2014-09-22
We present a numerical strategy to design fiber based dual pulse light sources exhibiting two predefined spectral peaks in the anomalous group velocity dispersion regime. The frequency conversion is based on the soliton fission and soliton self-frequency shift occurring during supercontinuum generation. The optimization process is carried out by a genetic algorithm that provides the optimum input pulse parameters: wavelength, temporal width and peak power. This algorithm is implemented in a Grid platform in order to take advantage of distributed computing. These results are useful for optical coherence tomography applications where bell-shaped pulses located in the second near-infrared window are needed.
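A genetic algorithm of the kind described, searching over input wavelength, temporal width, and peak power, can be sketched as below. The fitness function here is a placeholder: in the paper, each evaluation runs a supercontinuum (GNLSE) simulation and scores the two output spectral peaks, which is exactly the expensive step that motivated the Grid deployment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameter bounds: wavelength (nm), temporal width (fs), peak power (W).
BOUNDS = np.array([[750.0, 950.0], [50.0, 300.0], [1e3, 2e4]])

def fitness(p):
    """Placeholder merit function. In the paper this would run a
    supercontinuum simulation and score the distance of the two output
    spectral peaks from their targets; here it is only a smooth stub."""
    wl, width, power = p
    return -((wl - 850.0) ** 2 / 1e4 + (width - 120.0) ** 2 / 1e3
             + (power - 8e3) ** 2 / 1e7)

def ga(pop_size=30, generations=40, mut_scale=0.1):
    span = BOUNDS[:, 1] - BOUNDS[:, 0]
    pop = BOUNDS[:, 0] + rng.random((pop_size, 3)) * span
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(3) < 0.5, a, b)   # uniform crossover
            child += rng.normal(0, mut_scale, 3) * span   # Gaussian mutation
            kids.append(np.clip(child, BOUNDS[:, 0], BOUNDS[:, 1]))
        pop = np.vstack([parents, kids])
    return pop[np.argmax([fitness(p) for p in pop])]

print(ga())  # best wavelength, width, and peak power found by the stub search
```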
Modal testing with Asher's method using a Fourier analyzer and curve fitting
NASA Technical Reports Server (NTRS)
Gold, R. R.; Hallauer, W. L., Jr.
1979-01-01
An unusual application of the method proposed by Asher (1958) for structural dynamics and modal testing is discussed. Asher's method has the capability, using the admittance matrix and multiple-shaker sinusoidal excitation, of separating structural modes having indefinitely close natural frequencies. The present application uses Asher's method in conjunction with a modern Fourier analyzer system but eliminates the necessity of exciting the test structure simultaneously with several shakers. Evaluation of this approach with numerically simulated data demonstrated its effectiveness; the parameters of two modes having almost identical natural frequencies were accurately identified. Laboratory evaluation of this approach was inconclusive because of poor experimental input data.
Analysis of all-optical temporal integrator employing phased-shifted DFB-SOA.
Jia, Xin-Hong; Ji, Xiao-Ling; Xu, Cong; Wang, Zi-Nan; Zhang, Wei-Li
2014-11-17
All-optical temporal integrator using phase-shifted distributed-feedback semiconductor optical amplifier (DFB-SOA) is investigated. The influences of system parameters on its energy transmittance and integration error are explored in detail. The numerical analysis shows that, enhanced energy transmittance and integration time window can be simultaneously achieved by increased injected current in the vicinity of lasing threshold. We find that the range of input pulse-width with lower integration error is highly sensitive to the injected optical power, due to gain saturation and induced detuning deviation mechanism. The initial frequency detuning should also be carefully chosen to suppress the integration deviation with ideal waveform output.
NASA Technical Reports Server (NTRS)
Wetzel, Peter J.; Chang, Jy-Tai
1988-01-01
Observations of surface heterogeneity of soil moisture from scales of meters to hundreds of kilometers are discussed, and a relationship between grid element size and soil moisture variability is presented. An evapotranspiration model is presented which accounts for the variability of soil moisture, standing surface water, and vegetation internal and stomatal resistance to moisture flow from the soil. The mean values and standard deviations of these parameters are required as input to the model. Tests of this model against field observations are reported, and extensive sensitivity tests are presented which explore the importance of including subgrid-scale variability in an evapotranspiration model.
Fast-axial turbulent flow CO2 laser output characteristics and scaling parameters
NASA Astrophysics Data System (ADS)
Dembovetsky, V. V.; Zavalova, Valentina Y.; Zavalov, Yuri N.
1996-04-01
The paper presents the experimental results of evaluating the output characteristics of the TLA-600 carbon-dioxide laser with axial turbulent gas flow, as well as the results of numerical modeling. The output characteristics and spatial distribution of the laser beam were measured as functions of specific energy input, working-mixture pressure, active-medium length and output-mirror reflectivity. The paper presents the results of experimental and theoretical study and the design decisions for a series of similar industrial carbon-dioxide lasers with fast-axial gas flow and dc discharge excitation of the active medium developed at NICTL RAN. As an illustration, characteristics of the TLA-600 laser are cited.
Computer program documentation for the dynamic analysis of a noncontacting mechanical face seal
NASA Technical Reports Server (NTRS)
Auer, B. M.; Etsion, I.
1980-01-01
A computer program is presented which achieves a numerical solution for the equations of motion of a noncontacting mechanical face seal. The motion of the flexibly mounted primary seal ring is expressed by a set of second order differential equations for three degrees of freedom. These equations are reduced to a set of first order equations, and the GEAR software package is used to solve the resulting set. Program input includes seal design parameters and seal operating conditions. Output from the program includes velocities and displacements of the seal ring about the axis of an inertial reference system. One example problem is described.
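The reduction from three second-order equations to six first-order states is the standard substitution y = (q, q'). The sketch below illustrates it with placeholder mass, damping, and stiffness matrices (the real ones would come from the seal design parameters and operating conditions), using SciPy's stiff BDF integrator as a modern stand-in for the GEAR package:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative stand-in for the seal equations: three damped, coupled
# oscillators M q'' + C q' + K q = f.
M = np.eye(3)
C = 0.4 * np.eye(3)
K = np.array([[2.0, -0.5, 0.0], [-0.5, 2.0, -0.5], [0.0, -0.5, 2.0]])
f = np.array([0.1, 0.0, 0.0])

def rhs(t, y):
    q, qdot = y[:3], y[3:]
    # First-order form: y = [q, q'], y' = [q', M^{-1}(f - C q' - K q)]
    return np.concatenate([qdot, np.linalg.solve(M, f - C @ qdot - K @ q)])

sol = solve_ivp(rhs, (0.0, 50.0), np.zeros(6), method="BDF", rtol=1e-8)
print("final displacements:", sol.y[:3, -1])
```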
Probabilistic seismic hazard study based on active fault and finite element geodynamic models
NASA Astrophysics Data System (ADS)
Kastelic, Vanja; Carafa, Michele M. C.; Visini, Francesco
2016-04-01
We present a probabilistic seismic hazard analysis (PSHA) that is based exclusively on active faults and geodynamic finite-element input models, with seismic catalogues used only for a posterior comparison. We applied the model in the External Dinarides, a slowly deforming thrust-and-fold belt at the contact between Adria and Eurasia. Our method consists of establishing two earthquake rupture forecast models: (i) a geological active fault input (GEO) model and (ii) a finite element (FEM) model. The GEO model is based on an active fault database that provides information on fault location and its geometric and kinematic parameters, together with estimates of its slip rate. By default in this model, all deformation is set to be released along the active faults. The FEM model is based on a numerical geodynamic model developed for the region of study; in this model, deformation is released not only along the active faults but also in the volumetric continuum elements. From both models we calculated the corresponding activity rates, earthquake rates and expected peak ground accelerations. We investigated both the source model and the earthquake model uncertainties by varying the main active fault and earthquake rate calculation parameters, constructing corresponding branches of the seismic hazard logic tree. Hazard maps and UHS curves were produced for horizontal ground motion on bedrock conditions (VS30 ≥ 800 m/s), thereby not considering local site amplification effects. The hazard was computed over a 0.2°-spaced grid considering 648 branches of the logic tree and the mean value of the 10% probability of exceedance in 50 years hazard level, while the 5th and 95th percentiles were also computed to investigate the model limits. We conducted a sensitivity analysis to determine which input parameters influence the final hazard results, and to what degree. This comparison shows that the deformation model, with its internal variability, and the choice of ground motion prediction equations (GMPEs) are the most influential inputs; both have a significant effect on the hazard results. Good knowledge of the existence of active faults and of their geometric and activity characteristics is therefore of key importance. We also show that PSHA models based exclusively on active faults and geodynamic inputs, which are thus not dependent on past earthquake occurrences, provide a valid method for seismic hazard calculation.
A comparative study of radiofrequency antennas for Helicon plasma sources
NASA Astrophysics Data System (ADS)
Melazzi, D.; Lancellotti, V.
2015-04-01
Since Helicon plasma sources can efficiently couple power and generate high-density plasma, they have received interest also as spacecraft propulsive devices, among other applications. In order to maximize the power deposited into the plasma, it is necessary to assess the performance of the radiofrequency (RF) antenna that drives the discharge, as typical plasma parameters (e.g. the density) are varied. For this reason, we have conducted a comparative analysis of three Helicon sources which feature different RF antennas, namely, the single-loop, the Nagoya type-III and the fractional helix. These antennas are compared in terms of input impedance and induced current density; in particular, the real part of the impedance constitutes a measure of the antenna ability to couple power into the plasma. The results presented in this work have been obtained through a full-wave approach which (being hinged on the numerical solution of a system of integral equations) allows computing the antenna current and impedance self-consistently. Our findings indicate that certain combinations of plasma parameters can indeed maximize the real part of the input impedance and, thus, the deposited power, and that one of the three antennas analyzed performs best for a given plasma. Furthermore, unlike other strategies which rely on approximate antenna models, our approach enables us to reveal that the antenna current density is not spatially uniform, and that a correlation exists between the plasma parameters and the spatial distribution of the current density.
Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger
NASA Astrophysics Data System (ADS)
Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.
2016-12-01
Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurement, and of an unknown parameter, in an Nm model. We suppose that the yearly Nm-induced mortality and the total population are known inputs, which can be obtained from data, and that the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data on the total population and Nm-induced mortality. Then, we use an auxiliary system called an observer, whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only the known inputs and the model output. This allows us to estimate unmeasured state variables, such as the number of carriers, which play an important role in the transmission of the infection, and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data from Niger.
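The recruitment-rate step can be made concrete with a discrete yearly balance. The sketch below assumes a simplified bookkeeping equation, N[k+1] = N[k] + Lambda[k] - mu*N[k] - D[k], which is our illustrative reading of the approach, not the paper's exact model; the numbers are invented, not Niger data.

```python
import numpy as np

def recruitment_rate(N, deaths_nm, mu):
    """Estimate yearly recruitment Lambda[k] from the assumed balance
    N[k+1] = N[k] + Lambda[k] - mu*N[k] - D[k].
    N: yearly total population; deaths_nm: yearly Nm-induced deaths,
    one entry per interval (len(N) - 1); mu: natural per-capita mortality."""
    N = np.asarray(N, dtype=float)
    D = np.asarray(deaths_nm, dtype=float)
    return np.diff(N) + mu * N[:-1] + D

# Illustrative numbers only
N = [1.00e7, 1.03e7, 1.06e7, 1.10e7]
D = [1.2e3, 0.9e3, 1.5e3]
print(recruitment_rate(N, D, mu=0.012))
```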
Transcritical flow of a stratified fluid over topography: analysis of the forced Gardner equation
NASA Astrophysics Data System (ADS)
Kamchatnov, A. M.; Kuo, Y.-H.; Lin, T.-C.; Horng, T.-L.; Gou, S.-C.; Clift, R.; El, G. A.; Grimshaw, R. H. J.
2013-12-01
Transcritical flow of a stratified fluid past a broad localised topographic obstacle is studied analytically in the framework of the forced extended Korteweg-de Vries (eKdV), or Gardner, equation. We consider both possible signs of the cubic nonlinear term in the Gardner equation, corresponding to different fluid density stratification profiles. We identify the range of the input parameters, namely the oncoming flow speed (the Froude number) and the topographic amplitude, for which the obstacle supports a stationary localised hydraulic transition from subcritical flow upstream to supercritical flow downstream. Such a localised transcritical flow is resolved back into the equilibrium flow state away from the obstacle with the aid of unsteady coherent nonlinear wave structures propagating upstream and downstream. Along with the regular, cnoidal undular bores occurring in the analogous problem for single-layer flow modeled by the forced KdV equation, the transcritical internal wave flows support a diverse family of upstream and downstream wave structures, including solibores, rarefaction waves, reversed and trigonometric undular bores, which we describe using the recent development of the nonlinear modulation theory for the (unforced) Gardner equation. The predictions of the developed analytic construction are confirmed by direct numerical simulations of the forced Gardner equation for a broad range of input parameters.
Phase transition of Boolean networks with partially nested canalizing functions
NASA Astrophysics Data System (ADS)
Jansen, Kayse; Matache, Mihaela Teodora
2013-07-01
We generate the critical condition for the phase transition of a Boolean network governed by partially nested canalizing functions for which a fraction of the inputs are canalizing, while the remaining non-canalizing inputs obey a complementary threshold Boolean function. Past studies have considered the stability of fully or partially nested canalizing functions paired with random choices of the complementary function. In some of those studies conflicting results were found with regard to the presence of chaotic behavior. Moreover, those studies focus mostly on ergodic networks in which initial states are assumed equally likely. We relax that assumption and find the critical condition for the sensitivity of the network under a non-ergodic scenario. We use the proposed mathematical model to determine parameter values for which phase transitions from order to chaos occur. We generate Derrida plots to show that the mathematical model matches the actual network dynamics. The phase transition diagrams indicate that both order and chaos can occur, and that certain parameters induce a larger range of values leading to order versus chaos. The edge-of-chaos curves are identified analytically and numerically. It is shown that the depth of canalization does not cause major dynamical changes once certain thresholds are reached; these thresholds are fairly small in comparison to the connectivity of the nodes.
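A Derrida plot of the kind used to verify the model can be generated by direct simulation: perturb a fraction rho of the nodes, advance both trajectories one step, and record the new normalized Hamming distance. The sketch below uses fully random update rules as a simple baseline; the paper's networks would instead use partially nested canalizing functions and non-uniform initial-state distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 200, 3                                  # nodes, inputs per node

inputs = np.array([rng.choice(N, K, replace=False) for _ in range(N)])
tables = rng.integers(0, 2, (N, 2 ** K))       # random update rules (baseline;
                                               # the paper uses partially nested
                                               # canalizing functions instead)
powers = 2 ** np.arange(K)

def step(x):
    idx = (x[inputs] * powers).sum(axis=1)     # encode each node's K inputs
    return tables[np.arange(N), idx]

def derrida(rho, trials=200):
    """Expected normalized Hamming distance after one step, given an initial
    perturbation of size rho (one point of the Derrida map)."""
    d = 0.0
    for _ in range(trials):
        x = rng.integers(0, 2, N)
        flip = rng.random(N) < rho
        y = np.where(flip, 1 - x, x)
        d += np.mean(step(x) != step(y))
    return d / trials

for rho in (0.01, 0.05, 0.1, 0.2, 0.4):
    print(rho, round(derrida(rho), 4))         # slope > 1 near 0 signals chaos
```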
Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2015-01-01
Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
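The core recovery step, identifying sparse PC coefficients from random samples via ℓ1 minimization, can be illustrated in one dimension with a probabilists' Hermite basis and natural (standard normal) sampling. The solver below is a plain ISTA proximal-gradient loop, a simple stand-in for the ℓ1 machinery; the sizes, sparsity pattern, and regularization weight are invented and may need tuning.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

rng = np.random.default_rng(0)
P, n = 20, 15                                   # basis size > number of samples

def psi(xi):
    """Row of orthonormal probabilists' Hermite polynomials He_k / sqrt(k!)."""
    return np.array([hermeval(xi, np.eye(P)[k]) / np.sqrt(factorial(k))
                     for k in range(P)])

# Sparse "true" PC coefficients and noisy samples of the model output
c_true = np.zeros(P); c_true[[0, 2, 7]] = [1.0, 0.5, -0.8]
xi = rng.standard_normal(n)                     # natural sampling for Hermite
A = np.vstack([psi(x) for x in xi])
b = A @ c_true + 0.01 * rng.standard_normal(n)

def ista(A, b, lam=0.05, iters=5000):
    """Proximal-gradient (ISTA) solver for l1-regularized least squares."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2         # step size from spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - t * A.T @ (A @ x - b)
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft threshold
    return x

print(np.round(ista(A, b), 3))   # the sparse pattern should stand out
```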
NASA Astrophysics Data System (ADS)
den, Mitsue; Amo, Hiroyoshi; Sugihara, Kohta; Takei, Toshifumi; Ogawa, Tomoya; Tanaka, Takashi; Watari, Shinichi
We describe a prediction system for the 1-AU arrival times of interplanetary shock waves associated with coronal mass ejections (CMEs). The system is based on modeling of the shock propagation using a three-dimensional adaptive mesh refinement (AMR) code. Once a CME is observed by LASCO/SOHO, the ambient solar wind is first obtained by numerical simulation, reproducing the solar wind parameters observed at that time by the ACE spacecraft. We then input the expansion speed and occurrence position of that CME as initial conditions for a CME model, and a 3D simulation of the CME and the shock propagation is performed until the shock wave passes 1 AU. Parameter input, simulation execution, and retrieval of the results are all available on the Web, so that someone who is not familiar with operating computers or simulations, or who is not a researcher, can use this system to predict the shock passage time. The simulated CME and shock evolution are visualized concurrently with the simulation, and snapshots appear on the Web automatically so that the user can follow the propagation. This system is expected to be useful for space weather forecasters. We describe the system and the simulation model in detail.
NASA Technical Reports Server (NTRS)
Hughes, D. L.; Ray, R. J.; Walton, J. T.
1985-01-01
The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
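Influence coefficients of this kind are straightforward to compute by perturbing each input by 1.0 percent and recording the fractional change in thrust; combining them with measurement accuracies by root-sum-square gives the overall accuracy estimate. The thrust function and numbers below are placeholders, not the F404 calculation:

```python
import numpy as np

def net_thrust(p):
    """Placeholder for the net-thrust calculation; the real method combines
    many measured engine quantities. Here: an arbitrary smooth function."""
    return p["wf"] ** 0.8 * p["p_t"] * 50.0 / np.sqrt(p["t_t"])

nominal = {"wf": 2.0, "p_t": 3.5, "t_t": 800.0}      # illustrative values
accuracy = {"wf": 0.01, "p_t": 0.005, "t_t": 0.008}  # 1-sigma fractional errors

f0 = net_thrust(nominal)
total_var = 0.0
for name in nominal:
    bumped = dict(nominal)
    bumped[name] *= 1.01                             # 1.0-percent change
    coeff = (net_thrust(bumped) - f0) / f0 / 0.01    # % thrust per % parameter
    print(f"influence coefficient for {name}: {coeff:+.3f}")
    total_var += (coeff * accuracy[name]) ** 2

print(f"RSS net-thrust uncertainty: {100 * np.sqrt(total_var):.2f} %")
```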
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad
2016-05-01
Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in the parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision inherently embedded in expert-provided information. To address this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows uncertainty and imprecision to be modeled distinctly, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. An important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling applications, however, is the computational burden, as the required number of numerical model simulations often becomes prohibitively large and computationally infeasible. In this paper, a novel approach to accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. The proposed approach is then applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert elicitation methodology is developed and applied to the real-world test case in order to provide a road map for the use of fuzzy Bayesian inference in groundwater modeling applications.
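The screening idea, using a cheap surrogate posterior to decide which proposals deserve a full model run, resembles delayed-acceptance MCMC; a toy version is sketched below. The models, likelihood, and parameter are invented stand-ins, and the paper's fuzzy Bayesian machinery adds layers this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(k):
    """Stand-in for a full numerical simulation (e.g. one SWI model run)."""
    return np.exp(-k * np.linspace(0, 1, 20))

def surrogate(k):
    """Cheap approximation of the expensive model (e.g. fitted offline)."""
    t = np.linspace(0, 1, 20)
    return 1 - k * t + 0.5 * (k * t) ** 2      # truncated series, illustrative

data = expensive_model(1.3) + 0.01 * rng.standard_normal(20)

def loglike(sim):
    return -0.5 * np.sum((sim - data) ** 2) / 0.01 ** 2

def two_stage_mcmc(n=2000, step=0.05):
    """Delayed-acceptance MCMC: the surrogate screens proposals so the
    expensive model is only run on candidates the surrogate likes."""
    k = 1.0
    ll, ll_s = loglike(expensive_model(k)), loglike(surrogate(k))
    chain, full_runs = [], 0
    for _ in range(n):
        kp = k + step * rng.standard_normal()
        llp_s = loglike(surrogate(kp))
        if np.log(rng.random()) < llp_s - ll_s:      # stage 1: surrogate screen
            full_runs += 1
            llp = loglike(expensive_model(kp))       # stage 2: true model
            if np.log(rng.random()) < (llp - ll) - (llp_s - ll_s):
                k, ll, ll_s = kp, llp, llp_s
        chain.append(k)
    return np.array(chain), full_runs

chain, runs = two_stage_mcmc()
print(f"posterior mean k = {chain[500:].mean():.3f}, full model runs: {runs}")
```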
Curve Number Application in Continuous Runoff Models: An Exercise in Futility?
NASA Astrophysics Data System (ADS)
Lamont, S. J.; Eli, R. N.
2006-12-01
The suitability of applying the NRCS (Natural Resource Conservation Service) Curve Number (CN) to continuous runoff prediction is examined by studying the dependence of CN on several hydrologic variables in the context of a complex nonlinear hydrologic model. The continuous watershed model Hydrologic Simulation Program-FORTRAN (HSPF) was employed using a simple theoretical watershed in two numerical procedures designed to investigate the influence of soil type, soil depth, storm depth, storm distribution, and initial abstraction ratio value on the calculated CN value. This study stems from a concurrent project involving the design of a hydrologic modeling system to support the Cumulative Hydrologic Impact Assessments (CHIA) of over 230 coal-mined watersheds throughout West Virginia. Because of the large number of watersheds and limited availability of data necessary for HSPF calibration, it was initially proposed that predetermined CN values be used as a surrogate for those HSPF parameters controlling direct runoff. A soil physics model was developed to relate CN values to those HSPF parameters governing soil moisture content and infiltration behavior, with the remaining HSPF parameters being adopted from previous calibrations on real watersheds. A numerical procedure was then adopted to back-calculate CN values from the theoretical watershed using antecedent moisture conditions equivalent to the NRCS Antecedent Runoff Condition (ARC) II. This procedure used the direct runoff produced from a cyclic synthetic storm event time series input to HSPF. A second numerical method of CN determination, using real time series rainfall data, was used to provide a comparison to those CN values determined using the synthetic storm event time series. It was determined that the calculated CN values resulting from both numerical methods demonstrated a nonlinear dependence on all of the computational variables listed above. It was concluded that the use of the Curve Number as a surrogate for the selected subset of HPSF parameters could not be justified. These results suggest that use of the Curve Number in other complex continuous time series hydrologic models may not be appropriate, given the limitations inherent in the definition of the NRCS CN method.
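For reference, the NRCS relations at the center of this exercise are Q = (P - Ia)^2 / (P - Ia + S) with S = 1000/CN - 10 (depths in inches) and Ia = lambda*S, where lambda is the initial abstraction ratio. The sketch below implements the forward equation and the kind of numerical back-calculation of CN from simulated runoff used in the study:

```python
from scipy.optimize import brentq

def runoff(P, CN, ia_ratio=0.2):
    """NRCS curve-number direct runoff (inches): Q = (P - Ia)^2 / (P - Ia + S)
    with S = 1000/CN - 10 and Ia = ia_ratio * S; Q = 0 when P <= Ia."""
    S = 1000.0 / CN - 10.0
    Ia = ia_ratio * S
    return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

def back_calculate_cn(P, Q, ia_ratio=0.2):
    """Invert the runoff equation for CN, as done numerically in the study."""
    return brentq(lambda cn: runoff(P, cn, ia_ratio) - Q, 1.0, 99.9)

P = 3.0                            # storm depth, inches
Q = runoff(P, CN=75.0)             # forward: direct runoff for CN 75
print(Q, back_calculate_cn(P, Q))  # round trip recovers CN = 75
```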
Water and solute transport parameterization from a soil of the semi-arid region of northeast Brazil
NASA Astrophysics Data System (ADS)
Netto, A. M.; Antonino, A. C. D.; Lima, L. J. S.; Angulo-Jaramillo, R.; Montenegro, S. M. G.
2003-04-01
Water and solute transfer modeling requires the transport parameters as input data. The classical Fickian advection-dispersion theory does not successfully account for solute transport along preferential flow pathways; such transport may operate at scales smaller than the spatial discretization used in a field-scale numerical model. Axisymmetric infiltration using a single-ring infiltrometer together with a conservative tracer (Cl^-) is an efficient and easy-to-use field method. Two experiments were carried out on a Yellow Oxisol in a 4.0 ha area at the Centro de Ciências Agrárias, UFPB, Areia City, Paraíba State, Brazil (6°58'S, 35°41'W, 645 m), on a 50 × 50 m grid (16 points): (a) cultivated with beans (Vigna unguiculata (L.) Walp.), and (b) bare soil after harvest. The unsaturated hydraulic conductivity K and sorptivity S were estimated from short-time or long-time analysis of cumulative three-dimensional infiltration. A single-tracer technique was used to calculate the mobile water fraction f by measuring the solute concentration underneath the ring infiltrometer at the end of infiltration. A solute transfer numerical model, based on the mobile-immobile water concept, was used for the determination of the solute transport parameters. The mobile water fraction f, the dispersion coefficient D, and the mass transfer coefficient α were estimated from both the measured infiltration depth and the concentration profile underneath the ring infiltrometer. The presence of preferential flow was due to the soil nature (aggregated soil, macropores, flux instabilities and heterogeneity). The lateral solute transfer is not only diffusive but also convective. The parameters deduced from the numerical model associated with the solute concentration profile are representative of this phenomenon.
On-line applications of numerical models in the Black Sea GIS
NASA Astrophysics Data System (ADS)
Zhuk, E.; Khaliulin, A.; Zodiatis, G.; Nikolaidis, A.; Nikolaidis, M.; Stylianou, Stavros
2017-09-01
The Black Sea Geographical Information System (GIS) is developed using modern information technologies and provides automated data processing and visualization on-line. MapServer is used as the mapping service; the data are stored in a MySQL DBMS; PHP and Python modules are used for data access, processing, and exchange. New numerical models can be incorporated into the GIS environment as individual software modules, compiled for the server's operating system and interacting with the GIS. A common interface allows setting the input parameters; the model then calculates the output data in specifically predefined files and formats, and the results are passed to the GIS for visualization. Initially, a test scenario for integrating a numerical model into the GIS was carried out, using software developed to describe two-dimensional tsunami propagation over variable basin depth, based on a linear long-surface-wave model valid for depths greater than 5 m. Subsequently, the well-established 3-D oil spill and trajectory model MEDSLIK (http://www.oceanography.ucy.ac.cy/medslik/) was integrated into the GIS with more advanced GIS functionality and capabilities. MEDSLIK is able to forecast and hindcast the trajectories of oil pollution and floating objects using meteo-ocean data and the state of the oil spill. The MEDSLIK module interface allows a user to enter all the necessary oil spill parameters, i.e. date and time, rate of spill or spill volume, forecasting time, coordinates, oil spill type, currents, wind, and waves, as well as the specification of the output parameters. The entered data are passed on to MEDSLIK; the oil pollution characteristics are then calculated for predefined time steps. The results of the forecast or hindcast are then visualized on a map.
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. 
The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
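The central computation, a Fisher information matrix assembled from output sensitivities, can be sketched for a one-RC equivalent-circuit cell. The model, parameter values, and candidate inputs below are illustrative, and the sensitivities are taken by finite differences rather than the dissertation's machinery; comparing log det FIM across input shapes is the kind of comparison that input shaping optimizes.

```python
import numpy as np

def simulate(theta, current, dt=1.0, ocv=3.7):
    """Terminal voltage of a 1-RC equivalent-circuit cell (a common
    control-oriented model; parameter values here are illustrative)."""
    r0, r1, c1 = theta
    v1, out = 0.0, []
    for i in current:
        v1 += dt * (-v1 / (r1 * c1) + i / c1)   # explicit Euler on the RC state
        out.append(ocv - i * r0 - v1)
    return np.array(out)

def fisher_info(theta, current, sigma=0.005, rel=1e-4):
    """FIM = S^T S / sigma^2 with sensitivities S from finite differences."""
    base = simulate(theta, current)
    S = np.empty((len(base), len(theta)))
    for j, p in enumerate(theta):
        tp = list(theta)
        tp[j] = p * (1 + rel)
        S[:, j] = (simulate(tp, current) - base) / (p * rel)
    return S.T @ S / sigma ** 2

theta = [0.05, 0.03, 2000.0]                      # R0 (ohm), R1 (ohm), C1 (F)
t = np.arange(600)
constant = np.ones_like(t, dtype=float)
square = np.where((t // 50) % 2 == 0, 1.0, -1.0)  # richer excitation

for name, u in [("constant", constant), ("square wave", square)]:
    sign, logdet = np.linalg.slogdet(fisher_info(theta, u))
    print(f"{name}: log det FIM = {logdet:.1f}")
```

A richer excitation generally keeps the RC transient active and so tends to carry more information about R1 and C1 than a constant current does.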
Evolution of a chemically reacting plume in a ventilated room
NASA Astrophysics Data System (ADS)
Conroy, D. T.; Smith, Stefan G. Llewellyn; Caulfield, C. P.
2005-08-01
The dynamics of a second-order chemical reaction in an enclosed space driven by the mixing produced by a turbulent buoyant plume are studied theoretically, numerically and experimentally. An isolated turbulent buoyant plume source is located in an enclosure with a single external opening. Both the source and the opening are located at the bottom of the enclosure. The enclosure is filled with a fluid of a given density with a fixed initial concentration of a chemical. The source supplies a constant volume flux of fluid of different density containing a different chemical of known and constant concentration. These two chemicals undergo a second-order non-reversible reaction, leading to the creation of a third product chemical. For simplicity, we restrict attention to the situation where the reaction process does not affect the density of the fluids involved. Because of the natural constraint of volume conservation, fluid from the enclosure is continually vented. We study the evolution of the various chemical species as they are advected by the developing ventilated filling box process within the room that is driven by the plume dynamics. In particular, we study both the mean and vertical distributions of the chemical species as a function of time within the room. We compare the results of analogue laboratory experiments with theoretical predictions derived from reduced numerical models, and find excellent agreement. Important parameters for the behaviour of the system are associated with the source volume flux and specific momentum flux relative to the source specific buoyancy flux, the ratio of the initial concentrations of the reacting chemical input in the plume and the reacting chemical in the enclosed space, the reaction rate of the chemicals and the aspect ratio of the room. Although the behaviour of the system depends on all these parameters in a non-trivial way, in general the concentration within the room of the chemical input at the isolated source passes through three distinct phases. Initially, as the source fluid flows into the room, the mean concentration of the input chemical increases due to the inflow, with some loss due to the reaction with the chemical initially within the room. After a finite time, the layer of fluid contaminated by the inflow reaches the opening to the exterior at the base of the room. During an ensuing intermediate phase, the rate of increase in the concentration of the input chemical then drops non-trivially, due to the extra sink for the input chemical of the outflow through the opening. During this intermediate stage, the concentration of the input chemical continues to rise, but at a rate that is reduced due to the reaction with the fluid in the room. Ultimately, all the fluid (and hence the chemical) that was originally within the room is lost, both through reaction and outflow through the opening, and the room approaches its final steady state, being filled completely with source fluid.
A Bernoulli Gaussian Watermark for Detecting Integrity Attacks in Control Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weerakkody, Sean; Ozel, Omur; Sinopoli, Bruno
We examine the merit of Bernoulli packet drops in actively detecting integrity attacks on control systems. The aim is to detect an adversary who delivers fake sensor measurements to a system operator in order to conceal their effect on the plant. Physical watermarks, or noisy additive Gaussian inputs, have been previously used to detect several classes of integrity attacks in control systems. In this paper, we consider the analysis and design of Gaussian physical watermarks in the presence of packet drops at the control input. On one hand, this enables analysis in a more general network setting. On the other hand, we observe that in certain cases, Bernoulli packet drops can improve detection performance relative to a purely Gaussian watermark. This motivates the joint design of a Bernoulli-Gaussian watermark which incorporates both an additive Gaussian input and a Bernoulli drop process. We characterize the effect of such a watermark on system performance as well as attack detectability in two separate design scenarios. Here, we consider a correlation detector for attack recognition. We then propose efficiently solvable optimization problems to intelligently select parameters of the Gaussian input and the Bernoulli drop process while addressing security and performance trade-offs. Finally, we provide numerical results which illustrate that a watermark with packet drops can indeed outperform a Gaussian watermark.
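The detection mechanism can be illustrated with a toy plant: add a private Gaussian watermark to the input (with Bernoulli drops), then check whether the reported measurements correlate with it. Everything below is an invented minimal example, not the paper's system model or test statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000

# Physical watermark: a private Gaussian sequence added to the control input.
watermark = 0.5 * rng.standard_normal(T)
drops = rng.random(T) < 0.1            # Bernoulli packet drops at the input

def plant_output(u, attacked):
    """Toy first-order plant. Under a replay-style attack the adversary feeds
    measurements that do not reflect the watermarked input."""
    x, y = 0.0, np.empty(T)
    for k in range(T):
        x = 0.9 * x + (0.0 if drops[k] else u[k]) + 0.05 * rng.standard_normal()
        y[k] = x + 0.05 * rng.standard_normal()
    if attacked:
        y = 0.05 * rng.standard_normal(T)   # fake data, uncorrelated with u
    return y

u = 1.0 + watermark
for attacked in (False, True):
    y = plant_output(u, attacked)
    # Correlation detector: an honest plant response must correlate with the
    # watermark (shifted by one step through the state update).
    stat = np.corrcoef(watermark[:-1], y[1:])[0, 1]
    print("attacked" if attacked else "nominal", round(stat, 3))
```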
NASA Astrophysics Data System (ADS)
Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher
2017-11-01
Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
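A minimal non-intrusive gPC computation of the sort discussed: propagate a Gaussian-uncertain parameter through a model by evaluating it at Gauss-Hermite quadrature nodes, project onto the orthogonal basis, and compare the resulting mean and variance against Monte Carlo. The scalar decay model is an illustrative stand-in for the fluid problems studied:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Non-intrusive gPC for u = exp(-k*t) with uncertain rate k = mu + sd * xi,
# xi ~ N(0, 1): project the solution onto probabilists' Hermite polynomials.
mu, sd, t = 1.0, 0.2, 1.0
order, nq = 8, 20

x, w = hermegauss(nq)                  # nodes/weights for weight exp(-x^2/2)
w = w / np.sqrt(2 * np.pi)             # normalize to the standard normal pdf
u = np.exp(-(mu + sd * x) * t)         # deterministic "solves" at the nodes

# c_n = E[u He_n] / n!  (He_n has squared norm n! under the normal measure)
coeff = np.array([np.sum(w * u * hermeval(x, np.eye(order + 1)[n])) / factorial(n)
                  for n in range(order + 1)])

mean = coeff[0]
var = sum(coeff[n] ** 2 * factorial(n) for n in range(1, order + 1))

# Monte Carlo reference
xi = np.random.default_rng(0).standard_normal(200_000)
umc = np.exp(-(mu + sd * xi) * t)
print(mean, umc.mean())                # should agree closely
print(var, umc.var())
```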
Analysis of rainfall distribution in Kelantan river basin, Malaysia
NASA Astrophysics Data System (ADS)
Che Ros, Faizah; Tosaka, Hiroyuki
2018-03-01
Using rain gauge data on its own as input carries great uncertainty in runoff estimation, especially when the area is large and rainfall is measured and recorded at irregularly spaced gauging stations. Spatial interpolation is therefore the key to obtaining a continuous and orderly rainfall distribution at ungauged points as input to the rainfall-runoff processes in distributed and semi-distributed numerical modelling. It is crucial to study and predict the behaviour of rainfall and river runoff to reduce flood damage in the affected areas along the Kelantan river, so a good knowledge of rainfall distribution is essential in early flood prediction studies. Forty-six rainfall stations and their daily time series were used to interpolate gridded rainfall surfaces using inverse-distance weighting (IDW), inverse-distance and elevation weighting (IDEW), and average rainfall distribution methods. Sensitivity analyses for the distance and elevation parameters were conducted to see the variation produced. The accuracy of the interpolated datasets was examined using cross-validation assessment.
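For reference, the IDW weighting has the standard form w_i = 1/d_i^p. A bare-bones Python version follows; the power parameter and the elevation penalty of the IDEW variant are assumptions of the sketch, not the study's calibrated values.

    import numpy as np

    def idw(xy_obs, z_obs, xy_grid, power=2.0, eps=1e-9):
        # Distances from every grid point to every station.
        d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
        w = 1.0 / (d + eps) ** power
        return (w * z_obs).sum(axis=1) / w.sum(axis=1)

    def idew(xy_obs, elev_obs, z_obs, xy_grid, elev_grid,
             power=2.0, elev_power=1.0, eps=1e-9):
        # Penalize stations that differ in elevation as well as in distance
        # (one plausible IDEW form; the paper's exact weighting may differ).
        d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
        dz = np.abs(elev_grid[:, None] - elev_obs[None, :])
        w = 1.0 / ((d + eps) ** power * (1.0 + dz) ** elev_power)
        return (w * z_obs).sum(axis=1) / w.sum(axis=1)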
NASA Technical Reports Server (NTRS)
Boggs, Johnny; Birgan, Latricia J.; Tsegaye, Teferi; Coleman, Tommy; Soman, Vishwas
1997-01-01
Models are used for numerous applications, including hydrology. The Modular Modeling System (MMS) is one of the few that can simulate hydrologic processes. MMS was tested and used to compare infiltration, soil moisture, daily temperature, and potential and actual evaporation for the Elinsboro sandy loam and Mattapex silty loam soils in the Microwave Radiometer Experiment of Soil Moisture Sensing at the Beltsville Agriculture Research Test Site in Maryland. An input file for each location was created to run the model. Graphs were plotted, and it was observed that the model gave a good representation of evaporation for both plots. In comparing the two plots, it was noted that infiltration and soil moisture tend to peak around the same time, temperature peaks in July and August, and peak evaporation was observed on September 15 and July 4 for the Elinsboro and Mattapex plots, respectively. MMS can be used successfully to predict hydrological processes as long as the proper input parameters are available.
Stable dissipative optical vortex clusters by inhomogeneous effective diffusion.
Li, Huishan; Lai, Shiquan; Qui, Yunli; Zhu, Xing; Xie, Jianing; Mihalache, Dumitru; He, Yingji
2017-10-30
We numerically show the generation of robust vortex clusters embedded in a two-dimensional beam propagating in a dissipative medium described by the generic cubic-quintic complex Ginzburg-Landau equation with an inhomogeneous effective diffusion term, which is asymmetrical in the two transverse directions and periodically modulated in the longitudinal direction. We show the generation of stable optical vortex clusters for different values of the winding number (topological charge) of the input optical beam. We have found that the number of individual vortex solitons that form the robust vortex cluster is equal to the winding number of the input beam. We have obtained the relationships between the amplitudes and oscillation periods of the inhomogeneous effective diffusion and the cubic gain and diffusion (viscosity) parameters, which depict the regions of existence and stability of vortex clusters. The obtained results offer a method to form robust vortex clusters embedded in two-dimensional optical beams, and we envisage potential applications in the area of structured light.
Xu, Ming; Yang, Wan; Hong, Tao; Kang, TangZhen; Ji, JianHua; Wang, Ke
2017-06-01
An ultrafast all-optical flip-flop based on a passive micro Sagnac waveguide ring is studied through theoretical analysis and numerical simulation in this paper. D, R-S, J-K, and T flip-flop types are designed by controlling the cross-phase modulation effect of light in this special microring. The high nonlinearity of hollow-core photonic crystal fiber is implanted on a chip to shorten the length of the ring and reduce the input power. By sensibly managing the pulse-width ratio of the input and control signals, the problems of pulse narrowing and residual pedestal at the output port are solved. The parameters affecting the performance of the flip-flops are optimized. The results show that the all-optical flip-flops have stable performance, low power consumption, a high transmission rate (up to 100 Gb/s), and picosecond-order response times. The small microwaveguide structure is suitable for photonic integration.
Development of metamodels for predicting aerosol dispersion in ventilated spaces
NASA Astrophysics Data System (ADS)
Hoque, Shamia; Farouk, Bakhtier; Haas, Charles N.
2011-04-01
Artificial neural network (ANN) based metamodels were developed to describe the relationship between the design variables and their effects on the dispersion of aerosols in a ventilated space. A Hammersley sequence sampling (HSS) technique was employed to efficiently explore the multi-parameter design space and to build numerical simulation scenarios. A detailed computational fluid dynamics (CFD) model was applied to simulate these scenarios. The results derived from the CFD simulations were used to train and test the metamodels. Feed-forward ANNs were developed to map the relationship between the inputs and the outputs. The predictive ability of the neural-network-based metamodels was compared to that of linear and quadratic metamodels derived from the same CFD simulation results. The ANN-based metamodels performed well in predicting independent data sets, including data generated at the boundaries. Sensitivity analysis showed that the ratio of particle tracking time to residence time, and the locations of the input and output relative to the height of the room, had more impact on particle behavior than the other dimensionless groups.
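The workflow compresses to: sample the design space, run the expensive simulator, fit the network. A schematic Python version follows, with a Halton sequence standing in for the Hammersley sampling and an analytic function standing in for the CFD model; both substitutions are assumptions of the sketch.

    import numpy as np
    from scipy.stats import qmc
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    def cfd_stand_in(X):
        # Placeholder for the detailed CFD model of aerosol dispersion.
        return np.sin(3 * X[:, 0]) * np.exp(-X[:, 1]) + 0.5 * X[:, 2]

    X = qmc.Halton(d=3, seed=0).random(400)   # low-discrepancy design points
    y = cfd_stand_in(X)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                       random_state=0).fit(X_tr, y_tr)
    print("held-out R^2 of the metamodel:", ann.score(X_te, y_te))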
NASA Astrophysics Data System (ADS)
König, Diethard; Mahmoudi, Elham; Khaledi, Kavan; von Blumenthal, Achim; Schanz, Tom
2016-04-01
The excess electricity produced by renewable energy sources during off-peak periods of consumption can be used, e.g., to produce and compress hydrogen or to compress air; the pressurized gas is then stored in rock salt cavities. During this process, thermo-mechanical cyclic loading is applied to the rock salt surrounding the cavern. Compared to the operation of conventional storage caverns in rock salt, the frequencies of the filling and discharging cycles, and therefore of the thermo-mechanical loading cycles, are much higher, e.g. daily or weekly rather than seasonal or yearly. The stress-strain behavior of rock salt, as well as the deformation behavior and the stability of caverns in rock salt under such loading conditions, is unknown. To overcome this, existing experimental studies have to be supplemented by exploring the behavior of rock salt under combined thermo-mechanical cyclic loading; existing constitutive relations have to be extended to cover the degradation of rock salt under thermo-mechanical cyclic loading; and, finally, the complex system of a cavern in rock salt under these loading conditions has to be analyzed by numerical modeling, taking into account the uncertainties due to the limited access at great depth for investigating material composition and properties. An interactive evolution concept is presented to link the different components of such a study: experimental modeling, constitutive modeling and numerical modeling. A triaxial experimental setup is designed to characterize the cyclic thermo-mechanical behavior of rock salt. The boundary conditions imposed in the experimental setup are chosen to be similar to the stress state obtained from a full-scale numerical simulation. The computational model relies primarily on the governing constitutive model for predicting the behavior of the rock salt cavity. Hence, a sophisticated elasto-viscoplastic creep constitutive model is developed that takes into account dilatancy and damage progress, as well as temperature effects. The input parameters of the constitutive model are calibrated using the experimental measurements. The initial numerical simulation is then refined based on the introduced constitutive model implemented in a finite element code. However, because of the significant levels of uncertainty involved in the design procedure of such structures, a reliable design can only be achieved by employing probabilistic approaches. Therefore, the numerical calculation is extended by statistical tools such as sensitivity analysis, probabilistic analysis and robust reliability-based design. Uncertainties, e.g. due to site investigation that is inevitably fragmentary at these depths, can be compensated by using data sets of field measurements for back-calculation of input parameters with the developed numerical model. Monitoring concepts can be optimized by identifying sensor locations, e.g. using sensitivity analyses.
Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT
NASA Astrophysics Data System (ADS)
Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang
2015-03-01
In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose that result from using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, and their effects on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ±0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (<2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer than the true value by 4 cm, dose errors of up to 160% were found. The results answer the important question of the level of accuracy to which each input parameter needs to be determined in order to obtain accurate organ dose results.
SLAM, a Mathematica interface for SUSY spectrum generators
NASA Astrophysics Data System (ADS)
Marquard, Peter; Zerf, Nikolai
2014-03-01
We present and publish a Mathematica package which can be used to automatically obtain any numerical MSSM input parameter from SUSY spectrum generators that follow the SLHA standard, such as SPheno, SOFTSUSY, SuSeFLAV or Suspect. The package enables a very comfortable way of performing numerical evaluations within the MSSM using Mathematica. It implements easy-to-use predefined high-scale and low-scale scenarios like mSUGRA or mhmax and, if needed, enables the user to directly specify the input required by the spectrum generators. In addition, it supports automatic saving and loading of SUSY spectra to and from an SQL database, avoiding the rerun of a spectrum generator for a known spectrum. Catalogue identifier: AERX_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERX_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 4387. No. of bytes in distributed program, including test data, etc.: 37748. Distribution format: tar.gz. Programming language: Mathematica. Computer: Any computer running Mathematica version 6 or higher and providing bash and sed. Operating system: Linux. Classification: 11.1. External routines: A SUSY spectrum generator such as SPheno, SOFTSUSY, SuSeFLAV or SUSPECT. Nature of problem: Interfacing published spectrum generators for automated creation, saving and loading of SUSY particle spectra. Solution method: SLAM automatically writes/reads SLHA spectrum generator input/output and is able to save/load generated data in/from a database. Restrictions: No general restrictions; specific restrictions are given in the manuscript. Running time: A single spectrum calculation takes much less than one second on a modern PC.
Voss, Clifford I.; Boldt, David; Shapiro, Allen M.
1997-01-01
This report describes a Graphical-User Interface (GUI) for SUTRA, the U.S. Geological Survey (USGS) model for saturated-unsaturated variable-fluid-density ground-water flow with solute or energy transport, which combines a USGS-developed code that interfaces SUTRA with Argus ONE, a commercial software product developed by Argus Interware. This product, known as Argus Open Numerical Environments (Argus ONE™), is a programmable system with geographic-information-system-like (GIS-like) functionality that includes automated gridding and meshing capabilities for linking geospatial information with finite-difference and finite-element numerical model discretizations. The GUI for SUTRA is based on a public-domain Plug-In Extension (PIE) to Argus ONE that automates the use of Argus ONE to: automatically create the appropriate geospatial information coverages (information layers) for SUTRA, provide menus and dialogs for inputting geospatial information and simulation control parameters for SUTRA, and allow visualization of SUTRA simulation results. Following simulation control data and geospatial data input by the user through the GUI, Argus ONE creates text files in the format required for normal input to SUTRA, and SUTRA can be executed within the Argus ONE environment. Then, hydraulic head, pressure, solute concentration, temperature, saturation and velocity results from the SUTRA simulation may be visualized. Although the GUI for SUTRA discussed in this report provides all of the graphical pre- and post-processor functions required for running SUTRA, it is also possible for advanced users to apply programmable features within Argus ONE to modify the GUI to meet the unique demands of particular ground-water modeling projects.
Investigation of the Flutter Suppression by Fuzzy Logic Control for Hypersonic Wing
NASA Astrophysics Data System (ADS)
Li, Dongxu; Luo, Qing; Xu, Rui
This paper presents a fundamental study of the flutter characteristics and control performance of an aeroelastic system based on a two-dimensional double-wedge wing in the hypersonic regime. Dynamic equations were established based on modified third-order nonlinear piston theory, and some nonlinear structural effects were also included. A set of important parameters is examined, and an aeroelastic control law is then designed to suppress the amplitude of the limit-cycle oscillations (LCOs) of the system in the sub/supercritical speed range by applying fuzzy logic control to the deflection of the flap. The overall effects of the parameters on the aeroelastic system are outlined. Nonlinear aeroelastic responses of the open- and closed-loop systems are obtained through numerical methods. The simulations show that fuzzy logic control methods are effective in suppressing flutter and provide a smart approach for this complicated system.
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Rosen, I. G.
1988-01-01
In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.
Fusion of Local Statistical Parameters for Buried Underwater Mine Detection in Sonar Imaging
NASA Astrophysics Data System (ADS)
Maussang, F.; Rombaut, M.; Chanussot, J.; Hétet, A.; Amate, M.
2008-12-01
Detection of buried underwater objects, and especially mines, is currently a crucial strategic task. Images provided by sonar systems capable of penetrating the sea floor, such as synthetic aperture sonars (SASs), are of great interest for the detection and classification of such objects. However, the signal-to-noise ratio is fairly low, and advanced information processing is required for correct and reliable detection of the echoes generated by the objects. The detection method proposed in this paper is based on a data-fusion architecture using belief theory. The input data of this architecture are local statistical characteristics extracted from SAS data, corresponding to the first-, second-, third-, and fourth-order statistical properties of the sonar images, respectively. The relevance of these parameters is derived from a statistical model of the sonar data. Numerical criteria are also proposed to estimate the detection performance and to validate the method.
ITOUGH2(UNIX). Inverse Modeling for TOUGH2 Family of Multiphase Flow Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.
1999-03-01
ITOUGH2 provides inverse modeling capabilities for the TOUGH2 family of numerical simulators for non-isothermal multiphase flows in fractured-porous media. ITOUGH2 can be used for estimating parameters by automatic model calibration, for sensitivity analyses, and for uncertainty propagation analyses (linear and Monte Carlo simulations). Any input parameter of the TOUGH2 simulator can be estimated based on any type of observation for which a corresponding TOUGH2 output is calculated. ITOUGH2 solves a non-linear least-squares problem using direct or gradient-based minimization algorithms. A detailed residual and error analysis is performed, which includes the evaluation of model identification criteria. ITOUGH2 can also be run in forward mode, solving subsurface flow problems related to nuclear waste isolation, oil, gas, and geothermal reservoir engineering, and vadose zone hydrology.
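Stripped of everything TOUGH2-specific, the estimation step is nonlinear least squares between simulated and observed output, followed by a residual and covariance analysis. A generic sketch with an invented two-parameter forward model:

    import numpy as np
    from scipy.optimize import least_squares

    t_obs = np.linspace(0.0, 10.0, 25)

    def forward(params, t):
        k, s = params                       # stand-ins for flow parameters
        return s * (1.0 - np.exp(-k * t))   # toy drawdown-style response

    rng = np.random.default_rng(2)
    obs = forward((0.6, 2.0), t_obs) + rng.normal(0.0, 0.05, t_obs.size)

    fit = least_squares(lambda p: forward(p, t_obs) - obs, x0=[0.1, 1.0])
    J = fit.jac                             # residual Jacobian at the optimum
    s2 = (fit.fun @ fit.fun) / (t_obs.size - 2)   # residual variance
    cov = np.linalg.inv(J.T @ J) * s2             # linearized error analysis
    print("estimates:", fit.x, "approx. std. errors:", np.sqrt(np.diag(cov)))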
Compressed Sensing in On-Grid MIMO Radar.
Minner, Michael F
2015-01-01
The accurate detection of targets is a significant problem in multiple-input multiple-output (MIMO) radar. Recent advances in Compressive Sensing offer a means of efficiently accomplishing this task. The sparsity constraints needed to apply the techniques of Compressive Sensing to problems in radar systems have led to discretizations of the target scene in various domains, such as azimuth, time delay, and Doppler. Building upon recent work, we investigate the feasibility of on-grid Compressive Sensing-based MIMO radar via a threefold azimuth-delay-Doppler discretization for target detection and parameter estimation. We utilize a colocated random sensor array and transmit distinct linear chirps to a small scene with few, slowly moving targets. Relying upon standard far-field and narrowband assumptions, we analyze the efficacy of various recovery algorithms in determining the parameters of the scene through numerical simulations, with particular focus on the ℓ1-squared Nonnegative Regularization method.
Analysis on Vertical Scattering Signatures in Forestry with PolInSAR
NASA Astrophysics Data System (ADS)
Guo, Shenglong; Li, Yang; Zhang, Jingjing; Hong, Wen
2014-11-01
We apply an accurate topographic phase to the Freeman-Durden decomposition of polarimetric SAR interferometry (PolInSAR) data. The cross-correlation matrix obtained from PolInSAR observations can be decomposed into three scattering mechanism matrices accounting for odd-bounce, double-bounce and volume scattering. We estimate the phase based on the Random Volume over Ground (RVoG) model and use it as the initial input parameter of the numerical method that solves for the decomposition parameters. In addition, the modified volume scattering model introduced by Y. Yamaguchi is applied to PolInSAR target decomposition in forest areas, rather than the pure random volume scattering proposed by Freeman-Durden, to best fit the actual measured data. This method can accurately retrieve the magnitude associated with each mechanism and its location along the vertical dimension. We test the algorithms with L- and P-band simulated data.
CATS - A process-based model for turbulent turbidite systems at the reservoir scale
NASA Astrophysics Data System (ADS)
Teles, Vanessa; Chauveau, Benoît; Joseph, Philippe; Weill, Pierre; Maktouf, Fakher
2016-09-01
The Cellular Automata for Turbidite systems (CATS) model is intended to simulate the fine architecture and facies distribution of turbidite reservoirs with a multi-event, process-based approach. The main processes of low-density turbulent turbidity flow are modeled: downslope sediment-laden flow, entrainment of ambient water, and erosion and deposition of several distinct lithologies. This numerical model, derived from Salles (2006) and Salles et al. (2007), proposes a new approach based on the Rouse concentration profile to account for the flow's capacity to carry its sediment load in suspension. In CATS, the flow distribution over a given topography is modeled with local rules between neighboring cells (cellular automata) based on potential- and kinetic-energy balance and diffusion concepts. Input parameters are the initial flow parameters and a 3D topography at depositional time. An overview of CATS capabilities in different contexts is presented and discussed.
Wu, Liviawati; Mould, Diane R; Perez Ruixo, Juan Jose; Doshi, Sameer
2015-10-01
A population pharmacokinetic pharmacodynamic (PK/PD) model describing the effect of epoetin alfa on hemoglobin (Hb) response in hemodialysis patients was developed. Epoetin alfa pharmacokinetics was described using a linear 2-compartment model. PK parameter estimates were similar to previously reported values. A maturation-structured cytokinetic model consisting of 5 compartments linked in a catenary fashion by first-order cell transfer rates following a zero-order input process described the Hb time course. The PD model described 2 subpopulations, one whose Hb response reflected epoetin alfa dosing and a second whose response was unrelated to epoetin alfa dosing. Parameter estimates from the PK/PD model were physiologically reasonable and consistent with published reports. Numerical and visual predictive checks using data from 2 studies were performed. The PK and PD of epoetin alfa were well described by the model. © 2015, The American College of Clinical Pharmacology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprung, J.L.; Jow, H-N; Rollstin, J.A.
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
3-D numerical evaluation of density effects on tracer tests.
Beinhorn, M; Dietrich, P; Kolditz, O
2005-12-01
In this paper we present numerical simulations carried out to assess the influence of density-dependent flow on tracer plume development. The scenario considered in the study is characterized by a short-term tracer injection phase into a fully penetrating well and a natural hydraulic gradient, and is thought to be typical of tracer tests conducted in the field. Using a reference case as a starting point, different model parameters were changed in order to determine their importance for density effects. The study is based on a three-dimensional model domain. Results were interpreted using concentration contours and a first-moment analysis. Tracer injections of 0.036 kg per meter of saturated aquifer thickness do not cause significant density effects, assuming hydraulic gradients of at least 0.1%. Higher tracer input masses, as used for geoelectrical investigations, may lead to buoyancy-induced flow in the early phase of a tracer test, which in turn impacts further plume development. This also holds true for shallow aquifers. Results of simulations with different tracer injection rates and durations imply that the tracer input scenario has a negligible effect on density flow. Employing model cases with different realizations of a log-conductivity random field, it could be shown that small variations of hydraulic conductivity in the vicinity of the tracer injection well have a major control on the local tracer distribution but do not mask the effects of buoyancy-induced flow.
Shabaev, Andrew; Lambrakos, Samuel G; Bernstein, Noam; Jacobs, Verne L; Finkenstadt, Daniel
2011-04-01
We have developed a general framework for numerical simulation of various types of scenarios that can occur for the detection of improvised explosive devices (IEDs) through excitation by incident electromagnetic waves. A central component model of this framework is an S-matrix representation of a multilayered composite material system. Each layer of the system is characterized by an average thickness and an effective electric permittivity function. The outputs of this component are the reflectivity and the transmissivity as functions of frequency and angle of the incident electromagnetic wave. The input of the component is a parameterized analytic-function representation of the electric permittivity as a function of frequency, which is provided by another component model of the framework. The permittivity function is constructed by fitting response spectra calculated using density functional theory (DFT) and parameter adjustment according to any additional information that may be available, e.g., experimentally measured spectra or theory-based assumptions concerning spectral features. A prototype simulation is described that considers response characteristics for THz excitation of the high explosive β-HMX. This prototype simulation includes a description of a procedure for calculating response spectra using DFT as input to the S-matrix model. For this purpose, the DFT software NRLMOL was adopted. © 2011 Society for Applied Spectroscopy
A novel single thruster control strategy for spacecraft attitude stabilization
NASA Astrophysics Data System (ADS)
Godard; Kumar, Krishna Dev; Zou, An-Min
2013-05-01
The feasibility of achieving three-axis attitude stabilization using a single thruster is explored in this paper. Torques are generated using a thruster orientation mechanism with which the thrust vector can be tilted on a two-axis gimbal. A robust nonlinear control scheme is developed based on the nonlinear kinematic and dynamic equations of motion of a rigid-body spacecraft in the presence of gravity-gradient torque and external disturbances. The spacecraft, controlled using the proposed concept, constitutes an underactuated system (a system with fewer independent control inputs than degrees of freedom) with nonlinear dynamics. Moreover, using thruster gimbal angles as control inputs makes the system non-affine (control terms appear nonlinearly in the state equation). This necessitates that the control algorithms be developed based on nonlinear control theory, since linear control methods are not directly applicable. The stability conditions for the spacecraft attitude motion for robustness against uncertainties and disturbances are derived to establish the regions of asymptotic three-axis attitude stabilization. Several numerical simulations are presented to demonstrate the efficacy of the proposed controller and validate the theoretical results. The control algorithm is shown to compensate for time-varying external disturbances, including solar radiation pressure, aerodynamic forces, and magnetic disturbances, and for uncertainties in the spacecraft inertia parameters. The numerical results also establish the robustness of the proposed control scheme to disturbances caused by orbit eccentricity.
Kiourti, Asimina; Nikita, Konstantina S
2013-04-01
We numerically assess the effects of head properties (anatomy and dielectric parameters) on the performance of a scalp-implantable antenna for telemetry in the Medical Implant Communications Service band (402.0-405.0 MHz). Safety issues and performance (resonance, radiation) are analyzed for an experimentally validated implantable antenna (volume of 203.6 mm³), considering five head models (3- and 5-layer spherical; 6-, 10-, and 13-tissue anatomical) and seven scenarios (variations of ±20% in the reference permittivity and conductivity values). Simulations are carried out at 403.5 MHz using the finite-difference time-domain method. Anatomy of the head model around the implantation site is found to mainly affect antenna performance, whereas overall tissue anatomy and dielectric parameters are less significant. Compared to the reference dielectric parameter scenario within the 3-layer spherical head, maximum variations of -19.9%, +3.7%, -55.1%, and -39.2% are computed in the maximum allowable net input power imposed by the IEEE Std C95.1-1999 and Std C95.1-2005 safety guidelines, return loss, and maximum far-field gain, respectively. Compliance with the recent IEEE Std C95.1-2005 is found to be almost insensitive to head properties, in contrast with IEEE Std C95.1-1999. Taking tissue property uncertainties into account is highlighted as crucial for implantable antenna design and performance assessment. Bioelectromagnetics 34:167-179, 2013. © 2012 Wiley Periodicals, Inc.
Using model order tests to determine sensory inputs in a motion study
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Junker, A. M.
1977-01-01
In the study of motion effects on tracking performance, a problem of interest is determining which sensory inputs a human uses in controlling the tracking task. In the approach presented here, a simple canonical model (PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters related to inputs to the human (such as the error signal, its derivatives and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters that have the greatest effect on significantly reducing the loss function are obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
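In effect this is a nested least-squares comparison: regress the operator's output on the error signal, its integral and its derivative, and keep only the terms whose inclusion reduces the loss functional significantly. A hedged sketch on fabricated synthetic data (a real analysis would add a statistical test on each loss reduction):

    import numpy as np

    rng = np.random.default_rng(3)
    dt, n = 0.01, 3000
    e = np.cumsum(rng.normal(0, 0.1, n))       # synthetic error signal
    e_int = np.cumsum(e) * dt                  # its integral
    e_der = np.gradient(e, dt)                 # its derivative
    u = 1.5 * e + 0.4 * e_der + rng.normal(0, 0.2, n)   # operator uses P and D

    def loss(cols):
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, u, rcond=None)
        r = u - X @ beta
        return r @ r                           # output error loss functional

    for name, cols in [("P", [e]), ("PI", [e, e_int]),
                       ("PD", [e, e_der]), ("PID", [e, e_int, e_der])]:
        print(name, loss(cols))    # PD and PID drop sharply; PI does not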
Goldstein, Alison; Cole, Thomas; Cordes, Sara
2016-01-01
Studies have stressed the importance of counting with children to promote formal numeracy abilities; however, little work has investigated when parents begin to engage in this behavior with their young children. In the current study, we investigated whether parents elaborated on numerical information when reading a counting book to their preverbal infants and whether developmental differences in numerical input exist even in the first year of life. Parents of 5- to 10-month-old infants were asked to read two books to their infants, as they would at home: a counting book and another book that did not have numerical content. Parents' spontaneous statements rarely focused on number, and those that did consisted primarily of counting, with little emphasis on labeling the cardinality of the set. However, developmental differences were observed even in this age range, such that parents were more likely to make numerical utterances when reading to older infants. Together, the results are the first to characterize naturalistic reading behaviors between parents and their preverbal infants in the context of counting books, suggesting that although counting books promote numerical language in parents, infants still receive very little in the way of numerical input before the end of the first year of life. While little is known regarding the impact of number talk on the cognitive development of young infants, the current results may guide future work in this area by providing the first assessment of the characteristics of parental numerical input to preverbal infants.
NASA Astrophysics Data System (ADS)
Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi
2018-03-01
This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights about the representativeness of estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
Simulation model of Al-Ti dissimilar laser welding-brazing and its experimental verification
NASA Astrophysics Data System (ADS)
Behúlová, M.; Babalová, E.; Nagy, M.
2017-02-01
Formation of dissimilar weld joints of light metals and alloys, including Al-Ti joints, is interesting mainly due to demands for weight reduction and corrosion resistance of components and structures in the automotive, aircraft, aeronautic and other industries. Joining of Al-Ti alloys represents a quite difficult problem. Generally, fusion welding of these materials can lead to the development of different metastable phases and the formation of brittle intermetallic compounds. The paper deals with numerical simulation of the laser welding-brazing process of titanium Grade 2 and EN AW 5083 aluminum alloy sheets using 5087 aluminum filler wire. A simulation model for welding-brazing of testing samples with dimensions of 50 × 100 × 2 mm was developed in order to perform numerical experiments with variable welding parameters and to identify a proper combination of these parameters for the formation of sound Al-Ti welded-brazed joints. Thermal properties of the welded materials as functions of temperature were computed using JMatPro software. A conical heat-source model was used to describe the heat input to the weld from the moving laser beam. Sample cooling by convection and radiation to the surrounding air and shielding argon gas was taken into account. The developed simulation model was verified by comparing the results of the numerical simulation with temperatures measured during real laser welding-brazing experiments using the TruDisk 4002 disk laser.
Modal Parameter Identification of a Flexible Arm System
NASA Technical Reports Server (NTRS)
Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard
1998-01-01
In this paper an experiment is designed for the modal parameter identification of a flexible arm system. The experiment uses a function generator to provide the input signal and an oscilloscope to save the input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of that mode. A least-squares technique is then used to analyze the experimental input/output data to obtain the identified parameters for the mode. The identified results are compared with the analytical model obtained by applying finite element analysis.
Certification Testing Methodology for Composite Structure. Volume 2. Methodology Development
1986-10-01
parameter, sample size and fatigue test duration. The required inputs are: (1) residual strength Weibull shape parameter (ALPR); (2) fatigue life Weibull shape parameter (ALPL); (3) sample size (N); and (4) test duration (T). The surviving fragment of the FORTRAN input routine, cleaned up, reads:

        FORMAT(2X,'PLEASE INPUT STRENGTH ALPHA')
        READ(*,*) ALPR
        ALPRI = 1.0/ALPR
        WRITE(*,2)
      2 FORMAT(2X,'PLEASE INPUT LIFE ALPHA')
        READ(*,*) ALPL
        ALPLI = 1.0/ALPL
        WRITE(*,3)
      3 FORMAT(2X,'PLEASE INPUT SAMPLE SIZE')
        READ(*,*) N
        AN = N
        WRITE(*,4)
      4 FORMAT(2X,'PLEASE INPUT TEST DURATION')
        READ(*,*) T
        RALP = ALPL/ALPR
        ARGR = 1...
NASA Technical Reports Server (NTRS)
Kanning, G.
1975-01-01
A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.
NASA Astrophysics Data System (ADS)
Majumder, Himadri; Maity, Kalipada
2018-03-01
Shape memory alloys have a unique capability to return to their original shape after physical deformation upon the application of heat or a thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse-on time (TON), pulse-off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also obtained using Taguchi’s signal-to-noise ratio. A confirmation test was performed to validate the optimum machining parameter combination, which affirmed that DFA is a competent approach for selecting optimum input parameters to achieve the desired response quality in WEDM of Ni-Ti shape memory alloy.
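Composite desirability in the usual Derringer-Suich form: map each response onto [0, 1] (larger-the-better for cutting speed; smaller-the-better for kerf width and surface roughness) and combine by geometric mean. The response values and bounds below are placeholders, not the paper's measurements:

    import numpy as np

    def d_larger(y, lo, hi):   # larger-the-better desirability
        return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

    def d_smaller(y, lo, hi):  # smaller-the-better desirability
        return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

    # Placeholder responses for three WEDM trials: speed, kerf, roughness.
    speed = np.array([2.1, 2.8, 2.5])   # mm/min
    kerf = np.array([0.30, 0.34, 0.28]) # mm
    ra = np.array([2.9, 3.4, 2.6])      # um

    D = (d_larger(speed, 2.0, 3.0)
         * d_smaller(kerf, 0.25, 0.35)
         * d_smaller(ra, 2.5, 3.5)) ** (1.0 / 3.0)
    print("composite desirability per trial:", D, "-> best trial:", D.argmax())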
NASA Astrophysics Data System (ADS)
Scheffler, Christian; Psyk, Verena; Linnemann, Maik; Tulke, Marc; Brosius, Alexander; Landgrebe, Dirk
2018-05-01
High-velocity effects in production technology provide a broad range of technological and economic advantages [1, 2]. However, exploiting them requires knowledge of strain-rate-dependent material behavior in process modelling. In general, high-speed material characterization presents several difficulties and requires sophisticated approaches in order to provide reliable material data. This paper proposes two innovative test concepts, with electromagnetic and pneumatic drives, and an approach for material characterization in terms of strain-rate-dependent flow curves and parameters of failure or damage models. The test setups have been designed for investigating strain rates up to 10⁵ s⁻¹. In principle, knowledge of the temporal courses and local distributions of stress and strain in the specimen is essential for identifying material characteristics, but short process times, fast changes of the measured values, small specimen size and the frequently limited accessibility of the specimen during the test hinder direct measurement of these parameters in high-velocity testing. Therefore, auxiliary test parameters, which are easier to measure, are recorded and used as input data for an inverse numerical simulation that provides the desired material characteristics, e.g. the Johnson-Cook parameters, as a result. These auxiliary parameters are a force-equivalent strain signal on a measurement body and the displacement of the upper specimen edge.
Parameter estimation in a structural acoustic system with fully nonlinear coupling conditions
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.
1994-01-01
A methodology for estimating physical parameters in a class of structural acoustic systems is presented. The general model under consideration consists of an interior cavity which is separated from an exterior noise source by an enclosing elastic structure. Piezoceramic patches are bonded to or embedded in the structure; these can be used both as actuators and sensors in applications ranging from the control of interior noise levels to the determination of structural flaws through nondestructive evaluation techniques. The presence and excitation of the patches, however, changes the geometry and material properties of the structure and involves unknown patch parameters, thus necessitating the development of parameter estimation techniques which are applicable in this coupled setting. In developing a framework for approximation, parameter estimation and implementation, strong consideration is given to the fact that the input operator is unbounded due to the discrete nature of the patches. Moreover, the model is weakly nonlinear as a result of the coupling mechanism between the structural vibrations and the interior acoustic dynamics. Within this context, an illustrative model is given, well-posedness and approximation results are discussed, and an applicable parameter estimation methodology is presented. The scheme is then illustrated through several numerical examples, with simulations modeling a variety of commonly used structural acoustic techniques for system excitation and data collection.
Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.
Ponzi, Adam; Wickens, Jeff
2012-01-01
The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response which could be utilized by the animal in behavior.
CIRMIS Data system. Volume 2. Program listings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedrichs, D.R.
1980-01-01
The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. The various input parameters required in the analysis are compiled in data systems. The data are organized and prepared by various input subroutines for utilization by the hydraulic and transport codes. The hydrologic models simulate the groundwater flow systems and provide water flow directions, rates, and velocities as inputs to the transport models. Outputs from the transport models are basically graphs of radionuclide concentration in the groundwater plotted against time. After dilution in the receiving surface-water body (e.g., lake, river, bay), these data are the input source terms for the dose models, if dose assessments are required. The dose models calculate radiation dose to individuals and populations. CIRMIS (Comprehensive Information Retrieval and Model Input Sequence) Data System is a storage and retrieval system for model input and output data, including graphical interpretation and display. This is the second of four volumes of the description of the CIRMIS Data System.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedrichs, D.R.
1980-01-01
The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. The various input parameters required in the analysis are compiled in data systems. The data are organized and prepared by various input subroutines for use by the hydrologic and transport codes. The hydrologic models simulate the groundwater flow systems and provide water flow directions, rates, and velocities as inputs to the transport models. Outputs from the transport models are basically graphs of radionuclide concentration in the groundwater plotted against time. After dilution in the receiving surface-water body (e.g., lake, river, bay), these data are the input source terms for the dose models, if dose assessments are required. The dose models calculate radiation dose to individuals and populations. CIRMIS (Comprehensive Information Retrieval and Model Input Sequence) Data System is a storage and retrieval system for model input and output data, including graphical interpretation and display. This is the fourth of four volumes of the description of the CIRMIS Data System.
Evaluation of Tsunami Run-Up on Coastal Areas at Regional Scale
NASA Astrophysics Data System (ADS)
González, M.; Aniel-Quiroga, Í.; Gutiérrez, O.
2017-12-01
Tsunami hazard assessment is tackled by means of numerical simulations, giving as a result the areas flooded by the tsunami wave inland. To do this, some input data are required, e.g., a high-resolution topobathymetry of the study area, the earthquake focal mechanism parameters, etc. The computational cost of these kinds of simulations is still excessive. An important restriction for the elaboration of large-scale maps at national or regional scale is the reconstruction of high-resolution topobathymetry in the coastal zone. An alternative, traditional method consists of applying empirical-analytical formulations to calculate run-up at several coastal profiles (e.g., Synolakis, 1987), combined with numerical simulations offshore that do not include coastal inundation. In this case, the numerical simulations are faster, but limitations are added because the coastal bathymetric profiles are very simply idealized. In this work, we present a complementary methodology based on a hybrid numerical model formed by two models that were coupled ad hoc for this work: a non-linear shallow water equations (NLSWE) model for the offshore part of the propagation and a Volume of Fluid (VOF) model for the areas near the coast and inland, applying each numerical scheme where it better reproduces the tsunami wave. The run-up of a tsunami scenario is obtained by applying the coupled model to an ad hoc numerical flume. To design this methodology, hundreds of worldwide topobathymetric profiles were parameterized using five parameters (two depths and three slopes). In addition, tsunami waves were parameterized by their height and period. As an application of the numerical flume methodology, the coastal parameterized profiles and tsunami waves were combined to build a populated database of run-up calculations; the combinations were computed by means of numerical simulations in the numerical flume. The result is a tsunami run-up database that considers real profile shapes, realistic tsunami waves, and optimized numerical simulations. This database allows the calculation of the run-up of any new tsunami wave by interpolation on the database, in a short period of time, based on the tsunami wave characteristics provided as an output of the NLSWE model along the coast in a large-scale domain (regional or national scale).
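Once such a database exists, evaluating a new scenario reduces to interpolation over the stored parameters. A sketch of a query on a two-parameter (height, period) slice of a run-up table, with invented numbers:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    heights = np.array([0.5, 1.0, 2.0, 4.0])     # offshore wave height [m]
    periods = np.array([300.0, 600.0, 1200.0])   # wave period [s]
    # Precomputed run-up for one parameterized profile (placeholder values).
    runup = np.array([[0.8, 1.1, 1.5],
                      [1.4, 1.9, 2.6],
                      [2.5, 3.3, 4.4],
                      [4.1, 5.6, 7.3]])

    table = RegularGridInterpolator((heights, periods), runup)
    print("interpolated run-up:", table([[1.6, 800.0]])[0], "m")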
Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J
2011-09-01
When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
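For a toy flavour of the Gröbner-basis step (not the paper's algorithm in full): write the input-output coefficients as polynomials in the parameters, subtract symbols for their observed values, and compute a lexicographic Gröbner basis; the basis then exposes which parameter combinations the data pin down. The model below is invented for illustration.

    from sympy import symbols, groebner

    p1, p2, p3, a1, a2 = symbols('p1 p2 p3 a1 a2')

    # Toy I/O coefficients: a1 = p1 + p2 and a2 = p1*p2*p3. Individually
    # p1, p2, p3 are unidentifiable; the combinations p1 + p2 and p1*p2*p3
    # are identifiable.
    G = groebner([p1 + p2 - a1, p1*p2*p3 - a2], p1, p2, p3, order='lex')
    print(G.exprs)
    # The first basis element recovers p1 + p2 = a1; substituting
    # p1 = a1 - p2 into the second recovers p1*p2*p3 = a2, i.e. exactly
    # the identifiable combinations.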
'spup' - an R package for uncertainty propagation in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for a universal applicability, including case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes is also accommodated for. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
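In outline, the propagation loop that the package automates looks like the following; this is a schematic Python analogue ('spup' itself is an R package), with an invented stand-in model and assumed input marginals:

    import numpy as np
    from scipy.stats import norm, qmc

    def env_model(rain, k):
        # Stand-in environmental model: runoff from rainfall and a loss term.
        return np.maximum(rain - k, 0.0) ** 1.5

    # Latin hypercube sample of the two uncertain inputs (assumed marginals,
    # no cross-correlation).
    n = 1000
    u = qmc.LatinHypercube(d=2, seed=0).random(n)
    rain = norm.ppf(u[:, 0], loc=50.0, scale=10.0)   # mm
    k = norm.ppf(u[:, 1], loc=20.0, scale=5.0)       # mm

    out = env_model(rain, k)   # Monte Carlo uncertainty propagation
    print("mean:", out.mean(),
          "95% interval:", np.percentile(out, [2.5, 97.5]))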
'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2017-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability or the ability to deal with case studies involving spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as input to environmental models called from R, or externally. Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy-to-apply tool that can be used in multi-disciplinary research and model-based decision support.
Triangular dislocation: an analytical, artefact-free solution
NASA Astrophysics Data System (ADS)
Nikkhoo, Mehdi; Walter, Thomas R.
2015-05-01
Displacements and stress-field changes associated with earthquakes, volcanoes, landslides and human activity are often simulated using numerical models in an attempt to understand the underlying processes and their governing physics. The application of elastic dislocation theory to these problems, however, may be biased because of numerical instabilities in the calculations. Here, we present a new method that is free of artefact singularities and numerical instabilities in analytical solutions for triangular dislocations (TDs) in both full-space and half-space. We apply the method to both the displacement and the stress fields. The entire 3-D Euclidean space ℝ³ is divided into two complementary subspaces, in the sense that in each one, a particular analytical formulation fulfils the requirements for the ideal, artefact-free solution for a TD. The primary advantage of the presented method is that the development of our solutions involves neither numerical approximations nor series expansion methods. As a result, the final outputs are independent of the scale of the input parameters, including the size and position of the dislocation as well as its corresponding slip vector components. Our solutions are therefore well suited for application at various scales in geoscience, physics and engineering. We validate the solutions through comparison to other well-known analytical methods and provide the MATLAB codes.
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that also preserve the physics being simulated in the model. To simulate real-world processes effectively, the model's output must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, the error between the simulated outputs and the observed measurements, is minimized. We developed an auxiliary package, which serves as a Python interface between the user and Dakota. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for the heat flow model, which is commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum. Otherwise, we employ more advanced Dakota methods such as genetic optimization and mesh-based convergence in order to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox can be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone.
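The tune-until-minimized loop described here can be illustrated with a toy stand-in. The sketch below uses SciPy rather than Dakota or the authors' interface, and the model, parameters, and measurements are invented for illustration.

```python
# A minimal sketch of the calibration idea: tune input parameters until
# the error between simulated and observed outputs is minimized. The
# model and data are toy stand-ins, not the permafrost heat flow model
# or the authors' Dakota interface.
import numpy as np
from scipy.optimize import minimize

observed = np.array([1.2, 1.9, 3.1])  # hypothetical measurements

def toy_model(k):
    """Stand-in for a layered simulation with two thermal
    conductivity coefficients k[0], k[1]."""
    return np.array([k[0] + k[1], 2.0 * k[0] + k[1], k[0] + 3.0 * k[1]])

def objective(k):
    # Sum of squared errors between simulation and observations.
    return np.sum((toy_model(k) - observed) ** 2)

result = minimize(objective, x0=[0.5, 0.5], method='Nelder-Mead')
print(result.x, result.fun)
```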
The effect of a realistic thermal diffusivity on numerical model of a subducting slab
NASA Astrophysics Data System (ADS)
Maierova, P.; Steinle-Neumann, G.; Cadek, O.
2010-12-01
A number of numerical studies of subducting slabs assume simplified (constant or only depth-dependent) models of thermal conductivity. The available mineral physics data indicate, however, that thermal diffusivity is strongly temperature- and pressure-dependent and may also vary among different mantle materials. In the present study, we examine the influence of realistic thermal properties of mantle materials on the thermal state of the upper mantle and the dynamics of subducting slabs. On the basis of data published in the mineral physics literature, we compile analytical relationships that approximate the pressure and temperature dependence of thermal diffusivity for the major mineral phases of the mantle (olivine, wadsleyite, ringwoodite, garnet, clinopyroxenes, stishovite and perovskite). We propose a simplified composition of the mineral assemblages predominating in the subducting slab and the surrounding mantle (pyrolite, mid-ocean ridge basalt, harzburgite) and estimate their thermal diffusivity using the Hashin-Shtrikman bounds. The resulting complex formula for the diffusivity of each aggregate is then approximated by a simpler analytical relationship that is used in our numerical model as an input parameter. For the numerical modeling we use the Elmer software (open source finite element software for multiphysical problems, see http://www.csc.fi/english/pages/elmer). We set up a 2D Cartesian thermo-mechanical steady-state model of a subducting slab. The model is partly kinematic, as the flow is driven by a boundary condition on velocity that is prescribed on the top of the subducting lithospheric plate. The rheology of the material is non-linear and is coupled with the thermal equation. Using the realistic relationship for the thermal diffusivity of mantle materials, we compute the thermal and flow fields for different input velocities and ages of the subducting plate, and we compare the results against models assuming a constant thermal diffusivity. The importance of a realistic description of thermal properties in models of subducted slabs is discussed.
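For the two-phase case, the Hashin-Shtrikman bounds mentioned here take a compact closed form. The sketch below implements the standard isotropic 3-D expressions with hypothetical conductivity values; the study itself compiles multi-phase, pressure- and temperature-dependent versions.

```python
# A minimal sketch of the two-phase Hashin-Shtrikman bounds on an
# aggregate conductivity/diffusivity (standard isotropic 3-D form;
# all numbers below are hypothetical placeholders).
def hs_bound(k_matrix, k_incl, f_incl):
    """HS estimate with one phase treated as the matrix."""
    f_matrix = 1.0 - f_incl
    return k_matrix + f_incl / (1.0 / (k_incl - k_matrix)
                                + f_matrix / (3.0 * k_matrix))

k_olivine, k_garnet = 4.5, 3.3  # hypothetical conductivities, W/m/K
f_garnet = 0.3                  # hypothetical volume fraction

# Upper bound: the more conductive phase acts as the matrix;
# lower bound: the less conductive phase acts as the matrix.
upper = hs_bound(k_olivine, k_garnet, f_garnet)
lower = hs_bound(k_garnet, k_olivine, 1.0 - f_garnet)
print(f"HS bounds: {lower:.3f} - {upper:.3f} W/m/K")
```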
Double-Barrier Memristive Devices for Unsupervised Learning and Pattern Recognition.
Hansen, Mirko; Zahari, Finn; Ziegler, Martin; Kohlstedt, Hermann
2017-01-01
The use of interface-based resistive switching devices for neuromorphic computing is investigated. In a combined experimental and numerical study, the important device parameters and their impact on a neuromorphic pattern recognition system are studied. The memristive cells consist of a layer sequence Al/Al2O3/NbxOy/Au and are fabricated on a 4-inch wafer. The key functional ingredients of the devices are a 1.3-nm-thick Al2O3 tunnel barrier and a 2.5-nm-thick NbxOy memristive layer. Voltage pulse measurements are used to study the electrical conditions for the emulation of synaptic functionality of single cells for later use in a recognition system. The results are evaluated and modeled in the framework of the plasticity model of Ziegler et al. Based on this model, which is matched to experimental data from 84 individual devices, the network performance with regard to yield, reliability, and variability is investigated numerically. As the network model, a computing scheme for pattern recognition and unsupervised learning based on the work of Querlioz et al. (2011), Sheridan et al. (2014), and Zahari et al. (2015) is employed. This is a two-layer feedforward network with a crossbar array of memristive devices, leaky integrate-and-fire output neurons including a winner-takes-all strategy, and a stochastic coding scheme for the input pattern. As input pattern, the full data set of digits from the MNIST database is used. The numerical investigation indicates that the experimentally obtained yield, reliability, and variability of the memristive cells are suitable for such a network. Furthermore, evidence is presented that their strong I-V non-linearity might avoid the need for selector devices in crossbar array structures.
Christen, T.; Pannetier, NA.; Ni, W.; Qiu, D.; Moseley, M.; Schuff, N.; Zaharchuk, G.
2014-01-01
In the present study, we describe a fingerprinting approach to analyze the time evolution of the MR signal and retrieve quantitative information about the microvascular network. We used a Gradient Echo Sampling of the Free Induction Decay and Spin Echo (GESFIDE) sequence and defined a fingerprint as the ratio of signals acquired pre- and post-injection of an iron-based contrast agent. We then simulated the same experiment with an advanced numerical tool that takes a virtual voxel containing blood vessels as input, computes microscopic magnetic fields and water diffusion effects, and derives the expected MR signal evolution. The input parameters of the simulations (cerebral blood volume [CBV], mean vessel radius [R], and blood oxygen saturation [SO2]) were varied to obtain a dictionary of all possible signal evolutions. The best fit between the observed fingerprint and the dictionary was then determined using least-squares minimization. This approach was evaluated in 5 normal subjects and the results were compared to those obtained using more conventional MR methods: steady-state contrast imaging for CBV and R, and a global measure of oxygenation obtained from the superior sagittal sinus for SO2. The fingerprinting method enabled the creation of high-resolution parametric maps of the microvascular network showing expected contrast and fine details. Numerical values in gray matter (CBV=3.1±0.7%, R=12.6±2.4µm, SO2=59.5±4.7%) are consistent with literature reports and correlated with conventional MR approaches. SO2 values in white matter (53.0±4.0%) were slightly lower than expected. Numerous improvements can readily be made, and the method should be useful for studying brain pathologies. PMID:24321559
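The dictionary-matching step reduces to a least-squares search over precomputed signal evolutions. A toy sketch follows; the signal model is an invented stand-in for the voxel-scale simulator, and the parameter grids are placeholders.

```python
# A minimal sketch of dictionary matching by least squares: the measured
# fingerprint is compared against simulated signal evolutions for all
# parameter combinations, and the best-fitting entry is returned.
import itertools
import numpy as np

t = np.linspace(0.0, 1.0, 50)  # acquisition time points (arbitrary units)

def simulate(cbv, radius, so2):
    """Hypothetical signal model standing in for the voxel simulator."""
    return np.exp(-cbv * t) * np.cos(radius * t) + so2 * t

# Build the dictionary over a coarse parameter grid (placeholder ranges).
grid = list(itertools.product(np.linspace(1, 5, 9),       # CBV (%)
                              np.linspace(5, 25, 9),      # R (micrometers)
                              np.linspace(0.4, 0.8, 9)))  # SO2 (fraction)
dictionary = np.array([simulate(*params) for params in grid])

measured = simulate(3.1, 12.6, 0.6) \
    + 0.01 * np.random.default_rng(0).normal(size=t.size)

best = np.argmin(np.sum((dictionary - measured) ** 2, axis=1))
print("best-fit (CBV, R, SO2):", grid[best])
```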
Machine Learning and Inverse Problem in Geodynamics
NASA Astrophysics Data System (ADS)
Shahnas, M. H.; Yuen, D. A.; Pysklywec, R.
2017-12-01
During the past few decades, numerical modeling and traditional HPC have been widely deployed across diverse fields for problem solutions. In recent years, however, the rapid emergence of machine learning (ML), a subfield of artificial intelligence (AI), in many fields of science, engineering, and finance seems to mark a turning point in the replacement of traditional modeling procedures with artificial intelligence-based techniques. The study of the circulation in the interior of Earth relies on high-pressure mineral physics, geochemistry, and petrology, where the number of mantle parameters is large and the thermoelastic parameters are highly pressure- and temperature-dependent. More complexity arises because many of the parameters incorporated into the numerical models as inputs are not yet well established. In such complex systems the application of machine learning algorithms can play a valuable role. Our focus in this study is the application of supervised machine learning (SML) algorithms in predicting mantle properties, with emphasis on SML techniques for solving the inverse problem. As a sample problem we focus on the spin transition in ferropericlase and perovskite that may cause slab and plume stagnation at mid-mantle depths. The degree of the stagnation depends on the degree of negative density anomaly at the spin transition zone. The training and testing samples for the machine learning models are produced by numerical convection models with known magnitudes of density anomaly (as the class labels of the samples). The volume fractions of the stagnated slabs and plumes, which can be considered measures of the degree of stagnation, are assigned as sample features. The machine learning models can determine the magnitude of the spin transition-induced density anomalies that cause flow stagnation at mid-mantle depths. Employing support vector machine (SVM) algorithms, we show that SML techniques can successfully predict the magnitude of the mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex problems in mantle dynamics by employing deep learning algorithms for the estimation of mantle properties such as viscosity, elastic parameters, and thermal and chemical anomalies.
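A minimal sketch of the supervised setup with scikit-learn follows, using synthetic stand-ins for the stagnated-volume-fraction features and anomaly-class labels; none of the numbers come from the study.

```python
# A minimal sketch of the supervised learning setup: features stand in
# for stagnated slab/plume volume fractions from convection runs, and
# labels for the imposed spin-transition density anomaly class.
# All values are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 300
anomaly_class = rng.integers(0, 3, size=n)  # 3 anomaly magnitudes
# Stagnation fractions increase with anomaly magnitude, plus noise.
features = np.column_stack([
    0.2 * anomaly_class + rng.normal(0, 0.05, n),  # slab volume fraction
    0.1 * anomaly_class + rng.normal(0, 0.05, n),  # plume volume fraction
])

X_tr, X_te, y_tr, y_te = train_test_split(features, anomaly_class,
                                          random_state=0)
clf = SVC(kernel='rbf').fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```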
NASA Astrophysics Data System (ADS)
Doummar, J.; Kassem, A.; Gurdak, J. J.
2017-12-01
In the framework of a three-year USAID/NSF-funded PEER Science project, flow in a karst system in Lebanon (Assal Spring; discharge 0.2-2.5 m3/s, yearly volume of 22-30 Mm3), dominated by snow and semi-arid conditions, was simulated using an integrated numerical model (Mike She 2016). The calibrated model (Nash-Sutcliffe coefficient of 0.77) is based on high-resolution input data (2014-2017) and detailed catchment characterization. The approach is to assess the influence of various model parameters on recharge signals in the different hydrological karst compartments (atmosphere, unsaturated zone, and saturated zone) based on an integrated numerical model. These parameters include precipitation intensity and magnitude, temperature, and snow-melt parameters, in addition to karst-specific spatially distributed features such as fast infiltration points, soil properties and thickness, topographical slopes, epikarst and unsaturated zone thickness, and hydraulic conductivity, among others. Moreover, the model is currently simulated forward using various scenarios for future climate (Global Climate Models, GCM; daily downscaled temperature and precipitation time series for Lebanon 2020-2045) in order to depict the flow rates expected in the future and the effect of climate change on hydrograph recession coefficients, discharge maxima and minima, and total spring discharge volume. Additionally, a sensitivity analysis of individual or coupled major parameters allows quantifying their impact on recharge or, indirectly, on the vulnerability of the system (soil thickness and soil and rock hydraulic conductivity appear to be amongst the most sensitive parameters). This study particularly unravels the normalized single effect of rain magnitude and intensity, snow, and temperature change on the flow rate (e.g., a temperature change of 3° across the catchment yields a residual mean square error (RMSE) of 0.15 m3/s in the spring discharge and a 16% error in the total annual volume with respect to the calibrated model). Finally, such a study can allow decision makers to implement best-informed management practices, especially in complex karst systems, to overcome impacts of climate change on water resources.
NASA Astrophysics Data System (ADS)
Saikia, P.; Bhuyan, H.; Escalona, M.; Favre, M.; Wyndham, E.; Maze, J.; Schulze, J.
2018-01-01
The behavior of a dual frequency capacitively coupled plasma (2f CCP) driven by 2.26 and 13.56 MHz radio frequency (rf) sources is investigated using an approach that integrates a theoretical model and experimental data. The basis of the theoretical analysis is a time-dependent dual-frequency analytical sheath model that relates the instantaneous sheath potential to the plasma parameters. The parameters used in the model are obtained by operating the 2f CCP experiment (2.26 MHz + 13.56 MHz) in argon at a working pressure of 50 mTorr. Experimentally measured plasma parameters such as the electron density, electron temperature, and rf current density ratios are the inputs of the theoretical model. Subsequently, a convenient analytical solution for the output sheath potential and sheath thickness was derived. The present numerical results are compared with those obtained in another 2f CCP experiment conducted by Semmler et al (2007 Plasma Sources Sci. Technol. 16 839), with good quantitative correspondence. The numerical solution shows the variation of the sheath potential with the low and high frequency (HF) rf powers. In the low-pressure plasma, the sheath potential is a qualitative measure of the DC self-bias, which in turn determines the ion energy. Thus, using this analytical model, the measured values of the DC self-bias as a function of low and HF rf powers are explained in detail.
Powder Bed Layer Characteristics: The Overseen First-Order Process Input
NASA Astrophysics Data System (ADS)
Mindt, H. W.; Megahed, M.; Lavery, N. P.; Holmes, M. A.; Brown, S. G. R.
2016-08-01
Powder Bed Additive Manufacturing offers unique advantages in terms of manufacturing cost, lot size, and product complexity compared to traditional processes such as casting, where a minimum lot size is mandatory to achieve economic competitiveness. Many studies—both experimental and numerical—are dedicated to the analysis of how process parameters such as heat source power, scan speed, and scan strategy affect the final material properties. Apart from the general urge to increase the build rate using thicker powder layers, the coating process and how the powder is distributed on the processing table has received very little attention to date. This paper focuses on the first step of every powder bed build process: Coating the process table. A numerical study is performed to investigate how powder is transferred from the source to the processing table. A solid coating blade is modeled to spread commercial Ti-6Al-4V powder. The resulting powder layer is analyzed statistically to determine the packing density and its variation across the processing table. The results are compared with literature reports using the so-called "rain" models. A parameter study is performed to identify the influence of process table displacement and wiper velocity on the powder distribution. The achieved packing density and how that affects subsequent heat source interaction with the powder bed is also investigated numerically.
NASA Astrophysics Data System (ADS)
Xu, Y.; Jones, A. D.; Rhoades, A.
2017-12-01
Precipitation is a key component in hydrologic cycles, and changing precipitation regimes contribute to more intense and frequent drought and flood events around the world. Numerical climate modeling is a powerful tool to study climatology and to predict future changes. Despite the continuous improvement in numerical models, long-term precipitation prediction remains a challenge, especially at regional scales. To improve numerical simulations of precipitation, it is important to find out where the uncertainty in precipitation simulations comes from. There are two types of uncertainty in numerical model predictions. One is related to uncertainty in the input data, such as the model's boundary and initial conditions. These uncertainties would propagate to the final model outcomes even if the numerical model exactly replicated the true world. Because no numerical model can do so, the other type of uncertainty is related to errors in the model physics, such as the parameterization of sub-grid-scale processes: given precise input conditions, how much error is generated by the imprecise model itself. Here, we build two statistical models based on a neural network algorithm to predict long-term variation of precipitation over California: one uses "true world" information derived from observations, and the other uses "modeled world" information using model inputs and outputs from the North America Coordinated Regional Downscaling Project (NA CORDEX). We derive multiple climate feature metrics as predictors for the statistical model to represent the impact of global climate on local hydrology, and include topography as a predictor to represent the local control. We first compare the predictors between the true world and the modeled world to determine the errors contained in the input data. By perturbing the predictors in the statistical model, we estimate how much uncertainty in the model's final outcomes is accounted for by each predictor. By comparing the statistical models derived from true-world and modeled-world information, we assess the errors in the physics of the numerical models. This work provides unique insight into the performance of numerical climate models and can guide improvements in precipitation prediction.
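The perturbation step can be sketched as follows: fit a statistical model, then shuffle one predictor at a time and record the error inflation. The predictor names and the linear stand-in model below are hypothetical.

```python
# A minimal sketch of predictor perturbation: shuffle one predictor at a
# time in a fitted statistical model and measure how much the prediction
# error grows. Feature names are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 500
X = rng.normal(size=(n, 3))  # e.g. jet position, moisture flux, elevation
y = 2 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, n)  # toy target

model = LinearRegression().fit(X, y)
base_err = np.mean((model.predict(X) - y) ** 2)

for j, name in enumerate(["jet_position", "moisture_flux", "elevation"]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroy this predictor's signal
    err = np.mean((model.predict(Xp) - y) ** 2)
    print(f"{name}: error inflation = {err - base_err:.3f}")
```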
Characterizing quantum phase transition by teleportation
NASA Astrophysics Data System (ADS)
Wu, Meng-He; Ling, Yi; Shu, Fu-Wen; Gan, Wen-Cong
2018-04-01
In this paper we provide a novel way to explore the relation between quantum teleportation and quantum phase transitions. We construct a quantum channel with a mixed state made from a one-dimensional quantum Ising chain of infinite length, and then consider teleportation using entangled Werner states as input qubits. The fidelity, a figure of merit measuring how well the quantum state is transferred, is studied numerically. Remarkably, we find that the first-order derivative of the fidelity with respect to the parameter in the quantum Ising chain exhibits a logarithmic divergence at the quantum critical point. The implications of this phenomenon and possible applications are also briefly discussed.
A new art code for tomographic interferometry
NASA Technical Reports Server (NTRS)
Tan, H.; Modarress, D.
1987-01-01
A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least-squares solution for tomographic reconstruction is presented. The accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimal. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
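For orientation, the simplest form of ART, the Kaczmarz row-projection iteration, can be sketched in a few lines; the paper's code uses an iterative-refinement least-squares scheme rather than this textbook variant.

```python
# A minimal sketch of a basic ART (Kaczmarz) iteration for Ax = b,
# shown for orientation only; not the paper's iterative refinement
# least-squares scheme.
import numpy as np

def art(A, b, n_sweeps=50):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):  # project onto each row's hyperplane
            a = A[i]
            x += (b[i] - a @ x) / (a @ a) * a
    return x

A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, -1.0]])
x_true = np.array([0.5, 1.5])
print(art(A, A @ x_true))  # recovers approximately [0.5, 1.5]
```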
On star formation in stellar systems. I - Photoionization effects in protoglobular clusters
NASA Technical Reports Server (NTRS)
Tenorio-Tagle, G.; Bodenheimer, P.; Lin, D. N. C.; Noriega-Crespo, A.
1986-01-01
The progressive ionization and subsequent dynamical evolution of nonhomogeneously distributed low-metal-abundance diffuse gas after star formation in globular clusters are investigated analytically, taking the gravitational acceleration due to the stars into account. The basic equations are derived; the underlying assumptions, input parameters, and solution methods are explained; and numerical results for three standard cases (ionization during star formation, ionization during expansion, and evolution resulting in a stable H II region at its equilibrium Stromgren radius) are presented in graphs and characterized in detail. The time scale of residual-gas loss in typical clusters is found to be about the same as the lifetime of a massive star on the main sequence.
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
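Concretely, once an extreme-value distribution has been fitted to simulated mission-maximum loads, the design limit load is a chosen percentile of it. A minimal sketch, assuming a Gumbel fit with hypothetical parameters:

```python
# A minimal sketch: if the mission-maximum load follows a Gumbel
# (extreme-value type I) distribution, the design limit load is the
# percentile at a chosen non-exceedance probability. All numbers are
# hypothetical placeholders.
from scipy.stats import gumbel_r

mu, beta = 100.0, 8.0  # location/scale fitted from load simulations
p = 0.99               # chosen non-exceedance probability

design_limit_load = gumbel_r.ppf(p, loc=mu, scale=beta)
print(f"design limit load = {design_limit_load:.1f}")  # mu - beta*ln(-ln p)
```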
The design of photovoltaic plants - An optimization procedure
NASA Astrophysics Data System (ADS)
Bartoli, B.; Cuomo, V.; Fontana, F.; Serio, C.; Silvestrini, V.
An analytical model is developed to match the components and overall size of a solar power facility (comprising photovoltaic array, maximum-power tracker, battery storage system, and inverter) to the load requirements and climatic conditions of a proposed site at the smallest possible cost. Input parameters are the efficiencies and unit costs of the components, the load fraction to be covered (for stand-alone systems), the statistically analyzed meteorological data, and the cost and efficiency data of the support system (for fuel-generator-assisted plants). Numerical results are presented in graphs and tables for sites in Italy, and it is found that the explicit form of the model equation is independent of locality, at least for this region.
Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions
NASA Astrophysics Data System (ADS)
Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter
2017-11-01
Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
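The LHS step itself is compact; a sketch with SciPy's quasi-Monte Carlo module follows, with hypothetical bounds standing in for the actual experimental ranges.

```python
# A minimal sketch of Latin hypercube sampling over the five varied
# inputs; the bounds are hypothetical placeholders, not the UNM shock
# tube values.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=5, seed=0)
unit_samples = sampler.random(n=100)

# driver pressure, driver density, test pressure, test density, mole fraction
lower = [1.0e5, 0.8, 1.0e4, 0.1, 0.4]
upper = [5.0e5, 1.6, 5.0e4, 0.5, 0.6]
samples = qmc.scale(unit_samples, lower, upper)
print(samples.shape)  # (100, 5), one row per simulation run
```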
NASA Technical Reports Server (NTRS)
Briggs, Maxwell H.; Schifer, Nicholas A.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including testing validation hardware, known as the Thermal Standard, to provide a direct comparison to numerical and empirical models used to predict convertor net heat input. This validation hardware provided a comparison for scrutinizing and improving empirical correlations and numerical models of ASC-E2 net heat input. This hardware simulated the characteristics of an ASC-E2 convertor in both an operating and non-operating mode. This paper describes the Thermal Standard testing and the conclusions of the validation effort applied to the empirical correlation methods used by the Radioisotope Power System (RPS) team at NASA Glenn.
Goldstein, Alison; Cole, Thomas; Cordes, Sara
2016-01-01
Studies have stressed the importance of counting with children to promote formal numeracy abilities; however, little work has investigated when parents begin to engage in this behavior with their young children. In the current study, we investigated whether parents elaborated on numerical information when reading a counting book to their preverbal infants and whether developmental differences in numerical input exist even in the 1st year of life. Parents of 5- to 10-month-old infants were asked to read two books to their infants as they would at home: a counting book and another book without numerical content. Parents' spontaneous statements rarely focused on number, and those that did consisted primarily of counting, with little emphasis on labeling the cardinality of the set. However, developmental differences were observed even in this age range, such that parents were more likely to make numerical utterances when reading to older infants. Together, the results are the first to characterize naturalistic reading behaviors between parents and their preverbal infants in the context of counting books, suggesting that although counting books promote numerical language in parents, infants still receive very little in the way of numerical input before the end of the 1st year of life. While little is known regarding the impact of number talk on the cognitive development of young infants, the current results may guide future work in this area by providing the first assessment of the characteristics of parental numerical input to preverbal infants. PMID:27493639
Utility of coupling nonlinear optimization methods with numerical modeling software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, M.J.
1996-08-05
Results of using GLO (Global Local Optimizer), a general-purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and the nonlinear optimization software modules GLOBAL and LOCAL. GLO is designed for controlling, and coupling easily to, any scientific software application. GLO runs the optimization module and scientific software application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application repeatedly until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model (Taylor cylinder impact test) is presented.
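The GLO-PUT/run/GLO-GET loop can be sketched generically. In the sketch below the template file, token names, application command, and target value are all hypothetical; only the file-coupled iteration pattern is the point.

```python
# A minimal sketch of a GLO-style iteration: write parameters into the
# application's input file, run the application, extract the objective.
# File names, the template token scheme, and the command are hypothetical.
import subprocess
from scipy.optimize import minimize

TEMPLATE = open("app_input.template").read()  # contains {yield_stress} etc.

def objective(params):
    with open("app_input.dat", "w") as f:          # GLO-PUT step
        f.write(TEMPLATE.format(yield_stress=params[0],
                                hardening=params[1]))
    subprocess.run(["./scientific_app", "app_input.dat"], check=True)
    with open("app_output.dat") as f:              # GLO-GET step
        simulated = float(f.read())
    return (simulated - 42.0) ** 2  # mismatch to the desired result

best = minimize(objective, x0=[1.0, 0.1], method='Nelder-Mead')
```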
Surgical stent planning: simulation parameter study for models based on DICOM standards.
Scherer, S; Treichel, T; Ritter, N; Triebel, G; Drossel, W G; Burgert, O
2011-05-01
Endovascular Aneurysm Repair (EVAR) can be facilitated by a realistic simulation model of stent-vessel interaction. Therefore, numerical feasibility and integrability in the clinical environment were evaluated. The finite element method was used to determine the necessary simulation parameters for stent-vessel interaction in EVAR. Input variables and result data of the simulation model were examined for their standardization using DICOM supplements. The study identified four essential parameters for the stent-vessel simulation: blood pressure, intima constitution, plaque occurrence, and the material properties of vessel and plaque. Output quantities such as the radial force of the stent and the contact pressure between stent and vessel can help the surgeon evaluate implant fixation and sealing. The model geometry can be saved with DICOM "Surface Segmentation" objects and the upcoming "Implant Templates" supplement. Simulation results can be stored using the "Structured Report". A standards-based general simulation model for optimizing stent-graft selection may be feasible. At present, there are limitations due to the specification of individual vessel material parameters and the simulation of the proximal fixation of stent-grafts with hooks. Simulation data with clinical relevance for documentation and presentation can be stored using existing or new DICOM extensions.
NASA Astrophysics Data System (ADS)
Debry, E.; Malherbe, L.; Schillinger, C.; Bessagnet, B.; Rouil, L.
2009-04-01
Evaluation of human exposure to atmospheric pollution usually requires knowledge of pollutant concentrations in ambient air. In the framework of the PAISA project, which studies the influence of socio-economic status on relationships between air pollution and short-term health effects, the concentrations of gas and particle pollutants are computed over Strasbourg with the ADMS-Urban model. As for any modeling result, simulated concentrations come with uncertainties which have to be characterized and quantified. There are several sources of uncertainty: those related to input data and parameters, i.e. fields used to execute the model such as meteorological fields, boundary conditions and emissions; those related to the model formulation, because of incomplete or inaccurate treatment of dynamical and chemical processes; and those inherent to the stochastic behavior of the atmosphere and human activities [1]. Our aim here is to assess the uncertainties of the simulated concentrations with respect to input data and model parameters. In this scope, the first step consisted in bringing out the input data and model parameters that contribute most to the space and time variability of predicted concentrations. Concentrations of several pollutants were simulated for two months in winter 2004 and two months in summer 2004 over five areas of Strasbourg. The sensitivity analysis shows the dominating influence of boundary conditions and emissions. Among model parameters, the roughness and Monin-Obukhov lengths appear to have non-negligible local effects. Dry deposition is also an important dynamic process. The second step in characterizing and quantifying the uncertainties consists in attributing a probability distribution to each input datum and model parameter and in propagating the joint distribution of all data and parameters through the model, so as to associate a probability distribution with the modeled concentrations. Several analytical and numerical methods exist to perform an uncertainty analysis. We chose the Monte Carlo method, which has already been applied to atmospheric dispersion models [2, 3, 4]. The main advantage of this method is that it is insensitive to the number of perturbed parameters, but its drawbacks are its computational cost and slow convergence. To speed up convergence we used the method of antithetic variables, which takes advantage of the symmetry of probability laws. The air quality model simulations were carried out by the Association for study and watching of Atmospheric Pollution in Alsace (ASPA). The output concentration distributions can then be updated with a Bayesian method. This work is part of an INERIS research project also aiming at assessing the uncertainty of the CHIMERE dispersion model used in the Prev'Air forecasting platform (www.prevair.org) in order to deliver more accurate predictions. (1) Rao, K.S. Uncertainty Analysis in Atmospheric Dispersion Modeling, Pure and Applied Geophysics, 2005, 162, 1893-1917. (2) Beekmann, M. and Derognat, C. Monte Carlo uncertainty analysis of a regional-scale transport chemistry model constrained by measurements from the Atmospheric Pollution Over the PAris Area (ESQUIF) campaign, Journal of Geophysical Research, 2003, 108, 8559-8576. (3) Hanna, S.R., Lu, Z., Frey, H.C., Wheeler, N., Vukovich, J., Arunachalam, S., Fernau, M. and Hansen, D.A. Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain, Atmospheric Environment, 2001, 35, 891-903. (4) Romanowicz, R., Higson, H. and Teasdale, I. Bayesian uncertainty estimation methodology applied to air pollution modelling, Environmetrics, 2000, 11, 351-371.
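The antithetic-variable device pairs each uniform draw u with its mirror 1 - u, so that the errors in the pair partially cancel. A generic sketch with a toy response function:

```python
# A minimal sketch of antithetic variates: each uniform draw u is paired
# with 1 - u, which preserves the marginal distribution but induces
# negative correlation that reduces the variance of the MC mean.
import numpy as np

rng = np.random.default_rng(0)

def model(u):
    return np.exp(u)  # toy stand-in for the dispersion model response

u = rng.uniform(size=50_000)
plain = model(u).mean()
antithetic = 0.5 * (model(u) + model(1.0 - u)).mean()
exact = np.e - 1.0
print(f"plain error {abs(plain - exact):.2e}, "
      f"antithetic error {abs(antithetic - exact):.2e}")
```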
NASA Technical Reports Server (NTRS)
Glasser, M. E.; Rundel, R. D.
1978-01-01
A method for formulating these changes into the model input parameters using a preprocessor program run on a programmed data processor was implemented. The results indicate that any changes in the input parameters are small enough to be negligible in comparison to the meteorological inputs and the limitations of the model, and that such changes will not substantially increase the number of meteorological cases for which the model predicts surface hydrogen chloride concentrations exceeding public safety levels.
A new RF transmit coil for foot and ankle imaging at 7T MRI.
Santini, Tales; Kim, Junghwan; Wood, Sossena; Krishnamurthy, Narayanan; Farhat, Nadim; Maciel, Carlos; Raval, Shailesh B; Zhao, Tiejun; Ibrahim, Tamer S
2018-01-01
A four-channel Tic-Tac-Toe (TTT) transmit RF coil was designed and constructed for foot and ankle imaging at 7T MRI. Numerical simulations using an in-house developed FDTD package and experimental analyses using a homogeneous phantom show excellent agreement in terms of B1+ field distribution and s-parameters. Simulations performed on an anatomically detailed human lower leg model demonstrated a B1+ field distribution with a coefficient of variation (CV) of 23.9%/15.6%/28.8% and an average B1+ of 0.33μT/0.56μT/0.43μT for 1W input power (i.e., 0.25W per channel) in the ankle/calcaneus/mid foot, respectively. In-vivo B1+ mapping shows an average B1+ of 0.29μT over the entire foot/ankle. This newly developed RF coil also presents acceptable levels of average SAR (0.07W/kg for 10g per 1W of input power) and peak SAR (0.34W/kg for 10g per 1W of input power) over the whole lower leg. Preliminary in-vivo images of the foot/ankle were acquired using the T2-DESS MRI sequence without the use of a dedicated receive-only array. Copyright © 2017. Published by Elsevier Inc.
User interface for ground-water modeling: Arcview extension
Tsou, Ming‐shu; Whittemore, Donald O.
2001-01-01
Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. ArcView is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to ArcView that provides additional specialized functions. An ArcView interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The ArcView interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the ArcView interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.
EnviroNET: On-line information for LDEF
NASA Technical Reports Server (NTRS)
Lauriente, Michael
1993-01-01
EnviroNET is an on-line, free-form database intended to provide a centralized repository for a wide range of technical information on environmentally induced interactions of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on networks that are connected globally and is available twenty-four hours a day - every day. The information, updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, USAF, other government research facilities, industry, universities, and the European Space Agency. The models accept parameter input from the user, then calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, magnetic fields, and the ionosphere. A user-friendly, informative interface is standard for all the models and includes a pop-up help window with information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intense graphical applications to do 'What if...' scenarios. A proposed plan for developing a repository of information from the Long Duration Exposure Facility (LDEF) for a user group is presented.
SOLDESIGN user's manual copyright
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pillsbury, R.D. Jr.
1991-02-01
SOLDESIGN is a general-purpose program for calculating and plotting magnetic fields, Lorentz body forces, resistances and inductances for a system of coaxial uniform current density solenoidal elements. The program was originally written in 1980 and has been evolving ever since. SOLDESIGN can be used with either interactive (terminal) or file input. Output can be to the terminal or to a file. All input is free-field with comma or space separators. SOLDESIGN contains an interactive help feature that allows the user to examine documentation while executing the program. Input to the program consists of a sequence of word commands and numeric data. Initially, the geometry of the elements or coils is defined by specifying either the coordinates of one corner of the coil or the coil centroid, a symmetry parameter to allow certain reflections of the coil (e.g., a split pair), the radial and axial builds, and either the overall current density or the total ampere-turns (NI). A more general quadrilateral element is also available. If inductances or resistances are desired, the number of turns must be specified. Field, force, and inductance calculations also require the number of radial current sheets (or integration points). Work is underway to extend the field, force, and, possibly, inductance calculations to non-coaxial solenoidal elements.
NASA Astrophysics Data System (ADS)
Kotb, Amer
2015-05-01
The performance of an all-optical NOR gate is numerically simulated and investigated. The NOR Boolean function is realized by using a semiconductor optical amplifier (SOA) incorporated in the Mach-Zehnder interferometer (MZI) arms and exploiting the nonlinear effect of two-photon absorption (TPA). If the input pulse intensities are adjusted to be high enough, the TPA-induced phase change can be larger than the regular gain-induced phase change and hence supports ultrafast operation in the dual-rail switching mode. The numerical study is carried out by taking into account the effect of amplified spontaneous emission (ASE). The dependence of the output quality factor (Q-factor) on critical data signal and SOA parameters is examined and assessed. The obtained results confirm that the NOR gate implemented with the proposed scheme is capable of operating at a data rate of 250 Gb/s with logical correctness and a high output Q-factor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslami, E., E-mail: eeslami@iust.ac.ir; Barjasteh, A.; Morshedian, N.
2015-06-15
In this work, we numerically compare the effect of sinusoidal, triangular, and rectangular pulsed voltage profiles on the calculated particle production, electric current, and gas voltage in a dielectric barrier discharge. A total argon gas pressure of 400 Pa, a distance between dielectrics of 5 mm, a dielectric thickness of 0.7 mm, and a temperature of T = 300 K were taken as input parameters. The different driving voltage pulse shapes (triangular, rectangular, and sinusoidal) are applied with a frequency of 7 kHz and an amplitude of 700 V peak to peak. It is shown that applying a rectangular voltage, as compared with a sinusoidal or triangular voltage, increases the current peak while decreasing the peak width. The higher current density is related to higher production of charged particles, which leads to the generation of some highly active species in the gap, such as Ar* (4s level) and Ar** (4p level).
Coyle, Whitney L; Guillemain, Philippe; Kergomard, Jean; Dalmont, Jean-Pierre
2015-11-01
When designing a wind instrument such as a clarinet, it can be useful to be able to predict the playing frequencies. This paper presents an analytical method to deduce these playing frequencies using the input impedance curve. Specifically, there are two control parameters that have a significant influence on the playing frequency: the blowing pressure and the reed opening. Four effects are known to alter the playing frequency and are examined separately: the flow rate due to the reed motion, the reed dynamics, the inharmonicity of the resonator, and the temperature gradient within the clarinet. The resulting playing frequencies for the first register of a particular professional-level clarinet are found using the analytical formulas presented in this paper. The analytical predictions are then compared to numerically simulated results to validate the prediction accuracy. The main conclusion is that in general the playing frequency decreases above the oscillation threshold because of inharmonicity, then increases above the beating reed regime threshold because of the decrease of the flow rate effect.
Langley Stability and Transition Analysis Code (LASTRAC) Version 1.2 User Manual
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan
2004-01-01
LASTRAC is a general-purpose, physics-based transition prediction code released by NASA for Laminar Flow Control studies and transition research. The design and development of the LASTRAC code are aimed at providing an engineering tool that is easy to use and yet capable of dealing with a broad range of transition-related issues. It was written from scratch based on state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations by using either the linear stability theory or the linear parabolized stability equations (PSE) method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances until the skin-friction rise. This document describes the governing equations, numerical methods, code development, a detailed description of input/output parameters, and case studies for the current release of LASTRAC.
Enhanced response of non-Hermitian photonic systems near exceptional points
NASA Astrophysics Data System (ADS)
Sunada, Satoshi
2018-04-01
This paper theoretically and numerically studies the response characteristics of non-Hermitian resonant photonic systems operating near an exceptional point (EP), where two resonant eigenmodes coalesce. It is shown that a system near an EP can exhibit a non-Lorentzian frequency response, whose line shape and intensity strongly depend on the modal decay rate and coupling parameters for the input waves, unlike a normal Lorentzian response around a single resonance. In particular, it is shown that the peak intensity of the frequency response is inversely proportional to the fourth power of the modal decay rate and can be significantly enhanced with the aid of optical gain. The theoretical results are numerically verified by a full wave simulation of a microring cavity with gain. In addition, the effects of the nonlinear gain saturation and spontaneous emission are discussed. The response enhancement and its parametric dependence may be useful for designing and controlling the excitation of eigenmodes by external fields.
NASA Astrophysics Data System (ADS)
Havaej, Mohsen; Coggan, John; Stead, Doug; Elmo, Davide
2016-04-01
Rock slope geometry and discontinuity properties are among the most important factors in realistic rock slope analysis, yet they are often oversimplified in numerical simulations. This is primarily due to the difficulties in obtaining accurate structural and geometrical data, as well as in the stochastic representation of discontinuities. Recent improvements in both digital data acquisition and the incorporation of discrete fracture network data into numerical modelling software have provided better tools to capture rock mass characteristics, slope geometries and digital terrain models, allowing more effective modelling of rock slopes. The advantages of using improved data acquisition technology, including safer and faster data collection, greater areal coverage, and accurate data geo-referencing, far exceed the limitations due to orientation bias and occlusion. A key benefit of a detailed point cloud dataset is the ability to measure and evaluate discontinuity characteristics such as orientation, spacing/intensity and persistence. These data can be used to develop a discrete fracture network which can be imported into numerical simulations to study the influence of the stochastic nature of the discontinuities on the failure mechanism. We demonstrate the application of digital terrestrial photogrammetry in discontinuity characterization and distinct element simulations within a slate quarry. An accurately geo-referenced photogrammetry model is used to derive the slope geometry and to characterize geological structures. We first show how a discontinuity dataset obtained from a photogrammetry model can be used to characterize discontinuities and to develop discrete fracture networks. A deterministic three-dimensional distinct element model is then used to investigate the effect of some key input parameters (friction angle, spacing and persistence) on the stability of the quarry slope model. Finally, adopting a stochastic approach, discrete fracture networks are used as input for 3D distinct element simulations to better understand the stochastic nature of the geological structure and its effect on the quarry slope failure mechanism. The numerical modelling results highlight the influence of discontinuity characteristics and kinematics on the slope failure mechanism and the variability in the size and shape of the failed blocks.
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; An, Hyunuk; Kim, Sanghyun
2015-04-01
Soil moisture, a critical factor in hydrologic systems, plays a key role in synthesizing interactions among soil, climate, hydrological response, solute transport and ecosystem dynamics. The spatial and temporal distribution of soil moisture at a hillslope scale is essential for understanding hillslope runoff generation processes. In this study, we implement Monte Carlo simulations at the hillslope scale using a three-dimensional surface-subsurface integrated model (3D model). Numerical simulations are compared with multiple soil moisture series measured using TDR (Mini_TRASE) at 22 locations and 2 or 3 depths over a whole year at a hillslope (area: 2100 square meters) located in the Bongsunsa Watershed, South Korea. In the stochastic simulations via Monte Carlo, uncertainties in the soil parameters and input forcing are considered, and model ensembles showing good performance are selected separately for several seasonal periods. The presentation will focus on the characterization of seasonal variations of model parameters based on simulations with field measurements. In addition, structural limitations of the contemporary modeling method will be discussed.
Simulations of Control Schemes for Inductively Coupled Plasma Sources
NASA Astrophysics Data System (ADS)
Ventzek, P. L. G.; Oda, A.; Shon, J. W.; Vitello, P.
1997-10-01
Process control issues are becoming increasingly important in plasma etching. Numerical experiments are an excellent test-bench for evaluating a proposed control system. Models are generally reliable enough to provide information about controller robustness and the fitness of diagnostics. We will present results from a two-dimensional plasma transport code with a multi-species plasma chemistry obtained from a global model [1-2]. We will show a correlation of external etch parameters (e.g. input power) with internal plasma parameters (e.g. species fluxes), which in turn are correlated with etch results (etch rate, uniformity, and selectivity), either by comparison to experiment or by using a phenomenological etch model. After process characterization, a control scheme can be evaluated, since the variable to be controlled (e.g. uniformity) is related to the measurable variable (e.g. a density) and the external parameter (e.g. coil current). We will present an evaluation using the HBr-Cl2 system as an example. [1] E. Meeks and J. W. Shon, IEEE Trans. on Plasma Sci., 23, 539, 1995. [2] P. Vitello, et al., IEEE Trans. on Plasma Sci., 24, 123, 1996.
Automatic Boosted Flood Mapping from Satellite Data
NASA Technical Reports Server (NTRS)
Coltin, Brian; McMichael, Scott; Smith, Trey; Fong, Terrence
2016-01-01
Numerous algorithms have been proposed to map floods from Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. However, most require human input to succeed, either to specify a threshold value or to manually annotate training data. We introduce a new algorithm based on AdaBoost which effectively maps floods without any human input, allowing for a truly rapid and automatic response. The AdaBoost algorithm combines multiple thresholds to achieve results comparable to state-of-the-art algorithms that do require human input. We evaluate AdaBoost, as well as numerous previously proposed flood mapping algorithms, on multiple MODIS flood images, as well as on hundreds of non-flood MODIS lake images, demonstrating its effectiveness across a wide variety of conditions.
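Boosting over single-threshold weak learners is what AdaBoost with decision stumps does; the following sketch uses scikit-learn on synthetic pixel features. The band indices and values are invented placeholders, not the paper's MODIS features.

```python
# A minimal sketch of boosting single-threshold classifiers: AdaBoost
# over depth-1 decision trees (stumps), each stump being one learned
# threshold on one feature. The synthetic "pixels" are placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(3)
n = 2000
is_water = rng.integers(0, 2, size=n)
# Two toy band indices that shift between water and land pixels.
X = np.column_stack([
    rng.normal(0.2 + 0.4 * is_water, 0.1, n),  # e.g. an NDWI-like index
    rng.normal(0.7 - 0.3 * is_water, 0.1, n),  # e.g. a band-ratio index
])

# The default weak learner is a depth-1 decision stump, i.e. a single
# learned threshold per boosting round.
clf = AdaBoostClassifier(n_estimators=50).fit(X, is_water)
print("training accuracy:", clf.score(X, is_water))
```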
Extension of the PC version of VEPFIT with input and output routines running under Windows
NASA Astrophysics Data System (ADS)
Schut, H.; van Veen, A.
1995-01-01
The fitting program VEPFIT has been extended with applications running under the Microsoft Windows environment, facilitating the input and output of the VEPFIT fitting module. We have exploited the Microsoft Windows graphical user interface by making use of dialog windows, scrollbars, command buttons, etc. The user communicates with the program simply by clicking and dragging with the mouse pointing device. Keyboard actions are limited to a minimum. Upon changing one or more input parameters, the results of the modeling of the S-parameter and Ps fractions versus positron implantation energy are updated and displayed. This action can be considered the first step in the fitting procedure, upon which the user can decide to further adapt the input parameters or to forward them as initial values to the fitting routine. The modeling step has proven to be helpful for designing positron beam experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20-40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
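The sampling-based variance decomposition described here can be illustrated with a pick-freeze (Saltelli-type) estimator of first-order sensitivity indices; the toy model and parameter distributions below are placeholders, not the paper's RBE/EQD2 machinery.

```python
# Toy variance-based SA, not the paper's code: first-order indices via
# the pick-freeze estimator S_i = Cov(y_A, y_Ci) / Var(y_A).
import numpy as np

def model(p):
    # Stand-in for the evaluated function (e.g., an RBE-like response
    # depending on alpha, beta, and dose per fraction).
    a, b, d = p.T
    return a * d + b * d ** 2

rng = np.random.default_rng(1)
n = 100_000
mu, sd = [0.1, 0.05, 2.0], [0.03, 0.015, 0.1]   # assumed distributions
A = rng.normal(mu, sd, size=(n, 3))
B = rng.normal(mu, sd, size=(n, 3))

yA = model(A)
for i, name in enumerate(["alpha", "beta", "dose"]):
    Ci = B.copy()
    Ci[:, i] = A[:, i]                 # freeze parameter i at the A sample
    S_i = np.cov(yA, model(Ci))[0, 1] / yA.var(ddof=1)
    print(f"S_{name} = {S_i:.3f}")
```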
Sensitivity of a numerical wave model on wind re-analysis datasets
NASA Astrophysics Data System (ADS)
Lavidas, George; Venugopal, Vengatesan; Friedrich, Daniel
2017-03-01
Wind is the dominant process for wave generation. Detailed evaluation of metocean conditions strengthens our understanding of issues concerning potential offshore applications. However, the scarcity of buoys and the high cost of monitoring systems pose a barrier to properly defining offshore conditions. Through the use of numerical wave models, metocean conditions can be hindcasted and forecasted, providing reliable characterisations. This study reports the sensitivity of a numerical wave model to wind inputs for the Scottish region. Two re-analysis wind datasets with different spatio-temporal characteristics are used, the ERA-Interim Re-Analysis and the CFSR-NCEP Re-Analysis datasets. Different wind products alter results, affecting the accuracy obtained. The scope of this study is to assess different available wind databases and provide information concerning the most appropriate wind dataset for the specific region, based on temporal, spatial and geographic terms, for wave modelling and offshore applications. Both wind input datasets delivered results from the numerical wave model with good correlation. Wave results driven by the 1-h dataset have higher peaks and lower biases, at the expense of a higher scatter index. On the other hand, the 6-h dataset has lower scatter but higher biases. The study shows how the wind dataset affects numerical wave modelling performance, and that depending on location and study needs, different wind inputs should be considered.
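The skill metrics quoted in such comparisons (bias, scatter index, and correlation) reduce to a few lines once modeled and observed series are co-located in time. A small sketch with hypothetical significant-wave-height values, using one common definition of the scatter index:

```python
# Hedged sketch of standard wave-model validation statistics; the
# series below are invented, not Scottish-region hindcast values.
import numpy as np

def validation_stats(model_hs, buoy_hs):
    m, o = np.asarray(model_hs, float), np.asarray(buoy_hs, float)
    bias = (m - o).mean()
    rmse = np.sqrt(((m - o) ** 2).mean())
    si = rmse / o.mean()                # one common scatter-index definition
    r = np.corrcoef(m, o)[0, 1]
    return {"bias": bias, "rmse": rmse, "scatter_index": si, "r": r}

print(validation_stats([1.2, 2.0, 3.1, 1.8], [1.0, 2.2, 2.9, 1.7]))
```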
Gain control through divisive inhibition prevents abrupt transition to chaos in a neural mass model.
Papasavvas, Christoforos A; Wang, Yujiang; Trevelyan, Andrew J; Kaiser, Marcus
2015-09-01
Experimental results suggest that there are two distinct mechanisms of inhibition in cortical neuronal networks: subtractive and divisive inhibition. They modulate the input-output function of their target neurons either by increasing the input that is needed to reach maximum output or by reducing the gain and the value of maximum output itself, respectively. However, the role of these mechanisms in the dynamics of the network is poorly understood. We introduce a novel population model and numerically investigate the influence of divisive inhibition on network dynamics. Specifically, we focus on the transitions from a state of regular oscillations to a state of chaotic dynamics via period-doubling bifurcations. The model with divisive inhibition exhibits a universal transition rate to chaos (Feigenbaum behavior). In contrast, in an equivalent model without divisive inhibition, transition rates to chaos are not bounded by the universal constant (non-Feigenbaum behavior). This non-Feigenbaum behavior, when only subtractive inhibition is present, is linked to the interaction of bifurcation curves in the parameter space. Indeed, searching the parameter space showed that such interactions are impossible when divisive inhibition is included. Therefore, divisive inhibition prevents non-Feigenbaum behavior and, consequently, any abrupt transition to chaos. The results suggest that divisive inhibition in neuronal networks could play a crucial role in keeping the states of order and chaos well separated and in preventing the onset of pathological neural dynamics.
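For context, Feigenbaum behavior means the period-doubling points a_n accumulate geometrically, so that delta_n = (a_n - a_{n-1}) / (a_{n+1} - a_n) approaches the universal constant delta of about 4.669. A quick illustration with the well-known approximate bifurcation points of the logistic map (standard literature values, not from this paper):

```python
# Ratios of successive period-doubling intervals for the logistic map;
# they approach the universal Feigenbaum constant delta ~ 4.669.
a = [3.0, 3.449490, 3.544090, 3.564407, 3.568750]
for n in range(1, len(a) - 1):
    delta_n = (a[n] - a[n - 1]) / (a[n + 1] - a[n])
    print(f"delta_{n} ~ {delta_n:.3f}")
```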
The Cellular Automata for modelling of spreading of lava flow on the earth surface
NASA Astrophysics Data System (ADS)
Jarna, A.
2012-12-01
Volcanic risk assessment is a very important scientific, political and economic issue in densely populated areas close to active volcanoes. Development of effective tools for early prediction of a potential volcanic hazard and for management of crises is paramount. To date, volcanic hazard maps represent the most appropriate way to illustrate the geographical area that can potentially be affected by a volcanic event. Volcanic hazard maps are usually produced by mapping out old volcanic deposits; however, dynamic lava flow simulation is gaining popularity and can give crucial information to corroborate other methodologies. The methodology used here for the generation of volcanic hazard maps is based on numerical simulation of eruptive processes by the principle of Cellular Automata (CA). The Python script is integrated into ArcToolbox in ArcMap (ESRI), and the user can select several input and output parameters which influence surface morphology, size and shape of the flow, flow thickness, flow velocity and length of lava flows. Once the input parameters are selected, the software computes and generates hazard maps on the fly. The results can be exported to Google Maps (.kml format) to visualize the results of the computation. Data from a real lava flow are used to validate the simulation code. A comparison of the simulation results with real lava flows mapped out from satellite images will be presented.
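A lava-flow CA of this kind advances material cell by cell according to local topography. The toy sketch below is not the ArcToolbox script; the DEM, vent, spreading fraction, and periodic edges are made up, and only the basic update rule is meant to carry over: each step, a cell passes lava to lower neighbors in proportion to the free-surface drop.

```python
# Toy lava-spreading CA on a tilted-plane DEM (illustrative only;
# periodic edges via np.roll, made-up spreading fraction).
import numpy as np

def spread_lava(dem, lava, steps=200, frac=0.2):
    lava = lava.copy()
    for _ in range(steps):
        h = dem + lava                       # free-surface height
        out = np.zeros_like(lava)
        inflow = np.zeros_like(lava)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            hn = np.roll(h, (dr, dc), axis=(0, 1))
            drop = np.maximum(h - hn, 0.0)   # flow only downhill
            q = np.clip(frac * drop, 0.0, lava - out)
            out += q
            inflow += np.roll(q, (-dr, -dc), axis=(0, 1))
        lava += inflow - out                 # mass-conserving update
    return lava

dem = np.add.outer(np.arange(50.0), np.arange(50.0))  # tilted plane
lava = np.zeros_like(dem)
lava[5, 5] = 100.0                                    # vent cell
final = spread_lava(dem, lava)
print("lava volume conserved:", round(final.sum(), 6))
```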
Aircraft Hydraulic Systems Dynamic Analysis Component Data Handbook
1980-04-01
[Contents/figure-listing excerpt, garbled in extraction; recoverable entries:] Sections include the Quincke tube, described as a means to dampen acoustic noise at resonance, and the heat exchanger. Figure listings include Quincke tube input parameters with hole locations, prototype Quincke tube data, and HSFR input data for a Pulsco-type acoustic filter.
Analysis and selection of optimal function implementations in massively parallel computer
Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN
2011-05-31
An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
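A minimal sketch of the select-by-benchmark idea follows. It is purely illustrative: the patent generates selection program code on a massively parallel system, whereas this stand-in just times two Python implementations across input sizes and dispatches to the benchmarked winner.

```python
# Benchmark a set of implementations over varying input sizes, then
# dispatch each call to the implementation that won the nearest test.
import timeit

def sum_builtin(xs):
    return sum(xs)

def sum_loop(xs):
    total = 0
    for x in xs:
        total += x
    return total

IMPLS = {"builtin": sum_builtin, "loop": sum_loop}

def collect_performance(sizes, repeats=50):
    """Record the fastest implementation for each tested input size."""
    best = {}
    for n in sizes:
        data = list(range(n))
        timings = {name: timeit.timeit(lambda f=f: f(data), number=repeats)
                   for name, f in IMPLS.items()}
        best[n] = min(timings, key=timings.get)
    return best

PERF = collect_performance([10, 1_000, 100_000])

def dispatch(xs):
    """Call the benchmarked winner for the nearest measured size."""
    n = min(PERF, key=lambda k: abs(k - len(xs)))
    return IMPLS[PERF[n]](xs)

print(dispatch(list(range(500))))
```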
UCODE, a computer code for universal inverse modeling
Poeter, E.P.; Hill, M.C.
1999-01-01
This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text-only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters can also be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating system: it consists of algorithms programmed in Perl, a freeware language designed for text manipulation, and in Fortran 90, which efficiently performs numerical calculations.
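The regression core described here, a weighted least-squares objective minimized with Gauss-Newton steps and finite-difference sensitivities, fits in a short sketch. This is plain Gauss-Newton without UCODE's damping and convergence machinery, applied to a hypothetical two-parameter exponential model:

```python
# Plain weighted Gauss-Newton with forward-difference sensitivities
# (no damping), applied to a hypothetical two-parameter model.
import numpy as np

def gauss_newton(f, p0, y_obs, w, max_iter=50, tol=1e-8, h=1e-6):
    p = np.asarray(p0, float)
    W = np.diag(w)
    for _ in range(max_iter):
        r = y_obs - f(p)
        J = np.empty((len(y_obs), len(p)))
        for j in range(len(p)):                  # forward differences
            dp = np.zeros_like(p)
            dp[j] = h * max(1.0, abs(p[j]))
            J[:, j] = (f(p + dp) - f(p)) / dp[j]
        step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p = p + step
        if np.linalg.norm(step) < tol * (1.0 + np.linalg.norm(p)):
            break
    return p

t = np.arange(6.0)
f = lambda p: p[0] * np.exp(-p[1] * t)           # hypothetical model
y = f([2.0, 0.4]) + 0.01 * np.random.default_rng(2).normal(size=6)
print(gauss_newton(f, [1.0, 1.0], y, np.ones(6)))
```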
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
Suggestions for CAP-TSD mesh and time-step input parameters
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1991-01-01
Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.
Unsteady hovering wake parameters identified from dynamic model tests, part 1
NASA Technical Reports Server (NTRS)
Hohenemser, K. H.; Crews, S. T.
1977-01-01
The development of a 4-bladed model rotor that can be excited with a simple eccentric mechanism in progressing and regressing modes, with either harmonic or transient inputs, is reported. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch stirring transient inputs, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting the perturbation models from transients are demonstrated through these combined analytical and experimental studies.
CFEST Coupled Flow, Energy & Solute Transport Version CFEST005 User’s Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freedman, Vicky L.; Chen, Yousu; Gilca, Alex
2006-07-20
The CFEST (Coupled Flow, Energy, and Solute Transport) simulator described in this User’s Guide is a three-dimensional finite-element model used to evaluate groundwater flow and solute mass transport. Confined and unconfined aquifer systems, as well as constant and variable density fluid flows, can be represented with CFEST. For unconfined aquifers, the model uses a moving boundary for the water table, deforming the numerical mesh so that the uppermost nodes are always at the water table. For solute transport, changes in concentration of a single dissolved chemical constituent are computed for advective and hydrodynamic transport, linear sorption represented by a retardation factor, and radioactive decay. Although several thermal parameters described in this User’s Guide are required inputs, thermal transport has not yet been fully implemented in the simulator. Once fully implemented, transport of thermal energy in the groundwater and solid matrix of the aquifer can also be used to model aquifer thermal regimes. The CFEST simulator is written in the FORTRAN 77 language, following American National Standards Institute (ANSI) standards. Execution of the CFEST simulator is controlled through three required text input files. These input files use a structured format of associated groups of input data. Example input data lines are presented for each file type, as well as a description of the structured FORTRAN data format. Detailed descriptions of all input requirements, output options, and program structure and execution are provided in this User’s Guide. Required inputs for auxiliary CFEST utilities that aid in post-processing data are also described. Global variables are defined for those with access to the source code. Although CFEST is a proprietary code (CFEST, Inc., Irvine, CA), the Pacific Northwest National Laboratory retains permission to maintain its own source, and to distribute executables to Hanford subcontractors.
1978-07-01
[Report excerpt, garbled in extraction; recoverable text:] Aerodynamic data were input into the computer program, and the equations of motion were numerically integrated in time using a fourth-order Runge-Kutta algorithm to provide time histories of the aircraft spinning motion. Appendix sections cover the equations defining the force and moment coefficients and the equations for transferring aerodynamic data inputs to the proper horizontal center of gravity (AEDC-TR-77-126).
Numerical Function Generators Using LUT Cascades
2007-06-01
[Report excerpt, garbled in extraction; recoverable text:] The numerical function can be specified either algebraically (for example, sin(x)) or as a table of input/output values. The user defines the numerical function using the syntax of Scilab or specifies it directly; by changing the parser of the system, any format can be used for design entry. Cited references include "Methods for Multiple-Valued Input Address Generators," Proc. 36th IEEE Int'l Symp. Multiple-Valued Logic (ISMVL '06), May 2006, and Scilab 3.0, INRIA-ENPC.
Transcending binary logic by gating three coupled quantum dots.
Klein, Michael; Rogge, S; Remacle, F; Levine, R D
2007-09-01
Physical considerations supported by numerical solution of the quantum dynamics including electron repulsion show that three weakly coupled quantum dots can robustly execute a complete set of logic gates for computing using three valued inputs and outputs. Input is coded as gating (up, unchanged, or down) of the terminal dots. A nanosecond time scale switching of the gate voltage requires careful numerical propagation of the dynamics. Readout is the charge (0, 1, or 2 electrons) on the central dot.
Impact Of The Material Variability On The Stamping Process: Numerical And Analytical Analysis
NASA Astrophysics Data System (ADS)
Ledoux, Yann; Sergent, Alain; Arrieux, Robert
2007-05-01
The finite element simulation is a very useful tool in the deep drawing industry. It is used in particular for the development and validation of new stamping tools, and it allows cost and time for tooling design and set-up to be decreased. One of the most important difficulties in obtaining good agreement between the simulation and the real process, however, comes from the definition of the numerical conditions (mesh, punch travel speed, limit conditions, ...) and of the parameters which model the material behavior. Indeed, in the press shop, when the sheet set changes, a variation of the formed part geometry is often observed, according to the variability of the material properties between these different sets. This last parameter probably represents one of the main sources of process deviation once the process is set up. That is why it is important to study the influence of material data variation on the geometry of a classical stamped part. The chosen geometry is an omega-shaped part because of its simplicity and because it is representative in the automotive industry (car body reinforcement). Moreover, it shows important springback deviations. An isotropic behaviour law is assumed. The impact of the statistical deviation of the three law coefficients characterizing the material, and of the friction coefficient, around their nominal values is tested. A Gaussian distribution is assumed, and the impact on the geometry variation is studied by FE simulation. Another approach is also envisaged, consisting of modeling the process variability by a mathematical model; then, as a function of the input parameter variability, an analytical model is defined which yields the part geometry variability around the nominal shape. These two approaches allow the process capability to be predicted as a function of the material parameter variability.
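The Monte Carlo leg of this workflow is easy to sketch. Below, the FE model is replaced by a made-up surrogate for the springback angle, and Gaussian scatter is placed on hypothetical hardening-law and friction coefficients; only the propagation pattern is intended to carry over.

```python
# Gaussian input scatter propagated through a made-up springback
# surrogate; in practice springback_angle would be the FE simulation.
import numpy as np

rng = np.random.default_rng(3)

def springback_angle(K, n, r, mu):
    # Hypothetical surrogate response for the omega-shaped part.
    return 5.0 + 0.002 * K * n - 8.0 * mu + 0.5 * r

runs = 10_000
K  = rng.normal(500.0, 15.0, runs)   # hardening coefficient (MPa)
n  = rng.normal(0.20, 0.01, runs)    # hardening exponent
r  = rng.normal(1.60, 0.08, runs)    # third law coefficient (illustrative)
mu = rng.normal(0.12, 0.02, runs)    # friction coefficient

angle = springback_angle(K, n, r, mu)
print(f"springback angle: mean {angle.mean():.2f}, std {angle.std():.3f} deg")
```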
NASA Astrophysics Data System (ADS)
Hasan, M.; Helal, A.; Gabr, M.
2014-12-01
In this project, we focus on providing a computer-automated platform for a better assessment of the potential failures and retrofit measures of flood-protecting earth structures, e.g., dams and levees. Such structures play an important role during extreme flooding events as well as during normal operating conditions. Furthermore, they are part of other civil infrastructure such as water storage and hydropower generation. Hence, there is a clear need for accurate evaluation of stability and functionality levels during their service lifetime so that rehabilitation and maintenance costs are effectively guided. Among condition assessment approaches based on the factor of safety, the limit states (LS) approach utilizes numerical modeling to quantify the probability of potential failures. The parameters for LS numerical modeling include i) geometry and side slopes of the embankment, ii) loading conditions in terms of the rate of rise and duration of high water levels in the reservoir, and iii) cycles of rising and falling water levels simulating the effect of consecutive storms throughout the service life of the structure. Sample data regarding the correlations of these parameters are available through previous research studies. We have unified these criteria and extended the risk assessment in terms of loss of life through the implementation of a graphical user interface that automates the input parameters, divides the data into training and testing sets, and then feeds them into an Artificial Neural Network (ANN) tool through MATLAB programming. The ANN modeling allows us to predict risk values of flood protective structures based on user feedback quickly and easily. In the future, we expect to fine-tune the software by adding extensive data on variations of parameters.
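The described workflow (split the data, train an ANN, predict risk) can be sketched as follows. The study used MATLAB's neural network tooling; this stand-in uses scikit-learn, and the parameter ranges and risk response are invented for illustration.

```python
# Train/test split plus a small neural-network regressor; invented
# embankment parameters and risk response, scikit-learn not MATLAB.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
# Columns: side slope, water rise rate, high-water duration, storm cycles
X = rng.uniform([2.0, 0.1, 1.0, 1.0], [4.0, 2.0, 30.0, 20.0], size=(500, 4))
risk = 0.3 * X[:, 1] + 0.02 * X[:, 2] + 0.01 * X[:, 3] - 0.05 * X[:, 0]

X_tr, X_te, y_tr, y_te = train_test_split(X, risk, test_size=0.25,
                                          random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)
print("test R^2:", round(ann.score(X_te, y_te), 3))
```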
Silva, M E T; Parente, M P L; Brandão, S; Mascarenhas, T; Natal Jorge, R M
2018-04-11
The mechanical characteristics of the female pelvic floor are relevant to understanding pelvic floor dysfunctions (PFD) and how they are related to changes in biomechanical behavior. Urinary incontinence (UI) and pelvic organ prolapse (POP) are the most common pathologies, which can be associated with changes in the mechanical properties of the supportive structures in the female pelvic cavity. PFD have been studied through different methods, from experimental tensile tests using tissues from fresh female cadavers or tissues collected at the time of a transvaginal hysterectomy procedure, to imaging techniques. In this work, an inverse finite element analysis (FEA) was applied to understand the passive and active behavior of the pubovisceralis muscle (PVM) during Valsalva maneuver and muscle active contraction, respectively. Individual numerical models of women without pathology, with stress UI (SUI) and with POP were built based on magnetic resonance images, including the PVM and surrounding structures. The passive and active material parameters of a transversely isotropic hyperelastic constitutive model were estimated for the three groups. The values of the material constants were significantly higher for the women with POP when compared with the other two groups. The PVM of women with POP showed the highest stiffness. Additionally, the influence of these parameters was analyzed by evaluating the stress-strain and force-displacement responses. The force produced by the PVM in women with POP was 47% and 82% higher when compared to women without pathology and with SUI, respectively. The inverse FEA allowed the material parameters of the PVM to be estimated using input information acquired non-invasively. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pescaru, A.; Oanta, E.; Axinte, T.; Dascalescu, A.-D.
2015-11-01
Computer aided engineering is based on models of phenomena which are expressed as algorithms. The implementations of the algorithms are usually software applications which process a large volume of numerical data, regardless of the size of the input data. In this way, finite element method applications used to include an input data generator which created the entire volume of geometrical data, starting from the initial geometrical information and the parameters stored in the input data file. Moreover, there were several data processing stages, such as: renumbering of the nodes, meant to minimize the bandwidth of the system of equations to be solved; computation of the equivalent nodal forces; computation of the element stiffness matrices; assembly of the system of equations; solution of the system of equations; and computation of the secondary variables. Modern software applications use pre-processing and post-processing programs to handle the information easily. Beyond this example, CAE applications use various stages of complex computation, the accuracy of the final results being of particular interest. Over time, the development of CAE applications has been a constant concern of the authors, and the accuracy of the results has been a very important target. The paper presents the various computing techniques which were devised and implemented in the resulting applications: finite element method programs, finite difference element method programs, applied general numerical methods applications, data generators, graphical applications, and experimental data reduction programs. In this context, the use of extended precision data types was one of the solutions, the limitations being imposed by the size of the memory which may be allocated. To avoid the memory-related problems, the data was stored in files. To minimize the execution time, part of the file was accessed using dynamic memory allocation facilities. One of the most important consequences of the paper is the design of a library which includes the optimized solutions previously tested, which may be used for the easy development of original CAE cross-platform applications. Last but not least, besides the generality of the data type solutions, the development of a software library is targeted which may be used for the easy development of node-based CAE applications, each node having several known or unknown parameters, the system of equations being automatically generated and solved.
Sensitivity analysis and nonlinearity assessment of steam cracking furnace process
NASA Astrophysics Data System (ADS)
Rosli, M. N.; Sudibyo, Aziz, N.
2017-11-01
In this paper, a sensitivity analysis and nonlinearity assessment of the cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables, and to identify the interactions between the parameters. The result of the factorial design is used as a screening step to reduce the number of parameters and, subsequently, the complexity of the model. It shows that out of six input parameters, four are significant. After the screening is completed, step tests are performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and the feed composition.
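A two-level fractional factorial screening of six factors can be set up in a few lines. The sketch below uses a generic 2^(6-2) design with generators E = ABC and F = BCD and a made-up furnace response; it is not necessarily the design or model used in the paper.

```python
# Generic 2^(6-2) fractional factorial: 16 runs, six coded factors,
# main effects used to screen out the least influential inputs.
import itertools
import numpy as np

base = np.array(list(itertools.product([-1, 1], repeat=4)))  # A, B, C, D
A, B, C, D = base.T
E, F = A * B * C, B * C * D                                  # generated columns
design = np.column_stack([A, B, C, D, E, F])                 # 16 runs

def furnace(run):
    # Stand-in response (e.g., a yield); replace with the real model.
    a, b, c, d, e, f = run
    return 50 + 4 * a + 2.5 * c - 3 * f + 0.5 * b

y = np.array([furnace(run) for run in design])
for name, col in zip("ABCDEF", design.T):
    effect = y[col == 1].mean() - y[col == -1].mean()
    print(f"main effect {name}: {effect:+.2f}")
```

Main effects near zero flag candidates for elimination, which is the screening step the abstract describes.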
2017-05-01
Environmental Security Technology Certification Program (ESTCP), ERDC/EL TR-17-7, May 2017: Evaluation of Uncertainty in Constituent Input Parameters. The ... Environmental Evaluation and Characterization System (TREECS™) was applied to a groundwater site and a surface water site to evaluate the sensitivity ...
NASA Astrophysics Data System (ADS)
Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.
2018-01-01
Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos for uncertainty quantification, which in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics, as well as local and global sensitivity measures, is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest (≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
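The non-intrusive construction is simple to demonstrate in one dimension: sample the random input, evaluate the model, and regress onto an orthogonal polynomial basis. A minimal sketch with a toy model (not HydroGeoSphere) and a standard normal input, using probabilists' Hermite polynomials:

```python
# 1-D non-intrusive polynomial chaos by least-squares regression on
# probabilists' Hermite polynomials of a standard normal input.
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

model = lambda x: np.exp(0.3 * x)        # stand-in quantity of interest

rng = np.random.default_rng(5)
xi = rng.standard_normal(2000)           # input samples
P = 6                                    # expansion order
Psi = hermevander(xi, P)                 # design matrix He_0 .. He_P
coef, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)

# Orthogonality under N(0,1): E[He_j * He_k] = k! when j == k, else 0,
# so the mean and variance read directly off the coefficients.
mean = coef[0]
var = sum(coef[k] ** 2 * factorial(k) for k in range(1, P + 1))
print(f"PCE mean {mean:.4f}, variance {var:.4f}")
print(f"exact mean {np.exp(0.045):.4f}")  # E[exp(0.3 Z)] = exp(0.09/2)
```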
NASA Astrophysics Data System (ADS)
Lafolie, François; Cousin, Isabelle; Mollier, Alain; Pot, Valérie; Moitrier, Nicolas; Balesdent, Jérome; bruckler, Laurent; Moitrier, Nathalie; Nouguier, Cédric; Richard, Guy
2014-05-01
Models describing soil functioning are valuable tools for addressing challenging issues related to agricultural production, soil protection or biogeochemical cycles. Coupling models that address different scientific fields is required in order to develop numerical tools able to simulate the complex interactions and feed-backs occurring within a soil profile in interaction with climate and human activities. We present here a component-based modelling platform named "VSoil" that aims at designing, developing, implementing and coupling numerical representations of biogeochemical and physical processes in soil, from the aggregate to the profile scales. The platform consists of four software tools: i) Vsoil_Processes, dedicated to the conceptual description of processes and of their inputs and outputs; ii) Vsoil_Modules, devoted to the development of numerical representations of elementary processes as modules; iii) Vsoil_Models, which permits the coupling of modules to create models; and iv) Vsoil_Player, for running the model and the primary analysis of results. The platform is designed to be a collaborative tool, helping scientists to share not only their models, but also the scientific knowledge on which the models are built. The platform is based on the idea that processes of any kind can be described and characterized by their inputs (state variables required) and their outputs. The links between the processes are automatically detected by the platform software. For any process, several numerical representations (modules) can be developed and made available to platform users. When developing modules, the platform takes care of many aspects of the development task so that the user can focus on numerical calculations. Fortran 2008 and C++ are the supported languages, and existing codes can be easily incorporated into platform modules. Building a model from available modules simply requires selecting the processes being accounted for and, for each process, a module. During this task, the platform displays available modules and checks the compatibility between the modules. The model (main program) is automatically created when compatible modules have been selected for all the processes. A GUI is automatically generated to help the user provide parameters and initial situations. Numerical results can be immediately visualized, archived and exported. The platform also provides facilities to carry out sensitivity analysis. Parameter estimation and links with databases are being developed. The platform can be freely downloaded from the web site (http://www.inra.fr/sol_virtuel/) with a set of processes, variables, modules and models. However, it is designed so that any user can add their own components. These add-ons can be shared with co-workers by means of an export/import mechanism using e-mail. The add-ons can also be made available to the whole community of platform users when developers ask for it. A filtering tool is available to explore the content of the platform (processes, variables, modules, models).
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1996-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the NASA 1A control law. Each maneuver is to be realized by the pilot applying square wave inputs to specific pilot station controls. Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
2017-10-31
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
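The central idea, treating the mechanical input power as a time-correlated (AR(1)-type) process and augmenting it into the EnKF state, can be sketched on a toy scalar system. Everything below (the one-state dynamics, noise scales, observation error) is invented for illustration; only the augmented-state update pattern reflects the abstract.

```python
# Toy EnKF with an AR(1) mechanical input power augmented into the
# state; cross-covariance with the observed state corrects the power.
import numpy as np

rng = np.random.default_rng(6)
n_ens, n_steps = 200, 100
dt, tau, sigma = 0.1, 2.0, 0.05         # AR(1) correlation time and amplitude
phi = np.exp(-dt / tau)

x  = rng.normal(1.0, 0.1, n_ens)        # toy dynamic state (e.g., speed)
pm = rng.normal(0.8, 0.1, n_ens)        # uncertain mechanical input power
truth_x, obs_err = 1.0, 0.02

for _ in range(n_steps):
    # Forecast: state relaxes toward the input power; the power evolves
    # as a mean-reverting AR(1) process (time-correlated noise).
    x  = x + dt * (pm - x) + 0.01 * rng.normal(size=n_ens)
    pm = 0.9 + phi * (pm - 0.9) \
         + sigma * np.sqrt(1 - phi ** 2) * rng.normal(size=n_ens)

    # Analysis: perturbed scalar observations of x update both x and pm.
    y = truth_x + obs_err * rng.normal(size=n_ens)
    innov = y - x
    denom = x.var(ddof=1) + obs_err ** 2
    gain_x  = x.var(ddof=1) / denom
    gain_pm = np.cov(pm, x, ddof=1)[0, 1] / denom
    x, pm = x + gain_x * innov, pm + gain_pm * innov

print(f"analysis means: x = {x.mean():.3f}, pm = {pm.mean():.3f}")
```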
NASA Astrophysics Data System (ADS)
Koliopoulos, T. C.; Koliopoulou, G.
2007-10-01
We present an input-output solution for simulating the associated behavior and optimized physical needs of an environmental system. The simulations and numerical analysis determined the boundary loads, and the areas required to interact, for the proper physical operation of a complicated environmental system. A case study was conducted to simulate the optimum balance of an environmental system based on an artificially intelligent, multi-interacting input-output numerical scheme. The numerical results were focused on probable further environmental management techniques, with the objective of minimizing any risks and associated environmental impact to protect the quality of public health and the environment. Our conclusions allow the associated risks to be minimized, focusing on probable emergency cases, to protect the surrounding anthropogenic or natural environment. The lining magnitude could therefore be determined for any useful associated technical works to support the environmental system under examination, taking into account its particular boundary necessities and constraints.
INDES User's guide multistep input design with nonlinear rotorcraft modeling
NASA Technical Reports Server (NTRS)
1979-01-01
The INDES computer program, a multistep input design program used as part of a data processing technique for rotorcraft systems identification, is described. Flight test inputs based on INDES improve the accuracy of parameter estimates. The input design algorithm, program input, and program output are presented.
Using the NASA GRC Sectored-One-Dimensional Combustor Simulation
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Mehta, Vishal R.
2014-01-01
The document is a user manual for the NASA GRC Sectored-One-Dimensional (S-1-D) Combustor Simulation. It consists of three sections. The first is a very brief outline of the mathematical and numerical background of the code, along with a description of the non-dimensional variables on which it operates. The second section describes how to run the code and includes an explanation of the input file. The input file contains the parameters necessary to establish an operating point as well as the associated boundary conditions (i.e. how it is fed and terminated) of a geometrically configured combustor. It also describes the code output. The third section describes the configuration process and utilizes a specific example combustor to do so. Configuration consists of geometrically describing the combustor (section lengths, axial locations, and cross sectional areas) and locating the fuel injection point and flame region. Configuration requires modifying the source code and recompiling. As such, an executable utility is included with the code which will guide the requisite modifications and ensure that they are done correctly.
Human voice quality measurement in noisy environments.
Ueng, Shyh-Kuang; Luo, Cheng-Ming; Tsai, Tsung-Yu; Yeh, Hsuan-Chen
2015-01-01
Computerized acoustic voice measurement is essential for the diagnosis of vocal pathologies. Previous studies showed that ambient noise has a significant influence on the accuracy of voice quality assessment. This paper presents a voice quality assessment system that can accurately measure qualities of voice signals even when the input voice data are contaminated by low-frequency noise. The ambient noises in our living rooms and laboratories were collected and their frequencies analyzed. Based on the analysis, a filter was designed to reduce the noise level of the input voice signal. Improved numerical algorithms are then employed to extract voice parameters from the voice signal to reveal the health of the voice. Compared with MDVP and Praat, the proposed method outperforms these two widely used programs in measuring fundamental frequency and harmonics-to-noise ratio, and its performance is comparable to these two well-known programs in computing jitter and shimmer. The proposed voice quality assessment method is resistant to low-frequency noise and can measure human voice quality in environments filled with noise from air conditioners, ceiling fans and computer cooling fans.
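The low-frequency noise-suppression step can be illustrated with a zero-phase high-pass filter; the cutoff, sampling rate, and signals below are assumptions for illustration, not the authors' filter design.

```python
# Zero-phase high-pass Butterworth filtering of a toy voiced signal
# contaminated with low-frequency hum (assumed cutoff and sample rate).
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16_000                                   # assumed sampling rate (Hz)
sos = butter(4, 70.0, btype="highpass", fs=fs, output="sos")

t = np.arange(0, 1.0, 1 / fs)
voice = np.sin(2 * np.pi * 220 * t)           # toy voiced signal (220 Hz)
hum = 0.5 * np.sin(2 * np.pi * 50 * t)        # low-frequency ambient noise
clean = sosfiltfilt(sos, voice + hum)         # zero-phase filtering

# With 1 s of data, rfft bin k corresponds to k Hz.
print("residual 50 Hz amplitude:",
      np.abs(np.fft.rfft(clean)[50]).round(1))
```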
Computational simulation of weld microstructure and distortion by considering process mechanics
NASA Astrophysics Data System (ADS)
Mochizuki, M.; Mikami, Y.; Okano, S.; Itoh, S.
2009-05-01
Highly precise fabrication of welded materials is in great demand, and so microstructure and distortion controls are essential. Furthermore, consideration of process mechanics is important for intelligent fabrication. In this study, the microstructure and hardness distribution in multi-pass weld metal are evaluated by computational simulations under the conditions of multiple heat cycles and phase transformation. Because conventional CCT diagrams of weld metal are not available even for single-pass weld metal, new diagrams for multi-pass weld metals are created. The weld microstructure and hardness distribution are precisely predicted when using the created CCT diagram for multi-pass weld metal and calculating the weld thermal cycle. Weld distortion is also investigated by using numerical simulation with a thermal elastic-plastic analysis. In conventional evaluations of weld distortion, the average heat input has been used as the dominant parameter; however, it is difficult to consider the effect of molten pool configurations on weld distortion based only on the heat input. Thus, the effect of welding process conditions on weld distortion is studied by considering molten pool configurations, determined by temperature distribution and history.
Thermo-mechanical modeling of laser treatment on titanium cold-spray coatings
NASA Astrophysics Data System (ADS)
Paradiso, V.; Rubino, F.; Tucci, F.; Astarita, A.; Carlone, P.
2018-05-01
Titanium coatings are very attractive to several industrial fields, especially aeronautics, due to their enhanced corrosion resistance and wear properties as well as improved compatibility with carbon fiber reinforced plastic (CFRP) materials. Cold-sprayed titanium coatings, among the other deposition processes, are finding widespread use in high performance applications, and post-deposition treatments are often used to modify the microstructure of the cold-sprayed layer. Laser treatments can noticeably improve the surface properties of titanium coatings when the process parameters are properly set. On the other hand, the high heat input required to melt titanium particles may result in an excessive temperature increase even in the substrate. This paper introduces a thermo-mechanical model to simulate the effects of laser treatment on a cold-sprayed titanium coating as well as on the aluminium substrate. The proposed thermo-mechanical finite element model considers the transient temperature field due to the laser source and the applied boundary conditions, using them as input loads for the subsequent stress-strain analysis. Numerical outcomes highlighted the relevance of thermal gradients and thermally induced stresses and strains in promoting damage of the coating.
NASA Astrophysics Data System (ADS)
Ngo, N. H.; Hartmann, J.-M.
2017-12-01
We propose a strategy to generate parameters of the Hartmann-Tran profile (HTp) by simultaneously using first principle calculations and broadening coefficients deduced from Voigt/Lorentz fits of experimental spectra. We start from reference absorptions simulated, at pressures between 10 and 950 Torr, using the HTp with parameters recently obtained from high quality experiments for the P(1) and P(17) lines of the 3-0 band of CO in He, Ar and Kr. Using requantized Classical Molecular Dynamics Simulations (rCMDS), we calculate spectra under the same conditions. We then correct them using a single parameter deduced from Lorentzian fits of both reference and calculated absorptions at a single pressure. The corrected rCMDS spectra are then simultaneously fitted using the HTp, yielding the parameters of this model and associated spectra. Comparisons between the retrieved and input (reference) HTp parameters show a quite satisfactory agreement. Furthermore, differences between the reference spectra and those computed with the HT model fitted to the corrected-rCMDS predictions are much smaller than those obtained with a Voigt line shape. Their full amplitudes are in most cases smaller than 1%, and often below 0.5%, of the peak absorption. This opens the route to completing spectroscopic databases using calculations and the very numerous broadening coefficients available from Voigt fits of laboratory spectra.
Incorporating uncertainty in RADTRAN 6.0 input files.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John
Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the parameter shape and the minimum and maximum of the distribution, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provide installation instructions as well as a description and user guide for the uncertainty engine.
NASA Astrophysics Data System (ADS)
Novikova, Y.; Zubanov, V.
2018-01-01
The article describes a numerical investigation of the influence of inlet air flow irregularity on the characteristics of a turbofan engine. The investigated fan has wide blades, an inlet diameter of about 2 meters, a pressure ratio of about 1.6 and a bypass ratio of about 4.8. The flow irregularity was simulated by inserting a flap into the fan inlet channel. The flap was inserted by an amount of 10 to 22.5% of the inlet channel diameter, in increments of 2.5%. A nonlinear harmonic analysis (NLH analysis) in the NUMECA Fine/Turbo software was used to study the flow irregularity. The behavior of the calculated LPC characteristics follows the experimental behavior, but there is a quantitative difference: the calculated efficiency and pressure ratio of the booster are consistent with the experimental data within 3% and 2%, respectively, and the calculated efficiency and pressure ratio of the fan duct within 4% and 2.5%, respectively. Increasing the level of air irregularity at the input stage of the fan reduces the calculated mass flow, maximum pressure ratio and efficiency. With a flap insertion of 12.5%, the maximum air flow is reduced by 1.44%, the maximum pressure ratio by 2.6% and the efficiency by 3.1%.
Sokol, Serguei; Portais, Jean-Charles
2015-01-01
The dynamics of label propagation in a stationary metabolic network during an isotope labeling experiment can provide highly valuable information on the network topology, metabolic fluxes, and the size of metabolite pools. However, major issues, both in the experimental set-up and in the accompanying numerical methods, currently limit the application of this approach. Here, we propose a method to apply novel types of label inputs, sinusoidal or more generally periodic label inputs, to address both the practical and numerical challenges of dynamic labeling experiments. By considering a simple metabolic system, i.e. a linear, non-reversible pathway of arbitrary length, we develop mathematical descriptions of label propagation for both classical and novel label inputs. Theoretical developments and computer simulations show that the application of rectangular periodic pulses has both numerical and practical advantages over other approaches. We applied the strategy to estimate fluxes in a simulated experiment performed on a complex metabolic network (the central carbon metabolism of Escherichia coli) to further demonstrate its value under conditions close to those of real experiments. This study provides a theoretical basis for the rational interpretation of label propagation curves in real experiments, and will help identify the strengths, pitfalls and limitations of such experiments. The cases described here can also be used as test cases for more general numerical methods aimed at identifying network topology, analyzing metabolic fluxes or measuring concentrations of metabolites. PMID:26641860
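The linear, non-reversible pathway with a rectangular periodic label input can be simulated directly as a chain of first-order pools. The flux, pool sizes, and period below are hypothetical; the equations follow the usual enrichment-balance form d(e_i)/dt = (v/M_i)(e_{i-1} - e_i).

```python
# Label propagation through a linear, non-reversible pathway driven by
# a rectangular periodic label input (illustrative values throughout).
import numpy as np
from scipy.integrate import solve_ivp

v = 1.0                          # pathway flux (mM/min)
M = np.array([0.5, 1.0, 2.0])    # metabolite pool sizes (mM)
period, duty = 4.0, 0.5          # rectangular input: 4 min period, 50% on

def label_input(t):
    return 1.0 if (t % period) < duty * period else 0.0

def rhs(t, e):
    # d(e_i)/dt = (v / M_i) * (e_{i-1} - e_i), with e_0 the input.
    upstream = np.concatenate(([label_input(t)], e[:-1]))
    return (v / M) * (upstream - e)

sol = solve_ivp(rhs, (0.0, 40.0), np.zeros(3), max_step=0.01,
                t_eval=np.linspace(0, 40, 401))
print("final enrichment snapshot:", sol.y[:, -1].round(3))
```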
Numerical study on 3D composite morphing actuators
NASA Astrophysics Data System (ADS)
Oishi, Kazuma; Saito, Makoto; Anandan, Nishita; Kadooka, Kevin; Taya, Minoru
2015-04-01
A number of actuators use the deformation of electroactive polymers (EAPs), but few papers have focused on the performance of 3D morphing actuators based on an analytical approach, due mainly to their complexity. The present paper introduces a numerical analysis approach for the large-scale deformation and motion of a 3D half-dome-shaped actuator composed of a thin soft membrane (passive material) and EAP strip actuators (EAP active coupons with electrodes on both surfaces), where the location of the active EAP strips is a key parameter. The Simulia/Abaqus Static and Implicit analysis codes, whose main feature is a high-precision contact analysis capability among structures, are used, focusing on the whole process of the membrane touching and wrapping around the object. The unidirectional properties of the EAP coupon actuator are used as the input data set for the material properties in the simulation and for the verification of our numerical model, the verification being made by comparison with the existing 2D solution. The numerical results demonstrate the whole deformation process of the membrane wrapping around not only smoothly shaped objects like a sphere or an egg, but also irregularly shaped objects. A parametric study reveals the proper placement of the EAP coupon actuators, with modification of the dome shape to induce the relevant large-scale deformation. The numerical simulation for the 3D soft actuators shown in this paper could be applied to a wider range of soft 3D morphing actuators.
NASA Astrophysics Data System (ADS)
Briones-Torres, J. A.; Pernas-Salomón, R.; Pérez-Álvarez, R.; Rodríguez-Vargas, I.
2016-05-01
Gapless bilayer graphene (GBG), like monolayer graphene, is a material system with unique properties, such as anti-Klein tunneling and intrinsic Fano resonances. These properties rely on the gapless parabolic dispersion relation and the chiral nature of bilayer graphene electrons. In addition, propagating and evanescent electron states coexist inherently in this material, giving rise to these exotic properties. In this sense, bilayer graphene is unique, since in most material systems in which Fano resonance phenomena are manifested, an external source that provides extended states is required. However, from a numerical standpoint, the presence of evanescent-divergent states in the linear superposition of eigenfunctions representing the Dirac spinors leads to numerical degradation (the so-called Ωd problem) in practical applications of the standard Coefficient Transfer Matrix (K) method used to study charge transport properties in bilayer graphene based multi-barrier systems. We present here a straightforward procedure based on the hybrid compliance-stiffness matrix method (H) that can overcome this numerical degradation. Our results show that, in contrast to the standard matrix method, the proposed H method is suitable for studying the transmission and transport properties of electrons in GBG superlattices, since it remains numerically stable regardless of the size of the superlattice and the range of values taken by the input parameters: the energy and angle of the incident electrons, the barrier height, and the thickness and number of barriers. We show that the matrix determinant can be used as a test of the numerical accuracy in real calculations.
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the effectiveness of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated using the two sensitivity analysis methods: the simulation length, the parameter range, the model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential, and 2) how the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the sensitivity analysis methods.
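Both methods follow the same sample-run-analyze pattern, which the SALib package exposes directly. A hedged sketch with a toy model and made-up bounds (the names mimic SAC-SMA parameters, but the setup is not the study's):

```python
# Sobol' and FAST indices via SALib's documented workflow; the model
# and bounds are illustrative placeholders, not the SAC-SMA setup.
import numpy as np
from SALib.sample import saltelli, fast_sampler
from SALib.analyze import sobol, fast

problem = {
    "num_vars": 3,
    "names": ["UZTWM", "LZTWM", "PCTIM"],   # example parameter names
    "bounds": [[10, 150], [10, 500], [0, 0.1]],
}

model = lambda X: 0.01 * X[:, 0] + np.sqrt(X[:, 1]) + 50 * X[:, 2]

X_sob = saltelli.sample(problem, 1024)
S_sob = sobol.analyze(problem, model(X_sob))

X_fst = fast_sampler.sample(problem, 1024)
S_fst = fast.analyze(problem, model(X_fst))

print("Sobol' S1:", np.round(S_sob["S1"], 3))
print("FAST   S1:", np.round(S_fst["S1"], 3))
```

Agreement between the two first-order rankings is exactly the coherence check the abstract describes.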
Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2015-12-01
For computationally expensive climate models, Monte-Carlo approaches of exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
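The sparsity idea can be illustrated with a generic L1-penalized (compressive-sensing-style) regression over a polynomial basis; this is a conceptual stand-in, not the Weighted Iterative Bayesian Compressive Sensing algorithm or the ACME setup, and the recovered terms depend on the penalty.

```python
# Sparse polynomial surrogate from few model runs: L1-penalized
# regression keeps only the important basis terms (illustrative).
import numpy as np
from itertools import combinations_with_replacement
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
d, n_runs = 20, 120                     # many inputs, few model runs
X = rng.uniform(-1, 1, size=(n_runs, d))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] * X[:, 7] + 0.8 * X[:, 11]  # sparse truth

# Total-degree-2 polynomial basis: constant, linear, quadratic terms.
terms = [()] + [(i,) for i in range(d)] + \
        list(combinations_with_replacement(range(d), 2))
Phi = np.column_stack([np.prod(X[:, t], axis=1) if t else np.ones(n_runs)
                       for t in terms])

surrogate = Lasso(alpha=1e-3, max_iter=50_000,
                  fit_intercept=False).fit(Phi, y)
kept = [(terms[i], round(c, 2))
        for i, c in enumerate(surrogate.coef_) if abs(c) > 0.05]
print("recovered terms:", kept)
```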
Karmakar, Chandan; Udhayakumar, Radhagayathri K; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m and the number of bins M, which replaces the tolerance parameter r used by the existing approximate entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we analysed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two (N and M); the impact of varying all three input parameters, however, had not yet been studied. Since DistEn is predominantly aimed at analysing short-length HRV signals, it is important to comprehensively study the stability, consistency, and performance of the measure using multiple case studies. In this study, we examined the impact of changing the input parameters on DistEn for synthetic and physiological signals. We also compared the variation of DistEn, and its performance in distinguishing physiological (elderly from young) and pathological (healthy from arrhythmia) conditions, with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and best performance in differentiating physiological and pathological conditions under various input parameter settings among the reported complexity measures. In conclusion, DistEn is found to be the best of these measures for analysing short-length HRV time series.
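For illustration, a direct reading of the published DistEn definition (not the authors' implementation) can be sketched as below; the default m and M values are arbitrary choices.

```python
# Sketch of distribution entropy (DistEn): embed the series, histogram the
# pairwise Chebyshev distances into M bins, and take normalized Shannon entropy.
import numpy as np

def dist_en(u, m=2, M=512):
    u = np.asarray(u, dtype=float)
    n = len(u) - m + 1
    # Embed the series into m-dimensional vectors
    X = np.lib.stride_tricks.sliding_window_view(u, m)
    # Chebyshev distances between all distinct vector pairs
    d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=-1)
    d = d[np.triu_indices(n, k=1)]
    # Empirical distribution of distances over M bins
    p, _ = np.histogram(d, bins=M)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(M)

rng = np.random.default_rng(1)
print(dist_en(rng.standard_normal(500)))   # short synthetic series
```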
Trajectory Optimization of an Interstellar Mission Using Solar Electric Propulsion
NASA Technical Reports Server (NTRS)
Kluever, Craig A.
1996-01-01
This paper presents several mission designs for heliospheric boundary exploration using spacecraft with low-thrust ion engines as the primary mode of propulsion. The mission design goal is to transfer a 200-kg spacecraft to the heliospheric boundary in minimum time. The mission design is a combined trajectory and propulsion system optimization problem. Trajectory design variables include launch date, launch energy, burn and coast arc switch times, thrust steering direction, and planetary flyby conditions. Propulsion system design parameters include input power and specific impulse. Both solar electric propulsion (SEP) and nuclear electric propulsion (NEP) spacecraft are considered, and a wide range of launch vehicle options is investigated. Numerical results are presented, and comparisons are made with the all-chemical heliospheric missions of Ref. 9.
Automotive Gas Turbine Power System-Performance Analysis Code
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
1997-01-01
An open-cycle gas turbine numerical modeling code suitable for thermodynamic performance analysis (i.e., thermal efficiency, specific fuel consumption, cycle state points, working fluid flow rates, etc.) of automotive and aircraft powerplant applications has been developed at the NASA Lewis Research Center's Power Technology Division. The code can be made available to automotive gas turbine preliminary design efforts, either in its present version or, assuming resources can be obtained to incorporate empirical models for component weight and packaging volume, in a later version that includes the weight-volume estimator feature. The paper briefly discusses the capabilities of the presently operational version of the code, including a listing of input and output parameters and actual sample output listings.
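The kind of state-point and efficiency bookkeeping such a code automates can be illustrated with a minimal air-standard Brayton-cycle sketch; constant specific heats and the component efficiencies below are assumptions for illustration, not the NASA code's models.

```python
# Sketch: air-standard open Brayton-cycle state points and thermal efficiency.
# Constant specific heats; component efficiencies are illustrative values.
gamma, cp = 1.4, 1005.0          # specific-heat ratio, J/(kg*K)
T1, r, T3 = 288.15, 8.0, 1400.0  # inlet temp [K], pressure ratio, turbine inlet temp [K]
eta_c, eta_t = 0.85, 0.90        # compressor/turbine isentropic efficiencies

k = (gamma - 1.0) / gamma
T2s = T1 * r**k                       # ideal compressor exit temperature
T2 = T1 + (T2s - T1) / eta_c          # actual compressor exit temperature
T4s = T3 / r**k                       # ideal turbine exit temperature
T4 = T3 - eta_t * (T3 - T4s)          # actual turbine exit temperature

w_net = cp * ((T3 - T4) - (T2 - T1))  # net specific work, J/kg
q_in = cp * (T3 - T2)                 # heat added in combustor, J/kg
print(f"state points [K]: T2={T2:.0f}, T4={T4:.0f}")
print(f"thermal efficiency = {w_net / q_in:.3f}")
```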
Automated Knowledge Discovery from Simulators
NASA Technical Reports Server (NTRS)
Burl, Michael C.; DeCoste, D.; Enke, B. L.; Mazzoni, D.; Merline, W. J.; Scharenbroich, L.
2006-01-01
In this paper, we explore one aspect of knowledge discovery from simulators, the landscape characterization problem, in which the aim is to identify regions in the input/parameter/model space that lead to a particular output behavior. Large-scale numerical simulators are in widespread use by scientists and engineers across government agencies, academia, and industry; in many cases, simulators provide the only means to examine processes that are infeasible or impossible to study otherwise. However, the cost of simulation studies can be quite high, both in the time and computational resources required to conduct the trials and in the manpower needed to sift through the resulting output. Thus, there is strong motivation to develop automated methods that enable more efficient knowledge extraction.
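One simple way to realize landscape characterization, offered here only as an illustrative sketch rather than the authors' method, is to label completed simulator runs by whether they exhibit the target behavior and fit a classifier over the input space.

```python
# Sketch: mapping regions of a simulator's input space that yield a target
# output behavior, using a classifier as a cheap region descriptor.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def simulator(theta):
    # Placeholder for an expensive numerical simulation
    return np.sin(3 * theta[:, 0]) * np.cos(2 * theta[:, 1])

theta = rng.uniform(-1, 1, size=(500, 2))      # sampled input parameters
behavior = simulator(theta) > 0.5              # "interesting" output region

clf = SVC(kernel="rbf").fit(theta, behavior)
# The trained classifier now predicts, for unseen inputs, whether the
# simulator is expected to exhibit the target behavior.
print(clf.predict(np.array([[0.4, 0.1], [-0.8, 0.9]])))
```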
CABS-flex 2.0: a web server for fast simulations of flexibility of protein structures.
Kuriata, Aleksander; Gierut, Aleksandra Maria; Oleniecki, Tymoteusz; Ciemny, Maciej Pawel; Kolinski, Andrzej; Kurcinski, Mateusz; Kmiecik, Sebastian
2018-05-14
Classical simulations of protein flexibility remain computationally expensive, especially for large proteins. A few years ago, we developed a fast method for predicting protein structure fluctuations that uses a single protein model as the input. The method has been made available as the CABS-flex web server and applied in numerous studies of protein structure-function relationships. Here, we present a major update of the CABS-flex web server to version 2.0. The new features include: extension of the method to significantly larger and multimeric proteins, customizable distance restraints and simulation parameters, contact maps and a new, enhanced web server interface. CABS-flex 2.0 is freely available at http://biocomp.chem.uw.edu.pl/CABSflex2.
BaHaMAS: A Bash Handler to Monitor and Administrate Simulations
NASA Astrophysics Data System (ADS)
Sciarra, Alessandro
2018-03-01
Numerical QCD is often extremely resource-demanding, and it is not rare to run hundreds of simulations at the same time. Each can last for days or even months, and each typically requires a job-script file as well as an input file with the physical parameters for the application to be run. Moreover, monitoring operations (e.g., copying, moving, deleting, or modifying files, or resuming crashed jobs) are often required to guarantee that the final statistics are correctly accumulated. Handling simulations manually is probably the most error-prone approach, as well as tedious and inefficient. BaHaMAS has been developed and successfully used in recent years as a tool to automatically monitor and administrate simulations.
Frequency domain system identification methods - Matrix fraction description approach
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Juang, Jer-Nan
1993-01-01
This paper presents the use of matrix fraction descriptions for least-squares curve fitting of frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step toward obtaining a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified from an estimated transfer function; second, they are identified directly from the cross- and auto-spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and realization theory is subsequently used to recover a minimum-order state space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.
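The curve-fitting step can be illustrated with a minimal scalar sketch: a Levy-style linearized least-squares fit of a rational transfer function to frequency-response data. This is a simplified stand-in for the paper's matrix-fraction algorithm, and the example system and polynomial orders are made up.

```python
# Sketch: scalar Levy-style least-squares fit of H(jw) ~ N(jw)/D(jw),
# with D's constant coefficient fixed at 1 to linearize the problem.
import numpy as np

def levy_fit(w, H, p, q):
    """Fit numerator degree p and denominator degree q; returns (num, den)."""
    s = 1j * w
    A_num = np.column_stack([s**i for i in range(p + 1)])
    A_den = np.column_stack([-H * s**i for i in range(1, q + 1)])
    A = np.hstack([A_num, A_den])
    # Stack real and imaginary parts for a real-valued least-squares problem
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([H.real, H.imag])
    x, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    num = x[:p + 1]
    den = np.concatenate([[1.0], x[p + 1:]])
    return num, den

# Synthetic data from a second-order system H(s) = 1 / (s^2 + 0.4 s + 4)
w = np.linspace(0.1, 10, 400)
H = 1.0 / ((1j * w)**2 + 0.4 * (1j * w) + 4.0)
num, den = levy_fit(w, H, p=0, q=2)
print(num, den)   # expect roughly num = [0.25], den = [1, 0.1, 0.25]
```

In the matrix-fraction setting, the scalar polynomials become matrix polynomials and the recovered coefficients yield the Markov parameters fed to the realization step.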
Data assimilation using a GPU accelerated path integral Monte Carlo approach
NASA Astrophysics Data System (ADS)
Quinn, John C.; Abarbanel, Henry D. I.
2011-09-01
The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the method on an example in which the transmembrane voltage time series of a simulated neuron serves as the input for data assimilation with a Hodgkin-Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300 compared to an equivalent serial computation on a CPU, with performance increasing as the observation time used for data assimilation lengthens.
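A serial CPU toy version of such path-space sampling can be sketched as follows, with a linear scalar model standing in for Hodgkin-Huxley; this is only an illustration of the idea, not the authors' GPU code.

```python
# Sketch: Metropolis sampling of state paths for data assimilation on a toy
# scalar model. The "action" penalizes both measurement and model-dynamics error.
import numpy as np

rng = np.random.default_rng(2)
T, sig_m, sig_p = 100, 0.2, 0.1

def f(x, a=0.9):                 # toy dynamics; not Hodgkin-Huxley
    return a * x + 1.0

x_true = np.empty(T); x_true[0] = 0.0
for k in range(T - 1):
    x_true[k + 1] = f(x_true[k]) + sig_p * rng.standard_normal()
y = x_true + sig_m * rng.standard_normal(T)   # noisy observations

def action(x):
    meas = np.sum((y - x)**2) / (2 * sig_m**2)
    model = np.sum((x[1:] - f(x[:-1]))**2) / (2 * sig_p**2)
    return meas + model

x = y.copy()                     # initialize the path at the data
A = action(x)
for it in range(20000):          # single-site Metropolis updates (unoptimized)
    k = rng.integers(T)
    x_new = x.copy()
    x_new[k] += 0.05 * rng.standard_normal()
    A_new = action(x_new)
    if np.log(rng.random()) < A - A_new:   # accept with prob min(1, e^(A-A_new))
        x, A = x_new, A_new
print("posterior path RMS error:", np.sqrt(np.mean((x - x_true)**2)))
```

The GPU version parallelizes exactly this kind of update across many path sites and chains, which is where the reported speedup comes from.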
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, Peter
2014-01-24
This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.
TBIEM3D: A Computer Program for Predicting Ducted Fan Engine Noise. Version 1.1
NASA Technical Reports Server (NTRS)
Dunn, M. H.
1997-01-01
This document describes the usage of the ducted fan noise prediction program TBIEM3D (Thin duct - Boundary Integral Equation Method - 3 Dimensional). A scattering approach is adopted in which the acoustic pressure field is split into known incident and unknown scattered parts. The scattering of fan-generated noise by a finite length circular cylinder in a uniform flow field is considered. The fan noise is modeled by a collection of spinning point thrust dipoles. The program, based on a Boundary Integral Equation Method (BIEM), calculates circumferential modal coefficients of the acoustic pressure at user-specified field locations. The duct interior can be of the hard wall type or lined. The duct liner is axisymmetric, locally reactive, and can be uniform or axially segmented. TBIEM3D is written in the FORTRAN programming language. Input to TBIEM3D is minimal and consists of geometric and kinematic parameters. Discretization and numerical parameters are determined automatically by the code. Several examples are presented to demonstrate TBIEM3D capabilities.
Nanotechnology Infrared Optics for Astronomy Missions
NASA Technical Reports Server (NTRS)
Frogel, Jay (Technical Monitor); Smith, Howard A.
2004-01-01
We have used the "MicroStripes" code (Flomerics, Inc.) to perform full-, near-, and far-field diffraction modeling of metal mesh performance on substrates. Our Miles Code software, which approximates the full calculation in a quick, GUI-based window, is useful as an iterative design tool: the input parameters (index of refraction, thickness, etc.) are adjusted until the results agree with the full calculation. However, despite the somewhat extravagant claims of the MicroStripes manufacturer, that code is also not perfect, because numerous free parameters must be set. Key among these, as identified in our earlier papers and proposal documents, are the high-frequency (i.e., far-IR) behavior of the real and imaginary parts of the index of refraction of the metal mesh, the corresponding behavior for the substrate, and the character of the interface between the mesh and the substrate material, in particular the suppression (or possible enhancement) of surface effects at the interface.