Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries
Lu, Zhiming
2018-01-30
Sensitivity analysis is an important component of many modeling activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties such as hydraulic conductivity or prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach is demonstrated through two examples, and the results compare favorably with those from analytical solutions or numerical finite difference methods with perturbed model domains, while the numerical shortcomings of the finite difference method are avoided.
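As a minimal illustration of the comparison described above (not the paper's derivation), consider 1-D steady confined flow between constant-head boundaries at x = 0 and x = L, where h(x) = h0 + (hL - h0)x/L. The sensitivity of head to the boundary location L is then dh/dL = -(hL - h0)x/L^2, which can be checked against a finite difference obtained by perturbing the model domain. A short Python sketch with hypothetical values:

```python
import numpy as np

# 1-D steady confined flow between constant-head boundaries at x = 0 and x = L:
#   h(x) = h0 + (hL - h0) * x / L
# Analytical sensitivity of head to the boundary location L:
#   dh/dL = -(hL - h0) * x / L**2
h0, hL, L = 10.0, 5.0, 100.0          # hypothetical boundary heads [m] and domain length [m]
x = np.linspace(0.0, 90.0, 10)        # observation points inside the domain [m]

def head(x, L):
    return h0 + (hL - h0) * x / L

dhdL_analytic = -(hL - h0) * x / L**2

# Finite-difference check with a perturbed model domain (the comparison made in the abstract)
dL = 0.01 * L
dhdL_fd = (head(x, L + dL) - head(x, L - dL)) / (2.0 * dL)

print(np.max(np.abs(dhdL_analytic - dhdL_fd)))   # the two estimates agree closely
```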
Design sensitivity analysis with Applicon IFAD using the adjoint variable method
NASA Technical Reports Server (NTRS)
Frederick, Marjorie C.; Choi, Kyung K.
1984-01-01
A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of an existing finite element structural analysis program and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.
A Sensitivity Analysis of Circular Error Probable Approximation Techniques
1992-03-01
SENSITIVITY ANALYSIS OF CIRCULAR ERROR PROBABLE APPROXIMATION TECHNIQUES THESIS Presented to the Faculty of the School of Engineering of the Air Force...programming skills. Major Paul Auclair patiently advised me in this endeavor, and Major Andy Howell added numerous insightful contributions. I thank my...techniques. The two most accurate techniques require numerical integration and can take several hours to run on a personal computer [2:1-2,4-6]. Some
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Clark, Martyn P.
2010-10-01
Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
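The deformation of the objective function by an unreliable time stepper can be reproduced on a toy model. The sketch below (a hypothetical linear reservoir, not one of the study's models or catchments) compares the calibration objective computed with a coarse fixed-step explicit Euler scheme against an adaptive, error-controlled integrator:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy linear-reservoir model dS/dt = P - k*S with hypothetical forcing and parameters.
P, S0, T = 2.0, 10.0, 50.0
t_obs = np.linspace(0.0, T, 26)

def storage_exact(k, t):
    # analytical solution, used only to generate synthetic "observations"
    return P / k + (S0 - P / k) * np.exp(-k * t)

obs = storage_exact(0.3, t_obs)          # synthetic observations, true k = 0.3

def sse_euler(k, dt=2.5):
    # fixed-step explicit Euler: an unreliable scheme when dt is coarse
    t, S, out = 0.0, S0, [S0]
    while t < T - 1e-9:
        S += dt * (P - k * S)
        t += dt
        out.append(S)
    sim = np.interp(t_obs, np.linspace(0.0, T, len(out)), out)
    return np.sum((sim - obs) ** 2)

def sse_adaptive(k):
    # adaptive, error-controlled integration: a robust scheme
    sol = solve_ivp(lambda t, S: P - k * S, (0.0, T), [S0],
                    t_eval=t_obs, rtol=1e-8, atol=1e-10)
    return np.sum((sol.y[0] - obs) ** 2)

# The Euler objective is visibly distorted (and blows up near its stability limit),
# while the adaptive objective is smooth and minimized at the true parameter.
for k in np.linspace(0.2, 0.8, 7):
    print(f"k={k:.2f}  SSE(Euler)={sse_euler(k):10.4f}  SSE(adaptive)={sse_adaptive(k):10.4f}")
```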
USDA-ARS?s Scientific Manuscript database
This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers improves both convergence behavior and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
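The core recasting step, treating the time-averaged quantity as an extra state so that its parameter sensitivity can be integrated alongside the trajectory, is easiest to see on a non-chaotic toy problem (the paper's shadowing machinery is only needed once the dynamics are chaotic). A sketch of direct differentiation on a hypothetical scalar ODE:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy (non-chaotic) system dx/dt = -p*x + sin(t); quantity of interest is the
# time average J(p) = (1/T) * integral_0^T x(t)^2 dt, recast as an extra state.
# Direct differentiation augments the ODE with s = dx/dp and the running dJ/dp.
T, p0, x0 = 40.0, 0.7, 1.0

def rhs(t, y, p):
    x, s, I, Is = y            # state, its sensitivity, running integral, its sensitivity
    dx = -p * x + np.sin(t)
    ds = -x - p * s            # d/dt (dx/dp) = df/dx * s + df/dp
    dI = x**2                  # running integral of the averaged quantity
    dIs = 2.0 * x * s          # d/dp of that integral
    return [dx, ds, dI, dIs]

def J_and_dJdp(p):
    sol = solve_ivp(rhs, (0.0, T), [x0, 0.0, 0.0, 0.0], args=(p,), rtol=1e-9, atol=1e-12)
    x, s, I, Is = sol.y[:, -1]
    return I / T, Is / T

J, dJdp = J_and_dJdp(p0)
eps = 1e-4
dJdp_fd = (J_and_dJdp(p0 + eps)[0] - J_and_dJdp(p0 - eps)[0]) / (2 * eps)
print(dJdp, dJdp_fd)           # direct differentiation vs finite-difference check
```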
Shape design sensitivity analysis using domain information
NASA Technical Reports Server (NTRS)
Seong, Hwal-Gyeong; Choi, Kyung K.
1985-01-01
A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.
Shape design sensitivity analysis and optimal design of structural systems
NASA Technical Reports Server (NTRS)
Choi, Kyung K.
1987-01-01
The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. The result of design sensitivity analysis is used to carry out design optimization of a built-up structure.
Sensitivity of a numerical wave model on wind re-analysis datasets
NASA Astrophysics Data System (ADS)
Lavidas, George; Venugopal, Vengatesan; Friedrich, Daniel
2017-03-01
Wind is the dominant process for wave generation. Detailed evaluation of metocean conditions strengthens our understanding of issues concerning potential offshore applications. However, the scarcity of buoys and the high cost of monitoring systems pose a barrier to properly defining offshore conditions. Through the use of numerical wave models, metocean conditions can be hindcasted and forecasted, providing reliable characterisations. This study reports the sensitivity of a numerical wave model to wind inputs for the Scottish region. Two re-analysis wind datasets with different spatio-temporal characteristics are used, the ERA-Interim Re-Analysis and the CFSR-NCEP Re-Analysis dataset. Different wind products alter results, affecting the accuracy obtained. The scope of this study is to assess different available wind databases and provide information concerning the most appropriate wind dataset for the specific region, based on temporal, spatial and geographic terms, for wave modelling and offshore applications. Both wind input datasets delivered results from the numerical wave model with good correlation. Wave results from the 1-h dataset have higher peaks and lower biases, at the expense of a higher scatter index. On the other hand, the 6-h dataset has lower scatter but higher biases. The study shows how the wind dataset affects the numerical wave modelling performance, and that depending on location and study needs, different wind inputs should be considered.
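The comparison described rests on standard verification statistics (bias, RMSE, scatter index, correlation) between hindcast and buoy series. A generic sketch of such metrics, using one common definition of the scatter index and placeholder wave-height values:

```python
import numpy as np

def verification_stats(model, buoy):
    """Common wave-model skill metrics against buoy observations."""
    model, buoy = np.asarray(model, float), np.asarray(buoy, float)
    bias = np.mean(model - buoy)
    rmse = np.sqrt(np.mean((model - buoy) ** 2))
    scatter_index = rmse / np.mean(buoy)          # one common definition of SI
    corr = np.corrcoef(model, buoy)[0, 1]
    return {"bias": bias, "rmse": rmse, "SI": scatter_index, "R": corr}

# placeholder significant-wave-height series [m]; real use would load hindcast and buoy data
hs_buoy  = np.array([1.2, 1.5, 2.1, 2.8, 3.5, 2.2, 1.7])
hs_model = np.array([1.1, 1.6, 2.3, 3.1, 3.2, 2.0, 1.8])
print(verification_stats(hs_model, hs_buoy))
```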
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational methods (VM) sensitivity analysis, which is the continuous alternative to the discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparing with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts
NASA Astrophysics Data System (ADS)
Yan, Kun; Cheng, Gengdong
2018-03-01
For structures subject to impact loads, residual vibration reduction is increasingly important as machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for the performance index were developed based on the assumption that the initial excitations of the residual vibration were given and independent of the structural design. Since the excitations resulting from the impact load often depend on the structural design, this paper aims to propose a new efficient sensitivity analysis method for residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using the adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
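For the case the earlier approaches covered, a stable linear system dx/dt = Ax with a design-independent initial excitation x0, the integrated quadratic index J = integral_0^inf x'Qx dt equals x0'Px0 with A'P + PA + Q = 0, so it can be evaluated without time integration. A sketch on a hypothetical single-degree-of-freedom oscillator, with a finite-difference check of the design sensitivity (the paper's contribution, the dependence of x0 on the design, is not reproduced here):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def residual_vibration_index(c, m=1.0, k=4.0):
    """J = int_0^inf x^T Q x dt = x0^T P x0, with A^T P + P A + Q = 0 for a stable system."""
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])   # single-DOF oscillator in state-space form
    Q = np.diag([1.0, 0.1])                        # weights on displacement and velocity
    P = solve_continuous_lyapunov(A.T, -Q)         # solves A^T P + P A = -Q
    x0 = np.array([0.0, 1.0])                      # initial excitation (e.g., impact-induced velocity)
    return x0 @ P @ x0

c0, dc = 0.5, 1e-6
J = residual_vibration_index(c0)
dJdc = (residual_vibration_index(c0 + dc) - residual_vibration_index(c0 - dc)) / (2 * dc)
print(J, dJdc)   # performance index and its sensitivity to the damping design variable
```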
NASA Astrophysics Data System (ADS)
Yahya, W. N. W.; Zaini, S. S.; Ismail, M. A.; Majid, T. A.; Deraman, S. N. C.; Abdullah, J.
2018-04-01
Damage due to wind-related disasters is increasing due to global climate change. Many studies have been conducted to study the wind effect surrounding low-rise buildings using wind tunnel tests or numerical simulations. The use of numerical simulation is relatively cheap but requires very good command in handling the software, acquiring the correct input parameters and obtaining the optimum grid or mesh. However, before a study can be conducted, a grid sensitivity test must be carried out to determine a suitable cell count for the final model, ensuring an accurate result with less computing time. This study demonstrates the numerical procedure for conducting a grid sensitivity analysis using five models with different grid schemes. The pressure coefficients (CP) were observed along the wall and roof profile and compared between the models. The results showed that the medium grid scheme can be used and is able to produce results of accuracy comparable to the finer grid schemes, as the difference in terms of the CP values was found to be insignificant.
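In practice such a grid sensitivity check amounts to comparing the monitored quantity between successive refinements and accepting the coarsest grid whose difference from the next finer one is negligible. A schematic sketch with made-up CP profiles (real values would come from the CFD runs):

```python
import numpy as np

# Hypothetical pressure-coefficient profiles sampled at the same wall/roof stations
# for three of the grid schemes (coarse, medium, fine).
cp = {
    "coarse": np.array([-0.52, -0.61, -0.80, -1.05, -0.70, -0.40]),
    "medium": np.array([-0.55, -0.64, -0.84, -1.10, -0.73, -0.42]),
    "fine":   np.array([-0.56, -0.64, -0.85, -1.11, -0.74, -0.42]),
}

def max_rel_diff(a, b):
    return np.max(np.abs(a - b) / np.maximum(np.abs(b), 1e-12))

print("coarse vs medium:", max_rel_diff(cp["coarse"], cp["medium"]))
print("medium vs fine:  ", max_rel_diff(cp["medium"], cp["fine"]))
# If the medium-vs-fine difference is insignificant (e.g., below a few percent),
# the medium grid can be used for the production runs, as the abstract concludes.
```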
Numerical analysis of the beam position monitor pickup for the Iranian light source facility
NASA Astrophysics Data System (ADS)
Shafiee, M.; Feghhi, S. A. H.; Rahighi, J.
2017-03-01
In this paper, we describe the design of a button-type Beam Position Monitor (BPM) for the low emittance storage ring of the Iranian Light Source Facility (ILSF). First, we calculate sensitivities, induced power and intrinsic resolution based on solving the Laplace equation numerically by the finite element method (FEM), in order to find the potential at each point of the BPM's electrode surface. After the optimization of the designed BPM, trapped high order modes (HOM), wakefield and thermal loss effects are calculated. Finally, after fabrication of the BPM, it is experimentally tested using a test stand. The results show that the designed BPM has a linear response in an area of 2×4 mm² inside the beam pipe and sensitivities of 0.080 and 0.087 mm⁻¹ in the horizontal and vertical directions, respectively. The experimental results are in good agreement with the numerical analysis.
Sensitivity analysis for dose deposition in radiotherapy via a Fokker–Planck model
Barnard, Richard C.; Frank, Martin; Krycki, Kai
2016-02-09
In this paper, we study the sensitivities of electron dose calculations with respect to stopping power and transport coefficients. We focus on the application to radiotherapy simulations. We use a Fokker–Planck approximation to the Boltzmann transport equation. Equations for the sensitivities are derived by the adjoint method. The Fokker–Planck equation and its adjoint are solved numerically in slab geometry using the spherical harmonics expansion (PN) and a Harten-Lax-van Leer finite volume method. Our method is verified by comparison to finite difference approximations of the sensitivities. Finally, we present numerical results of the sensitivities for the normalized average dose deposition depth with respect to the stopping power and the transport coefficients, demonstrating the increase in relative sensitivities as beam energy decreases. In conclusion, this in turn gives estimates of the uncertainty in the normalized average deposition depth, which we present.
Mehl, S.; Hill, M.C.
2001-01-01
Five common numerical techniques for solving the advection-dispersion equation (finite difference, predictor corrector, total variation diminishing, method of characteristics, and modified method of characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using discrete, randomly distributed, homogeneous blocks of five sand types. This experimental model provides an opportunity to compare the solution techniques: the heterogeneous hydraulic-conductivity distribution of known structure can be accurately represented by a numerical model, and detailed measurements can be compared with simulated concentrations and total flow through the tank. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation given the different methods of simulating solute transport. The breakthrough curves show that simulated peak concentrations, even at very fine grid spacings, varied between the techniques because of different amounts of numerical dispersion. Sensitivity-analysis results revealed: (1) a high correlation between hydraulic conductivity and porosity given the concentration and flow observations used, so that both could not be estimated; and (2) that the breakthrough curve data did not provide enough information to estimate individual values of dispersivity for the five sands. This study demonstrates that the choice of assigned dispersivity and the amount of numerical dispersion present in the solution technique influence estimated hydraulic conductivity values to a surprising degree.
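The numerical dispersion that drives these differences can be seen in a few lines: a first-order upwind discretization of pure advection smears a sharp front as if a physical dispersion coefficient of roughly v*dx*(1 - Cr)/2 were present (the standard modified-equation estimate). A toy 1-D sketch, not the sand-tank model of the study:

```python
import numpy as np

# Pure advection of a sharp front with a first-order upwind scheme (no physical dispersion).
# The truncation error acts like dispersion with D_num ~ v*dx/2*(1 - Cr).
v, dx, dt, nx, nsteps = 1.0, 0.01, 0.005, 200, 100
Cr = v * dt / dx                                      # Courant number
c = np.where(np.arange(nx) * dx < 0.2, 1.0, 0.0)      # initial step profile

for _ in range(nsteps):
    c[1:] = c[1:] - Cr * (c[1:] - c[:-1])             # explicit upwind update (c[0] held at 1.0)

print("Courant number:", Cr)
print("equivalent numerical dispersion D_num ~", 0.5 * v * dx * (1.0 - Cr))
print("front smeared over ~", np.sum((c > 0.05) & (c < 0.95)) * dx, "length units")
```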
Make or buy decision model with multi-stage manufacturing process and supplier imperfect quality
NASA Astrophysics Data System (ADS)
Pratama, Mega Aria; Rosyidi, Cucuk Nur
2017-11-01
This research develops a make-or-buy decision model considering supplier imperfect quality. The model can be used to help companies make the right decision on whether to make or buy a component with the best quality and the least cost in a multistage manufacturing process. The imperfect quality is one of the cost components that must be minimized in this model. Components with imperfect quality are not necessarily defective; they can still be reworked and used for assembly. This research also provides a numerical example and a sensitivity analysis to show how the model works. We use simulation, aided by Crystal Ball, to solve the numerical problem. The sensitivity analysis results show that the percentage of imperfect components generally does not affect the model significantly, and the model is not sensitive to changes in these parameters. This is because the imperfect-quality costs are smaller than the overall total cost components.
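A minimal open-source version of that workflow, Monte Carlo estimation of expected total cost followed by a one-at-a-time check on the imperfect-quality fraction, is sketched below with entirely hypothetical cost figures (the original study used Crystal Ball):

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_total_cost(p_imperfect_mean, unit_cost, rework_cost, demand=10_000, n=50_000):
    """Monte Carlo estimate of total cost when an uncertain fraction of parts is imperfect."""
    # Beta distribution with the requested mean models the uncertain imperfect fraction.
    p = rng.beta(2.0, 2.0 / p_imperfect_mean - 2.0, size=n)
    return np.mean(demand * (unit_cost + p * rework_cost))

make = expected_total_cost(0.02, unit_cost=5.0, rework_cost=1.5)   # hypothetical "make" option
buy  = expected_total_cost(0.05, unit_cost=4.2, rework_cost=1.5)   # hypothetical "buy" option
print("make:", make, " buy:", buy, " -> choose", "make" if make < buy else "buy")

# one-at-a-time sensitivity check: vary the imperfect fraction of the 'buy' option by +/-50%
for p in (0.025, 0.05, 0.075):
    print(p, expected_total_cost(p, 4.2, 1.5))
```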
NASA Technical Reports Server (NTRS)
Yao, Tse-Min; Choi, Kyung K.
1987-01-01
An automatic regridding method and a three-dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three-dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. Automatic regridding methods were developed by generating a domain velocity field with the boundary displacement method. Shape design parameterization for three-dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbation of design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.
Uncertainty Analysis of Decomposing Polyurethane Foam
NASA Technical Reports Server (NTRS)
Hobbs, Michael L.; Romero, Vicente J.
2000-01-01
Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
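The two propagation routes described, an analytical mean-value method built on numerical derivatives and Latin Hypercube Sampling of a surrogate response surface, can be contrasted on a toy surrogate; the response surface and input uncertainties below are invented, not the foam model's:

```python
import numpy as np
from scipy.stats import norm, qmc

# Toy quadratic surrogate response surface standing in for the decomposition-front velocity.
def surrogate(x1, x2):
    return 1.0 + 0.8 * x1 - 0.3 * x2 + 0.15 * x1 * x2 + 0.05 * x1 ** 2

mu = np.array([0.0, 0.0])          # hypothetical input means
sigma = np.array([0.2, 0.3])       # hypothetical input standard deviations

# (1) Mean-value (first-order) method: propagate variance via numerical derivatives at the mean.
h = 1e-5
grad = np.array([
    (surrogate(mu[0] + h, mu[1]) - surrogate(mu[0] - h, mu[1])) / (2 * h),
    (surrogate(mu[0], mu[1] + h) - surrogate(mu[0], mu[1] - h)) / (2 * h),
])
std_mv = np.sqrt(np.sum((grad * sigma) ** 2))

# (2) Latin Hypercube Sampling of the surrogate (derivative-free, insensitive to numerical noise).
u = qmc.LatinHypercube(d=2, seed=1).random(n=5000)
x = norm.ppf(u, loc=mu, scale=sigma)        # map LHS uniforms onto the input distributions
std_lhs = np.std(surrogate(x[:, 0], x[:, 1]), ddof=1)

print("mean-value std:", std_mv, "  LHS std:", std_lhs)
```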
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as cross-sectional area of beams or thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
NASA Technical Reports Server (NTRS)
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
NASA Astrophysics Data System (ADS)
Podgornova, O.; Leaney, S.; Liang, L.
2018-07-01
Extracting medium properties from seismic data faces limitations due to the finite frequency content of the data and the restricted spatial positions of the sources and receivers. Some distributions of the medium properties have little or no impact on the data. If these properties are used as the inversion parameters, then the inverse problem becomes overparametrized, leading to ambiguous results. We present an analysis of multiparameter resolution for the linearized inverse problem in the framework of elastic full-waveform inversion. We show that the spatial and multiparameter sensitivities are intertwined and that the non-sensitive properties are spatial distributions of some non-trivial combinations of the conventional elastic parameters. The analysis accounts for the Hessian information and the frequency content of the data; it is semi-analytical (in some scenarios analytical), easy to interpret and enhances the results of the widely used radiation pattern analysis. Single-type scattering is shown to have limited sensitivity, even for full-aperture data. Finite-frequency data lose multiparameter sensitivity at smooth and fine spatial scales. Also, we establish ways to quantify the spatial-multiparameter coupling and demonstrate that the theoretical predictions agree well with the numerical results.
Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian
2017-01-31
Lattice kinetic Monte Carlo simulations have become a vital tool for predictive-quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past the application of sensitivity analysis, such as Degree of Rate Control, has been hampered by the exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. Here in this study we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In a first step, we utilize the Fisher Information Matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure of lattice based models. This allows efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.
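As a deterministic reference point for the sensitivity measure in question, a log-log finite-difference sensitivity of the turnover frequency (closely related to the degree of rate control) is easy to compute on a mean-field toy model; the difficulty the authors address is doing this reliably when the TOF comes from a stochastic lattice simulation. A sketch with an invented Langmuir-Hinshelwood scheme:

```python
import numpy as np
from scipy.optimize import fsolve

# Toy mean-field Langmuir-Hinshelwood model (invented rate constants; adsorption
# constants absorb the gas pressures). Used only to illustrate the finite-difference
# sensitivity that is so much harder to sample in kinetic Monte Carlo.
k = {"adsA": 1.0, "desA": 0.5, "adsB": 0.8, "desB": 0.3, "rxn": 2.0}

def coverages(k):
    def residual(th):
        thA, thB = th
        free = 1.0 - thA - thB
        r = k["rxn"] * thA * thB
        return [k["adsA"] * free - k["desA"] * thA - r,
                k["adsB"] * free - k["desB"] * thB - r]
    return fsolve(residual, [0.3, 0.3])

def tof(k):
    thA, thB = coverages(k)
    return k["rxn"] * thA * thB

# log-log sensitivity of the TOF to each rate constant via central finite differences
eps = 1e-3
for name in k:
    kp, km = dict(k), dict(k)
    kp[name] *= 1.0 + eps
    km[name] *= 1.0 - eps
    sens = (np.log(tof(kp)) - np.log(tof(km))) / (np.log(1 + eps) - np.log(1 - eps))
    print(f"{name:>5s}: {sens:+.3f}")
```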
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
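The complex-variable sensitivities referred to above rely on the complex-step derivative, f'(x) ≈ Im[f(x + ih)]/h, which avoids subtractive cancellation and therefore tolerates extremely small steps. A generic single-variable illustration (not the FUN3D/DYMORE implementation):

```python
import numpy as np

def f(x):
    # any analytic function of the input; a stand-in for a structural response quantity
    return np.exp(x) * np.sin(3.0 * x) / (1.0 + x**2)

x0, h = 0.7, 1e-200
dfdx_complex = np.imag(f(x0 + 1j * h)) / h                   # complex-step derivative
dfdx_central = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6          # finite-difference comparison
dfdx_exact = (np.exp(x0) * (np.sin(3 * x0) + 3 * np.cos(3 * x0)) / (1 + x0**2)
              - np.exp(x0) * np.sin(3 * x0) * 2 * x0 / (1 + x0**2) ** 2)
print(dfdx_complex, dfdx_central, dfdx_exact)
```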
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estep, Donald
2015-11-30
This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
Long-range monostatic remote sensing of geomaterial structure weak vibrations
NASA Astrophysics Data System (ADS)
Heifetz, Alexander; Bakhtiari, Sasan; Gopalsami, Nachappa; Elmer, Thomas W.; Mukherjee, Souvik
2018-04-01
We study analytically and numerically signal sensitivity in remote sensing measurements of weak mechanical vibration of structures made of typical construction geomaterials, such as concrete. The analysis includes considerations of electromagnetic beam atmospheric absorption, reflection, scattering, diffraction and losses. Comparison is made between electromagnetic frequencies of 35 GHz (Ka-band), 94 GHz (W-band) and 260 GHz (WR-3 waveguide band), corresponding to atmospheric transparency windows of the electromagnetic spectrum. Numerical simulations indicate that the 94 GHz frequency is optimal in terms of signal sensitivity and specificity for long-distance (>1.5 km) sensing of weak multi-mode vibrations.
Analysis of the sensitivity properties of a model of vector-borne bubonic plague.
Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald
2008-09-06
Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
Applying geologic sensitivity analysis to environmental risk management: The financial implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, D.T.
The financial risks associated with environmental contamination can be staggering and are often difficult to identify and accurately assess. Geologic sensitivity analysis is gaining recognition as a significant and useful tool that can empower the user with crucial information concerning environmental risk management and brownfield redevelopment. It is particularly useful when (1) evaluating the potential risks associated with redevelopment of historical industrial facilities (brownfields) and (2) planning for future development, especially in areas of rapid development, because the number of potential contaminating sources often increases with an increase in economic development. An examination of the financial implications relating to geologic sensitivity analysis in southeastern Michigan from numerous case studies indicates that the environmental cost of contamination may be 100 to 1,000 times greater at a geologically sensitive location compared to the least sensitive location. Geologic sensitivity analysis has demonstrated that near-surface geology may influence the environmental impact of a contaminated site to a greater extent than the amount and type of industrial development.
Aircraft optimization by a system approach: Achievements and trends
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1992-01-01
Recently emerging methodology for optimal design of aircraft treated as a system of interacting physical phenomena and parts is examined. The methodology is found to coalesce into methods for hierarchic, non-hierarchic, and hybrid systems all dependent on sensitivity analysis. A separate category of methods has also evolved independent of sensitivity analysis, hence suitable for discrete problems. References and numerical applications are cited. Massively parallel computer processing is seen as enabling technology for practical implementation of the methodology.
On the sensitivity analysis of porous material models
NASA Astrophysics Data System (ADS)
Ouisse, Morvan; Ichchou, Mohamed; Chedly, Slaheddine; Collet, Manuel
2012-11-01
Porous materials are used in many vibroacoustic applications. Different available models describe their behaviors according to the materials' intrinsic characteristics. For instance, in the case of porous material with a rigid frame, and according to the Champoux-Allard model, five parameters are employed. In this paper, an investigation of this model's sensitivity to its parameters as a function of frequency is conducted. Sobol and FAST algorithms are used for the sensitivity analysis. A strong frequency-dependent parametric hierarchy is shown. The sensitivity investigations confirm that resistivity is the most influential parameter when the acoustic absorption and surface impedance of porous materials with rigid frame are considered. The analysis is first performed on a wide category of porous materials, and then restricted to a polyurethane foam analysis in order to illustrate the impact of the reduction of the design space. In a second part, a sensitivity analysis is performed using the Biot-Allard model with nine parameters, including mechanical effects of the frame, and conclusions are drawn through numerical simulations.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static systems; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and the perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
Sensitivity of Forecast Skill to Different Objective Analysis Schemes
NASA Technical Reports Server (NTRS)
Baker, W. E.
1979-01-01
Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.
Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques
NASA Astrophysics Data System (ADS)
Elliott, Louie C.
This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
Results of an integrated structure/control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1989-01-01
A design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter was to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
NASA Astrophysics Data System (ADS)
Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2017-04-01
Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practices. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameters screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero indices may be obtained for the indices of non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered as non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations. Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. Therefore, it can be denoted that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users to define a screening-threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
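One way to realize the dummy-parameter threshold for a variance-based analysis uses the classic Homma-Saltelli first-order estimator: for a parameter the model ignores, the resampled matrix column has no effect, so the dummy's index reduces to an expression in the two base sample matrices and quantifies pure sampling error at no extra model runs. A numpy sketch on a toy model (the estimator choice is an assumption, not necessarily the exact one used in the study):

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    # toy model with 3 real parameters; a hydrological simulator would sit here instead
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * np.sin(2 * np.pi * x[:, 2])

d, N = 3, 20_000
A, B = rng.random((N, d)), rng.random((N, d))
fA, fB = model(A), model(B)
f0 = 0.5 * (fA.mean() + fB.mean())
V = np.var(np.concatenate([fA, fB]), ddof=1)

# Homma-Saltelli first-order estimator: S_i = (mean(fA * f(B with column i from A)) - f0^2) / V
S = []
for i in range(d):
    Bi = B.copy()
    Bi[:, i] = A[:, i]
    S.append((np.mean(fA * model(Bi)) - f0**2) / V)

# Dummy parameter: the model ignores it, so f(B with the dummy column from A) == f(B)
# and its index needs no additional model evaluations; its magnitude is the screening threshold.
S_dummy = (np.mean(fA * fB) - f0**2) / V

print("first-order indices:", np.round(S, 3))
print("screening threshold from dummy:", round(abs(S_dummy), 4))
print("influential parameters:", [i for i in range(d) if S[i] > abs(S_dummy)])
```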
Kim, DaeHee; Rhodes, Jeffrey A; Hashim, Jeffrey A; Rickabaugh, Lawrence; Brams, David M; Pinkus, Edward; Dou, Yamin
2018-06-07
A highly specific preoperative localizing test is required to select patients for minimally invasive parathyroidectomy (MIP) in lieu of traditional four-gland exploration. We hypothesized that Tc-99m sestamibi scan interpretation incorporating numerical measurements of the degree of asymmetrical activity from the bilateral thyroid beds can be useful in localizing a single adenoma for MIP. We devised a quantitative interpretation method for the Tc-99m sestamibi scan based on the numerically graded asymmetrical activity on the early phase. The numerical ratio value of each scan was obtained by dividing the number of counts from symmetrically drawn regions of interest (ROI) over the bilateral thyroid beds. The final pathology and clinical outcome of 109 patients were used to perform receiver operating characteristic (ROC) curve analysis. ROC analysis revealed that the area under the curve (AUC) was 0.71 (P = 0.0032), validating this method as a diagnostic tool. The optimal cut-off point for the ratio value with maximal combined sensitivity and specificity was found, with a corresponding sensitivity of 67.9% (56.5-77.2%, 95% CI) and specificity of 75.0% (52.8-91.8%, 95% CI). An additional higher cut-off with higher specificity and minimal sacrifice of sensitivity was also selected, yielding a sensitivity of 28.6% (18.8-38.6%, 95% CI) and specificity of 90.0% (69.6-98.8%, 95% CI). Our results demonstrate that the more asymmetrical the activity on the initial phase, the more successful the localization of a single parathyroid adenoma on sestamibi scans. Using the early-phase Tc-99m sestamibi scan only, we were able to select patients for minimally invasive parathyroidectomy with 90% specificity. © 2018 The Royal Australian and New Zealand College of Radiologists.
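The ROC workflow described, AUC followed by a cut-off maximizing combined sensitivity and specificity, is straightforward to reproduce with scikit-learn; the ratio values and pathology labels below are placeholders, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# placeholder data: asymmetry ratio values and final pathology (1 = single adenoma confirmed)
ratio = np.array([1.05, 1.10, 1.32, 1.45, 1.02, 1.60, 1.15, 1.08, 1.52, 1.25,
                  1.03, 1.41, 1.27, 1.07, 1.36, 1.12, 1.55, 1.09, 1.30, 1.04])
truth = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0,
                  0, 1, 1, 0, 1, 0, 1, 0, 1, 0])

auc = roc_auc_score(truth, ratio)
fpr, tpr, thresholds = roc_curve(truth, ratio)

# Youden's J statistic picks the cut-off with maximal (sensitivity + specificity - 1)
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.2f}")
print(f"optimal cut-off = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```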
Interventional MRI: tapering improves the distal sensitivity of the loopless antenna.
Qian, Di; El-Sharkawy, AbdEl-Monem M; Atalar, Ergin; Bottomley, Paul A
2010-03-01
The "loopless antenna" is an interventional MRI detector consisting of a tuned coaxial cable and an extended inner conductor or "whip". A limitation is the poor sensitivity afforded at, and immediately proximal to, its distal end, which is exacerbated by the extended whip length when the whip is uniformly insulated. It is shown here that tapered insulation dramatically improves the distal sensitivity of the loopless antenna by pushing the current sensitivity toward the tip. The absolute signal-to-noise ratio is numerically computed by the electromagnetic method-of-moments for three resonant 3-T antennae with no insulation, uniform insulation, and with linearly tapered insulation. The analysis shows that tapered insulation provides an approximately 400% increase in signal-to-noise ratio in trans-axial planes 1 cm from the tip and a 16-fold increase in the sensitive area as compared to an equivalent, uniformly insulated antenna. These findings are directly confirmed by phantom experiments and by MRI of an aorta specimen. The results demonstrate that numerical electromagnetic signal-to-noise ratio analysis can accurately predict the loopless detector's signal-to-noise ratio and play a central role in optimizing its design. The manifold improvement in distal signal-to-noise ratio afforded by redistributing the insulation should improve the loopless antenna's utility for interventional MRI. (c) 2010 Wiley-Liss, Inc.
Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel
2013-12-01
This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. This procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties of porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis for this model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. This approach is tested on data obtained for porous bituminous concrete and using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
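The inversion step of such a procedure can be wired up along the following lines, shown here with SciPy's dual_annealing and a deliberately simplified two-parameter stand-in for the equivalent fluid model; the real forward model (e.g. Zwikker and Kosten) and physically meaningful parameter bounds would replace the illustrative ones.

```python
import numpy as np
from scipy.optimize import dual_annealing

def toy_impedance(p, f):
    """Illustrative two-parameter stand-in for an equivalent-fluid surface
    impedance model; a real application would use e.g. the Zwikker-Kosten model."""
    return p[0] * (1.0 + 1j * f / p[1])

freqs = np.linspace(200.0, 2000.0, 50)
true_p = (1.4, 800.0)
measured = toy_impedance(true_p, freqs) + 0.01 * np.random.default_rng(1).normal(size=freqs.size)

def misfit(p):
    """Least-squares misfit between modelled and measured surface impedance."""
    return np.sum(np.abs(toy_impedance(p, freqs) - measured) ** 2)

result = dual_annealing(misfit, bounds=[(0.1, 10.0), (100.0, 5000.0)], seed=0, maxiter=300)
p_fit = result.x   # recovered acoustical parameters
```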
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Justin Matthew
These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL Shaped-Charge Geometry in PAGOSA, mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) Parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
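The brute-force Monte Carlo step of the UQ methodology summarized above amounts to sampling the uncertain inputs, running the simulation for each draw, and reporting the 95% data range about the median of the output; the sketch below shows only that bookkeeping, with a cheap stand-in for the PAGOSA run and purely illustrative nominal values and ranges.

```python
import numpy as np

def jet_tip_velocity(params):
    """Stand-in for a PAGOSA shaped-charge simulation (illustrative response only)."""
    det_vel, rho0, c1, b1 = params
    return 1.0 + 0.4 * det_vel + 0.2 * rho0 - 0.1 * c1 + 0.05 * b1

rng = np.random.default_rng(0)
nominal = np.ones(4)                                            # illustrative nominal inputs
samples = nominal * rng.uniform(0.95, 1.05, size=(10_000, 4))   # +/-5% uniform ranges
outputs = np.array([jet_tip_velocity(p) for p in samples])

median = np.median(outputs)
lo, hi = np.percentile(outputs, [2.5, 97.5])    # 95% data range about the median
```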
Amiryousefi, Mohammad Reza; Mohebbi, Mohebbat; Khodaiyan, Faramarz
2014-01-01
The objectives of this study were to use image analysis and an artificial neural network (ANN) to predict mass transfer kinetics as well as color changes and shrinkage of deep-fat fried ostrich meat cubes. Two generalized feedforward networks were developed separately, using the operating conditions as inputs. High correlation coefficients between the experimental and predicted values indicated proper fitting. Sensitivity analysis of the selected ANNs showed that, among the input variables, moisture content (MC) and fat content (FC) were the most sensitive to frying temperature. Similarly, for the second ANN architecture, microwave power density was the most influential variable, having the maximum influence on both shrinkage percentage and color changes. Copyright © 2013 Elsevier Ltd. All rights reserved.
The Sensitivity Analysis for the Flow Past Obstacles Problem with Respect to the Reynolds Number
Ito, Kazufumi; Li, Zhilin; Qiao, Zhonghua
2013-01-01
In this paper, numerical sensitivity analysis with respect to the Reynolds number for the flow past obstacle problem is presented. To carry out such analysis, at each time step, we need to solve the incompressible Navier-Stokes equations on irregular domains twice: once for the primary variables and once for the sensitivity variables with homogeneous boundary conditions. The Navier-Stokes solver is the augmented immersed interface method for Navier-Stokes equations on irregular domains. One of the most important contributions of this paper is that our analysis can predict the critical Reynolds number at which the vortex shedding begins to develop in the wake of the obstacle. Some interesting experiments are shown to illustrate how the critical Reynolds number varies with different geometric settings. PMID:24910780
Multidisciplinary optimization of an HSCT wing using a response surface methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giunta, A.A.; Grossman, B.; Mason, W.H.
1994-12-31
Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.
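One common remedy for such numerical noise, and the basis of the response surface methodology named in the title, is to fit a smooth low-order polynomial to a set of noisy analysis results and take design sensitivities from the surrogate instead; a minimal least-squares version is sketched below with an invented two-variable objective.

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Least-squares fit of a full quadratic response surface in the inputs X.

    Smoothing noisy analysis results with such a surrogate is one way to obtain
    usable design sensitivities when the underlying code output is noisy.
    """
    X = np.asarray(X, float)
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]                                  # linear terms
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]   # quadratic terms
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# noisy samples of a two-variable "drag" function (illustrative)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = 1.0 + 0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.5 * X[:, 0] ** 2 + rng.normal(0, 1e-3, 50)
coef = fit_quadratic_rs(X, y)
```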
Atmospheric model development in support of SEASAT. Volume 1: Summary of findings
NASA Technical Reports Server (NTRS)
Kesel, P. G.
1977-01-01
Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short-range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to general analysis problems.
Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C
2012-06-01
Transverse dispersion represents an important mixing process for transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen-depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory experimental set-up of a quasi two-dimensional flow-through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as a fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivities of the measurements taken at the tank experiment on the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d^-1 and 10.5 m d^-1. Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49×10^-4 m and 1.48×10^-5 m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications. Copyright © 2012 Elsevier B.V. All rights reserved.
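The dispersivity-fitting step can be illustrated with the standard steady-state solution for transverse mixing between two parallel streams in uniform flow, c(y) = (c0/2) erfc(y / (2*sqrt(alpha_T*x))); the observation distance, data, and noise below are synthetic, and the actual evaluation in the study uses the full 2D advection-dispersion solution and the improved tank geometry.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

def transverse_profile(y, alpha_t, x=0.5, c0=1.0):
    """Steady-state transverse mixing profile for two parallel streams (tracer in
    y < 0) in uniform flow, evaluated at downstream distance x [m]."""
    return 0.5 * c0 * erfc(y / (2.0 * np.sqrt(alpha_t * x)))

# synthetic "measured" concentrations at one observation plane (illustrative)
y_obs = np.linspace(-0.05, 0.05, 21)
c_obs = transverse_profile(y_obs, alpha_t=1.5e-5) \
        + np.random.default_rng(2).normal(0, 0.01, y_obs.size)

# fit only alpha_t; x and c0 keep their default (known) values
popt, pcov = curve_fit(transverse_profile, y_obs, c_obs, p0=[1e-4], bounds=(1e-7, 1e-2))
alpha_t_fit, alpha_t_sd = popt[0], np.sqrt(pcov[0, 0])
```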
An optical fiber spool for laser stabilization with reduced acceleration sensitivity to 10^-12/g
NASA Astrophysics Data System (ADS)
Hu, Yong-Qi; Dong, Jing; Huang, Jun-Chao; Li, Tang; Liu, Liang
2015-10-01
Environmental vibration causes mechanical deformation in optical fibers, which induces excess frequency noise in fiber-stabilized lasers. In order to solve this problem, we propose an ultralow acceleration sensitivity fiber spool with a symmetrically mounted structure. By numerical analysis with the finite element method, we obtain the optimal geometry parameters of the spool, with which the horizontal and vertical acceleration sensitivities can be reduced to 3.25 × 10^-12/g and 5.38 × 10^-12/g, respectively. Moreover, the structure features insensitivity to variations of the geometry parameters, which will minimize the influence of numerical simulation error and manufacturing tolerance. Project supported by the National Natural Science Foundation of China (Grant Nos. 11034008 and 11274324) and the Key Research Program of the Chinese Academy of Sciences (Grant No. KJZD-EW-W02).
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
System parameter identification from projection of inverse analysis
NASA Astrophysics Data System (ADS)
Liu, K.; Law, S. S.; Zhu, X. Q.
2017-05-01
The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is re-visited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and dynamic experiments with a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy compared with the conventional response sensitivity-based method.
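A single iteration of the projected inverse analysis can be written compactly as below; here, as a simplification, the principal directions are taken from an SVD of the response sensitivity matrix rather than from the ensemble of analytical response data used in the paper, and the matrices are assumed to be supplied by the structural model.

```python
import numpy as np

def inverse_sensitivity_pca(S, d_obs, d_model, n_components):
    """One iteration of inverse sensitivity analysis projected onto the leading
    principal directions of the system response.

    S        : sensitivity matrix, d(output)/d(parameters)  (m x p)
    d_obs    : measured output                               (m,)
    d_model  : output computed from the current model        (m,)
    """
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    P = U[:, :n_components]                  # projection basis (m x k)
    # project the identification equation  S * dtheta = d_obs - d_model
    A = P.T @ S
    b = P.T @ (d_obs - d_model)
    dtheta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dtheta                            # estimated parameter perturbation
```

In an iterative model-updating loop, the returned perturbation would be applied to the model, the sensitivity matrix and analytical response recomputed, and the step repeated until convergence.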
A comparison of solute-transport solution techniques based on inverse modelling results
Mehl, S.; Hill, M.C.
2000-01-01
Five common numerical techniques (finite difference, predictor-corrector, total-variation-diminishing, method-of-characteristics, and modified-method-of-characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using randomly distributed homogeneous blocks of five sand types. This experimental model provides an outstanding opportunity to compare the solution techniques because of the heterogeneous hydraulic conductivity distribution of known structure, and the availability of detailed measurements with which to compare simulated concentrations. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation, given the different methods of simulating solute transport. The results show that simulated peak concentrations, even at very fine grid spacings, varied because of different amounts of numerical dispersion. Sensitivity analysis results were robust in that they were independent of the solution technique. They revealed extreme correlation between hydraulic conductivity and porosity, and that the breakthrough curve data did not provide enough information about the dispersivities to estimate individual values for the five sands. However, estimated hydraulic conductivity values are significantly influenced by both the large possible variations in model dispersion and the amount of numerical dispersion present in the solution technique.
Resonance ionization for analytical spectroscopy
Hurst, George S.; Payne, Marvin G.; Wagner, Edward B.
1976-01-01
This invention relates to a method for the sensitive and selective analysis of an atomic or molecular component of a gas. According to this method, the desired neutral component is ionized by one or more resonance photon absorptions, and the resultant ions are measured in a sensitive counter. Numerous energy pathways are described for accomplishing the ionization including the use of one or two tunable pulsed dye lasers.
NASA Astrophysics Data System (ADS)
Azib, M.; Baudoin, F.; Binaud, N.; Villeneuve-Faure, C.; Bugarin, F.; Segonds, S.; Teyssedre, G.
2018-04-01
Recent experimental results demonstrated that an electrostatic force distance curve (EFDC) can be used for space charge probing in thin dielectric layers. A main advantage of the method is claimed to be its sensitivity to charge localization, which, however, needs to be substantiated by numerical simulations. In this paper, we have developed a model which permits us to compute an EFDC accurately by using the most sophisticated and accurate geometry for the atomic force microscopy probe. To avoid simplifications and in order to reproduce experimental conditions, the EFDC has been simulated for a system consisting of a polarized electrode embedded in a thin dielectric layer (SiNx). The individual contributions of forces on the tip and on the cantilever have been analyzed separately to account for possible artefacts. The EFDC sensitivity to the potential distribution is studied through changes in the electrode shape, namely its width and depth. Finally, the numerical results have been compared with experimental data.
Joo, Hyun-Woo; Lee, Chang-Hwan; Rho, Jong-Seok; Jung, Hyun-Kyo
2003-08-01
In this paper, an inversion scheme for the piezoelectric constants of piezoelectric transformers is proposed. The impedance of piezoelectric transducers is calculated using a three-dimensional finite element method, and the validity of this calculation is confirmed experimentally. The effects of material coefficients on piezoelectric transformers are investigated numerically. Six material coefficient variables for piezoelectric transformers were selected, and a design sensitivity method was adopted as the inversion scheme. The validity of the proposed method was confirmed by step-up ratio calculations. The proposed method is applied to the analysis of a sample piezoelectric transformer, and its resonance characteristics are obtained by a numerically combined equivalent-circuit method.
NASA Astrophysics Data System (ADS)
Garcea, Ralph; Leigh, Barry; Wong, R. L. M.
Reduction of interior noise in propeller-driven aircraft, to levels comparable with those obtained in jet transports, has become a leading factor in the early design stages of the new generation of turboprops, and may be essential if these new designs are to succeed. The need for an analytical capability to predict interior noise is accepted throughout the turboprop aircraft industry. To this end, an analytical noise prediction program, which incorporates the SYSNOISE numerical acoustic analysis software, is under development at de Havilland. The discussion contained herein looks at the development program and how it was used in a design sensitivity analysis to optimize the structural design of the aircraft cabin for the purpose of reducing interior noise levels. This report also summarizes the validation of the SYSNOISE package using numerous classical cases from the literature.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
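A stripped-down version of the tolerance idea is easy to prototype: scan an input file for fields of the form value +/- tol, replace each with a random draw, and write one Monte Carlo case file. The file names, the uniform sampling, and the exact tolerance syntax are assumptions for illustration; the actual tool's parsing and distribution choices may differ.

```python
import re
import random

# matches fields such as "5.25 +/- 0.01" (scientific notation not handled here)
TOL = re.compile(r"(-?\d+\.?\d*)\s*\+/-\s*(\d+\.?\d*)")

def blur_line(line, rng=random):
    """Replace every 'value +/- tol' field with a uniform draw from [v-tol, v+tol]."""
    def draw(match):
        value, tol = float(match.group(1)), float(match.group(2))
        return f"{rng.uniform(value - tol, value + tol):.6g}"
    return TOL.sub(draw, line)

# one Monte Carlo realization of an input file containing tolerance fields
# ("input.template" and "input.case" are placeholder file names)
with open("input.template") as src, open("input.case", "w") as dst:
    for line in src:
        dst.write(blur_line(line))
```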
Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method is also presented for the uncertainty and sensitivity analysis of a deterministic HIV model.
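The practical stakes are easy to see with a first-order (Taylor) variance propagation, Var(Y) ≈ gᵀΣg, evaluated with and without the off-diagonal covariance terms; this is only an illustration of why input correlations matter, not the analytic method developed in the paper.

```python
import numpy as np

def first_order_variance(grad, cov):
    """First-order approximation of the output variance, Var(Y) ~ g' Sigma g,
    returned both with and without the input correlations."""
    grad = np.asarray(grad, float)
    cov = np.asarray(cov, float)
    full = grad @ cov @ grad                    # correlations included
    indep = np.sum(grad ** 2 * np.diag(cov))    # correlations ignored
    return full, indep

# example: two positively correlated inputs acting with opposite signs
g = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
var_corr, var_indep = first_order_variance(g, Sigma)   # 0.4 versus 2.0
```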
Sensitivity analysis of linear CROW gyroscopes and comparison to a single-resonator gyroscope
NASA Astrophysics Data System (ADS)
Zamani-Aghaie, Kiarash; Digonnet, Michel J. F.
2013-03-01
This study presents numerical simulations of the maximum sensitivity to absolute rotation of a number of coupled resonator optical waveguide (CROW) gyroscopes consisting of a linear array of coupled ring resonators. It examines in particular the impact on the maximum sensitivity of the number of rings, of the relative spatial orientation of the rings (folded and unfolded), of various sequences of coupling ratios between the rings and various sequences of ring dimensions, and of the number of input/output waveguides (one or two) used to inject and collect the light. In all configurations the sensitivity is maximized by proper selection of the coupling ratio(s) and phase bias, and compared to the maximum sensitivity of a resonant waveguide optical gyroscope (RWOG) utilizing a single ring-resonator waveguide with the same radius and loss as each ring in the CROW. Simulations show that although some configurations are more sensitive than others, in spite of numerous claims to the contrary made in the literature, in all configurations the maximum sensitivity is independent of the number of rings, and does not exceed the maximum sensitivity of an RWOG. There are no sensitivity benefits to utilizing any of these linear CROWs for absolute rotation sensing. For equal total footprint, an RWOG is √N times more sensitive, and it is easier to fabricate and stabilize.
A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method
NASA Astrophysics Data System (ADS)
Chen, Leilei; Zheng, Changjun; Chen, Haibo
2013-09-01
This paper presents a wideband fast multipole boundary element method (FMBEM) for two dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM) formed by combining the original FMM and the diagonal form FMM is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for the wideband FMBEM design sensitivity analysis are obtained by observing the performances of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.
A fiber-optic water flow sensor based on laser-heated silicon Fabry-Pérot cavity
NASA Astrophysics Data System (ADS)
Liu, Guigen; Sheng, Qiwen; Resende Lisboa Piassetta, Geraldo; Hou, Weilin; Han, Ming
2016-05-01
A hot-wire fiber-optic water flow sensor based on a laser-heated silicon Fabry-Pérot interferometer (FPI) has been proposed and demonstrated in this paper. The operation of the sensor is based on the convective heat loss to water from a heated silicon FPI attached to the cleaved end face of a piece of single-mode fiber. The flow-induced change in temperature is demodulated from the spectral shifts of the reflection fringes. An analytical model based on the FPI theory and heat transfer analysis has been developed for performance analysis. Numerical simulations based on finite element analysis have been conducted. The analytical and numerical results agree with each other in predicting the behavior of the sensor. Experiments have also been carried out to demonstrate the sensing principle and verify the theoretical analysis. Investigations suggest that the sensitivity at low flow rates is much larger than that at high flow rates and that the sensitivity can be easily improved by increasing the heating laser power. Experimental results show that an average sensitivity of 52.4 nm/(m/s) for the flow speed range of 1.5 mm/s to 12 mm/s was obtained with a heating power of ~12 mW, suggesting a resolution of ~1 μm/s assuming a wavelength resolution of 0.05 pm.
Convergence Estimates for Multidisciplinary Analysis and Optimization
NASA Technical Reports Server (NTRS)
Arian, Eyal
1997-01-01
A quantitative analysis of coupling between systems of equations is introduced. This analysis is then applied to problems in multidisciplinary analysis, sensitivity, and optimization. For the sensitivity and optimization problems both multidisciplinary and single discipline feasibility schemes are considered. In all these cases a "convergence factor" is estimated in terms of the Jacobians and Hessians of the system, thus it can also be approximated by existing disciplinary analysis and optimization codes. The convergence factor is identified with the measure for the "coupling" between the disciplines in the system. Applications to algorithm development are discussed. Demonstration of the convergence estimates and numerical results are given for a system composed of two non-linear algebraic equations, and for a system composed of two PDEs modeling aeroelasticity.
Multi-scale sensitivity analysis of pile installation using DEM
NASA Astrophysics Data System (ADS)
Esposito, Ricardo Gurevitz; Velloso, Raquel Quadros; , Eurípedes do Amaral Vargas, Jr.; Danziger, Bernadete Ragoni
2017-12-01
The disturbances experienced by the soil due to pile installation and dynamic soil-structure interaction still present major challenges to foundation engineers. These phenomena exhibit complex behaviors that are difficult to measure in physical tests and to reproduce in numerical models. Due to the simplified approach used by the discrete element method (DEM) to simulate large deformations and the nonlinear stress-dilatancy behavior of granular soils, the DEM constitutes an excellent tool to investigate these processes. This study presents a sensitivity analysis of the effects of introducing a single pile using the PFC2D software developed by Itasca Co. The different scales investigated in these simulations include point and shaft resistance, alterations in porosity and stress fields, and particle displacements. Several simulations were conducted in order to investigate the effects of different numerical approaches, showing indications that the method of installation and particle rotation can greatly influence the conditions around the numerical pile. Minor effects were also noted due to changes in penetration velocity and pile-soil friction. The difference in behavior of a moving and a stationary pile shows good qualitative agreement with previous experimental results, indicating the necessity of reaching force equilibrium prior to simulating any load test.
NASA Astrophysics Data System (ADS)
Anzai, Yosuke; Fukagata, Koji; Meliga, Philippe; Boujo, Edouard; Gallaire, François
2017-04-01
Flow around a square cylinder controlled using plasma actuators (PAs) is numerically investigated by direct numerical simulation in order to clarify the most effective location of actuator installation and to elucidate the mechanism of control effect. The Reynolds number based on the cylinder diameter and the free-stream velocity is set to be 100 to study the fundamental effect of PAs on two-dimensional vortex shedding, and three different locations of PAs are considered. The mean drag and the root-mean-square of lift fluctuations are found to be reduced by 51% and 99% in the case where two opposing PAs are aligned vertically on the rear surface. In that case, a jet flow similar to a base jet is generated by the collision of the streaming flows induced by the two opposing PAs, and the vortex shedding is completely suppressed. The simulation results are ultimately revisited in the frame of linear sensitivity analysis, whose computational cost is much lower than that of performing the full simulation. A good agreement is reported for low control amplitudes, which allows further discussion of the linear optimal arrangement for any number of PAs.
Sensitivity Analysis of Nuclide Importance to One-Group Neutron Cross Sections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sekimoto, Hiroshi; Nemoto, Atsushi; Yoshimura, Yoshikane
The importance of nuclides is useful when investigating nuclide characteristics in a given neutron spectrum. However, it is derived using one-group microscopic cross sections, which may contain large errors or uncertainties. The sensitivity coefficient shows the effect of these errors or uncertainties on the importance. The equations for calculating sensitivity coefficients of importance to one-group nuclear constants are derived using the perturbation method. Numerical values are also evaluated for some important cases for fast and thermal reactor systems. Many characteristics of the sensitivity coefficients are derived from the derived equations and numerical results. The matrix of sensitivity coefficients appears diagonally dominant; however, this is not always satisfied in the detailed structure. The detailed structure of the matrix and the characteristics of the coefficients are given. Using the obtained sensitivity coefficients, some demonstration calculations have been performed. The effects of error and uncertainty in the nuclear data, and of changes in the one-group cross-section input caused by fuel design changes through the neutron spectrum, are investigated. These calculations show that the sensitivity coefficient is useful when evaluating the error or uncertainty of nuclide importance caused by cross-section data error or uncertainty, and when checking the effectiveness of fuel cell or core design changes for improving neutron economy.
NASA Astrophysics Data System (ADS)
Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.
2017-12-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
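The averaging idea can be sketched as a weighted combination of per-model, per-scenario indices, with model and scenario probabilities as weights; the array shapes and weights below are invented, and the study's actual formulation averages the underlying variance decomposition rather than the finished indices, so this is only the bookkeeping skeleton.

```python
import numpy as np

def averaged_indices(indices, model_prob, scenario_prob):
    """Scenario- and model-averaged sensitivity indices.

    indices[s, m, k] : index of parameter k under scenario s and model m
    model_prob[m]    : model weights (sum to 1)
    scenario_prob[s] : scenario weights (sum to 1)
    """
    indices = np.asarray(indices, float)
    w = np.einsum("s,m->sm", scenario_prob, model_prob)   # joint weights
    return np.einsum("sm,smk->k", w, indices)

# 6 scenarios (3 temperature x 2 moisture), 4 models, 5 parameters (illustrative)
rng = np.random.default_rng(3)
S = rng.uniform(size=(6, 4, 5))
avg = averaged_indices(S, model_prob=np.full(4, 0.25), scenario_prob=np.full(6, 1 / 6))
```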
NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.
2009-02-28
The Saltstone Disposal Facility Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), and updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher resolution, largely deterministic analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code. A few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes development of the PORFLOW models supporting the SDF PA and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.
Sensitivity and uncertainty analysis for Abreu & Johnson numerical vapor intrusion model.
Ma, Jie; Yan, Guangxu; Li, Haiyan; Guo, Shaohui
2016-03-05
This study conducted one-at-a-time (OAT) sensitivity and uncertainty analysis for a numerical vapor intrusion model for nine input parameters, including soil porosity, soil moisture, soil air permeability, aerobic biodegradation rate, building depressurization, crack width, floor thickness, building volume, and indoor air exchange rate. Simulations were performed for three soil types (clay, silt, and sand), two source depths (3 and 8 m), and two source concentrations (1 and 400 g/m^3). Model sensitivity and uncertainty for shallow and high-concentration vapor sources (3 m and 400 g/m^3) are much smaller than for deep and low-concentration sources (8 m and 1 g/m^3). For high-concentration sources, soil air permeability, indoor air exchange rate, and building depressurization (for highly permeable soil like sand) are key contributors to model output uncertainty. For low-concentration sources, soil porosity, soil moisture, aerobic biodegradation rate and soil gas permeability are key contributors to model output uncertainty. Another important finding is that impacts of aerobic biodegradation on the vapor intrusion potential of petroleum hydrocarbons are negligible when the vapor source concentration is high, because of insufficient oxygen supply that limits aerobic biodegradation activities. Copyright © 2015 Elsevier B.V. All rights reserved.
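One-at-a-time sensitivity of the kind used in this study reduces to perturbing each input from its nominal value while holding the others fixed and recording the relative change in the model output; the toy model and step size below are placeholders, not the Abreu and Johnson vapor intrusion model.

```python
import numpy as np

def oat_sensitivity(model, nominal, rel_step=0.1):
    """One-at-a-time sensitivity: relative change in output when each input is
    perturbed by +rel_step while all others are held at their nominal values."""
    nominal = np.asarray(nominal, float)
    y0 = model(nominal)
    sens = np.empty(nominal.size)
    for i in range(nominal.size):
        x = nominal.copy()
        x[i] *= 1.0 + rel_step
        sens[i] = (model(x) - y0) / y0      # relative output change
    return sens

# toy indoor-concentration model of three hypothetical inputs
model = lambda p: p[0] * p[1] / p[2]
s = oat_sensitivity(model, nominal=[1.0, 2.0, 4.0])
```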
Numerical algorithm for optimization of positive electrode in lead-acid batteries
NASA Astrophysics Data System (ADS)
Murariu, Ancuta Teodora; Buimaga-Iarinca, Luiza; Morari, Cristian
2017-12-01
The positive electrode in lead-acid batteries is one of the most sensitive parts of the whole battery, since it is affected by various aggressive chemical processes during its life. Therefore, an optimal design of the positive electrode may dramatically improve the properties of the battery, such as total capacity or endurance during its life. Our efforts dedicated to this goal cover a range of rather complex tasks, from design based on numerical analysis to statistical analysis. We present the structure of the software implementation and the results obtained for three types of positive electrodes.
Numerical modeling and performance analysis of zinc oxide (ZnO) thin-film based gas sensor
NASA Astrophysics Data System (ADS)
Punetha, Deepak; Ranjan, Rashmi; Pandey, Saurabh Kumar
2018-05-01
This manuscript describes the modeling and analysis of a Zinc Oxide thin-film based gas sensor. The conductance and sensitivity of the sensing layer have been described as functions of temperature and gas concentration. The analysis has been done for reducing and oxidizing agents. Simulation results revealed the change in resistance and sensitivity of the sensor with respect to temperature and different gas concentrations. To check the feasibility of the model, all the simulated results have been compared against different experimentally reported works. Wolkenstein theory has been used to model the proposed sensor, and the simulation results have been obtained using device simulation software.
Sensitivity of Rayleigh wave ellipticity and implications for surface wave inversion
NASA Astrophysics Data System (ADS)
Cercato, Michele
2018-04-01
The use of Rayleigh wave ellipticity has gained increasing popularity in recent years for investigating earth structures, especially for near-surface soil characterization. In spite of its widespread application, the sensitivity of the ellipticity function to the soil structure has rarely been explored in a comprehensive and systematic manner. To this end, a new analytical method is presented for computing the sensitivity of Rayleigh wave ellipticity with respect to the structural parameters of a layered elastic half-space. This method takes advantage of the minor decomposition of the surface wave eigenproblem and is numerically stable at high frequency. This numerical procedure allowed the sensitivity to be retrieved for typical near-surface and crustal geological scenarios, pointing out the key parameters for ellipticity interpretation under different circumstances. On this basis, a thorough analysis is performed to assess how ellipticity data can efficiently complement surface wave dispersion information in a joint inversion algorithm. The results of synthetic and real-world examples are illustrated to quantitatively analyse the diagnostic potential of the ellipticity data with respect to the soil structure, focusing on the possible sources of misinterpretation in data inversion.
Results of an integrated structure-control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1988-01-01
Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Hou, Gene J. W.
1994-01-01
A method for eigenvalue and eigenvector approximate analysis for the case of repeated eigenvalues with distinct first derivatives is presented. The approximate analysis method developed involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations to changes in the eigenvalues and the eigenvectors associated with the repeated eigenvalue problem. This work also presents a numerical technique that facilitates the definition of an eigenvector derivative for the case of repeated eigenvalues with repeated eigenvalue derivatives (of all orders). Examples are given which demonstrate the application of such equations for sensitivity and approximate analysis. Emphasis is placed on the application of sensitivity analysis to large-scale structural and controls-structures optimization problems.
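For contrast with the repeated-eigenvalue case treated above, the familiar distinct-eigenvalue sensitivity formula dλ/dp = φᵀ(∂K/∂p − λ ∂M/∂p)φ, with mass-normalized φ, can be coded in a few lines; the 2-DOF spring-mass example and parameter choice below are purely illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def eigenvalue_derivative(K, M, dK, dM, mode=0):
    """Derivative of a *distinct* eigenvalue of K v = lam M v with respect to a
    design parameter p: dlam/dp = v' (dK/dp - lam dM/dp) v with v' M v = 1.
    (The repeated-eigenvalue case treated in the paper needs a reparameterized
    subspace formulation; this is only the standard simple-eigenvalue formula.)"""
    lam, phi = eigh(K, M)                   # mass-normalized eigenvectors
    l, v = lam[mode], phi[:, mode]
    return v @ (dK - l * dM) @ v

# 2-DOF spring-mass example: sensitivity of the first eigenvalue to stiffness k1
k1, k2, m = 2.0, 1.0, 1.0
K = np.array([[k1 + k2, -k2], [-k2, k2]])
M = np.eye(2) * m
dK = np.array([[1.0, 0.0], [0.0, 0.0]])     # dK/dk1
dM = np.zeros((2, 2))                       # mass independent of k1
dlam = eigenvalue_derivative(K, M, dK, dM, mode=0)
```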
Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott
2017-11-01
Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which unlike RANS simulations exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, "non-intrusive" LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.
Sensitivity of control-augmented structure obtained by a system decomposition method
NASA Technical Reports Server (NTRS)
Sobieszczanskisobieski, Jaroslaw; Bloebaum, Christina L.; Hajela, Prabhat
1988-01-01
The verification of a method for computing sensitivity derivatives of a coupled system is presented. The method deals with a system whose analysis can be partitioned into subsets that correspond to disciplines and/or physical subsystems that exchange input-output data with each other. The method uses the partial sensitivity derivatives of the output with respect to input obtained for each subset separately to assemble a set of linear, simultaneous, algebraic equations that are solved for the derivatives of the coupled system response. This sensitivity analysis is verified using an example of a cantilever beam augmented with an active control system to limit the beam's dynamic displacements under an excitation force. The verification shows good agreement of the method with reference data obtained by a finite difference technique involving entire system analysis. The usefulness of a system sensitivity method in optimization applications by employing a piecewise-linear approach to the same numerical example is demonstrated. The method's principal merits are its intrinsically superior accuracy in comparison with the finite difference technique, and its compatibility with the traditional division of work in complex engineering tasks among specialty groups.
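The assembled linear system at the heart of this kind of coupled-system sensitivity analysis can be sketched for two subsystems as below; the partial derivatives would come from separate disciplinary analyses, and the scalar example values are invented.

```python
import numpy as np

def coupled_system_derivatives(dY1_dY2, dY2_dY1, dY1_dx, dY2_dx):
    """Total derivatives of a two-subsystem coupled analysis with respect to a
    design variable x, assembled from the subsystems' partial derivatives:

        [  I        -dY1/dY2 ] [ dY1/dx ]   [ pY1/px ]
        [ -dY2/dY1   I       ] [ dY2/dx ] = [ pY2/px ]
    """
    n1, n2 = dY1_dx.size, dY2_dx.size
    A = np.block([[np.eye(n1), -dY1_dY2],
                  [-dY2_dY1, np.eye(n2)]])
    b = np.concatenate([dY1_dx, dY2_dx])
    d = np.linalg.solve(A, b)
    return d[:n1], d[n1:]

# scalar example: a structural output Y1 and a control output Y2 that feed each other
dY1, dY2 = coupled_system_derivatives(
    dY1_dY2=np.array([[0.2]]), dY2_dY1=np.array([[0.3]]),
    dY1_dx=np.array([0.5]), dY2_dx=np.array([0.1]))
```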
Unsteady characteristics of low-Re flow past two tandem cylinders
NASA Astrophysics Data System (ADS)
Zhang, Wei; Dou, Hua-Shu; Zhu, Zuchao; Li, Yi
2018-06-01
This study investigated the two-dimensional flow past two tandem circular or square cylinders at Re = 100 and D / d = 4-10, where D is the center-to-center distance and d is the cylinder diameter. Numerical simulation was performed to comparably study the effect of cylinder geometry and spacing on the aerodynamic characteristics, unsteady flow patterns, time-averaged flow characteristics and flow unsteadiness. We also provided the first global linear stability analysis and sensitivity analysis on the physical problem for the potential application of flow control. The objective of this work is to quantitatively identify the effect of the cylinder geometry and spacing on the characteristic quantities. Numerical results reveal that there is wake flow transition for both geometries depending on the spacing. The characteristic quantities, including the time-averaged and fluctuating streamwise velocity and pressure coefficient, are quite similar to that of the single cylinder case for the upstream cylinder, while an entirely different variation pattern is observed for the downstream cylinder. The global linear stability analysis shows that the spatial structure of perturbation is mainly observed in the wake of the downstream cylinder for small spacing, while moves upstream with reduced size and is also observed after the upstream cylinder for large spacing. The sensitivity analysis reflects that the temporal growth rate of perturbation is the most sensitive to the near-wake flow of downstream cylinder for small spacing and upstream cylinder for large spacing.
Olivieri, Alejandro C
2005-08-01
Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.
Analyses of a heterogeneous lattice hydrodynamic model with low and high-sensitivity vehicles
NASA Astrophysics Data System (ADS)
Kaur, Ramanpreet; Sharma, Sapna
2018-06-01
The basic lattice model is extended to study heterogeneous traffic by considering the optimal current difference effect on a unidirectional single-lane highway. Heterogeneous traffic consisting of low- and high-sensitivity vehicles is modeled, and its impact on the stability of mixed traffic flow is examined through linear stability analysis. The stability of flow is investigated in five distinct regions of the neutral stability diagram corresponding to the fraction of high-sensitivity vehicles present on the road. In order to investigate the propagating behavior of density waves, nonlinear analysis is performed, and near the critical point the kink-antikink soliton solution is obtained by deriving the mKdV equation. The effect of the fraction parameter corresponding to high-sensitivity vehicles is investigated, and the results indicate that stability is enhanced by the fraction parameter. The theoretical findings are verified via direct numerical simulation.
Theoretical Noise Analysis on a Position-sensitive Metallic Magnetic Calorimeter
NASA Technical Reports Server (NTRS)
Smith, Stephen J.
2007-01-01
We report on the theoretical noise analysis for a position-sensitive Metallic Magnetic Calorimeter (MMC), consisting of MMC read-out at both ends of a large X-ray absorber. Such devices are under consideration as alternatives to other cryogenic technologies for future X-ray astronomy missions. We use a finite-element model (FEM) to numerically calculate the signal and noise response at the detector outputs and investigate the correlations between the noise measured at each MMC coupled by the absorber. We then calculate, using the optimal filter concept, the theoretical energy and position resolution across the detector and discuss the trade-offs involved in optimizing the detector design for energy resolution, position resolution and count rate. The results show, theoretically, the position-sensitive MMC concept offers impressive spectral and spatial resolving capabilities compared to pixel arrays and similar position-sensitive cryogenic technologies using Transition Edge Sensor (TES) read-out.
Evaluation and Sensitivity Analysis of an Ocean Model Response to Hurricane Ivan (PREPRINT)
2009-05-18
The study region covers the northwest Caribbean Sea and Gulf of Mexico (GOM). Diminishing returns are encountered when either resolution is increased. Evaluation is difficult because ocean general circulation models incorporate a large suite of numerical algorithms.
NASA Technical Reports Server (NTRS)
Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.
1992-01-01
The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than the constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint, namely the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.
Application of design sensitivity analysis for greater improvement on machine structural dynamics
NASA Technical Reports Server (NTRS)
Yoshimura, Masataka
1987-01-01
Methodologies are presented for greatly improving machine structural dynamics by using design sensitivity analyses and evaluative parameters. First, design sensitivity coefficients and evaluative parameters of structural dynamics are described. Next, the relations between the design sensitivity coefficients and the evaluative parameters are clarified. Then, design improvement procedures for structural dynamics are proposed for the following three cases: (1) addition of elastic structural members, (2) addition of mass elements, and (3) substantial changes of joint design variables. Cases (1) and (2) correspond to changes of the initial framework or configuration, and (3) corresponds to the alteration of poor initial design variables. Finally, numerical examples are given to demonstrate the applicability of the proposed methods.
A comparative analysis of numerical approaches to the mechanics of elastic sheets
NASA Astrophysics Data System (ADS)
Taylor, Michael; Davidovitch, Benny; Qiu, Zhanlong; Bertoldi, Katia
2015-06-01
Numerically simulating deformations in thin elastic sheets is a challenging problem in computational mechanics due to destabilizing compressive stresses that result in wrinkling. Determining the location, structure, and evolution of wrinkles in these problems has important implications in design and is an area of increasing interest in the fields of physics and engineering. In this work, several numerical approaches previously proposed to model equilibrium deformations in thin elastic sheets are compared. These include standard finite element-based static post-buckling approaches as well as a recently proposed method based on dynamic relaxation, which are applied to the problem of an annular sheet with opposed tractions where wrinkling is a key feature. Numerical solutions are compared to analytic predictions of the ground state, enabling a quantitative evaluation of the predictive power of the various methods. Results indicate that static finite element approaches produce local minima that are highly sensitive to initial imperfections, relying on a priori knowledge of the equilibrium wrinkling pattern to generate optimal results. In contrast, dynamic relaxation is much less sensitive to initial imperfections and can generate low-energy solutions for a wide variety of loading conditions without requiring knowledge of the equilibrium solution beforehand.
Modified GMDH-NN algorithm and its application for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Song, Shufang; Wang, Lu
2017-11-01
Global sensitivity analysis (GSA) is a very useful tool for evaluating the influence of input variables over their whole distribution range. The Sobol' method is the most commonly used of the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that a modified GMDH-NN algorithm can calculate the coefficients of the metamodel efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages.
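As a concrete illustration of the first-order Sobol' indices discussed above, the sketch below estimates them by plain Monte Carlo with a Saltelli-style pick-and-freeze estimator on the Ishigami test function. This is a generic sketch under assumed inputs (the test function, sample size, and seed are arbitrary), not the GMDH-HDMR metamodel approach proposed in the paper.

    import numpy as np

    def ishigami(X, a=7.0, b=0.1):
        # Standard GSA test function; first-order indices are roughly 0.31, 0.44, 0.00.
        return np.sin(X[:, 0]) + a * np.sin(X[:, 1])**2 + b * X[:, 2]**4 * np.sin(X[:, 0])

    rng = np.random.default_rng(0)
    n, d = 100_000, 3
    A = rng.uniform(-np.pi, np.pi, (n, d))     # two independent input samples
    B = rng.uniform(-np.pi, np.pi, (n, d))
    fA, fB = ishigami(A), ishigami(B)
    var_y = np.var(np.concatenate([fA, fB]))

    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]                     # "freeze" all columns except the i-th
        S_i = np.mean(fB * (ishigami(AB) - fA)) / var_y
        print(f"S_{i + 1} ~ {S_i:.3f}")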
Arnold, L.R.; Langer, William H.; Paschke, Suzanne Smith
2003-01-01
Analytical solutions and numerical models were used to predict the extent of steady-state drawdown caused by mining of aggregate below the water table in hypothetical sand-and-gravel and fractured crystalline-rock aquifers representative of hydrogeologic settings in the Front Range area of Colorado. Analytical solutions were used to predict the extent of drawdown under a wide range of hydrologic and mining conditions that assume aquifer homogeneity, isotropy, and infinite extent. Numerical ground-water flow models were used to estimate the extent of drawdown under conditions that consider heterogeneity, anisotropy, and hydrologic boundaries and to simulate complex or unusual conditions not readily simulated using analytical solutions. Analytical simulations indicated that the drawdown radius (or distance) of influence increased as horizontal hydraulic conductivity of the aquifer, mine penetration of the water table, and mine radius increased; radius of influence decreased as aquifer recharge increased. Sensitivity analysis of analytical simulations under intermediate conditions in sand-and-gravel and fractured crystalline-rock aquifers indicated that the drawdown radius of influence was most sensitive to mine penetration of the water table and least sensitive to mine radius. Radius of influence was equally sensitive to changes in horizontal hydraulic conductivity and recharge. Numerical simulations of pits in sand-and-gravel aquifers indicated that the area of influence in a vertically anisotropic sand-and-gravel aquifer of medium size was nearly identical to that in an isotropic aquifer of the same size. Simulated area of influence increased as aquifer size increased and aquifer boundaries were farther away from the pit, and simulated drawdown was greater near the pit when aquifer boundaries were close to the pit. Pits simulated as lined with slurry walls caused mounding to occur upgradient from the pits and drawdown to occur downgradient from the pits. Pits simulated as refilled with water and undergoing evaporative losses had little hydrologic effect on the aquifer. Numerical sensitivity analyses for simulations of pits in sand-and-gravel aquifers indicated that simulated head was most sensitive to horizontal hydraulic conductivity and the hydraulic conductance of general-head boundaries in the models. Simulated head was less sensitive to riverbed conductance and recharge and relatively insensitive to vertical hydraulic conductivity. Numerical simulations of quarries in fractured crystalline-rock aquifers indicated that the area of influence in a horizontally anisotropic aquifer was elongated in the direction of higher horizontal hydraulic conductivity and shortened in the direction of lower horizontal hydraulic conductivity compared to area of influence in a homogeneous, isotropic aquifer. Area of influence was larger in an aquifer with ground-water flow in deep, low-permeability fractures than in a homogeneous, isotropic aquifer. Area of influence was larger for a quarry intersected by a hydraulically conductive fault zone and smaller for a quarry intersected by a low-conductivity fault zone. Numerical sensitivity analyses for simulations of quarries in fractured crystalline-rock aquifers indicated simulated head was most sensitive to variations in recharge and horizontal hydraulic conductivity, had little sensitivity to vertical hydraulic conductivity and drain cells used to simulate valleys, and was relatively insensitive to drain cells used to simulate the quarry.
Structural reliability methods: Code development status
NASA Astrophysics Data System (ADS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-05-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
Torsional vibration of a cracked rod by variational formulation and numerical analysis
NASA Astrophysics Data System (ADS)
Chondros, T. G.; Labeas, G. N.
2007-04-01
The torsional vibration of a circumferentially cracked cylindrical shaft is studied through an "exact" analytical solution and a numerical finite element (FE) analysis. The Hu-Washizu-Barr variational formulation is used to develop the differential equation and the boundary conditions of the cracked rod. The equations of motion for a uniform cracked rod in torsional vibration are derived and solved, and the Rayleigh quotient is used to further approximate the natural frequencies of the cracked rod. Results for the problem of the torsional vibration of a cylindrical shaft with a peripheral crack are provided through an analytical solution based on variational formulation to derive the equation of motion and a numerical analysis utilizing a parametric three-dimensional (3D) solid FE model of the cracked rod. The crack is modelled as a continuous flexibility based on fracture mechanics principles. The variational formulation results are compared with the FE alternative. The sensitivity of the FE discretization with respect to the analytical results is assessed.
Perturbation solutions of combustion instability problems
NASA Technical Reports Server (NTRS)
Googerdy, A.; Peddieson, J., Jr.; Ventrice, M.
1979-01-01
A method involving approximate modal analysis using the Galerkin method followed by an approximate solution of the resulting modal-amplitude equations by the two-variable perturbation method (method of multiple scales) is applied to two problems of pressure-sensitive nonlinear combustion instability in liquid-fuel rocket motors. One problem exhibits self-coupled instability while the other exhibits mode-coupled instability. In both cases it is possible to carry out the entire linear stability analysis and significant portions of the nonlinear stability analysis in closed form. In the problem of self-coupled instability the nonlinear stability boundary and approximate forms of the limit-cycle amplitudes and growth and decay rates are determined in closed form while the exact limit-cycle amplitudes and growth and decay rates are found numerically. In the problem of mode-coupled instability the limit-cycle amplitudes are found in closed form while the growth and decay rates are found numerically. The behavior of the solutions found by the perturbation method are in agreement with solutions obtained using complex numerical methods.
USDA-ARS?s Scientific Manuscript database
Accurate prediction of pesticide volatilization is important for the protection of human and environmental health. Due to the complexity of the volatilization process, sophisticated predictive models are needed, especially for dry soil conditions. A mathematical model was developed to allow simulati...
NASA Astrophysics Data System (ADS)
Pollard, Thomas B
Recent advances in microbiology, computational capabilities, and microelectromechanical-system fabrication techniques permit modeling, design, and fabrication of low-cost, miniature, sensitive and selective liquid-phase sensors and lab-on-a-chip systems. Such devices are expected to replace expensive, time-consuming, and bulky laboratory-based testing equipment. Potential applications for devices include: fluid characterization for material science and industry; chemical analysis in medicine and pharmacology; study of biological processes; food analysis; chemical kinetics analysis; and environmental monitoring. When combined with liquid-phase packaging, sensors based on surface-acoustic-wave (SAW) technology are considered strong candidates. For this reason such devices are focused on in this work; emphasis placed on device modeling and packaging for liquid-phase operation. Regarding modeling, topics considered include mode excitation efficiency of transducers; mode sensitivity based on guiding structure materials/geometries; and use of new piezoelectric materials. On packaging, topics considered include package interfacing with SAW devices, and minimization of packaging effects on device performance. In this work novel numerical models are theoretically developed and implemented to study propagation and transduction characteristics of sensor designs using wave/constitutive equations, Green's functions, and boundary/finite element methods. Using developed simulation tools that consider finite-thickness of all device electrodes, transduction efficiency for SAW transducers with neighboring uniform or periodic guiding electrodes is reported for the first time. Results indicate finite electrode thickness strongly affects efficiency. Using dense electrodes, efficiency is shown to approach 92% and 100% for uniform and periodic electrode guiding, respectively; yielding improved sensor detection limits. A numerical sensitivity analysis is presented targeting viscosity using uniform-electrode and shear-horizontal mode configurations on potassium-niobate, langasite, and quartz substrates. Optimum configurations are determined yielding maximum sensitivity. Results show mode propagation-loss and sensitivity to viscosity are correlated by a factor independent of substrate material. The analysis is useful for designing devices meeting sensitivity and signal level requirements. A novel, rapid and precise microfluidic chamber alignment/bonding method was developed for SAW platforms. The package is shown to have little effect on device performance and permits simple macrofluidic interfacing. Lastly, prototypes were designed, fabricated, and tested for viscosity and biosensor applications; results show ability to detect as low as 1% glycerol in water and surface-bound DNA crosslinking.
NASA Astrophysics Data System (ADS)
Hanoca, P.; Ramakrishna, H. V.
2018-03-01
This work develops a methodology to model and simulate TEHD lubrication using the sequential application of CFD and CSD. The FSI analyses are carried out using ANSYS Workbench. In this analysis the steady-state, 3D Navier-Stokes equations are solved along with the energy equation. Liquid properties are introduced in which the viscosity and density are functions of pressure and temperature. The cavitation phenomenon is adopted in the analysis. Numerical analyses have been carried out at different speeds and surface temperatures. During the analysis, it was found that hydrodynamic pressures increase as speed increases. The pressure profile obtained from the Roelands equation is more sensitive to temperature than that obtained from the Barus equation. The stress distributions identify the significant positions in the bearing structure. The developed method is capable of giving new insight into the physics of elastohydrodynamic lubrication.
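For reference, the two pressure-viscosity relations compared above are commonly written in the forms sketched below. The Roelands reference pressure (about 1.96e8 Pa), the pressure-viscosity index z, and the lubricant values in the example are illustrative assumptions rather than parameters from the study, and the temperature dependence used in the paper is omitted.

    import numpy as np

    def barus(p, eta0, alpha):
        # Barus: exponential pressure-viscosity relation.
        return eta0 * np.exp(alpha * p)

    def roelands_isothermal(p, eta0, z, p_r=1.96e8):
        # Commonly quoted isothermal Roelands form; eta0 in Pa*s, p and p_r in Pa.
        return eta0 * np.exp((np.log(eta0) + 9.67) * ((1.0 + p / p_r)**z - 1.0))

    p = np.linspace(0.0, 1.0e9, 5)          # contact pressures up to 1 GPa
    eta0, alpha, z = 0.04, 2.0e-8, 0.6      # assumed lubricant values for illustration
    print(barus(p, eta0, alpha))
    print(roelands_isothermal(p, eta0, z))

At high contact pressures the Barus relation grows much faster than the Roelands form, which is one reason the two are often contrasted in elastohydrodynamic lubrication work.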
Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estep, Donald; El-Azab, Anter; Pernice, Michael
2017-03-23
In this project, we will address the challenges associated with constructing high fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratories on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.
NASA Astrophysics Data System (ADS)
Siadaty, Moein; Kazazi, Mohsen
2018-04-01
Convective heat transfer, entropy generation and pressure drop of two water based nanofluids (Cu-water and Al2O3-water) in horizontal annular tubes are scrutinized by means of computational fluid dynamics, response surface methodology and sensitivity analysis. First, central composite design is used to set up a series of numerical experiments over the diameter ratio, length to diameter ratio, Reynolds number and solid volume fraction. Then, CFD is used to calculate the Nusselt number, Euler number and entropy generation. After that, RSM is applied to fit second order polynomials on the responses. Finally, sensitivity analysis is conducted for the above mentioned parameters inside the tube. In total, 62 different cases are examined. CFD results show that Cu-water and Al2O3-water have the highest and lowest heat transfer rates, respectively. In addition, analysis of variance indicates that an increase in solid volume fraction increases the dimensionless pressure drop for Al2O3-water. Moreover, it has a significant negative effect on the Cu-water Nusselt number and an insignificant effect on its Euler number. Analysis of the Bejan number indicates that frictional and thermal entropy generation are the dominant irreversibilities in the Al2O3-water and Cu-water flows, respectively. Sensitivity analysis indicates that the sensitivity of the dimensionless pressure drop to tube length for Cu-water is independent of its diameter ratio at different Reynolds numbers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten
2016-06-08
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
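As an illustration of the Elementary Effect screening measure named above, the sketch below computes Morris-style elementary effects for a small stand-in surrogate on the unit hypercube; the surrogate function, step size, and number of trajectories are arbitrary assumptions, not the laser-drilling metamodel.

    import numpy as np

    def surrogate(x):
        # Stand-in for a metamodel y = f(x1, ..., xk) on [0, 1]^k.
        return x[0]**2 + 2.0 * x[1] + 0.5 * x[0] * x[2]

    def morris_trajectory(f, k, delta, rng):
        # One trajectory: perturb each input once and record its elementary effect.
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        effects = np.empty(k)
        for i in rng.permutation(k):
            x_new = x.copy()
            x_new[i] += delta
            effects[i] = (f(x_new) - f(x)) / delta
            x = x_new
        return effects

    rng = np.random.default_rng(1)
    EE = np.array([morris_trajectory(surrogate, 3, 0.25, rng) for _ in range(50)])
    mu_star = np.abs(EE).mean(axis=0)   # mean absolute elementary effect per input
    print(mu_star)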
NASA Astrophysics Data System (ADS)
Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang
2016-06-01
In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
NASA Astrophysics Data System (ADS)
Khoei, A. R.; Samimi, M.; Azami, A. R.
2007-02-01
In this paper, an application of the reproducing kernel particle method (RKPM) is presented in plasticity behavior of pressure-sensitive material. The RKPM technique is implemented in large deformation analysis of powder compaction process. The RKPM shape function and its derivatives are constructed by imposing the consistency conditions. The essential boundary conditions are enforced by the use of the penalty approach. The support of the RKPM shape function covers the same set of particles during powder compaction, hence no instability is encountered in the large deformation computation. A double-surface plasticity model is developed in numerical simulation of pressure-sensitive material. The plasticity model includes a failure surface and an elliptical cap, which closes the open space between the failure surface and hydrostatic axis. The moving cap expands in the stress space according to a specified hardening rule. The cap model is presented within the framework of large deformation RKPM analysis in order to predict the non-uniform relative density distribution during powder die pressing. Numerical computations are performed to demonstrate the applicability of the algorithm in modeling of powder forming processes and the results are compared to those obtained from finite element simulation to demonstrate the accuracy of the proposed model.
NASA Astrophysics Data System (ADS)
Balagansky, I. A.; Stepanov, A. A.
2016-03-01
Results of numerical research into the desensitization of high explosive charges in water gap test-based experimental assemblies are presented. The experimental data are discussed, and the analysis using ANSYS AUTODYN 14.5 is provided. The desensitization phenomenon is well reproduced in numerical simulation using the JWL EOS and the Lee-Tarver kinetic equation for modeling of the initiation of heterogeneous high explosives with as well as without shock front waves. The analysis of the wave processes occurring during the initiation of the acceptor HE charge has been carried out. Peculiarities of the wave processes in the water gap test assemblies, which can influence the results of sensitivity measurement, have been studied. In particular, it has been established that precursor waves in the walls of the gap test assemblies can influence the detonation transmission distance.
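For context, the JWL equation of state referenced above gives the detonation-product pressure as a function of relative volume and internal energy. The sketch below implements the standard JWL form; the constants shown are only placeholders of roughly TNT-like magnitude and should be replaced by values from the AUTODYN material library or the literature for any real calculation.

    import numpy as np

    def jwl_pressure(V, E, A, B, R1, R2, omega):
        # Standard JWL EOS: V is relative volume v/v0, E is energy per unit
        # initial volume, A and B carry pressure units.
        return (A * (1.0 - omega / (R1 * V)) * np.exp(-R1 * V)
                + B * (1.0 - omega / (R2 * V)) * np.exp(-R2 * V)
                + omega * E / V)

    # Placeholder constants of roughly TNT-like magnitude (Pa and J/m^3).
    A, B, R1, R2, omega, E0 = 3.7e11, 3.2e9, 4.15, 0.95, 0.30, 7.0e9
    print(jwl_pressure(np.linspace(0.7, 5.0, 5), E0, A, B, R1, R2, omega))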
Numerical and experimental study on buckling and postbuckling behavior of cracked cylindrical shells
NASA Astrophysics Data System (ADS)
Saemi, J.; Sedighi, M.; Shariati, M.
2015-09-01
The effect of a crack on the load-bearing capacity and buckling behavior of cylindrical shells is an essential consideration in their design. In this paper, experimental and numerical buckling analyses of cracked steel cylindrical shells of various lengths and diameters have been carried out using the finite element method, and the effect of crack position, crack orientation and the crack length-to-cylindrical shell perimeter (λ = a/(2πr)) and shell length-to-diameter (L/D) ratios on the buckling and post-buckling behavior of cylindrical shells has been investigated. For several specimens, buckling tests were performed using an INSTRON 8802 servo hydraulic machine, and the results of the experimental tests were compared to the numerical results. A very good correlation was observed between the numerical simulations and experimental results. Finally, based on the experimental and numerical results, the sensitivity of the buckling load to the shell length, crack length and orientation has also been investigated.
NASA Astrophysics Data System (ADS)
Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.
2017-10-01
Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration and especially for computing sensitivity indices that are small in value. This is a crucial point, since even small indices may need to be estimated accurately in order to achieve a more accurate distribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
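As a small illustration of the comparison described above, the sketch below integrates a separable test function over the unit hypercube with a scrambled Sobol sequence, Latin hypercube sampling, and plain pseudo-random sampling using SciPy's quasi-Monte Carlo generators; the integrand, dimension, and sample size are arbitrary stand-ins for the Unified Danish Eulerian Model setup.

    import numpy as np
    from scipy.stats import qmc

    def integrand(x):
        # Separable test integrand on [0, 1]^d with exact integral 1.0.
        return np.prod(3.0 * x**2, axis=1)

    d, m = 6, 12                                   # dimension and 2^m points
    sobol = qmc.Sobol(d=d, scramble=True, seed=0).random_base2(m=m)
    lhs = qmc.LatinHypercube(d=d, seed=0).random(2**m)
    plain = np.random.default_rng(0).random((2**m, d))

    for name, pts in [("Sobol", sobol), ("LHS", lhs), ("plain MC", plain)]:
        print(name, abs(integrand(pts).mean() - 1.0))   # absolute integration error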
NASA Astrophysics Data System (ADS)
Rahman, M. Saifur; Anower, Md. Shamim; Hasan, Md. Rabiul; Hossain, Md. Biplob; Haque, Md. Ismail
2017-08-01
We demonstrate a highly sensitive Au-MoS2-Graphene based hybrid surface plasmon resonance (SPR) biosensor for the detection of DNA hybridization. The performance parameters of the proposed sensor are investigated in terms of sensitivity, detection accuracy and quality factor at an operating wavelength of 633 nm. We observed in the numerical study that the sensitivity can be greatly increased by adding a MoS2 layer between the gold and graphene layers. It is shown that by using a single layer of MoS2 in between the gold and graphene layers, the proposed biosensor simultaneously exhibits a high sensitivity of 87.8 deg/RIU, a high detection accuracy of 1.28 and a quality factor of 17.56 with a gold layer thickness of 50 nm. This increased performance is due to the absorption ability and optical characteristics of graphene toward biomolecules and the high fluorescence quenching ability of MoS2. On the basis of the change in SPR angle and minimum reflectance, the proposed sensor can sense the nucleotide bonding that occurs between double-stranded DNA (dsDNA) helix structures. Therefore, this sensor can successfully detect the hybridization of target DNAs to the probe DNAs pre-immobilized on the Au-MoS2-Graphene hybrid, with the capability of distinguishing a single-base mismatch.
Khan, Farman U; Qamar, Shamsul
2017-05-01
A set of analytical solutions are presented for a model describing the transport of a solute in a fixed-bed reactor of cylindrical geometry subjected to the first (Dirichlet) and third (Danckwerts) type inlet boundary conditions. Linear sorption kinetic process and first-order decay are considered. Cylindrical geometry allows the use of large columns to investigate dispersion, adsorption/desorption and reaction kinetic mechanisms. The finite Hankel and Laplace transform techniques are adopted to solve the model equations. For further analysis, statistical temporal moments are derived from the Laplace-transformed solutions. The developed analytical solutions are compared with the numerical solutions of high-resolution finite volume scheme. Different case studies are presented and discussed for a series of numerical values corresponding to a wide range of mass transfer and reaction kinetics. A good agreement was observed in the analytical and numerical concentration profiles and moments. The developed solutions are efficient tools for analyzing numerical algorithms, sensitivity analysis and simultaneous determination of the longitudinal and transverse dispersion coefficients from a laboratory-scale radial column experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
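For reference, the statistical temporal moments mentioned above are conventionally recovered from the Laplace-domain solution; a hedged sketch of the usual definitions (with \bar{C}(x,s) denoting the Laplace-transformed concentration) is

    m_n(x) = (-1)^n \lim_{s \to 0} \frac{\mathrm{d}^n \bar{C}(x,s)}{\mathrm{d}s^n}, \qquad
    \mu_1 = \frac{m_1}{m_0}, \qquad
    \mu_2' = \frac{m_2}{m_0} - \mu_1^2,

where \mu_1 is the mean residence time and \mu_2' the second central temporal moment; higher moments follow in the same way. These are generic moment relations, not expressions transcribed from the paper.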
Probabilistic structural analysis of a truss typical for space station
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.
1990-01-01
A three-bay, space, cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties and respective sensitivities associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions such as the Weibull, exponential, normal, log-normal, etc. The cumulative distribution functions (CDFs) for the response functions considered and the sensitivities associated with the primitive variables for a given response are investigated. These sensitivities help in determining the dominating primitive variables for that response.
NASA Astrophysics Data System (ADS)
Khanmirza, E.; Jamalpoor, A.; Kiani, A.
2017-10-01
In this paper, a magneto-electro-elastic nanoplate resting on a visco-Pasternak medium with added concentrated nanoparticles is presented as a mass nanosensor according to the vibration analysis. The MEE nanoplate is supposed to be subject to external electric voltage and magnetic potential. In order to take into account the size effect on the sensitivity of the sensor, the nonlocal elasticity theory in conjunction with the Kirchhoff plate theory is applied. Partial differential equations are derived by implementing Hamilton's variational principle. Equilibrium equations were solved analytically to determine an explicit closed-form statement for both the damped frequency shift and the relative damped frequency shift using Navier's approach. A genetic algorithm (GA) is employed to achieve the optimal added nanoparticle location to gain the most sensitivity performance of the nanosensor. Numerical studies are performed to illustrate the variation of the sensitivity property corresponding to various values of the number of attached nanoparticles, the mass of each nanoparticle, the nonlocal parameter, external electric voltage and magnetic potential, the aspect ratio, and visco-Pasternak parameters. Some numerical outcomes of this paper show that the minimum value of the damped frequency shift occurs for a certain value of the length-to-thickness ratio. Also, it is shown that the external magnetic and external electric potentials have a different effect on the sensitivity property. It is anticipated that the results reported in this work can be considered as a benchmark in future micro-structures issues.
Analytic Closed-Form Solution of a Mixed Layer Model for Stratocumulus Clouds
NASA Astrophysics Data System (ADS)
Akyurek, Bengu Ozge
Stratocumulus clouds play an important role in climate cooling and are hard to predict using global climate and weather forecast models. Thus, previous studies in the literature use observations and numerical simulation tools, such as large-eddy simulation (LES), to solve the governing equations for the evolution of stratocumulus clouds. In contrast to the previous works, this work provides an analytic closed-form solution to the cloud thickness evolution of stratocumulus clouds in a mixed-layer model framework. With a focus on application over coastal lands, the diurnal cycle of cloud thickness and whether or not clouds dissipate are of particular interest. An analytic solution enables the sensitivity analysis of implicitly interdependent variables and extrema analysis of cloud variables that are hard to achieve using numerical solutions. In this work, the sensitivity of inversion height, cloud-base height, and cloud thickness with respect to initial and boundary conditions, such as Bowen ratio, subsidence, surface temperature, and initial inversion height, are studied. A critical initial cloud thickness value that can be dissipated pre- and post-sunrise is provided. Furthermore, an extrema analysis is provided to obtain the minima and maxima of the inversion height and cloud thickness within 24 h. The proposed solution is validated against LES results under the same initial and boundary conditions. Then, the proposed analytic framework is extended to incorporate multiple vertical columns that are coupled by advection through wind flow. This enables a bridge between the micro-scale and the mesoscale relations. The effect of advection on cloud evolution is studied and a sensitivity analysis is provided.
Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet
2010-10-24
Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights on why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane
2017-11-07
This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamic parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory, which appear in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing the use of simple sensitivity analysis methods. The sensitivity analysis method was applied to gait dynamics and kinematics data of nine subjects with a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 segment inertial parameters out of the 150 of the mechanical model were considered as not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from only a force-plate and kinematics data, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
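To make the linearity argument above concrete, the sketch below builds one simple column-wise sensitivity index from a regressor relation tau = Y(q, qdot, qddot) * phi; the regressor here is a random stand-in rather than the 15-segment gait model, and the particular index is an illustrative choice, not necessarily the one used in the paper.

    import numpy as np

    rng = np.random.default_rng(2)
    n_samples, n_params = 500, 10
    Y = rng.normal(size=(n_samples, n_params))    # stand-in regressor stacked over a gait cycle
    phi = rng.uniform(0.5, 2.0, size=n_params)    # stand-in segment inertial parameters
    tau = Y @ phi                                 # joint moments are linear in phi

    # Relative contribution of each parameter to the moment trajectory.
    contrib = np.abs(Y) * phi
    sensitivity = contrib.sum(axis=0) / contrib.sum()
    influential = sensitivity > 0.01              # e.g. flag parameters above 1 percent
    print(np.round(sensitivity, 3), influential)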
Mechanical performance and parameter sensitivity analysis of 3D braided composites joints.
Wu, Yue; Nan, Bo; Chen, Liang
2014-01-01
3D braided composite joints are important components in CFRP trusses, with a significant influence on the reliability and weight of the structures. To investigate the mechanical performance of 3D braided composite joints, a numerical method based on microscopic mechanics is put forward, and the modeling technologies, including the selection of material constants, element type, grid size, and boundary conditions, are discussed in detail. Secondly, a method for determining the ultimate bearing capacity is established, which can account for strength failure. Finally, the effect of load parameters, geometric parameters, and process parameters on the ultimate bearing capacity of the joints is analyzed by the global sensitivity analysis method. The results show that the ultimate bearing capacity N is sensitive to the main pipe diameter-to-thickness ratio γ, the main pipe diameter D, and the braiding angle α.
A sensitivity equation approach to shape optimization in fluid flows
NASA Technical Reports Server (NTRS)
Borggaard, Jeff; Burns, John
1994-01-01
A sensitivity equation method is applied to shape optimization problems. An algorithm is developed and tested on the problem of designing optimal forebody simulators for a 2D, inviscid supersonic flow. The algorithm uses a BFGS/Trust Region optimization scheme with sensitivities computed by numerically approximating the linear partial differential equations that determine the flow sensitivities. Numerical examples are presented to illustrate the method.
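To make the idea concrete, for a discretized steady residual R(u, a) = 0 the sensitivity du/da satisfies the linear system (dR/du)(du/da) = -dR/da, obtained by differentiating the discrete equations with respect to the design parameter. The sketch below solves a toy two-equation residual this way; the residual, parameter, and values are invented for illustration and are unrelated to the forebody-simulator flow solver.

    import numpy as np

    def residual(u, a):
        # Toy steady residual R(u, a) = 0 with a single design parameter a.
        return np.array([u[0]**2 + a * u[1] - 3.0,
                         u[0] + u[1]**2 - a])

    def jacobians(u, a):
        dRdu = np.array([[2.0 * u[0], a],
                         [1.0, 2.0 * u[1]]])
        dRda = np.array([u[1], -1.0])
        return dRdu, dRda

    a, u = 2.0, np.array([1.5, 0.5])
    for _ in range(20):                       # Newton iterations for the state
        dRdu, _ = jacobians(u, a)
        u -= np.linalg.solve(dRdu, residual(u, a))

    dRdu, dRda = jacobians(u, a)
    du_da = np.linalg.solve(dRdu, -dRda)      # state sensitivity to the design parameter
    print(u, du_da)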
LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
2000-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
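As a minimal illustration of what a sensitivity coefficient with respect to a rate parameter means, the sketch below augments a toy first-order reaction with its forward sensitivity equation and integrates the pair with SciPy's stiff BDF solver. This coupled formulation is only a simple stand-in for exposition, not the decoupled direct method used by LSENS, and the rate constant and tolerances are arbitrary.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs_aug(t, z, k):
        # y' = -k*y together with its sensitivity s = dy/dk:
        # s' = (df/dy)*s + df/dk = -k*s - y
        y, s = z
        return [-k * y, -k * s - y]

    k = 2.5
    sol = solve_ivp(rhs_aug, (0.0, 2.0), [1.0, 0.0], args=(k,),
                    method="BDF", rtol=1e-8, atol=1e-10)
    y_end, s_end = sol.y[:, -1]
    print(y_end, s_end, -2.0 * np.exp(-k * 2.0))   # exact dy/dk at t = 2 is -t*exp(-k*t)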
USDA-ARS?s Scientific Manuscript database
Numerical modeling is an economical and feasible approach for quantifying the effects of best management practices on phosphorus (P) loadings from agricultural fields. However, tools that simulate both surface and subsurface P pathways are limited and have not been robustly evaluated in tile-drained...
The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science chemical transport model (CTM) capable of simulating the emission, transport and fate of numerous air pollutants. Similarly, the Weather Research and Forecasting (WRF) model is a state-of-the-science mete...
Analysis of multimode fiber bundles for endoscopic spectral-domain optical coherence tomography
Risi, Matthew D.; Makhlouf, Houssine; Rouse, Andrew R.; Gmitro, Arthur F.
2016-01-01
A theoretical analysis of the use of a fiber bundle in spectral-domain optical coherence tomography (OCT) systems is presented. The fiber bundle enables a flexible endoscopic design and provides fast, parallelized acquisition of the OCT data. However, the multimode characteristic of the fibers in the fiber bundle affects the depth sensitivity of the imaging system. A description of light interference in a multimode fiber is presented along with numerical simulations and experimental studies to illustrate the theoretical analysis. PMID:25967012
Analysis of single quantum-dot mobility inside 1D nanochannel devices
NASA Astrophysics Data System (ADS)
Hoang, H. T.; Segers-Nolten, I. M.; Tas, N. R.; van Honschoten, J. W.; Subramaniam, V.; Elwenspoek, M. C.
2011-07-01
We visualized individual quantum dots using a combination of a confining nanochannel and an ultra-sensitive microscope system, equipped with a high numerical aperture lens and a highly sensitive camera. The diffusion coefficients of the confined quantum dots were determined from the experimentally recorded trajectories according to the classical diffusion theory for Brownian motion in two dimensions. The calculated diffusion coefficients were three times smaller than those in bulk solution. These observations confirm and extend the results of Eichmann et al (2008 Langmuir 24 714-21) to smaller particle diameters and more narrow confinement. A detailed analysis shows that the observed reduction in mobility cannot be explained by conventional hydrodynamic theory.
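For context, the classical relation used for such trajectory analysis in two dimensions is MSD(tau) = 4*D*tau. The sketch below estimates D from a simulated 2D Brownian trajectory using that relation; the diffusion coefficient, frame interval, and trajectory length are illustrative assumptions, not the experimental quantum-dot values.

    import numpy as np

    rng = np.random.default_rng(3)
    D_true, dt, n_steps = 0.5e-12, 0.01, 2000        # m^2/s, s, frames (assumed)
    steps = rng.normal(0.0, np.sqrt(2.0 * D_true * dt), size=(n_steps, 2))
    traj = np.cumsum(steps, axis=0)                  # simulated 2D Brownian trajectory

    def msd(traj, lag):
        disp = traj[lag:] - traj[:-lag]
        return np.mean(np.sum(disp**2, axis=1))

    lags = np.arange(1, 21)
    msd_vals = np.array([msd(traj, lag) for lag in lags])
    D_est = np.polyfit(lags * dt, msd_vals, 1)[0] / 4.0   # slope of MSD vs tau, divided by 4
    print(D_true, D_est)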
Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions
NASA Astrophysics Data System (ADS)
Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter
2017-11-01
Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
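For reference, Dalton's model sums the component pressures evaluated at the mixture volume and temperature, while Amagat's sums the component volumes evaluated at the mixture pressure and temperature. The sketch below contrasts the two for a 1:1 He/SF6 mixture with an ideal-gas baseline, for which the models coincide; the differences reported above arise once non-ideal equations of state and shocked conditions are used, and the state values here are arbitrary.

    R = 8.314  # J/(mol K)

    def dalton_pressure(n_moles, T, V):
        # Dalton: p_mix = sum_i p_i(T, V), each species alone in the full volume.
        return sum(n * R * T / V for n in n_moles)

    def amagat_volume(n_moles, T, p):
        # Amagat: V_mix = sum_i V_i(T, p), each species alone at the mixture pressure.
        return sum(n * R * T / p for n in n_moles)

    n_he, n_sf6, T, V = 0.5, 0.5, 300.0, 0.01      # illustrative 1:1 molar state
    p_dalton = dalton_pressure([n_he, n_sf6], T, V)
    print(p_dalton, amagat_volume([n_he, n_sf6], T, p_dalton), V)  # ideal gas: models agree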
NASA Astrophysics Data System (ADS)
Sun, Guodong; Mu, Mu
2017-05-01
An important source of uncertainty, which causes further uncertainty in numerical simulations, is that residing in the parameters describing physical processes in numerical models. Therefore, finding a subset among numerous physical parameters in numerical models in the atmospheric and oceanic sciences, which are relatively more sensitive and important parameters, and reducing the errors in the physical parameters in this subset would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to ascertain the subset of those relatively more sensitive and important parameters among the physical parameters. The Lund-Potsdam-Jena (LPJ) dynamical global vegetation model was utilized to test the validity of the new approach in China. The results imply that nonlinear interactions among parameters play a key role in the identification of sensitive parameters in arid and semi-arid regions of China compared to those in northern, northeastern, and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors of the subset of relatively more sensitive and important parameters. The results demonstrate that our approach not only offers a new route to identify relatively more sensitive and important physical parameters but also that it is viable to then apply "target observations" to reduce the uncertainties in model parameters.
Exhaled breath condensate – from an analytical point of view
Dodig, Slavica; Čepelak, Ivana
2013-01-01
Over the past three decades, the goal of many researchers has been the analysis of exhaled breath condensate (EBC) as a noninvasively obtained sample. Total quality in the laboratory diagnostic process for EBC analysis was investigated across the pre-analytical (formation, collection, storage of EBC), analytical (sensitivity of applied methods, standardization) and post-analytical (interpretation of results) phases. EBC analysis is still used as a research tool. Limitations in the pre-analytical, analytical, and post-analytical phases of EBC analysis are numerous, e.g. low concentrations of EBC constituents, single-analyte methods lacking in sensitivity, multi-analyte methods not yet fully explored, and reference values not established. When all pre-analytical, analytical and post-analytical requirements are met, EBC biomarkers as well as biomarker patterns can be selected and EBC analysis can hopefully be used in clinical practice, both in diagnosis and in the longitudinal follow-up of patients, resulting in a better outcome of disease. PMID:24266297
NASA Technical Reports Server (NTRS)
Davies, Misty D.; Gundy-Burlet, Karen
2010-01-01
A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
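As a minimal sketch of the Monte Carlo Filtering idea described above, the snippet below samples a toy model, splits the inputs into behavioral and non-behavioral sets by an output threshold, and compares the two input samples with a two-sample Kolmogorov-Smirnov statistic. The model, threshold, and sample size are invented for illustration and do not represent the flight-system analysis or the principal-component-derived variables.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(4)
    n = 5000
    X = rng.uniform(-1.0, 1.0, size=(n, 3))                        # candidate input factors
    y = X[:, 0]**2 + 0.1 * X[:, 1] + 0.01 * rng.normal(size=n)     # toy output

    behavioral = y > 0.5                                           # outputs in the region of interest
    for i in range(X.shape[1]):
        stat, _ = ks_2samp(X[behavioral, i], X[~behavioral, i])
        print(f"factor {i}: KS statistic = {stat:.3f}")            # large values flag influential inputs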
Space-time adaptive solution of inverse problems with the discrete adjoint method
NASA Astrophysics Data System (ADS)
Alexe, Mihai; Sandu, Adrian
2014-08-01
This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.
NASA Astrophysics Data System (ADS)
Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue
2018-06-01
Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
Improved numerical solutions for chaotic-cancer-model
NASA Astrophysics Data System (ADS)
Yasir, Muhammad; Ahmad, Salman; Ahmed, Faizan; Aqeel, Muhammad; Akbar, Muhammad Zubair
2017-01-01
In the biological sciences, the dynamical system of the cancer model is well known for its sensitivity and chaotic behavior. The present work provides a detailed computational study of the cancer model by counterbalancing its sensitive dependence on initial conditions and parameter values. The chaotic cancer model is discretized into a system of nonlinear equations that are solved using the well-known Successive Over-Relaxation (SOR) method with proven convergence. This technique makes it possible to solve large systems and provides a more accurate approximation, which is illustrated through tables, time history maps and phase portraits with detailed analysis.
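For reference, the Successive Over-Relaxation iteration named above has the classical form sketched below for a linear system; the nonlinear cancer model would wrap such sweeps inside an outer linearization, and the matrix, right-hand side, and relaxation factor here are illustrative.

    import numpy as np

    def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
        # Classical SOR sweeps for A x = b (A assumed suitable, e.g. diagonally dominant).
        n = len(b)
        x = np.zeros(n)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                break
        return x

    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([2.0, 4.0, 10.0])
    print(sor(A, b), np.linalg.solve(A, b))   # the two solutions should agree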
Sensitivity analysis of dynamic biological systems with time-delays.
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2010-10-15
Mathematical modeling has been applied to the study and analysis of complex biological systems for a long time. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of the model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations either by the analytic method or by symbolic manipulation is time consuming, inconvenient, and prone to introduce human errors. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis on DDE models with less user intervention. By comparison with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to do dynamic sensitivity analysis on complex biological systems with time-delays.
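To illustrate the automatic-differentiation step mentioned above, the sketch below uses a minimal forward-mode dual-number class to evaluate Jacobian columns of a small right-hand side without hand-coded partial derivatives. It is an illustration of the AD concept only, not the authors' implementation, and the two-state function is invented.

    class Dual:
        """Minimal forward-mode AD value a + b*eps with eps**2 = 0."""
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.a + o.a, self.b + o.b)
        __radd__ = __add__
        def __sub__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.a - o.a, self.b - o.b)
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
        __rmul__ = __mul__

    def rhs(y):
        # Toy two-state right-hand side f(y); works on floats or Dual numbers.
        return [-2.0 * y[0] + y[0] * y[1], 3.0 * y[1] - y[0] * y[1]]

    def jacobian_column(f, y, j):
        # Seed the j-th input with derivative 1 and read off df_i/dy_j.
        seeded = [Dual(v, 1.0 if k == j else 0.0) for k, v in enumerate(y)]
        return [out.b for out in f(seeded)]

    print([jacobian_column(rhs, [1.0, 2.0], j) for j in range(2)])
    # Each inner list is one Jacobian column; at (1, 2) this gives [[0.0, -2.0], [1.0, 2.0]].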
Resonance analysis of a high temperature piezoelectric disc for sensitivity characterization.
Bilgunde, Prathamesh N; Bond, Leonard J
2018-07-01
Ultrasonic transducers for high temperature (200 °C+) applications are a key enabling technology for advanced nuclear power systems and in a range of chemical and petro-chemical industries. Design, fabrication and optimization of such transducers using piezoelectric materials remains a challenge. In this work, experimental data-based analysis is performed to investigate the fundamental causal factors for the resonance characteristics of a piezoelectric disc at elevated temperatures. The effect of all ten temperature-dependent piezoelectric constants (ε33, ε11, d33, d31, d15, s11, s12, s13, s33, s44) is studied numerically on both the radial and thickness mode resonances of a piezoelectric disc. A sensitivity index is defined to quantify the effect of each of the temperature-dependent coefficients on the resonance modes of the modified lead zirconate titanate disc. The temperature dependence of s33 showed the highest sensitivity towards the thickness resonance mode, followed by ε33, s11, s13, s12, d31, d33, s44, ε11, and d15 in decreasing order of the sensitivity index. For the radial resonance modes, the temperature dependence of ε33 showed the highest sensitivity index, followed by the s11, s12 and d31 coefficients. This numerical study demonstrates that the magnitude of d33 is not the sole factor that affects the resonance characteristics of the piezoelectric disc at high temperatures. It appears that there exists a complex interplay between various temperature-dependent piezoelectric coefficients that causes the reduction in the thickness mode resonance frequencies, which is found to be in agreement with the experimental data at an elevated temperature. Copyright © 2018 Elsevier B.V. All rights reserved.
Huang, Jiacong; Gao, Junfeng; Yan, Renhua
2016-08-15
Phosphorus (P) export from lowland polders has caused severe water pollution. Numerical models are an important resource that help water managers control P export. This study coupled three models, i.e., the Phosphorus Dynamic model for Polders (PDP), the Integrated Catchments model of Phosphorus dynamics (INCA-P) and the Universal Soil Loss Equation (USLE), to describe the P dynamics in polders. Based on the coupled models and a dataset collected from Polder Jian in China, sensitivity analyses were carried out to analyze the cause-effect relationships between environmental factors and P export from Polder Jian. The sensitivity analysis results showed that P export from Polder Jian was strongly affected by air temperature, precipitation and fertilization. Proper fertilization management should be a strategic priority for reducing P export from Polder Jian. This study demonstrated the success of the model coupling and its application in investigating potential strategies to support pollution control in polder systems. Copyright © 2016. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Lachhwani, Kailash; Poonia, Mahaveer Prasad
2012-08-01
In this paper, we show a procedure for solving multilevel fractional programming problems in a large hierarchical decentralized organization using a fuzzy goal programming approach. In the proposed method, the tolerance membership functions for the fuzzily described numerator and denominator parts of the objective functions of all levels, as well as for the control vectors of the higher level decision makers, are respectively defined by determining the individual optimal solutions of each of the level decision makers. A possible relaxation of the higher level decision is considered for avoiding decision deadlock due to the conflicting nature of the objective functions. Then, the fuzzy goal programming approach is used for achieving the highest degree of each of the membership goals by minimizing the negative deviational variables. We also provide a sensitivity analysis with variation of the tolerance values on the decision vectors to show how the solution is sensitive to changes in the tolerance values, with the help of a numerical example.
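To make the goal-programming step concrete: once linear membership functions mu_k are built for each fuzzy goal, the highest attainable degrees are sought by minimizing the negative deviational variables d_k subject to mu_k(x) + d_k >= 1. For linear memberships this reduces to a small linear program, as in the sketch below; the two objectives, aspiration levels, and bounds are invented for illustration and are not the fractional objectives of the paper (which require additional treatment of the numerator and denominator parts).

    from scipy.optimize import linprog

    # Decision vector z = [x1, x2, d1, d2]; minimize the negative deviations d1 + d2.
    c = [0.0, 0.0, 1.0, 1.0]

    # Illustrative linear membership goals written as A_ub z <= b_ub:
    #   mu1 = (x1 + 2*x2)/15       (aspiration 15, lower tolerance 0)
    #   mu2 = (4*x1 - x2 + 5)/25   (aspiration 20, lower tolerance -5)
    # Each goal mu_k + d_k >= 1 becomes -mu_k - d_k <= -1.
    A_ub = [[-1.0 / 15.0, -2.0 / 15.0, -1.0, 0.0],
            [-4.0 / 25.0,  1.0 / 25.0,  0.0, -1.0]]
    b_ub = [-1.0, -0.8]          # the constant 5/25 of mu2 has been moved to the right-hand side
    bounds = [(0.0, 5.0), (0.0, 5.0), (0.0, 1.0), (0.0, 1.0)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print(res.x)   # for this toy setup: x1 = x2 = 5 with d1 = 0 and d2 = 0.2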
Low Reynolds number numerical solutions of chaotic flow
NASA Technical Reports Server (NTRS)
Pulliam, Thomas H.
1989-01-01
Numerical computations of two-dimensional flow past an airfoil at low Mach number, large angle of attack, and low Reynolds number are reported which show a sequence of flow states leading from single-period vortex shedding to chaos via the period-doubling mechanism. Analysis of the flow in terms of phase diagrams, Poincaré sections, and flowfield variables is used to substantiate these results. The critical Reynolds number for the period-doubling bifurcations is shown to be sensitive to mesh refinement and the influence of large amounts of numerical dissipation. In extreme cases, large amounts of added dissipation can delay or completely eliminate the chaotic response. The effect of artificial dissipation at these low Reynolds numbers is to produce a new effective Reynolds number for the computations.
Research on the control of large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1983-01-01
The research effort on the control of large space structures at the University of Houston has concentrated on the mathematical theory of finite-element models; identification of the mass, damping, and stiffness matrix; assignment of damping to structures; and decoupling of structure dynamics. The objective of the work has been and will continue to be the development of efficient numerical algorithms for analysis, control, and identification of large space structures. The major consideration in the development of the algorithms has been the large number of equations that must be handled by the algorithm as well as sensitivity of the algorithms to numerical errors.
NASA Astrophysics Data System (ADS)
Islam, Syed K.; Cheng, Yin Pak; Birke, Ronald L.; Green, Omar; Kubic, Thomas; Lombardi, John R.
2018-04-01
The application of surface-enhanced Raman scattering (SERS) is reported as a fast and sensitive analytical method for the trace detection of the two most commonly known synthetic cannabinoids, AMB-FUBINACA and alpha-pyrrolidinovalerophenone (α-PVP). FUBINACA and α-PVP are two of the most dangerous synthetic cannabinoids and have been reported to cause numerous deaths in the United States. While instruments such as GC-MS and LC-MS have traditionally been recognized as analytical tools for the detection of these synthetic drugs, SERS has recently been gaining ground in their analysis due to its sensitivity in trace analysis and its effectiveness as a rapid method of detection. The present study shows a limit of detection as low as a picomolar concentration for AMB-FUBINACA, while for α-PVP the limit of detection is in the nanomolar concentration range using SERS.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1993-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include a static system; steady, one-dimensional, inviscid flow; shock-initiated reaction; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reactions, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
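LSENS itself is a Fortran code; the following Python sketch only mirrors two of the ingredients described above on a toy two-reaction mechanism: an implicit stiff integrator (BDF) and a finite-difference sensitivity of the solution with respect to a rate coefficient. The mechanism and rate constants are invented for illustration.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k1, k2):
    # Toy stiff kinetics: A -> B (fast, k1), B -> C (slow, k2)
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

def integrate(k1, k2, y0=(1.0, 0.0, 0.0), t_end=10.0):
    sol = solve_ivp(rhs, (0.0, t_end), y0, args=(k1, k2),
                    method='BDF', rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

k1, k2 = 1.0e4, 1.0e-1            # widely separated rates make the system stiff
base = integrate(k1, k2)
dk = 1e-3 * k1
sens_k1 = (integrate(k1 + dk, k2) - base) / dk   # d(species)/d(k1) at t_end
print(base, sens_k1)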
NASA Astrophysics Data System (ADS)
Kalinkina, M. E.; Kozlov, A. S.; Labkovskaia, R. I.; Pirozhnikova, O. I.; Tkalich, V. L.; Shmakov, N. A.
2018-05-01
The object of this research is the component base of control and automation system devices, including annular elastic sensing elements, methods for modeling them, calculation algorithms, and software packages that automate their design. The article is devoted to the development of a computer-aided design system for the elastic sensing elements used in weight- and force-measuring automation devices. Based on the mathematical modeling of deformation processes in a solid, as well as the results of static and dynamic analysis, the calculation of the elastic elements is carried out using the capabilities of modern numerical-simulation software. In the simulation, the model was meshed with hexagonal finite elements with a maximum size not exceeding 2.5 mm. The results of the modal and dynamic analyses are presented in this article.
Perfetti, Christopher M.; Rearden, Bradley T.
2016-03-01
The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
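The reference direct-perturbation sensitivity coefficients mentioned above can be illustrated on a toy two-group eigenvalue problem (hypothetical operators, not SCALE data or the CLUTCH/GEAR-MC algorithms): perturb one cross-section-like parameter, recompute k, and form S = (dk/k)/(dσ/σ).

import numpy as np

def k_eigenvalue(fission, losses):
    # k is the dominant eigenvalue of the fission-to-loss operator M^-1 F
    return np.max(np.abs(np.linalg.eigvals(np.linalg.solve(losses, fission))))

# Hypothetical two-group operators
F = np.array([[0.006, 0.10],
              [0.000, 0.00]])            # fission production
M = np.array([[0.030, 0.000],
              [-0.020, 0.080]])          # losses + scattering

k0 = k_eigenvalue(F, M)
rel = 1e-4
F_pert = F.copy()
F_pert[0, 1] *= (1.0 + rel)              # perturb one "cross section"
S = ((k_eigenvalue(F_pert, M) - k0) / k0) / rel
print(k0, S)                              # sensitivity of k to that parameter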
Analysis of airfoil leading edge separation bubbles
NASA Technical Reports Server (NTRS)
Carter, J. E.; Vatsa, V. N.
1982-01-01
A local inviscid-viscous interaction technique was developed for the analysis of low speed airfoil leading edge transitional separation bubbles. In this analysis an inverse boundary layer finite difference analysis is solved iteratively with a Cauchy integral representation of the inviscid flow which is assumed to be a linear perturbation to a known global viscous airfoil analysis. Favorable comparisons with data indicate the overall validity of the present localized interaction approach. In addition numerical tests were performed to test the sensitivity of the computed results to the mesh size, limits on the Cauchy integral, and the location of the transition region.
High S/N Ratio Slotted Step Piezoresistive Microcantilever Designs for Biosensors
Ansari, Mohd Zahid; Cho, Chongdu
2013-01-01
This study proposes new microcantilever designs in a slotted step configuration to improve the S/N ratio of surface stress-based sensors used in physical, chemical, biochemical and biosensor applications. The cantilevers are made of silicon dioxide with a u-shaped, p-doped silicon piezoresistor. The cantilever step length and piezoresistor length are varied along with the operating voltage to characterise the surface stress sensitivity and thermal drifting sensitivity of the cantilevers when used as immunosensors. The numerical analysis is performed using ANSYS Multiphysics. Results show the surface stress sensitivity and the S/N ratio of the slotted step cantilevers are improved by more than 32% and 22%, respectively, over their monolithic counterparts. PMID:23535637
First- and second-order sensitivity analysis of linear and nonlinear structures
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Mroz, Z.
1986-01-01
This paper employs the principle of virtual work to derive sensitivity derivatives of structural response with respect to stiffness parameters using both direct and adjoint approaches. The computations required are based on additional load conditions characterized by imposed initial strains, body forces, or surface tractions. As such, they are equally applicable to numerical or analytical solution techniques. The relative efficiency of various approaches for calculating first and second derivatives is assessed. It is shown that for the evaluation of second derivatives the most efficient approach is one that makes use of both the first-order sensitivities and adjoint vectors. Two example problems are used for demonstrating the various approaches.
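A minimal linear-algebra sketch of the direct versus adjoint routes to a response sensitivity, for a generic stiffness system K(p)u = f with response g = c^T u; the matrices are assumed for illustration and do not reproduce the paper's virtual-work formulation.

import numpy as np

# K(p) u = f, response g = c^T u, single design parameter p scaling part of K
K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])
dK = np.array([[1.0, 0.0], [0.0, 0.5]])   # dK/dp (assumed)
f = np.array([1.0, 2.0])
c = np.array([0.0, 1.0])
p = 1.0

K = K0 + p * dK
u = np.linalg.solve(K, f)

# Direct method: solve K du = -dK u, then dg = c^T du
du = np.linalg.solve(K, -dK @ u)
dg_direct = c @ du

# Adjoint method: solve K^T lam = c, then dg = -lam^T dK u
lam = np.linalg.solve(K.T, c)
dg_adjoint = -lam @ (dK @ u)

print(dg_direct, dg_adjoint)   # identical up to round-off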
Cavity-Enhanced Absorption Spectroscopy and Photoacoustic Spectroscopy for Human Breath Analysis
NASA Astrophysics Data System (ADS)
Wojtas, J.; Tittel, F. K.; Stacewicz, T.; Bielecki, Z.; Lewicki, R.; Mikolajczyk, J.; Nowakowski, M.; Szabra, D.; Stefanski, P.; Tarka, J.
2014-12-01
This paper describes two different optoelectronic detection techniques: cavity-enhanced absorption spectroscopy and photoacoustic spectroscopy. These techniques are designed to perform sensitive analysis of trace gas species in exhaled human breath for medical applications. With such systems, the detection of pathogenic changes at the molecular level can be achieved. The presence of certain gases (biomarkers) at increased concentration levels indicates numerous human diseases. Diagnosis of a disease in its early stage would significantly increase the chances for effective therapy. Non-invasive, real-time measurement, high sensitivity and selectivity, and minimal discomfort for patients are the main advantages of human breath analysis. At present, monitoring of volatile biomarkers in breath is commonly useful for diagnostic screening, treatment of specific conditions, therapy monitoring, and control of exogenous gases (such as bacterial and poisonous emissions), as well as for the analysis of metabolic gases.
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
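A minimal sketch of the core step on a toy two-species system: build a finite-difference sensitivity matrix along a trajectory, take its singular value decomposition, and count the singular values above a tolerance as the locally active dynamical modes. The kinetics, constants and tolerance are assumptions, not the authors' error-controlled algorithm.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Toy kinetics with fast/slow coupling (hypothetical constants)
    s, c = y
    return [-10.0 * s + 5.0 * c, 10.0 * s - 6.0 * c]

def state_at(y0, t=1.0):
    return solve_ivp(rhs, (0.0, t), y0, rtol=1e-9, atol=1e-12).y[:, -1]

y0 = np.array([1.0, 0.0])
base = state_at(y0)
eps = 1e-6
# Sensitivity of the state at t=1 to each initial condition, column by column
S = np.column_stack([(state_at(y0 + eps * e) - base) / eps for e in np.eye(2)])
sigma = np.linalg.svd(S, compute_uv=False)
active_modes = int(np.sum(sigma > 1e-3 * sigma[0]))
print(sigma, active_modes)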
Local amplification of storm surge by Super Typhoon Haiyan in Leyte Gulf.
Mori, Nobuhito; Kato, Masaya; Kim, Sooyoul; Mase, Hajime; Shibutani, Yoko; Takemi, Tetsuya; Tsuboki, Kazuhisa; Yasuda, Tomohiro
2014-07-28
Typhoon Haiyan, which struck the Philippines in November 2013, was an extremely intense tropical cyclone that had a catastrophic impact. The minimum central pressure of Typhoon Haiyan was 895 hPa, making it the strongest typhoon to make landfall on a major island in the western North Pacific Ocean. The characteristics of Typhoon Haiyan and its related storm surge are estimated by numerical experiments using numerical weather prediction models and a storm surge model. Based on the analysis of best hindcast results, the storm surge level was 5-6 m and local amplification of water surface elevation due to seiche was found to be significant inside Leyte Gulf. The numerical experiments show the coherent structure of the storm surge profile due to the specific bathymetry of Leyte Gulf and the Philippines Trench as a major contributor to the disaster in Tacloban. The numerical results also indicated the sensitivity of storm surge forecast.
NASA Astrophysics Data System (ADS)
Aasi, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Accadia, T.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Affeldt, C.; Agathos, M.; Aggarwal, N.; Aguiar, O. D.; Ain, A.; Ajith, P.; Alemic, A.; Allen, B.; Allocca, A.; Amariutei, D.; Andersen, M.; Anderson, R.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C.; Areeda, J.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Austin, L.; Aylott, B. E.; Babak, S.; Baker, P. T.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barbet, M.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Bauchrowitz, J.; Bauer, Th S.; Behnke, B.; Bejger, M.; Beker, M. G.; Belczynski, C.; Bell, A. S.; Bell, C.; Bergmann, G.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Beyersdorf, P. T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biscans, S.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bloemen, S.; Blom, M.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bond, C.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, Sukanta; Bosi, L.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brückner, F.; Buchman, S.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burman, R.; Buskulic, D.; Buy, C.; Cadonati, L.; Cagnoli, G.; Calderón Bustillo, J.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Castiglia, A.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Celerier, C.; Cella, G.; Cepeda, C.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chu, Q.; Chua, S. S. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C.; Colombini, M.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corpuz, A.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coughlin, S.; Coulon, J.-P.; Countryman, S.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Dal Canton, T.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daveloza, H.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Dayanga, T.; Debreczeni, G.; Degallaix, J.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Donath, A.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dossa, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dwyer, S.; Eberle, T.; Edo, T.; Edwards, M.; Effler, A.; Eggenstein, H.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Endrőczi, G.; Essick, R.; Etzel, T.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fehrmann, H.; Fejer, M. M.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. 
P.; Flaminio, R.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gair, J.; Gammaitoni, L.; Gaonkar, S.; Garufi, F.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, C.; Gleason, J.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gordon, N.; Gorodetsky, M. L.; Gossan, S.; Goßler, S.; Gouaty, R.; Gräf, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Groot, P.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hammer, D.; Hammond, G.; Hanke, M.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hart, M.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Heptonstall, A. W.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Hooper, S.; Hopkins, P.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hu, Y.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh, M.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Iyer, B. R.; Izumi, K.; Jacobson, M.; James, E.; Jang, H.; Jaranowski, P.; Ji, Y.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karlen, J.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Keiser, G. M.; Keitel, D.; Kelley, D. B.; Kells, W.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, C.; Kim, K.; Kim, N.; Kim, N. G.; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Koehlenbeck, S.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kremin, A.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, A.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Kwee, P.; Landry, M.; Lantz, B.; Larson, S.; Lasky, P. D.; Lawrie, C.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C.-H.; Lee, H. K.; Lee, H. M.; Lee, J.; Leonardi, M.; Leong, J. R.; Le Roux, A.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B.; Lewis, J.; Li, T. G. F.; Libbrecht, K.; Libson, A.; Lin, A. C.; Littenberg, T. B.; Litvine, V.; Lockerbie, N. A.; Lockett, V.; Lodhia, D.; Loew, K.; Logue, J.; Lombardi, A. L.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M. J.; Lück, H.; Luijten, E.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macarthur, J.; Macdonald, E. P.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana-Sandoval, F.; Mageswaran, M.; Maglione, C.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Manca, G. M.; Mandel, I.; Mandic, V.; Mangano, V.; Mangini, N.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Martinelli, L.; Martynov, D.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; McLin, K.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Meinders, M.; Melatos, A.; Mendell, G.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyers, P.; Miao, H.; Michel, C.; Mikhailov, E. 
E.; Milano, L.; Milde, S.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Moesta, P.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nagy, M. F.; Nanda Kumar, D.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nelemans, G.; Neri, I.; Neri, M.; Newton, G.; Nguyen, T.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Ochsner, E.; O'Dell, J.; Oelker, E.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oppermann, P.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Padilla, C.; Pai, A.; Palashov, O.; Palomba, C.; Pan, H.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Paris, H.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Penn, S.; Perreca, A.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poeld, J.; Poggiani, R.; Poteomkin, A.; Powell, J.; Prasad, J.; Premachandra, S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Qin, J.; Quetschke, V.; Quintero, E.; Quiroga, G.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Raja, S.; Rajalakshmi, G.; Rakhmanov, M.; Ramet, C.; Ramirez, K.; Rapagnani, P.; Raymond, V.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Reid, S.; Reitze, D. H.; Rhoades, E.; Ricci, F.; Riles, K.; Robertson, N. A.; Robinet, F.; Rocchi, A.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sanders, J. R.; Sannibale, V.; Santiago-Prieto, I.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R.; Scheuer, J.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siellez, K.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Singh, R.; Sintes, A. M.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Son, E. J.; Sorazu, B.; Souradeep, T.; Sperandio, L.; Staley, A.; Stebbins, J.; Steinlechner, J.; Steinlechner, S.; Stephens, B. C.; Steplewski, S.; Stevenson, S.; Stone, R.; Stops, D.; Strain, K. A.; Straniero, N.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thirugnanasambandam, M. P.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Urbanek, K.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; van Heijningen, J.; van Veggel, A. A.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. 
J.; Venkateswara, K.; Verkindt, D.; Verma, S. S.; Vetrano, F.; Viceré, A.; Vincent-Finley, R.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Vousden, W. D.; Vyachanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Wang, M.; Wang, X.; Ward, R. L.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Williams, K.; Williams, L.; Williams, R.; Williams, T.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yancey, C. C.; Yang, H.; Yang, Z.; Yoshida, S.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, Fan; Zhang, L.; Zhao, C.; Zhu, X. J.; Zucker, M. E.; Zuraw, S.; Zweizig, J.; Boyle, M.; Brügmann, B.; Buchman, L. T.; Campanelli, M.; Chu, T.; Etienne, Z. B.; Hannam, M.; Healy, J.; Hinder, I.; Kidder, L. E.; Laguna, P.; Liu, Y. T.; London, L.; Lousto, C. O.; Lovelace, G.; MacDonald, I.; Marronetti, P.; Mösta, P.; Müller, D.; Mundim, B. C.; Nakano, H.; Paschalidis, V.; Pekowsky, L.; Pollney, D.; Pfeiffer, H. P.; Ponce, M.; Pürrer, M.; Reifenberger, G.; Reisswig, C.; Santamaría, L.; Scheel, M. A.; Shapiro, S. L.; Shoemaker, D.; Sopuerta, C. F.; Sperhake, U.; Szilágyi, B.; Taylor, N. W.; Tichy, W.; Tsatsin, P.; Zlochower, Y.
2014-06-01
The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave (GW) astrophysics communities. The purpose of NINJA is to study the ability to detect GWs emitted from merging binary black holes (BBH) and recover their parameters with next-generation GW observatories. We report here on the results of the second NINJA project, NINJA-2, which employs 60 complete BBH hybrid waveforms consisting of a numerical portion modelling the late inspiral, merger, and ringdown stitched to a post-Newtonian portion modelling the early inspiral. In a ‘blind injection challenge’ similar to that conducted in recent Laser Interferometer Gravitational Wave Observatory (LIGO) and Virgo science runs, we added seven hybrid waveforms to two months of data recoloured to predictions of Advanced LIGO (aLIGO) and Advanced Virgo (AdV) sensitivity curves during their first observing runs. The resulting data was analysed by GW detection algorithms and 6 of the waveforms were recovered with false alarm rates smaller than 1 in a thousand years. Parameter-estimation algorithms were run on each of these waveforms to explore the ability to constrain the masses, component angular momenta and sky position of these waveforms. We find that the strong degeneracy between the mass ratio and the BHs’ angular momenta will make it difficult to precisely estimate these parameters with aLIGO and AdV. We also perform a large-scale Monte Carlo study to assess the ability to recover each of the 60 hybrid waveforms with early aLIGO and AdV sensitivity curves. Our results predict that early aLIGO and AdV will have a volume-weighted average sensitive distance of 300 Mpc (1 Gpc) for 10M⊙ + 10M⊙ (50M⊙ + 50M⊙) BBH coalescences. We demonstrate that neglecting the component angular momenta in the waveform models used in matched-filtering will result in a reduction in sensitivity for systems with large component angular momenta. This reduction is estimated to be up to ˜15% for 50M⊙ + 50M⊙ BBH coalescences with almost maximal angular momenta aligned with the orbit when using early aLIGO and AdV sensitivity curves.
Wang, Chia-Chen; Lai, Yin-Hung; Ou, Yu-Meng; Chang, Huan-Tsung; Wang, Yi-Sheng
2016-01-01
Quantitative analysis with mass spectrometry (MS) is important but challenging. Matrix-assisted laser desorption/ionization (MALDI) coupled with time-of-flight (TOF) MS offers superior sensitivity, resolution and speed, but such techniques have numerous disadvantages that hinder quantitative analyses. This review summarizes essential obstacles to analyte quantification with MALDI-TOF MS, including the complex ionization mechanism of MALDI, sensitive characteristics of the applied electric fields and the mass-dependent detection efficiency of ion detectors. General quantitative ionization and desorption interpretations of ion production are described. Important instrument parameters and available methods of MALDI-TOF MS used for quantitative analysis are also reviewed. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644968
Rapid solution of large-scale systems of equations
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.
1994-01-01
The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization, and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared-memory computers and distributed-memory computers. This presentation covers general-purpose, highly efficient algorithms for the generation/assembly of element matrices, the solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared-memory computers and many are adapted to distributed-memory computers. The capability and numerical performance of these algorithms will be addressed.
A sub-sampled approach to extremely low-dose STEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A.; Luzi, L.; Yang, H.
The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e⁻ Å⁻²) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in-situ dynamic processes at the resolution limit of the aberration-corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
NASA Astrophysics Data System (ADS)
Ferrara, R.; Leonardi, G.; Jourdan, F.
2013-09-01
A numerical model to predict train-induced vibrations is presented. The dynamic computation considers the mutual interactions in the vehicle/track coupled system by means of a finite and discrete element method. Rail defects and the case of out-of-round wheels are considered. The dynamic interaction between the wheel-sets and the rail is modelled using the non-linear Hertzian model with hysteresis damping. A sensitivity analysis is carried out to evaluate the variables that most affect the maintenance costs. The rail-sleeper contact is assumed to extend over an area-defined contact zone, rather than the single-point assumption, which fits real case studies better. Experimental validations show that the predictions fit the experimental data well.
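A minimal sketch of a non-linear Hertzian contact force with hysteresis damping in a common Hunt-Crossley-type form; the stiffness, exponent and damping constants are placeholders, not the calibrated wheel-rail values used in the paper.

def hertz_contact_force(delta, delta_dot, k_h=1.0e9, alpha=1.5, c_h=0.3):
    """Normal contact force for penetration delta (m) and penetration rate delta_dot (m/s).

    F = k_h * delta**alpha * (1 + c_h * delta_dot)   when delta > 0 (contact)
    F = 0                                            when delta <= 0 (separation)
    The velocity-dependent term produces the hysteresis (energy loss) during impact.
    """
    if delta <= 0.0:
        return 0.0
    return k_h * delta**alpha * (1.0 + c_h * delta_dot)

# Loading and unloading at the same penetration give different forces -> hysteresis loop
print(hertz_contact_force(1e-4, +0.01), hertz_contact_force(1e-4, -0.01))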
Polarization properties of amyloid-beta plaques in Alzheimer's disease (Conference Presentation)
NASA Astrophysics Data System (ADS)
Baumann, Bernhard; Wöhrer, Adelheid; Ricken, Gerda; Pircher, Michael; Kovacs, Gabor G.; Hitzenberger, Christoph K.
2016-03-01
In histopathological practice, birefringence is used for the identification of amyloidosis in numerous tissues. Amyloid birefringence is caused by the parallel arrangement of fibrous protein aggregates. Since neurodegenerative processes in Alzheimer's disease (AD) are also linked to the formation of amyloid-beta (Aβ) plaques, optical methods sensitive to birefringence may act as non-invasive tools for Aβ identification. At last year's Photonics West, we demonstrated polarization-sensitive optical coherence tomography (PS-OCT) imaging of ex vivo cerebral tissue of advanced stage AD patients. PS-OCT provides volumetric, structural imaging based on both backscatter contrast and tissue polarization properties. In this presentation, we report on polarization-sensitive neuroimaging along with numerical simulations of three-dimensional Aβ plaques. High speed PS-OCT imaging was performed using a spectral domain approach based on polarization maintaining fiber optics. The sample beam was interfaced to a confocal scanning microscope arrangement. Formalin-fixed tissue samples as well as thin histological sections were imaged. For comparison to the PS-OCT results, ray propagation through plaques was modeled using Jones analysis and various illumination geometries and plaque sizes. Characteristic polarization patterns were found. The results of this study may not only help to understand PS-OCT imaging of neuritic Aβ plaques but may also have implications for polarization-sensitive imaging of other fibrillary structures.
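A minimal Jones-calculus sketch of the kind of forward model such simulations build on: a plaque treated as a linear retarder acting on polarized light. The retardance, axis orientation and single-pass geometry are assumptions for illustration; the actual simulations also account for scattering, plaque size and double-pass propagation.

import numpy as np

def retarder(delta, theta):
    """Jones matrix of a linear retarder with retardance delta and fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    D = np.array([[np.exp(-1j * delta / 2), 0], [0, np.exp(1j * delta / 2)]])
    return R @ D @ R.T

# Horizontally polarized light passing once through a plaque modelled as a retarder
E_in = np.array([1.0, 0.0])
E_out = retarder(delta=np.pi / 6, theta=np.deg2rad(30)) @ E_in
# Fraction of power coupled into the cross-polarized channel (the PS-OCT observable)
print(np.abs(E_out[1])**2)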
Nijs, Jo; Van Houdenhove, Boudewijn; Oostendorp, Rob A B
2010-04-01
Central sensitization plays an important role in the pathophysiology of numerous musculoskeletal pain disorders, yet it remains unclear how manual therapists can recognize this condition. Therefore, mechanism-based clinical guidelines for the recognition of central sensitization in patients with musculoskeletal pain are provided. By using our current understanding of central sensitization during the clinical assessment of patients with musculoskeletal pain, manual therapists can apply the science of nociceptive and pain processing neurophysiology to the practice of manual therapy. The diagnosis/assessment of central sensitization in individual patients with musculoskeletal pain is not straightforward; however, manual therapists can use information obtained from the medical diagnosis, combined with the medical history of the patient, as well as the clinical examination and the analysis of the treatment response, in order to recognize central sensitization. The clinical examination used to recognize central sensitization entails the distinction between primary and secondary hyperalgesia. Copyright 2009 Elsevier Ltd. All rights reserved.
Advanced Numerical Model for Irradiated Concrete
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giorla, Alain B.
In this report, we establish a numerical model for concrete exposed to irradiation to address these three critical points. The model accounts for creep in the cement paste and its coupling with damage, temperature and relative humidity. The shift in failure mode with the loading rate is also properly represented. The numerical model for creep has been validated and calibrated against different experiments in the literature [Wittmann, 1970, Le Roy, 1995]. Results from a simplified model are shown to showcase the ability of numerical homogenization to simulate irradiation effects in concrete. In future works, the complete model will be applied to the analysis of the irradiation experiments of Elleuch et al. [1972] and Kelly et al. [1969]. This requires a careful examination of the experimental environmental conditions, as in both cases certain critical information is missing, including the relative humidity history. A sensitivity analysis will be conducted to provide lower and upper bounds of the concrete expansion under irradiation, and to check whether the scatter in the simulated results matches that found in experiments. The numerical and experimental results will be compared in terms of expansion and loss of mechanical stiffness and strength. Both effects should be captured accordingly by the model to validate it. Once the model has been validated on these two experiments, it can be applied to simulate concrete from nuclear power plants. To do so, the materials used in these concretes must be as well characterized as possible. The main parameters required are the mechanical properties of each constituent in the concrete (aggregates, cement paste), namely the elastic modulus, the creep properties, the tensile and compressive strength, the thermal expansion coefficient, and the drying shrinkage. These can be either measured experimentally, estimated from the initial composition in the case of cement paste, or back-calculated from mechanical tests on concrete. If some are unknown, a sensitivity analysis must be carried out to provide lower and upper bounds of the material behaviour. Finally, the model can be used as a basis to formulate a macroscopic material model for concrete subject to irradiation, which later can be used in structural analyses to estimate the structural impact of irradiation on nuclear power plants.
Effect of current vehicle’s interruption on traffic stability in cooperative car-following theory
NASA Astrophysics Data System (ADS)
Zhang, Geng; Liu, Hui
2017-12-01
To reveal the impact of the current vehicle’s interruption information on traffic flow, a new car-following model with consideration of the current vehicle’s interruption is proposed and the influence of the current vehicle’s interruption on traffic stability is investigated through theoretical analysis and numerical simulation. By linear analysis, the linear stability condition of the new model is obtained and the negative influence of the current vehicle’s interruption on traffic stability is shown in the headway-sensitivity space. Through nonlinear analysis, the modified Korteweg-de Vries (mKdV) equation of the new model near the critical point is derived and it can be used to describe the propagating behavior of the traffic density wave. Finally, numerical simulation confirms the analytical results, which shows that the current vehicle’s interruption information can destabilize traffic flow and should be considered in real traffic.
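Since the paper's interruption term is not reproduced here, the sketch below simulates only an underlying optimal-velocity car-following model on a ring road, which is the kind of baseline such an extension and its stability analysis start from; the OV function, parameters and perturbation are assumed for illustration.

import numpy as np

def optimal_velocity(h, v_max=2.0, h_c=4.0):
    # Standard OV function; the paper's model adds an interruption term on top of this.
    return 0.5 * v_max * (np.tanh(h - h_c) + np.tanh(h_c))

def simulate(n=50, road=200.0, a=1.5, t_end=100.0, dt=0.1):
    """Ring-road OV model: dv_i/dt = a*(V(h_i) - v_i), h_i = x_{i+1} - x_i."""
    x = np.linspace(0.0, road, n, endpoint=False)
    x[0] += 0.5                                # small perturbation of one car
    v = np.full(n, optimal_velocity(road / n))
    for _ in range(int(t_end / dt)):
        h = np.roll(x, -1) - x
        h[-1] += road                          # leader of the last car is car 0, one lap ahead
        v += dt * a * (optimal_velocity(h) - v)
        x += dt * v
    h = np.roll(x, -1) - x
    h[-1] += road
    return h

h_final = simulate()
print(h_final.min(), h_final.max())            # growing headway spread signals instability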
On numerical integration and computer implementation of viscoplastic models
NASA Technical Reports Server (NTRS)
Chang, T. Y.; Chang, J. P.; Thompson, R. L.
1985-01-01
Due to the stringent design requirements for aerospace or nuclear structural components, considerable research interest has been generated in the development of constitutive models for representing the inelastic behavior of metals at elevated temperatures. In particular, a class of unified theories (or viscoplastic constitutive models) has been proposed to simulate material responses such as cyclic plasticity, rate sensitivity, creep deformations, and strain hardening or softening. This approach differs from conventional creep and plasticity theory in that both the creep and plastic deformations are treated as unified time-dependent quantities. Although most viscoplastic models give a better representation of material behavior, the associated constitutive differential equations have stiff regimes which present numerical difficulties in time-dependent analysis. In this connection, appropriate solution algorithms must be developed for viscoplastic analysis via the finite element method.
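A minimal sketch of implicit (backward-Euler) integration of a one-dimensional Perzyna-type viscoplastic law under a prescribed strain rate, as a stand-in for the stiff unified constitutive equations discussed above; the model form and constants are assumptions, not any specific unified theory.

import numpy as np
from scipy.optimize import brentq

E, sigma_y, eta, m = 200e3, 250.0, 1.0e3, 2.0   # assumed material constants (MPa-based units)

def vp_rate(sigma):
    over = max(abs(sigma) - sigma_y, 0.0)        # overstress beyond the yield stress
    return (over / eta)**m * np.sign(sigma)

def step(eps_total, eps_vp_old, dt):
    """Backward Euler: find eps_vp with eps_vp = eps_vp_old + dt*vp_rate(E*(eps_total - eps_vp))."""
    g = lambda evp: evp - eps_vp_old - dt * vp_rate(E * (eps_total - evp))
    return brentq(g, eps_vp_old - 1.0, eps_total + 1.0)

eps_rate, dt, eps_vp = 1e-3, 0.05, 0.0
for k in range(1, 101):                          # ramp the total strain at a constant rate
    eps = eps_rate * k * dt
    eps_vp = step(eps, eps_vp, dt)
print(E * (eps - eps_vp))                         # stress approaches a rate-dependent plateau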
NASA Technical Reports Server (NTRS)
Kendall, B. R.
1979-01-01
Theoretical and numerical analyses were made of planar, cylindrical and spherical electrode time-of-flight mass spectrometers in order to optimize their operating conditions. A numerical analysis of potential barrier gating in time-of-flight spectrometers was also made. The results were used in the design of several small mass spectrometers. These were constructed and tested in a laboratory space simulator. Detailed experimental studies of a miniature cylindrical electrode time of flight mass spectrometer and of a miniature hemispherical electrode time of flight mass spectrometer were made. The extremely high sensitivity of these instruments and their ability to operate at D region pressures with an open source make them ideal instruments for D region ion composition measurements.
Numerical Analysis on the High-Strength Concrete Beams Ultimate Behaviour
NASA Astrophysics Data System (ADS)
Smarzewski, Piotr; Stolarski, Adam
2017-10-01
The development of production technologies for high-strength concrete (HSC) beams, with the aim of creating a secure and durable material, is closely linked with numerical models of real objects. Three-dimensional nonlinear finite element models of reinforced high-strength concrete beams with a complex geometry have been investigated in this study. The numerical analysis is performed using the ANSYS finite element package. The arc-length (A-L) parameters and the adaptive descent (AD) parameters are used with the Newton-Raphson method to trace the complete load-deflection curves. Experimental and finite element modelling results are compared graphically and numerically. The comparison of these results indicates the correctness of the failure criteria assumed for the high-strength concrete and the steel reinforcement. The results of the numerical simulation are sensitive to the modulus of elasticity and the shear transfer coefficient for an open crack assigned to the high-strength concrete. The full nonlinear load-deflection curves at mid-span of the beams, the development of strain in the compressive concrete and the development of strain in the tensile bars are in good agreement with the experimental results. Numerical results for the smeared crack patterns agree qualitatively with the test data as to location, direction, and distribution. The model was capable of predicting the initiation and propagation of flexural and diagonal cracks. It was concluded that the finite element model successfully captured the inelastic flexural behaviour of the beams to failure.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
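A minimal sketch of the first-order eigenvalue derivative for a non-hermitian matrix, dλ/dp = y^H (dA/dp) x / (y^H x) with right and left eigenvectors x and y, checked against a finite difference; the matrices are generic examples, not the structural operators considered in the paper.

import numpy as np

A  = np.array([[2.0, 1.0], [0.5, 3.0]])
dA = np.array([[0.1, 0.0], [0.0, 0.2]])   # dA/dp (assumed)

w, X = np.linalg.eig(A)
wl, Y = np.linalg.eig(A.T)                # left eigenvectors of A are eigenvectors of A^T
i = np.argmax(w.real)                     # track one eigenvalue
j = np.argmin(np.abs(wl - w[i]))          # matching left eigenvector
x, y = X[:, i], Y[:, j]

dlam = (y.conj() @ dA @ x) / (y.conj() @ x)

# Finite-difference check
eps = 1e-6
w_pert = np.linalg.eig(A + eps * dA)[0]
dlam_fd = (w_pert[np.argmin(np.abs(w_pert - w[i]))] - w[i]) / eps
print(dlam, dlam_fd)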
Cortical geometry as a determinant of brain activity eigenmodes: Neural field analysis
NASA Astrophysics Data System (ADS)
Gabay, Natasha C.; Robinson, P. A.
2017-09-01
Perturbation analysis of neural field theory is used to derive eigenmodes of neural activity on a cortical hemisphere, which have previously been calculated numerically and found to be close analogs of spherical harmonics, despite heavy cortical folding. The present perturbation method treats cortical folding as a first-order perturbation from a spherical geometry. The first nine spatial eigenmodes on a population-averaged cortical hemisphere are derived and compared with previous numerical solutions. These eigenmodes contribute most to brain activity patterns such as those seen in electroencephalography and functional magnetic resonance imaging. The eigenvalues of these eigenmodes are found to agree with the previous numerical solutions to within their uncertainties. Also in agreement with the previous numerics, all eigenmodes are found to closely resemble spherical harmonics. The first seven eigenmodes exhibit a one-to-one correspondence with their numerical counterparts, with overlaps that are close to unity. The next two eigenmodes overlap the corresponding pair of numerical eigenmodes, having been rotated within the subspace spanned by that pair, likely due to second-order effects. The spatial orientations of the eigenmodes are found to be fixed by gross cortical shape rather than finer-scale cortical properties, which is consistent with the observed intersubject consistency of functional connectivity patterns. However, the eigenvalues depend more sensitively on finer-scale cortical structure, implying that the eigenfrequencies and consequent dynamical properties of functional connectivity depend more strongly on details of individual cortical folding. Overall, these results imply that well-established tools from perturbation theory and spherical harmonic analysis can be used to calculate the main properties and dynamics of low-order brain eigenmodes.
Numerical approaches to combustion modeling. Progress in Astronautics and Aeronautics. Vol. 135
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oran, E.S.; Boris, J.P.
1991-01-01
Various papers on numerical approaches to combustion modeling are presented. The topics addressed include: ab initio quantum chemistry for combustion; rate coefficient calculations for combustion modeling; numerical modeling of combustion of complex hydrocarbons; combustion kinetics and sensitivity analysis computations; reduction of chemical reaction models; length scales in laminar and turbulent flames; numerical modeling of laminar diffusion flames; laminar flames in premixed gases; spectral simulations of turbulent reacting flows; vortex simulation of reacting shear flow; and combustion modeling using PDF methods. Also considered are: supersonic reacting internal flow fields; studies of detonation initiation, propagation, and quenching; numerical modeling of heterogeneous detonations; deflagration-to-detonation transition in reactive granular materials; toward a microscopic theory of detonations in energetic crystals; overview of spray modeling; liquid drop behavior in dense and dilute clusters; spray combustion in idealized configurations: parallel drop streams; comparisons of deterministic and stochastic computations of drop collisions in dense sprays; ignition and flame spread across solid fuels; numerical study of pulse combustor dynamics; mathematical modeling of enclosure fires; and nuclear systems.
NASA Astrophysics Data System (ADS)
Fernandez, P.; Wang, Q.
2017-12-01
We investigate the impact of numerical discretization on the Lyapunov spectrum of separated flow simulations. The two-dimensional chaotic flow around the NACA 0012 airfoil at a low Reynolds number and large angle of attack is considered to that end. Time, space and accuracy-order refinement studies are performed to examine each of these effects separately. Numerical results show that the time discretization has a small impact on the dynamics of the system, whereas the spatial discretization can dramatically change them. Also, the finite-time Lyapunov exponents associated to unstable modes are shown to be positively skewed, and quasi-homoclinic tangencies are observed in the attractor of the system. The implications of these results on flow physics and sensitivity analysis of chaotic flows are discussed.
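The finite-time Lyapunov exponents themselves are typically computed with a QR (Benettin-type) reorthonormalization of the tangent dynamics; the sketch below applies that procedure to the Lorenz system as a small stand-in for the discretized flow solver used in the paper.

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = u
    return [s * (y - x), x * (r - z) - y, x * y - b * z]

def jac(u, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = u
    return np.array([[-s, s, 0.0], [r - z, -1.0, -x], [y, x, -b]])

def rhs(t, w):
    u, Q = w[:3], w[3:].reshape(3, 3)
    return np.concatenate([lorenz(t, u), (jac(u) @ Q).ravel()])

u = np.array([1.0, 1.0, 1.0])
Q = np.eye(3)
dt, n_steps = 0.5, 400
lyap_sum = np.zeros(3)
for _ in range(n_steps):
    w = solve_ivp(rhs, (0.0, dt), np.concatenate([u, Q.ravel()]),
                  rtol=1e-9, atol=1e-9).y[:, -1]
    u, M = w[:3], w[3:].reshape(3, 3)
    Q, R = np.linalg.qr(M)
    lyap_sum += np.log(np.abs(np.diag(R)))     # finite-time exponents accumulate here
print(lyap_sum / (n_steps * dt))               # roughly (0.9, 0, -14.6) for the Lorenz system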
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
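A minimal sketch of the first of the four techniques, the prediction interval attached to a least-squares fit, on a hypothetical single-input data set; the data-generating mechanism and the linear metamodel are assumptions, and the Bayesian, Gaussian-process and interval-predictor variants are not shown.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(x.size)   # assumed data-generating mechanism

X = np.column_stack([np.ones_like(x), x])                # linear metamodel y ~ b0 + b1*x
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = x.size - 2
s2 = resid @ resid / dof
XtX_inv = np.linalg.inv(X.T @ X)

def prediction_interval(x_new, level=0.95):
    v = np.array([1.0, x_new])
    se = np.sqrt(s2 * (1.0 + v @ XtX_inv @ v))           # variance of a new observation
    t = stats.t.ppf(0.5 + level / 2.0, dof)
    mean = v @ beta
    return mean - t * se, mean + t * se

print(prediction_interval(0.5))   # interval expected to contain ~95% of new observations at x=0.5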
Detailed analysis of the Japanese version of the Rapid Dementia Screening Test, revised version.
Moriyama, Yasushi; Yoshino, Aihide; Muramatsu, Taro; Mimura, Masaru
2017-11-01
The number-transcoding task on the Japanese version of the Rapid Dementia Screening Test (RDST-J) requires mutual conversion between Arabic and Chinese numerals (209 and 4054 from Arabic to Chinese numerals, and 681 and 2027 from Chinese numerals to Arabic). In this task, the Chinese-numeral questions and answers are written horizontally. We investigated the impact of changing the task so that the Chinese numerals are written vertically. Subjects were 211 patients with very mild to severe Alzheimer's disease and 42 normal controls. Mini-Mental State Examination scores ranged from 26 to 12, and Clinical Dementia Rating scores ranged from 0.5 to 3. The scores of all four subtasks of the transcoding task significantly improved in the revised version compared with the original version. The sensitivity and specificity of total scores ≥9 on the RDST-J for discriminating between controls and subjects with Clinical Dementia Rating scores of 0.5 were 63.8% and 76.6% on the original version and 60.1% and 85.8% on the revised version. The revised RDST-J total score thus had lower sensitivity and higher specificity than the original RDST-J for discriminating subjects with Clinical Dementia Rating scores of 0.5 from controls. © 2017 Japanese Psychogeriatric Society.
Application of optimal control strategies to HIV-malaria co-infection dynamics
NASA Astrophysics Data System (ADS)
Fatmawati; Windarto; Hanif, Lathifah
2018-03-01
This paper presents a mathematical model of HIV and malaria co-infection transmission dynamics. Optimal control strategies such as malaria prevention, anti-malaria and antiretroviral (ARV) treatments are considered in the model to reduce the co-infection. First, we studied the existence and stability of the equilibria of the presented model without control variables. The model has four equilibria, namely the disease-free equilibrium, the HIV endemic equilibrium, the malaria endemic equilibrium, and the co-infection equilibrium. We also obtain two basic reproduction ratios corresponding to the diseases. It was found that the disease-free equilibrium is locally asymptotically stable whenever the respective basic reproduction numbers are less than one. We also conducted a sensitivity analysis to determine the dominant factors controlling the transmission. Then, the optimal control theory for the model was derived analytically by using the Pontryagin Maximum Principle. Numerical simulations of the optimal control strategies are also performed to illustrate the results. From the numerical results, we conclude that the best strategy is to combine the malaria prevention and ARV treatments in order to reduce the malaria and HIV co-infected populations.
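A common way to carry out such a sensitivity analysis is the normalized forward sensitivity index, Υ_p = (∂R0/∂p)(p/R0); the sketch below evaluates it symbolically for a hypothetical R0 expression, since the paper's actual reproduction ratios for HIV and malaria are not reproduced here.

import sympy as sp

beta, b, mu, gamma = sp.symbols('beta b mu gamma', positive=True)
# Hypothetical malaria-like basic reproduction ratio (illustrative form only)
R0 = sp.sqrt(beta * b / (mu * (mu + gamma)))

params = {beta: 0.3, b: 20.0, mu: 0.05, gamma: 0.1}
for p in (beta, b, mu, gamma):
    index = sp.diff(R0, p) * p / R0          # normalized forward sensitivity index
    print(p, float(index.subs(params)))       # sign shows whether p increases or decreases R0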
Uncertainty Quantification and Assessment of CO2 Leakage in Groundwater Aquifers
NASA Astrophysics Data System (ADS)
Carroll, S.; Mansoor, K.; Sun, Y.; Jones, E.
2011-12-01
The complexity of subsurface aquifers and the geochemical reactions that control drinking water compositions complicate our ability to estimate the impact of leaking CO2 on groundwater quality. We combined lithologic field data from the High Plains Aquifer, numerical simulations, and uncertainty quantification analysis to assess the role of aquifer heterogeneity and physical transport on the extent of the CO2-impacted plume over a 100-year period. The High Plains aquifer is a major aquifer over much of the central United States, where CO2 may be sequestered in depleted oil and gas reservoirs or deep saline formations. Input parameters considered included aquifer heterogeneity, permeability, porosity, regional groundwater flow, CO2 and TDS leakage rates over time, and the number of leakage source points. The sensitivity analysis suggests that variations in sand and clay permeability, correlation lengths, van Genuchten parameters, and CO2 leakage rate have the greatest impact on the impacted volume and the maximum distance from the leak source. A key finding is that the relative sensitivity of the parameters changes over the 100-year period. Reduced-order models developed from regression of the numerical simulations show that the volume of the CO2-impacted aquifer increases over time, with a variance of two orders of magnitude.
Optimal dynamic pricing for deteriorating items with reference-price effects
NASA Astrophysics Data System (ADS)
Xue, Musen; Tang, Wansheng; Zhang, Jianxiong
2016-07-01
In this paper, a dynamic pricing problem for deteriorating items with the consumers' reference-price effect is studied. An optimal control model is established to maximise the total profit, where the demand not only depends on the current price, but also is sensitive to the historical price. The continuous-time dynamic optimal pricing strategy with reference-price effect is obtained through solving the optimal control model on the basis of Pontryagin's maximum principle. In addition, numerical simulations and sensitivity analysis are carried out. Finally, some managerial suggestions that firm may adopt to formulate its pricing policy are proposed.
Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende
2014-01-01
Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY that considers the carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the plant-production-related parameters (e.g., PPDF1 and PRDX) are most sensitive to the model cost function. Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of the target variables. Global sensitivity and uncertainty analysis indicates that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R functions such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
Numerical simulation of supersonic inlets using a three-dimensional viscous flow analysis
NASA Technical Reports Server (NTRS)
Anderson, B. H.; Towne, C. E.
1980-01-01
A three dimensional fully viscous computer analysis was evaluated to determine its usefulness in the design of supersonic inlets. This procedure takes advantage of physical approximations to limit the high computer time and storage associated with complete Navier-Stokes solutions. Computed results are presented for a Mach 3.0 supersonic inlet with bleed and a Mach 7.4 hypersonic inlet. Good agreement was obtained between theory and data for both inlets. Results of a mesh sensitivity study are also shown.
NASA Astrophysics Data System (ADS)
Shah, Nita H.; Soni, Hardik N.; Gupta, Jyoti
2014-08-01
In a recent paper, Begum et al. (2012, International Journal of Systems Science, 43, 903-910) established pricing and replenishment policy for an inventory system with a price-sensitive demand rate, a time-proportional deterioration rate that follows a three-parameter Weibull distribution, and no shortages. In their model formulation, it is observed that the retailer's stock level reaches zero before the deterioration occurs. Consequently, the model reduces to a traditional inventory model with a price-sensitive demand rate and no shortages. Hence, the main purpose of this note is to modify and present a complete model formulation for Begum et al. (2012). The proposed model is validated by a numerical example and the sensitivity analysis of parameters is carried out.
Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang
2015-01-01
A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1x9 beam-splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviations of the 9 outgoing beam energies in the optimized gratings were 2.3 and 3.4 times lower than for the optimal-efficiency grating. PMID:25969268
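For a thin phase grating the output-beam energies are the squared Fourier coefficients of exp(i*phi(x)), and a fabrication error can be modelled crudely as a scaling of the etch depth. The sketch below evaluates an integrated-variance objective of that kind in this simplified setting; the sample phase profile and error range are placeholders, not the optimized designs of the paper.

```python
import numpy as np

N = 512
x = np.arange(N) / N
# Placeholder binary phase profile over one grating period (not an optimized design).
phase = 0.55 * np.pi * np.sign(np.sin(2 * np.pi * x))

def order_efficiencies(phi, n_orders=4):
    """Diffraction efficiencies of orders -n_orders..+n_orders for a thin phase grating."""
    c = np.fft.fft(np.exp(1j * phi)) / phi.size                  # Fourier coefficients of exp(i*phi)
    orders = np.concatenate([c[-n_orders:], c[:n_orders + 1]])   # orders -4, ..., +4 (9 beams)
    return np.abs(orders) ** 2

# Fabrication error modelled as a scaling of the etch depth (phase amplitude).
errors = np.linspace(0.9, 1.1, 21)
effs = np.array([order_efficiencies(e * phase) for e in errors])

print("nominal order efficiencies:", np.round(order_efficiencies(phase), 3))
# Discrete approximation of the integrated variance over the error range.
print("integrated variance over error range:", float(np.sum(np.var(effs, axis=0))))
```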
Numerical modelling of distributed vibration sensor based on phase-sensitive OTDR
NASA Astrophysics Data System (ADS)
Masoudi, A.; Newson, T. P.
2017-04-01
A distributed vibration sensor based on phase-sensitive OTDR is numerically modeled. The advantage of modeling the building blocks of the sensor individually and combining the blocks to analyse the behavior of the sensing system is discussed. It is shown that the numerical model can accurately imitate the response of the experimental setup to dynamic perturbations using a signal processing procedure similar to that used to extract the phase information from the sensing setup.
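At the simplest level, a phase-sensitive OTDR trace can be pictured as the coherent sum of backscatter from many randomly placed scatterers within the probe-pulse width, so a local phase perturbation changes the interference pattern from that location onward. The sketch below is such a toy model, not the building-block model of the paper; the fibre length, scatterer density, pulse width, and perturbation are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)

L = 2000.0                                   # fibre length [m]
n_scatter = 20000
z = np.sort(rng.uniform(0.0, L, n_scatter))  # random scatterer positions
refl = rng.normal(size=n_scatter) + 1j * rng.normal(size=n_scatter)  # complex reflectivities
k = 2 * np.pi * 1.468 / 1.55e-6              # optical wavenumber in silica (approximate)

def trace(extra_phase=0.0, z_event=1000.0, pulse=10.0, dz=2.0):
    """Backscatter intensity vs. distance for a rectangular probe pulse.

    A perturbation at z_event adds phase to all light that has travelled past it.
    """
    phi = 2.0 * k * z + np.where(z > z_event, extra_phase, 0.0)
    field = refl * np.exp(1j * phi)
    zs = np.arange(0.0, L, dz)
    out = np.empty(zs.size)
    for i, z0 in enumerate(zs):              # coherent sum over the pulse width
        sel = (z >= z0) & (z < z0 + pulse)
        out[i] = np.abs(field[sel].sum()) ** 2
    return zs, out

zs, quiet = trace(0.0)
_, perturbed = trace(0.5)                    # 0.5 rad perturbation near 1 km
delta = np.abs(perturbed - quiet)
print("intensity change first appears near z =", zs[np.argmax(delta > 0.1 * quiet.mean())], "m")
```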
Basic research for the geodynamics program
NASA Technical Reports Server (NTRS)
1991-01-01
The mathematical models of space very long base interferometry (VLBI) observables suitable for least squares covariance analysis were derived and estimatability problems inherent in the space VLBI system were explored, including a detailed rank defect analysis and sensitivity analysis. An important aim is to carry out a comparative analysis of the mathematical models of the ground-based VLBI and space VLBI observables in order to describe the background in detail. Computer programs were developed in order to check the relations, assess errors, and analyze sensitivity. In order to investigate the estimatability of different geodetic and geodynamic parameters from the space VLBI observables, the mathematical models for time delay and time delay rate observables of space VLBI were analytically derived along with the partial derivatives with respect to the parameters. Rank defect analysis was carried out both by analytical and numerical testing of linear dependencies between the columns of the normal matrix thus formed. Definite conclusions were formed about the rank defects in the system.
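Rank-defect analysis of this kind ultimately reduces to testing for linear dependencies among the columns of the normal matrix, for example through its singular values. The sketch below shows that generic numerical test on a small synthetic design matrix; it is not the space-VLBI observation model itself, and the dependent column is contrived for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy design (Jacobian) matrix: 4 parameters, but the 4th column is a linear
# combination of the first two, mimicking a non-estimable parameter combination.
A = rng.normal(size=(50, 4))
A[:, 3] = 2.0 * A[:, 0] - A[:, 1]

N = A.T @ A                                  # normal matrix
s = np.linalg.svd(N, compute_uv=False)       # singular values, largest first
rank = int(np.sum(s > s[0] * 1e-10))         # tolerance relative to the largest value
print("singular values:", np.round(s, 6))
print("numerical rank:", rank, "=> rank defect:", N.shape[0] - rank)
```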
Using models to manage systems subject to sustainability indicators
Hill, M.C.
2006-01-01
Mathematical and numerical models can provide insight into sustainability indicators using relevant simulated quantities, which are referred to here as predictions. To be useful, many concerns need to be considered. Four are discussed here: (a) mathematical and numerical accuracy of the model; (b) the accuracy of the data used in model development; (c) the information observations provide to aspects of the model important to predictions of interest, as measured using sensitivity analysis; and (d) the existence of plausible alternative models for a given system. The four issues are illustrated using examples from conservative and transport modelling, and using conceptual arguments. Results suggest that ignoring these issues can produce misleading conclusions.
Analysis of Composite Panels Subjected to Thermo-Mechanical Loads
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Peters, Jeanne M.
1999-01-01
The results of a detailed study of the effect of a cutout on the nonlinear response of curved unstiffened panels are presented. The panels are subjected to a through-the-thickness temperature gradient combined with pressure loading and edge shortening or edge shear. The analysis is based on a first-order, shear deformation, Sanders-Budiansky-type shell theory with the effects of large displacements, moderate rotations, transverse shear deformation, and laminated anisotropic material behavior included. A mixed formulation is used with the fundamental unknowns consisting of the generalized displacements and the stress resultants of the panel. The nonlinear displacements, strain energy, principal strains, transverse shear stresses, transverse shear strain energy density, and their hierarchical sensitivity coefficients are evaluated. The hierarchical sensitivity coefficients measure the sensitivity of the nonlinear response to variations in the panel parameters, as well as in the material properties of the individual layers. Numerical results are presented for cylindrical panels and show the effects of variations in the loading and the size of the cutout on the global and local response quantities as well as their sensitivity to changes in the various panel, layer, and micromechanical parameters.
Polyhedral meshing in numerical analysis of conjugate heat transfer
NASA Astrophysics Data System (ADS)
Sosnowski, Marcin; Krzywanski, Jaroslaw; Grabowska, Karolina; Gnatowska, Renata
2018-06-01
Computational methods have been widely applied in conjugate heat transfer analysis. The very first and crucial step in such research is the meshing process, which consists in dividing the analysed geometry into numerous small control volumes (cells). In Computational Fluid Dynamics (CFD) applications it is desirable to use hexahedral cells, as the resulting mesh is characterized by low numerical diffusion. Unfortunately, generating such a mesh can be a very time-consuming task, and in the case of complicated geometry it may not be possible to generate cells of good quality. Therefore tetrahedral cells have been implemented into commercial pre-processors. Their advantage is the ease of their generation even in the case of very complex geometry. On the other hand, tetrahedrons cannot be stretched excessively without decreasing the mesh quality factor, so a significantly larger number of cells has to be used in comparison with a hexahedral mesh in order to achieve reasonable accuracy. Moreover, the numerical diffusion of tetrahedral elements is significantly higher. Therefore polyhedral cells are proposed within the paper in order to combine the advantages of hexahedrons (low numerical diffusion resulting in an accurate solution) and tetrahedrons (rapid semi-automatic generation) as well as to overcome the disadvantages of both of the above mentioned mesh types. The major benefit of a polyhedral mesh is that each individual cell has many neighbours, so gradients can be well approximated. Polyhedrons are also less sensitive to stretching than tetrahedrons, which results in better mesh quality leading to improved numerical stability of the model. In addition, numerical diffusion is reduced due to mass exchange over numerous faces. This leads to a more accurate solution achieved with a lower cell count. Therefore a detailed comparison of numerical modelling results concerning conjugate heat transfer using tetrahedral and polyhedral meshes is presented in the paper.
NASA Astrophysics Data System (ADS)
Sun, Guodong; Mu, Mu
2016-04-01
An important source of uncertainty, which then causes further uncertainty in numerical simulations, is that residing in the parameters describing physical processes in numerical models. There are many physical parameters in numerical models in the atmospheric and oceanic sciences, and it would cost a great deal to reduce uncertainties in all physical parameters. Therefore, finding a subset of these parameters, which are relatively more sensitive and important parameters, and reducing the errors in the physical parameters in this subset would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to ascertain the subset of those relatively more sensitive and important parameters among the physical parameters. The Lund-Potsdam-Jena (LPJ) dynamical global vegetation model was utilized to test the validity of the new approach. The results imply that nonlinear interactions among parameters play a key role in the uncertainty of numerical simulations in arid and semi-arid regions of China compared to those in northern, northeastern and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors of the subset of relatively more sensitive and important parameters. The results demonstrate that our approach not only offers a new route to identify relatively more sensitive and important physical parameters but also that it is viable to then apply "target observations" to reduce the uncertainties in model parameters.
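The CNOP-P idea can be summarised as searching, within prescribed bounds, for the parameter perturbation that maximises a nonlinear measure of the departure of the simulation from a reference run. The sketch below applies that recipe to a toy two-parameter model; the model, bounds, and cost measure are illustrative assumptions, not the LPJ setup of the study.

```python
import numpy as np
from scipy.optimize import minimize

# Toy nonlinear "model output" as a function of two physical parameters.
def simulate(p):
    return np.sin(p[0]) * np.exp(0.5 * p[1]) + 0.3 * p[0] * p[1]

p_ref = np.array([0.8, 0.2])                  # reference parameter values
delta = 0.1                                   # allowed perturbation magnitude per parameter

def neg_cost(dp):
    # CNOP-P-style cost: departure of the perturbed simulation from the reference run,
    # negated because we maximise it with a minimiser.
    return -abs(simulate(p_ref + dp) - simulate(p_ref))

res = minimize(neg_cost, x0=np.zeros(2), bounds=[(-delta, delta)] * 2, method="L-BFGS-B")
print("worst-case parameter perturbation:", np.round(res.x, 3))
print("maximal simulation departure:", round(-res.fun, 4))
```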
NASA Technical Reports Server (NTRS)
Alexander, J. Iwan D.; Zhang, Y. Q.; Adebiyi, Adebimpe
1989-01-01
Progress performed on each task is described. Order of magnitude analyses related to liquid zone sensitivity and thermo-capillary flow sensitivity are covered. Progress with numerical models of the sensitivity of isothermal liquid zones is described. Progress towards a numerical model of coupled buoyancy-driven and thermo-capillary convection experiments is also described. Interaction with NASA personnel is covered. Results to date are summarized and they are discussed in terms of the predicted space station acceleration environment. Work planned for the second year is also discussed.
NASA Astrophysics Data System (ADS)
Wagener, Thorsten; Pianosi, Francesca
2016-04-01
Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in earth and environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. Here we provide some practical advice regarding best practice in SA and discuss important open questions based on a detailed recent review of the existing body of work in SA. Open questions relate to the consideration of input factor interactions, methods for factor mapping and the formal inclusion of discrete factors in SA (for example for model structure comparison). We will analyse these questions using relevant examples and discuss possible ways forward. We aim at stimulating the discussion within the community of SA developers and users regarding the setting of good practices and on defining priorities for future research.
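A common quantitative starting point for the variance-based branch of SA discussed here is the first-order Sobol index. The sketch below estimates it with a standard Saltelli-type Monte Carlo estimator on the Ishigami test function, which stands in for an environmental model; the sample size and function are arbitrary choices for demonstration.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Standard Ishigami test function, standing in for an environmental model."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(4)
N, d = 50_000, 3
A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                              # replace column i of A with that of B
    S1 = np.mean(fB * (ishigami(ABi) - fA)) / var    # Saltelli (2010) first-order estimator
    print(f"first-order Sobol index S{i + 1} = {S1:.3f}")
```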
Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
The overarching theme was the domain decomposition, which intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, or the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrical complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives have gravitated about the extensions and implementations of either the previously developed or concurrently being developed methodologies: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
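The paper's specific sensitivity measure is not reproduced here, but a simple perturbation-style analysis of a fitted model conveys the flavour: scramble one input at a time and record how much predictive performance drops. The sketch below does this for a logistic model on synthetic data; the data, the model, and the roles of the features are assumptions for illustration, not the TRISS variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Synthetic survival-style data: the outcome is driven mostly by features 0 and 1.
n = 5000
X = rng.normal(size=(n, 4))
logit = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * X[:, 2]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
base_acc = model.score(X, y)

# Permutation-style input sensitivity: accuracy drop when one input is scrambled.
for i in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, i] = rng.permutation(Xp[:, i])
    print(f"input {i}: accuracy drop = {base_acc - model.score(Xp, y):.3f}")
```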
Sensitivity Analysis to Turbulent Combustion Models for Combustor-Turbine Interactions
NASA Astrophysics Data System (ADS)
Miki, Kenji; Moder, Jeff; Liou, Meng-Sing
2017-11-01
The recently updated Open National Combustion Code (Open NCC) equipped with a large-eddy simulation (LES) capability is applied to model the flow field inside the Energy Efficient Engine (EEE) in conjunction with a sensitivity analysis of turbulent combustion models. In this study, we consider three different turbulence-combustion interaction models, the Eddy-Breakup model (EBU), the Linear-Eddy Model (LEM) and the Probability Density Function (PDF) model, as well as the laminar chemistry model. A comprehensive comparison of the flow field and the flame structure will be provided. One of our main interests is to understand how the different models predict thermal variation on the surface of the first-stage vane. Considering that these models are often used in combustor/turbine communities, this study should provide some guidelines on the numerical modeling of combustor-turbine interactions.
Simulation analysis of an integrated model for dynamic cellular manufacturing system
NASA Astrophysics Data System (ADS)
Hao, Chunfeng; Luan, Shichao; Kong, Jili
2017-05-01
Application of a dynamic cellular manufacturing system (DCMS) is a well-known strategy to improve manufacturing efficiency in a production environment with high variety and low volume of production. Often, neither the trade-off of inter- and intra-cell material movements nor the trade-off of hiring and firing of operators is examined in detail. This paper presents simulation results of an integrated mixed-integer model, including sensitivity analysis, for several numerical examples. The comprehensive model includes cell formation, inter- and intra-cellular materials handling, inventory and backorder holding, operator assignment (including resource adjustment) and flexible production routing. The model considers multi-period production planning with flexible resources (machines and operators) where each period has different demands. The results verify the validity and sensitivity of the proposed model using a genetic algorithm.
Sensitivity Analysis of the Static Aeroelastic Response of a Wing
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.
1993-01-01
A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of such things as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
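The trade-off described above, a global Chebyshev fit versus panel-based local interpolation of the pressure field, can be illustrated on a synthetic pressure distribution with a sharp local feature; the profile, fit order, and panel count below are arbitrary choices for demonstration, not the wing data of the study.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Synthetic chordwise "pressure" with a sharp local feature (e.g., near a shock).
x = np.linspace(-1.0, 1.0, 200)
cp = np.tanh(20 * (x - 0.3)) + 0.2 * np.sin(3 * x)

# Global Chebyshev fit versus piecewise-linear (panel-style) interpolation.
cheb = C.Chebyshev.fit(x, cp, deg=12)
x_panels = np.linspace(-1.0, 1.0, 40)
cp_panels = np.interp(x_panels, x, cp)

err_global = np.max(np.abs(cheb(x) - cp))
err_local = np.max(np.abs(np.interp(x, x_panels, cp_panels) - cp))
print(f"max error, 12th-order Chebyshev fit   : {err_global:.3f}")
print(f"max error, 40-panel linear interpolant: {err_local:.3f}")
```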
Optimal control analysis of Ebola disease with control strategies of quarantine and vaccination.
Ahmad, Muhammad Dure; Usman, Muhammad; Khan, Adnan; Imran, Mudassar
2016-07-13
The 2014 Ebola epidemic is the largest in history, affecting multiple countries in West Africa. Some isolated cases were also observed in other regions of the world. In this paper, we introduce a deterministic SEIR-type model with additional hospitalization, quarantine and vaccination components in order to understand the disease dynamics. Optimal control strategies, both in the case of hospitalization (with and without quarantine) and vaccination, are used to predict the possible future outcome in terms of resource utilization for disease control and the effectiveness of vaccination on sick populations. Further, with the help of uncertainty and sensitivity analysis, we also identify the most sensitive parameters, those that most effectively change the disease dynamics. We performed mathematical analysis, using dynamical systems tools, numerical simulations and optimal control strategies, on our Ebola virus models. The original model, which allowed transmission of Ebola virus via human contact, was extended to include imperfect vaccination and quarantine. After the qualitative analysis of all three forms of the Ebola model, numerical techniques, using MATLAB as a platform, were formulated and analyzed in detail. Our simulation results support the claims made in the qualitative section. Our model incorporates an important component of individuals with a high risk of exposure to the disease, such as front-line health care workers, family members of EVD patients and individuals involved in the burial of deceased EVD patients, rather than the general population in the affected areas. Our analysis suggests that in order for R0 (i.e., the basic reproduction number) to be less than one, which is the basic requirement for disease elimination, the transmission rate of isolated individuals should be less than one-fourth of that for non-isolated ones. Our analysis also predicts that we need high levels of medication and hospitalization at the beginning of an epidemic. Further, optimal control analysis of the model suggests control strategies that may be adopted by public health authorities in order to reduce the impact of epidemics like Ebola.
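The compartmental backbone of such models is an ODE system whose behaviour is governed largely by the basic reproduction number. The sketch below integrates a bare SEIR model and shows how lowering the effective transmission rate, as isolation aims to do, shrinks the final epidemic size; the rates are placeholders rather than the paper's calibrated values, and the hospitalization, quarantine and vaccination compartments are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma):
    S, E, I, R = y
    N = S + E + I + R
    return [-beta * S * I / N,                 # susceptibles become exposed
            beta * S * I / N - sigma * E,      # exposed become infectious
            sigma * E - gamma * I,             # infectious recover / are removed
            gamma * I]

y0 = [0.999, 0.0, 0.001, 0.0]
sigma, gamma = 1 / 10.0, 1 / 8.0               # placeholder progression/removal rates

for beta in (0.25, 0.10):                      # a lower beta mimics effective isolation
    sol = solve_ivp(seir, (0, 600), y0, args=(beta, sigma, gamma), max_step=1.0)
    print(f"beta = {beta:.2f}, R0 = {beta / gamma:.2f}, "
          f"final epidemic size = {sol.y[3, -1]:.3f}")
```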
Numerical analysis of hypersonic turbulent film cooling flows
NASA Technical Reports Server (NTRS)
Chen, Y. S.; Chen, C. P.; Wei, H.
1992-01-01
As a building block, numerical capabilities for predicting heat flux and turbulent flowfields of hypersonic vehicles require extensive model validations. Computational procedures for calculating turbulent flows and heat fluxes for supersonic film cooling with parallel slot injections are described in this study. Two injectant mass flow rates with matched and unmatched pressure conditions using the database of Holden et al. (1990) are considered. To avoid uncertainties associated with the boundary conditions in testing turbulence models, detailed three-dimensional flowfields of the injection nozzle were calculated. Two computational fluid dynamics codes, GASP and FDNS, with the algebraic Baldwin-Lomax and k-epsilon models with compressibility corrections were used. It was found that the B-L model which resolves near-wall viscous sublayer is very sensitive to the inlet boundary conditions at the nozzle exit face. The k-epsilon models with improved wall functions are less sensitive to the inlet boundary conditions. The testings show that compressibility corrections are necessary for the k-epsilon model to realistically predict the heat fluxes of the hypersonic film cooling problems.
NASA Astrophysics Data System (ADS)
Hong, Sinpyo; Lee, Inwon; Park, Seong Hyeon; Lee, Cheolmin; Chun, Ho-Hwan; Lim, Hee Chang
2015-09-01
An experimental study of the effect of mooring systems on the dynamics of a SPAR buoy-type floating offshore wind turbine is presented. The effects of the Center of Gravity (COG), mooring line spring constant, and fair-lead location on the turbine's motion in response to regular waves are investigated. Experimental results show that for a typical mooring system of a SPAR buoy-type Floating Offshore Wind Turbine (FOWT), the effect of mooring systems on the dynamics of the turbine can be considered negligible. However, the pitch decreases notably as the COG increases. The COG and spring constant of the mooring line have a negligible effect on the fairlead displacement. Numerical simulation and sensitivity analysis show that the wind turbine motion and its sensitivity to changes in the mooring system and COG are very large near resonant frequencies. The test results can be used to validate numerical simulation tools for FOWTs.
Optimal design of solidification processes
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Tortorelli, Daniel A.
1991-01-01
An optimal design algorithm is presented for the analysis of general solidification processes, and is demonstrated for the growth of GaAs crystals in a Bridgman furnace. The system is optimal in the sense that the prespecified temperature distribution in the solidifying materials is obtained to maximize product quality. The optimization uses traditional numerical programming techniques which require the evaluation of cost and constraint functions and their sensitivities. The finite element method is incorporated to analyze the crystal solidification problem, evaluate the cost and constraint functions, and compute the sensitivities. These techniques are demonstrated in the crystal growth application by determining an optimal furnace wall temperature distribution to obtain the desired temperature profile in the crystal, and hence to maximize the crystal's quality. Several numerical optimization algorithms are studied to determine the proper convergence criteria, effective 1-D search strategies, appropriate forms of the cost and constraint functions, etc. In particular, we incorporate the conjugate gradient and quasi-Newton methods for unconstrained problems. The efficiency and effectiveness of each algorithm is presented in the example problem.
Uncertainty in Damage Detection, Dynamic Propagation and Just-in-Time Networks
2015-08-03
estimated parameter uncertainty in dynamic data sets; high-order compact finite difference schemes for Helmholtz equations with discontinuous wave numbers across interfaces; delay differential equations with a Gamma-distributed delay. We found that with the same population size the histogram plots for the solution to the ... We carried out numerical sensitivity analysis with respect to ...
Cylindrical optical resonators: fundamental properties and bio-sensing characteristics
NASA Astrophysics Data System (ADS)
Khozeymeh, Foroogh; Razaghi, Mohammad
2018-04-01
In this paper, detailed theoretical analysis of cylindrical resonators is demonstrated. As illustrated, these kinds of resonators can be used as optical bio-sensing devices. The proposed structure is analyzed using an analytical method based on Lam's approximation. This method is systematic and has simplified the tedious process of whispering-gallery mode (WGM) wavelength analysis in optical cylindrical biosensors. By this method, analysis of higher radial orders of high angular momentum WGMs has been possible. Using closed-form analytical equations, resonance wavelengths of higher radial and angular order WGMs of TE and TM polarization waves are calculated. It is shown that high angular momentum WGMs are more appropriate for bio-sensing applications. Some of the calculations are done using a numerical non-linear Newton method. A match of 99.84% between the analytical and the numerical methods has been achieved. In order to verify the validity of the calculations, Meep simulations based on the finite difference time domain (FDTD) method are performed. In this case, a match of 96.70% between the analytical and FDTD results has been obtained. The analytical predictions are in good agreement with other experimental work (99.99% match). These results validate the proposed analytical modelling for the fast design of optical cylindrical biosensors. It is shown that by extending the proposed two-layer resonator structure analyzing scheme, it is possible to study a three-layer cylindrical resonator structure as well. Moreover, by this method, fast sensitivity optimization in cylindrical resonator-based biosensors has been possible. Sensitivity of the WGM resonances is analyzed as a function of the structural parameters of the cylindrical resonators. Based on the results, fourth radial order WGMs, with a resonator radius of 50 μm, display the highest bulk refractive index sensitivity, 41.50 nm/RIU.
NASA Technical Reports Server (NTRS)
Thareja, R.; Haftka, R. T.
1986-01-01
There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.
Generation of helical gears with new surfaces topology by application of CNC machines
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Chen, N. X.; Hsiao, C. L.; Handschuh, Robert F.
1993-01-01
Analysis of helical involute gears by tooth contact analysis shows that such gears are very sensitive to angular misalignment that leads to edge contact and the potential for high vibration. A new topology of tooth surfaces of helical gears that enables a favorable bearing contact and a reduced level of vibration is described. Methods for grinding of the helical gears with the new topology are proposed. A TCA (tooth contact analysis) program for simulation of meshing and contact of helical gears with the new topology has been developed. Numerical examples that illustrate the proposed ideas are discussed.
Analysis of an inventory model for both linearly decreasing demand and holding cost
NASA Astrophysics Data System (ADS)
Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.
2016-03-01
This study proposes the analysis of an inventory model with linearly decreasing demand and holding cost for non-instantaneous deteriorating items. The inventory model focuses on commodities having linearly decreasing demand without shortages. The holding cost does not remain uniform over time owing to variation in the time value of money; here we consider that the holding cost decreases with respect to time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is illustrated through a numerical example. It also includes a sensitivity analysis.
Nanotip analysis for dielectrophoretic concentration of nanosized viral particles.
Yeo, Woon-Hong; Lee, Hyun-Boo; Kim, Jong-Hoon; Lee, Kyong-Hoon; Chung, Jae-Hyun
2013-05-10
Rapid and sensitive detection of low-abundance viral particles is strongly demanded in health care, environmental control, military defense, and homeland security. Current detection methods, however, lack either assay speed or sensitivity, mainly due to the nanosized viral particles. In this paper, we compare a dendritic, multi-terminal nanotip ('dendritic nanotip') with a single terminal nanotip ('single nanotip') for dielectrophoretic (DEP) concentration of viral particles. The numerical computation studies the concentration efficiency of viral particles ranging from 25 to 100 nm in radius for both nanotips. With DEP and Brownian motion considered, when the particle radius decreases by two times, the concentration time for both nanotips increases by 4-5 times. In the computational study, a dendritic nanotip shows about 1.5 times faster concentration than a single nanotip for the viral particles because the dendritic structure increases the DEP-effective area to overcome the Brownian motion. For the qualitative support of the numerical results, the comparison experiment of a dendritic nanotip and a single nanotip is conducted. Under 1 min of concentration time, a dendritic nanotip shows a higher sensitivity than a single nanotip. When the concentration time is 5 min, the sensitivity of a dendritic nanotip for T7 phage is 10(4) particles ml(-1). The dendritic nanotip-based concentrator has the potential for rapid identification of viral particles.
Yoshida, Nozomu; Levine, Jonathan S.; Stauffer, Philip H.
2016-03-22
Numerical reservoir models of CO2 injection in saline formations rely on parameterization of laboratory-measured pore-scale processes. Here, we have performed a parameter sensitivity study and Monte Carlo simulations to determine the normalized change in total CO2 injected using the finite element heat and mass-transfer code (FEHM) numerical reservoir simulator. Experimentally measured relative permeability parameter values were used to generate distribution functions for parameter sampling. The parameter sensitivity study analyzed five different levels for each of the relative permeability model parameters. All but one of the parameters changed the CO2 injectivity by <10%, less than the geostatistical uncertainty that applies to all large subsurface systems due to natural geophysical variability and inherently small sample sizes. The exception was the end-point CO2 relative permeability, k^0_r,CO2, the maximum attainable effective CO2 permeability during CO2 invasion, which changed CO2 injectivity by as much as 80%. Similarly, Monte Carlo simulation using 1000 realizations of relative permeability parameters showed no relationship between CO2 injectivity and any of the parameters but k^0_r,CO2, which had a very strong (R^2 = 0.9685) power-law relationship with total CO2 injected. Model sensitivity to k^0_r,CO2 points to the importance of accurate core flood and wettability measurements.
[Numerical simulation and operation optimization of biological filter].
Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing
2014-12-01
BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in Yuandang Wastewater Treatment Plant. Based on the BioWin model of DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate practical DNBF + BAF processes, and the most sensitive parameters were the parameters related to biofilm, OHOs and aeration. After the validation and calibration of model, it was used for process optimization with simulating operation results under different conditions. The results showed that, the best operation condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operation condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg x L(-1) after methanol addition, influent C/N = 5.10.
Optimal frequency-response sensitivity of compressible flow over roughness elements
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter J.
2017-04-01
Compressible flow over a flat plate with two localised and well-separated roughness elements is analysed by global frequency-response analysis. This analysis reveals a sustained feedback loop consisting of a convectively unstable shear-layer instability, triggered at the upstream roughness, and an upstream-propagating acoustic wave, originating at the downstream roughness and regenerating the shear-layer instability at the upstream protrusion. A typical multi-peaked frequency response is recovered from the numerical simulations. In addition, the optimal forcing and response clearly extract the components of this feedback loop and isolate flow regions of pronounced sensitivity and amplification. An efficient parametric-sensitivity framework is introduced and applied to the reference case which shows that first-order increases in Reynolds number and roughness height act destabilising on the flow, while changes in Mach number or roughness separation cause corresponding shifts in the peak frequencies. This information is gained with negligible effort beyond the reference case and can easily be applied to more complex flows.
NASA Astrophysics Data System (ADS)
Bashyam, Ashvin; Li, Matthew; Cima, Michael J.
2018-07-01
Single-sided NMR has the potential for broad utility and has found applications in healthcare, materials analysis, food quality assurance, and the oil and gas industry. These sensors require a remote, strong, uniform magnetic field to perform high sensitivity measurements. We demonstrate a new permanent magnet geometry, the Unilateral Linear Halbach, that combines design principles from "sweet-spot" and linear Halbach magnets to achieve this goal through more efficient use of magnetic flux. We perform sensitivity analysis using numerical simulations to produce a framework for Unilateral Linear Halbach design and assess tradeoffs between design parameters. Additionally, the use of hundreds of small, discrete magnets within the assembly allows for a tunable design, improved robustness to variability in magnetization strength, and increased safety during construction. Experimental validation using a prototype magnet shows close agreement with the simulated magnetic field. The Unilateral Linear Halbach magnet increases the sensitivity, portability, and versatility of single-sided NMR.
Field-sensitivity To Rheological Parameters
NASA Astrophysics Data System (ADS)
Freund, Jonathan; Ewoldt, Randy
2017-11-01
We ask this question: where in a flow is a quantity of interest Q quantitatively sensitive to the model parameters θ describing the rheology of the fluid? This field sensitivity is computed via the numerical solution of the adjoint flow equations, as developed to expose the target sensitivity δQ/δθ(x) via the constraint of satisfying the flow equations. Our primary example is a sphere settling in Carbopol, for which we have experimental data. For this Carreau-model configuration, we simultaneously calculate how much a local change in the fluid intrinsic time-scale λ, limit viscosities η0 and η∞, and exponent n would affect the drag D. Such field sensitivities can show where different fluid physics in the model (time scales, elastic versus viscous components, etc.) are important for the target observable and generally guide model refinement based on predictive goals. In this case, the computational cost of solving the local sensitivity problem is negligible relative to the flow. The Carreau-fluid/sphere example is illustrative; the utility of field sensitivity is in the design and analysis of less intuitive flows, for which we provide some additional examples.
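The Carreau law referred to here has the standard form η(γ̇) = η∞ + (η0 − η∞)[1 + (λγ̇)²]^((n−1)/2). The sketch below evaluates it and takes crude finite-difference sensitivities of the viscosity at one shear rate with respect to each parameter; the parameter values are placeholders, and the adjoint machinery of the abstract, which yields spatial sensitivity fields of the drag, is not reproduced.

```python
import numpy as np

def carreau(gdot, eta0, eta_inf, lam, n):
    """Standard Carreau viscosity law eta(gamma_dot)."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gdot) ** 2) ** ((n - 1.0) / 2.0)

# Placeholder parameters for a generic shear-thinning fluid (not fitted to Carbopol data).
params = dict(eta0=50.0, eta_inf=0.1, lam=2.0, n=0.4)
gdot = 1.0                                     # representative shear rate [1/s]

# Finite-difference sensitivities of eta(gdot) to each rheological parameter;
# an adjoint solve would yield the analogous field sensitivities of the drag.
base = carreau(gdot, **params)
for name in params:
    h = 1e-6 * max(abs(params[name]), 1.0)
    bumped = dict(params, **{name: params[name] + h})
    print(f"d eta / d {name} = {(carreau(gdot, **bumped) - base) / h:.4f}")
```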
Numerical Computation of Sensitivities and the Adjoint Approach
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael
1997-01-01
We discuss the numerical computation of sensitivities via the adjoint approach in optimization problems governed by differential equations. We focus on the adjoint problem in its weak form. We show how one can avoid some of the problems with the adjoint approach, such as deriving suitable boundary conditions for the adjoint equation. We discuss the convergence of numerical approximations of the costate computed via the weak form of the adjoint problem and show the significance for the discrete adjoint problem.
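For a discrete state equation A(p)u = b and a scalar objective Q = c^T u, the adjoint approach obtains dQ/dp from one extra linear solve, A^T λ = c, giving dQ/dp = −λ^T (∂A/∂p) u. The sketch below verifies this identity against a finite difference on a small synthetic system; it illustrates the discrete adjoint recipe rather than the weak-form, differential-equation setting analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
A0 = rng.normal(size=(n, n)) + n * np.eye(n)   # baseline operator (well conditioned)
dA = rng.normal(size=(n, n))                   # dependence of A on a scalar parameter p
b = rng.normal(size=n)
c = rng.normal(size=n)

def Q(p):
    u = np.linalg.solve(A0 + p * dA, b)        # state equation A(p) u = b
    return c @ u                               # objective Q = c^T u

p = 0.3
u = np.linalg.solve(A0 + p * dA, b)
lam = np.linalg.solve((A0 + p * dA).T, c)      # adjoint (costate) equation A^T lambda = c
dQdp_adjoint = -lam @ (dA @ u)                 # dQ/dp = -lambda^T (dA/dp) u

h = 1e-6
dQdp_fd = (Q(p + h) - Q(p - h)) / (2 * h)      # central finite-difference check
print(f"adjoint: {dQdp_adjoint:.8f}   finite difference: {dQdp_fd:.8f}")
```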
NASA Astrophysics Data System (ADS)
Heinze, C.; Schwenk, C.; Rethmeier, M.; Caron, J.
2011-06-01
The usage of continuous cooling transformation (CCT) diagrams in numerical welding simulations is state of the art. Nevertheless, specifications provide limits in chemical composition of materials which result in different CCT behavior and CCT diagrams, respectively. Therefore, it is necessary to analyze the influence of variations in CCT diagrams on the developing residual stresses. In the present paper, four CCT diagrams and their effect on numerical calculation of residual stresses are investigated for the widely used structural steel S355J2 + N welded by the gas metal arc welding (GMAW) process. Rather than performing an arbitrary adjustment of CCT behavior, four justifiable data sets were used as input to the numerical calculation: data available in the Sysweld database, experimental data acquired through Gleeble dilatometry tests, and TTT/CCT predictions calculated from the JMatPro and Edison Welding Institute (EWI) Virtual Joining Portal software. The performed numerical analyses resulted in noticeable deviations in residual stresses considering the different CCT diagrams. Furthermore, possibilities to improve the prediction of distortions and residual stress based on CCT behavior are discussed.
Local amplification of storm surge by Super Typhoon Haiyan in Leyte Gulf
Mori, Nobuhito; Kato, Masaya; Kim, Sooyoul; Mase, Hajime; Shibutani, Yoko; Takemi, Tetsuya; Tsuboki, Kazuhisa; Yasuda, Tomohiro
2014-01-01
Typhoon Haiyan, which struck the Philippines in November 2013, was an extremely intense tropical cyclone that had a catastrophic impact. The minimum central pressure of Typhoon Haiyan was 895 hPa, making it the strongest typhoon to make landfall on a major island in the western North Pacific Ocean. The characteristics of Typhoon Haiyan and its related storm surge are estimated by numerical experiments using numerical weather prediction models and a storm surge model. Based on the analysis of best hindcast results, the storm surge level was 5–6 m and local amplification of water surface elevation due to seiche was found to be significant inside Leyte Gulf. The numerical experiments show the coherent structure of the storm surge profile due to the specific bathymetry of Leyte Gulf and the Philippines Trench as a major contributor to the disaster in Tacloban. The numerical results also indicated the sensitivity of the storm surge forecast. PMID:25821268
An extended continuum model considering optimal velocity change with memory and numerical tests
NASA Astrophysics Data System (ADS)
Qingtao, Zhai; Hongxia, Ge; Rongjun, Cheng
2018-01-01
In this paper, an extended continuum model of traffic flow is proposed that takes into account optimal velocity changes with memory. The new model's stability condition and KdV-Burgers equation, accounting for optimal velocity changes with memory, are deduced through linear stability theory and nonlinear analysis, respectively. Numerical simulation is carried out to study the extended continuum model, exploring how optimal velocity changes with memory affect velocity, density and energy consumption. Numerical results show that when the effects of optimal velocity changes with memory are considered, traffic jams can be suppressed efficiently. Both the memory step and the sensitivity parameter of optimal velocity changes with memory enhance the stability of traffic flow. Furthermore, the numerical results demonstrate that the effect of optimal velocity changes with memory can avoid the disadvantage of historical information, which increases the stability of traffic flow on the road, improves traffic flow stability and reduces vehicles' energy consumption.
Konik, R. M.; Palmai, T.; Takacs, G.; ...
2015-08-24
We study the SU(2)_k Wess-Zumino-Novikov-Witten (WZNW) theory perturbed by the trace of the primary field in the adjoint representation, a theory governing the low-energy behaviour of a class of strongly correlated electronic systems. While the model is non-integrable, its dynamics can be investigated using the numerical technique of the truncated conformal spectrum approach combined with numerical and analytical renormalization groups (TCSA+RG). The numerical results so obtained provide support for a semiclassical analysis valid at k ≫ 1. Namely, we find that the low energy behavior is sensitive to the sign of the coupling constant, λ. Moreover, for λ > 0 this behavior depends on whether k is even or odd. With k even, we find definitive evidence that the model at low energies is equivalent to the massive O(3) sigma model. For k odd, the numerical evidence is more equivocal, but we find indications that the low energy effective theory is critical.
Anselmi, Nicola; Salucci, Marco; Rocca, Paolo; Massa, Andrea
2016-01-01
The sensitivity to both calibration errors and mutual coupling effects of the power pattern radiated by a linear array is addressed. Starting from the knowledge of the nominal excitations of the array elements and the maximum uncertainty on their amplitudes, the bounds of the pattern deviations from the ideal one are analytically derived by exploiting the Circular Interval Analysis (CIA). A set of representative numerical results is reported and discussed to assess the effectiveness and the reliability of the proposed approach also in comparison with state-of-the-art methods and full-wave simulations. PMID:27258274
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications namely, gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulation such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
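The Kreisselmeier-Steinhauser approach mentioned above aggregates several objectives or constraints into one smooth, conservative envelope; its standard form is KS(g) = g_max + (1/rho) * ln(sum_i exp(rho*(g_i - g_max))). The sketch below evaluates it for placeholder objective values; the values and the draw-down parameter rho are arbitrary, not taken from the wing-body study.

```python
import numpy as np

def ks(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of several objectives/constraints."""
    vmax = np.max(values)
    return vmax + np.log(np.sum(np.exp(rho * (values - vmax)))) / rho

# Placeholder normalized objectives (e.g., drag, sonic-boom loudness, structural mass).
g = np.array([0.82, 0.91, 0.77])
for rho in (5.0, 50.0, 500.0):
    print(f"rho = {rho:6.1f}: KS = {ks(g, rho):.4f}  (max = {g.max():.2f})")
```

As rho grows the KS value approaches the true maximum from above, which is the usual way the aggregation tightness is tuned in practice.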
Sombardier, Audrey; Dufour, Marie-Cécile; Blancard, Dominique; Corio-Costet, Marie-France
2010-01-01
Management of strawberry powdery mildew, Podosphaera aphanis (Wallr.), requires numerous fungicide treatments. Limiting epidemics is heavily dependent on sterol demethylation inhibitors (DMIs) such as myclobutanil or penconazole. Recently, a noticeable reduction in the efficacy of these triazole fungicides was reported by strawberry growers in France. The goal of this study was to investigate the state of DMI sensitivity in French P. aphanis and to provide tools for improved pest management. Using leaf disc sporulation assays, the sensitivity of 23 isolates of P. aphanis to myclobutanil and penconazole was monitored. Myclobutanil EC(50) values ranged from less than 0.1 to 14.67 mg L(-1), and penconazole EC(50) values from 0.04 to 4.2 mg L(-1). A cross-analysis and a Venn diagram showed reduced sensitivity and a positive correlation between the isolates less sensitive to myclobutanil and those less sensitive to penconazole; 73.9% of isolates were less sensitive to a DMI and 47.8% exhibited reduced sensitivity to both fungicides. The results show that sensitivity to myclobutanil and, to a lesser extent, to penconazole has declined in strawberry powdery mildew in France. Therefore, urgent action is required in order to document its appearance and optimise methods of control.
NASA Astrophysics Data System (ADS)
Mettot, Clément; Sipp, Denis; Bézard, Hervé
2014-04-01
This article presents a quasi-laminar stability approach to identify in high-Reynolds number flows the dominant low frequencies and to design passive control means to shift these frequencies. The approach is based on a global linear stability analysis of mean-flows, which correspond to the time-average of the unsteady flows. Contrary to the previous work by Meliga et al. ["Sensitivity of 2-D turbulent flow past a D-shaped cylinder using global stability," Phys. Fluids 24, 061701 (2012)], we use the linearized Navier-Stokes equations based solely on the molecular viscosity (leaving aside any turbulence model and any eddy viscosity) to extract the least stable direct and adjoint global modes of the flow. Then, we compute the frequency sensitivity maps of these modes, so as to predict beforehand where a small control cylinder optimally shifts the frequency of the flow. In the case of the D-shaped cylinder studied by Parezanović and Cadot [J. Fluid Mech. 693, 115 (2012)], we show that the present approach well captures the frequency of the flow and recovers accurately the frequency control maps obtained experimentally. The results are close to those already obtained by Meliga et al., who used a more complex approach in which turbulence models played a central role. The present approach is simpler and may be applied to a broader range of flows since it is tractable as soon as mean-flows — which can be obtained either numerically from simulations (Direct Numerical Simulation (DNS), Large Eddy Simulation (LES), unsteady Reynolds-Averaged-Navier-Stokes (RANS), steady RANS) or from experimental measurements (Particle Image Velocimetry - PIV) — are available. We also discuss how the influence of the control cylinder on the mean-flow may be more accurately predicted by determining an eddy-viscosity from numerical simulations or experimental measurements. From a technical point of view, we finally show how an existing compressible numerical simulation code may be used in a black-box manner to extract the global modes and sensitivity maps.
Szucs, Dénes; Soltész, Fruzsina
2010-05-01
We dissociated ERP markers of semantic (numerical distance) vs. syntactic (place value) incongruence in the domain of arithmetic. Participants verified additions with four-digit numbers. Semantic incongruencies elicited the N400 ERP effect. A centro-parietal (putative P600) effect to place value violations was not related to arithmetic syntax. Rather, this effect was an enlarged P3b reflecting different surprise values of place value vs. non-place value violations. This potential confound should be considered in numerical cognition experiments. The latencies of the N400 and P3a effects were differentially affected by place value analysis. The amplitude of the P3a and that of a fronto-central positive effect (FP600) was sensitive to place value analysis and digit content. Results suggest that ERPs can index the syntactical analysis of multi-digit numbers. Both ERP and behavioral data confirmed that multi-digit numbers were decomposed into their constituent digits, rather than evaluated holistically.
NASA Technical Reports Server (NTRS)
Hou, Jean W.; Sheen, Jeen S.
1987-01-01
The aim of this study is to find a reliable numerical algorithm to calculate thermal design sensitivities of a transient problem with discontinuous derivatives. The thermal system of interest is a transient heat conduction problem related to the curing process of a composite laminate. A logical function which can smoothly approximate the discontinuity is introduced to modify the system equation. Two commonly used methods, the adjoint variable method and the direct differentiation method, are then applied to find the design derivatives of the modified system. The comparisons of numerical results obtained by these two methods demonstrate that the direct differentiation method is a better choice to be used in calculating thermal design sensitivity.
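The specific "logical function" of the study is not given in the abstract, but a logistic (sigmoid) blend is a common way to smooth a jump so that design derivatives exist. The sketch below smooths a material property that switches value at a cure threshold and compares direct differentiation of the smoothed function with a finite difference; the property values, threshold, and smoothing width are placeholder assumptions.

```python
import numpy as np

def k_smooth(T, Tc, k1=1.0, k2=5.0, eps=0.5):
    """Material property with a jump at T = Tc, smoothed by a logistic blend."""
    s = 1.0 / (1.0 + np.exp(-(T - Tc) / eps))
    return k1 + (k2 - k1) * s

def dk_dTc(T, Tc, k1=1.0, k2=5.0, eps=0.5):
    """Direct differentiation of the smoothed property with respect to the design value Tc."""
    s = 1.0 / (1.0 + np.exp(-(T - Tc) / eps))
    return -(k2 - k1) * s * (1.0 - s) / eps

T, Tc = 120.0, 118.0
h = 1e-6
fd = (k_smooth(T, Tc + h) - k_smooth(T, Tc - h)) / (2 * h)   # finite-difference check
print(f"direct differentiation: {dk_dTc(T, Tc):.6f}   finite difference: {fd:.6f}")
```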
NASA Astrophysics Data System (ADS)
Mottyll, S.; Skoda, R.
2015-12-01
A compressible inviscid flow solver with a barotropic cavitation model is applied to two different ultrasonic horn set-ups and compared to hydrophone, shadowgraphy, and erosion test data. The statistical analysis of single collapse events in wall-adjacent flow regions allows the determination of the flow aggressiveness via load collectives (cumulative event rate vs collapse pressure), which show an exponential decrease in agreement with studies on hydrodynamic cavitation [1]. A post-processing projection of event rate and collapse pressure on a reference grid reduces the grid dependency significantly. In order to evaluate the erosion-sensitive areas, a statistical analysis of transient wall loads is utilised. Predicted erosion-sensitive areas as well as temporal pressure and vapour volume evolution are in good agreement with the experimental data.
Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines
NASA Astrophysics Data System (ADS)
Massa, Luca
A computational tool is developed for the time accurate sensitivity analysis of the stage performance of hot gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high temperature environment typical of the hot section of jet engines. A real gas model and film cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, by rendering it able to accurately evaluate the derivatives of the time varying output functions. The complex Taylor's series expansion (CTSE) technique is reviewed, and two applications of it are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time dependent sensitivity derivatives are computed in run times comparable to the ones required by the FD approach.
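The CTSE rests on the complex-step derivative idea, which can be shown in a few lines; the function g below is only a stand-in for a flow-solver output, and the comparison with a central finite difference illustrates why the complex step avoids subtractive cancellation.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step approximation of df/dx: f'(x) ~ Im(f(x + i*h)) / h.
    No difference of nearly equal numbers is formed, so h can be tiny."""
    return np.imag(f(x + 1j * h)) / h

def central_difference(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Stand-in for a performance functional returned by a flow solver
g = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

x0 = 1.5
print("complex step :", complex_step_derivative(g, x0))
print("central FD   :", central_difference(g, x0))
```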
Wei, Zhenglun Alan; Trusty, Phillip M; Tree, Mike; Haggerty, Christopher M; Tang, Elaine; Fogel, Mark; Yoganathan, Ajit P
2017-01-04
Cardiovascular simulations have great potential as a clinical tool for planning and evaluating patient-specific treatment strategies for those suffering from congenital heart diseases, specifically Fontan patients. However, several bottlenecks have delayed wider deployment of the simulations for clinical use; the main obstacle is simulation cost. Currently, time-averaged clinical flow measurements are utilized as numerical boundary conditions (BCs) in order to reduce the computational power and time needed to offer surgical planning within a clinical time frame. Nevertheless, pulsatile blood flow is observed in vivo, and its significant impact on numerical simulations has been demonstrated. Therefore, it is imperative to carry out a comprehensive study analyzing the sensitivity of using time-averaged BCs. In this study, sensitivity is evaluated based on the discrepancies between hemodynamic metrics calculated using time-averaged and pulsatile BCs; smaller discrepancies indicate less sensitivity. The current study incorporates a comparison between 3D patient-specific CFD simulations using both the time-averaged and pulsatile BCs for 101 Fontan patients. The sensitivity analysis involves two clinically important hemodynamic metrics: hepatic flow distribution (HFD) and indexed power loss (iPL). Paired demographic group comparisons revealed that HFD sensitivity is significantly different between single and bilateral superior vena cava cohorts but no other demographic discrepancies were observed for HFD or iPL. Multivariate regression analyses show that the best predictors for sensitivity involve flow pulsatilities, time-averaged flow rates, and geometric characteristics of the Fontan connection. These predictors provide patient-specific guidelines to determine the effectiveness of analyzing patient-specific surgical options with time-averaged BCs within a clinical time frame. Copyright © 2016 Elsevier Ltd. All rights reserved.
Pumping tests in non-uniform aquifers - the linear strip case
Butler, J.J.; Liu, W.Z.
1991-01-01
Many pumping tests are performed in geologic settings that can be conceptualized as a linear infinite strip of one material embedded in a matrix of differing flow properties. A semi-analytical solution is presented to aid the analysis of drawdown data obtained from pumping tests performed in settings that can be represented by such a conceptual model. Integral transform techniques are employed to obtain a solution in transform space that can be numerically inverted to real space. Examination of the numerically inverted solution reveals several interesting features of flow in this configuration. If the transmissivity of the strip is much higher than that of the matrix, linear and bilinear flow are the primary flow regimes during a pumping test. If the contrast between matrix and strip properties is not as extreme, then radial flow should be the primary flow mechanism. Sensitivity analysis is employed to develop insight into the controls on drawdown in this conceptual model and to demonstrate the importance of temporal and spatial placement of observations. Changes in drawdown are sensitive to the transmissivity of the strip for a limited time duration. After that time, only the total drawdown remains a function of strip transmissivity. In the case of storativity, both the total drawdown and changes in drawdown are sensitive to the storativity of the strip for a time of quite limited duration. After that time, essentially no information can be gained about the storage properties of the strip from drawdown data. An example analysis is performed using data previously presented in the literature to demonstrate the viability of the semi-analytical solution and to illustrate a general procedure for analysis of drawdown data in complex geologic settings. This example reinforces the importance of observation well placement and the time of data collection in constraining parameter correlation, a major source of the uncertainty that arises in the parameter estimation procedure. © 1991.
Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities
NASA Astrophysics Data System (ADS)
Esposito, Gaetano
Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty due to the limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which are of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37-38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and identifying sources of uncertainty affecting relevant reaction pathways are usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling-High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected mostly by first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the reaction rates involving the vinyl radical, C2H3, can drastically reduce the uncertainty in predicting the targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate within its uncertainty bounds. Therefore this study demonstrates that the uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
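A minimal sketch of the PCA-based ranking step is given below, with a random placeholder matrix standing in for the pooled local sensitivities; only the number of reactions is taken from the abstract, and the number of retained components is an assumption.

```python
import numpy as np

# Illustrative skeleton of a PCA-based reaction ranking (not the dissertation's
# code): rows of S are targets (ignition delay, flame speed, extinction strain
# rate, ...) evaluated over many conditions, columns are reactions, entries are
# local normalized sensitivities.  Reactions that project strongly onto the
# leading principal components are candidates for the skeletal model.
rng = np.random.default_rng(0)
n_cases, n_reactions = 200, 784                   # 784 reactions, as in the abstract
S = rng.normal(size=(n_cases, n_reactions))       # placeholder sensitivity matrix

U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
n_keep = 5                                        # assumed number of retained components
importance = np.sqrt((sigma[:n_keep, None] ** 2 * Vt[:n_keep, :] ** 2).sum(axis=0))
ranked = np.argsort(importance)[::-1]
print("10 highest-ranked reactions (indices):", ranked[:10])
```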
Lin, Chao; Shen, Xueju; Wang, Zhisong; Zhao, Cheng
2014-06-20
We demonstrate a novel optical asymmetric cryptosystem based on the principle of elliptical polarized light linear truncation and a numerical reconstruction technique. The device of an array of linear polarizers is introduced to achieve linear truncation on the spatially resolved elliptical polarization distribution during image encryption. This encoding process can be characterized as confusion-based optical cryptography that involves no Fourier lens and no diffusion operation. Based on the Jones matrix formalism, the intensity transmittance for this truncation is deduced to perform elliptical polarized light reconstruction based on two intensity measurements. Use of a quick response code makes the proposed cryptosystem practical, with versatile key sensitivity and fault tolerance. Both simulation and preliminary experimental results that support the theoretical analysis are presented. An analysis of the resistance of the proposed method to a known public-key attack is also provided.
Vortex breakdown incipience: Theoretical considerations
NASA Technical Reports Server (NTRS)
Berger, Stanley A.; Erlebacher, Gordon
1992-01-01
The sensitivity of the onset and location of vortex breakdowns in concentrated vortex cores, and the pronounced tendency of the breakdowns to migrate upstream, have been characteristic observations of experimental investigations; they have also been features of numerical simulations and have led to questions about the validity of these simulations. This behavior seems to be inconsistent with the strong time-like axial evolution of the flow, as expressed explicitly, for example, by the quasi-cylindrical approximate equations for this flow. An order-of-magnitude analysis of the equations of motion near breakdown leads to a modified set of governing equations, analysis of which demonstrates that the interplay between radial inertial, pressure, and viscous forces gives an elliptic character to these concentrated swirling flows. Analytical, asymptotic, and numerical solutions of a simplified non-linear equation are presented; these qualitatively exhibit the features of breakdown onset and location noted above.
NASA Astrophysics Data System (ADS)
Hu, Dianyin; Gao, Ye; Meng, Fanchao; Song, Jun; Wang, Rongqiao
2018-04-01
Combining experiments and finite element analysis (FEA), a systematic study was performed to analyze the microstructural evolution and stress states of shot-peened GH4169 superalloy over a variety of peening intensities and coverages. A dislocation density evolution model was integrated into the representative volume FEA model to quantitatively predict microstructural evolution in the surface layers and compared with experimental results. It was found that surface roughness and through-depth residual stress profile are more sensitive to shot-peening intensity compared to coverage due to the high kinetic energy involved. Moreover, a surface nanocrystallization layer was discovered in the top surface region of GH4169 for all shot-peening conditions. However, the grain refinement was more intensified under high shot-peening coverage, under which enough time was permitted for grain refinement. The grain size gradient predicted by the numerical framework showed good agreement with experimental observations.
A study on the sensitivity of self-powered neutron detectors (SPNDs)
NASA Astrophysics Data System (ADS)
Lee, Wanno; Cho, Gyuseong; Kim, Kwanghyun; Kim, Hee Joon; choi, Yuseon; Park, Moon Chu; Kim, Soongpyung
2001-08-01
Self-powered neutron detectors (SPNDs) are widely used in reactors to monitor neutron flux. While they have several advantages, such as small size and the relatively simple electronics required for their use, they also have intrinsic problems (a low output current, a slow response time, and a rapid change of sensitivity) that make long-term use difficult. Monte Carlo simulation was used to calculate the escape probability as a function of the birth position of the emitted beta particle for the geometry of rhodium-based SPNDs. A simple numerical method calculated the initial generation rate of beta particles and the change of the generation rate due to rhodium burnup. Using the results of the simulation and the simple numerical method, the burnup profile of the rhodium number density and the neutron sensitivity were calculated as functions of burnup time in reactors. The method was verified by comparison with other published results and with initial-sensitivity data from YGN 3,4 (Young Gwang Nuclear plants 3 and 4). In addition, to improve some properties of the rhodium-based SPNDs currently in use, a modified geometry is proposed. The proposed tube-type geometry is able to increase the initial sensitivity by increasing the escape probability. The escape probability was calculated for varying insulator thicknesses, and the solid-type and tube-type designs were compared at each thickness. The method used here can be applied to the analysis and design of other types of SPNDs.
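The burnup-driven loss of sensitivity can be illustrated with a first-order depletion calculation; the cross section, flux and number density below are assumed round numbers, not values from the paper.

```python
import numpy as np

# Illustrative burnup calculation: the rhodium number density, and hence the
# detector sensitivity, decays as dN/dt = -sigma_a * phi * N under constant flux.
sigma_a = 145e-24          # assumed Rh-103 absorption cross section [cm^2]
phi = 1.0e14               # assumed thermal neutron flux [n/cm^2/s]
N0 = 7.0e22                # assumed initial Rh number density [atoms/cm^3]
year = 3.156e7             # seconds per year

t = np.linspace(0.0, 5.0, 6) * year
N = N0 * np.exp(-sigma_a * phi * t)
relative_sensitivity = N / N0          # sensitivity tracks N(t) to first order
for ti, s in zip(t / year, relative_sensitivity):
    print(f"year {ti:3.1f}: relative sensitivity {s:5.3f}")
```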
Fiber-reinforced materials: finite elements for the treatment of the inextensibility constraint
NASA Astrophysics Data System (ADS)
Auricchio, Ferdinando; Scalet, Giulia; Wriggers, Peter
2017-12-01
The present paper proposes a numerical framework for the analysis of problems involving fiber-reinforced anisotropic materials. Specifically, isotropic linear elastic solids, reinforced by a single family of inextensible fibers, are considered. The kinematic constraint equation of inextensibility in the fiber direction leads to the presence of an undetermined fiber stress in the constitutive equations. To avoid locking phenomena in the numerical solution due to the presence of the constraint, mixed finite elements based on the Lagrange multiplier, perturbed Lagrangian, and penalty methods are proposed. Several boundary-value problems under plane strain conditions are solved and the numerical results are compared to analytical solutions, whenever their derivation is possible. The simulations performed allow assessment of the performance of the proposed finite elements and a discussion of several features of the developed formulations concerning the effective approximation of the displacement and fiber stress fields, mesh convergence, and sensitivity to penalty parameters.
Hydrostatic Pressure Sensing with High Birefringence Photonic Crystal Fibers
Fávero, Fernando C.; Quintero, Sully M. M.; Martelli, Cicero; Braga, Arthur M.B.; Silva, Vinícius V.; Carvalho, Isabel C. S.; Llerena, Roberth W. A.; Valente, Luiz C. G.
2010-01-01
The effect of hydrostatic pressure on the waveguiding properties of high birefringence photonic crystal fibers (HiBi PCF) is evaluated both numerically and experimentally. A fiber design presenting form birefringence induced by two enlarged holes in the innermost ring defining the fiber core is investigated. Numerical results show that modal sensitivity to the applied pressure depends on the diameters of the holes, and can be tailored by independently varying the sizes of the large or small holes. Numerical and experimental results are compared, showing excellent agreement. A hydrostatic pressure sensor is proposed and demonstrated using an in-fiber modal interferometer in which the two orthogonally polarized modes of a HiBi PCF generate fringes over the optical spectrum of a broadband source. From the analysis of the experimental results, it is concluded that, in principle, an operating limit of 92 MPa in pressure could be achieved with 0.0003% of full scale resolution. PMID:22163435
Sophocleous, M.A.
1991-01-01
The hypothesis is explored that groundwater-level rises in the Great Bend Prairie aquifer of Kansas are caused not only by water percolating downward through the soil but also by pressure pulses from stream flooding that propagate in a translatory motion through numerous high hydraulic diffusivity buried channels crossing the Great Bend Prairie aquifer in an approximately west to east direction. To validate this hypothesis, two transects of wells in a north-south and east-west orientation crossing and alongside some paleochannels in the area were instrumented with water-level-recording devices; streamflow data from all area streams were obtained from available stream-gaging stations. A theoretical approach was also developed to numerically conceptualize the stream-aquifer processes. The field data and numerical simulations provided support for the hypothesis. Thus, observation wells located along the shoulders or in between the inferred paleochannels show little or no fluctuations and no correlations with streamflow, whereas wells located along paleochannels show high water-level fluctuations and good correlation with the streamflows of the stream connected to the observation site by means of the paleochannels. The stream-aquifer numerical simulation results demonstrate that the larger the hydraulic diffusivity of the aquifer, the larger the extent of pressure pulse propagation and the faster the propagation speed. The conceptual simulation results indicate that long-distance propagation of stream floodwaves (of the order of tens of kilometers) through the Great Bend aquifer is indeed feasible with plausible stream and aquifer parameters. The sensitivity analysis results indicate that the extent and speed of pulse propagation are more sensitive to variations of stream roughness (Manning's coefficient) and stream channel slope than to any aquifer parameter. © 1991.
NASA Astrophysics Data System (ADS)
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a high number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
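A compact sketch of the "basis set expansion plus meta-model" idea is given below on synthetic data; the linear least-squares surrogate and the squared standardized coefficients stand in for the projection pursuit regression and Sobol' indices used in the article, and every array here is a placeholder.

```python
import numpy as np

# Synthetic stand-in: each row of Y is one simulated displacement time series,
# each row of X the corresponding slip-surface parameters (all made up).
rng = np.random.default_rng(1)
n_runs, n_times, n_params = 40, 500, 3
X = rng.uniform(size=(n_runs, n_params))
t = np.linspace(0.0, 1.0, n_times)
Y = X[:, :1] * t + X[:, 1:2] * np.sin(6 * t) + 0.05 * rng.normal(size=(n_runs, n_times))

# 1) basis set expansion: PCA of the centred outputs via SVD
Yc = Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
scores = U[:, :2] * s[:2]              # dominant modes of temporal variation

# 2) cheap meta-model per mode: linear least squares on the parameters; the
#    squared standardized coefficients act as a rough proxy for variance-based
#    indices (a Gaussian-process or projection-pursuit surrogate plus a proper
#    Sobol' estimator would replace this step in practice).
A = np.column_stack([np.ones(n_runs), X])
for k in range(2):
    coef, *_ = np.linalg.lstsq(A, scores[:, k], rcond=None)
    contrib = (coef[1:] * X.std(axis=0)) ** 2
    print(f"mode {k}: relative parameter contributions", contrib / contrib.sum())
```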
Sampling and sensitivity analyses tools (SaSAT) for computational modelling
Hoare, Alexander; Regan, David G; Wilson, David P
2008-01-01
SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361
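Two of the listed tasks, Latin hypercube sampling and partial rank correlation coefficients, can be sketched with SciPy/NumPy as below; the parameter ranges and the toy model are assumptions, and this is an illustration rather than the Matlab toolbox itself.

```python
import numpy as np
from scipy.stats import qmc, rankdata

def prcc(X, y):
    """Partial rank correlation of each column of X with y: rank-transform,
    regress out the other parameters, then correlate the residuals."""
    Xr = np.column_stack([rankdata(c) for c in X.T])
    yr = rankdata(y)
    n, k = Xr.shape
    out = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        res_x = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        res_y = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

# Latin hypercube sample of three parameters over assumed ranges
sampler = qmc.LatinHypercube(d=3, seed=0)
X = qmc.scale(sampler.random(200), [0.1, 0.0, 1.0], [1.0, 0.5, 5.0])
y = X[:, 0] ** 2 / X[:, 2] + 0.1 * X[:, 1]        # toy model output
print("PRCC:", prcc(X, y))
```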
Comprehensive analysis of transport aircraft flight performance
NASA Astrophysics Data System (ADS)
Filippone, Antonio
2008-04-01
This paper reviews the state-of-the art in comprehensive performance codes for fixed-wing aircraft. The importance of system analysis in flight performance is discussed. The paper highlights the role of aerodynamics, propulsion, flight mechanics, aeroacoustics, flight operation, numerical optimisation, stochastic methods and numerical analysis. The latter discipline is used to investigate the sensitivities of the sub-systems to uncertainties in critical state parameters or functional parameters. The paper discusses critically the data used for performance analysis, and the areas where progress is required. Comprehensive analysis codes can be used for mission fuel planning, envelope exploration, competition analysis, a wide variety of environmental studies, marketing analysis, aircraft certification and conceptual aircraft design. A comprehensive program that uses the multi-disciplinary approach for transport aircraft is presented. The model includes a geometry deck, a separate engine input deck with the main parameters, a database of engine performance from an independent simulation, and an operational deck. The comprehensive code has modules for deriving the geometry from bitmap files, an aerodynamics model for all flight conditions, a flight mechanics model for flight envelopes and mission analysis, an aircraft noise model and engine emissions. The model is validated at different levels. Validation of the aerodynamic model is done against the scale models DLR-F4 and F6. A general model analysis and flight envelope exploration are shown for the Boeing B-777-300 with GE-90 turbofan engines with intermediate passenger capacity (394 passengers in 2 classes). Validation of the flight model is done by sensitivity analysis on the wetted area (or profile drag), on the specific air range, the brake-release gross weight and the aircraft noise. A variety of results is shown, including specific air range charts, take-off weight-altitude charts, payload-range performance, atmospheric effects, economic Mach number and noise trajectories at F.A.R. landing points.
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description from a user's and programmer's perspective of the highly modular, flexible and extendable software package ASKI-Analysis of Sensitivity and Kernel Inversion-recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are solved by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates, respectively, are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low costs applying different kinds of model regularization or re-selecting/weighting the inverted dataset without need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, it is well documented and freely available under terms of the GNU General Public License (http://www.rub.de/aski).
Effects of gas temperature on nozzle damping experiments on cold-flow rocket motors
NASA Astrophysics Data System (ADS)
Sun, Bing-bing; Li, Shi-peng; Su, Wan-xing; Li, Jun-wei; Wang, Ning-fei
2016-09-01
In order to explore the impact of gas temperature on the nozzle damping characteristics of solid rocket motors, numerical simulations were carried out for an experimental motor from the Naval Ordnance Test Station at China Lake, California. Using the pulse decay method, different cases were numerically studied via Fluent along with UDFs (User Defined Functions). Firstly, mesh sensitivity analysis and monitor position-independence analysis were carried out for code validation. Then, the numerical method was further validated by comparing the calculated results with experimental data. Finally, the effects of gas temperature on the nozzle damping characteristics were studied. The results indicated that the gas temperature had a cooperative effect on the nozzle damping and that there were great differences between the cold flow and hot fire tests. Through discussion and analysis, it was found that the changes in mainstream velocity and natural acoustic frequency caused by the gas temperature were the key factors affecting the nozzle damping, while the alteration of the mean pressure had little effect. Thus, the high pressure condition could be replaced by a low pressure one to reduce the difficulty of the test. Finally, the relation between the cold-flow and hot-fire decay coefficients "alpha" was obtained.
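The pulse decay post-processing essentially amounts to fitting an exponential envelope; the sketch below does this on a synthetic pressure trace with an assumed decay coefficient, sampling rate and acoustic frequency, not data from the study.

```python
import numpy as np

# Synthetic pulse-decay signal: after the pulse, the acoustic amplitude decays
# as exp(-alpha * t), so alpha follows from a linear fit to the log envelope.
fs, f_ac, alpha_true = 20000.0, 350.0, 18.0        # assumed sampling rate, mode, decay
t = np.arange(0.0, 0.5, 1.0 / fs)
rng = np.random.default_rng(0)
p = np.exp(-alpha_true * t) * np.cos(2 * np.pi * f_ac * t) + 1e-3 * rng.normal(size=t.size)

# pick the positive peaks of the oscillation as the envelope samples
peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]) & (p[1:-1] > 0.01))[0] + 1
slope, intercept = np.polyfit(t[peaks], np.log(p[peaks]), 1)
print("fitted decay coefficient alpha =", -slope)   # ~18 1/s for this synthetic trace
```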
Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods
NASA Technical Reports Server (NTRS)
Atkins, Harold L.; Pampell, Alyssa
2011-01-01
A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.
Map-invariant spectral analysis for the identification of DNA periodicities
2012-01-01
Many signal processing based methods for finding hidden periodicities in DNA sequences have primarily focused on assigning numerical values to the symbolic DNA sequence and then applying spectral analysis tools such as the short-time discrete Fourier transform (ST-DFT) to locate these repeats. The key results pertaining to this approach are however obtained using a very specific symbolic to numerical map, namely the so-called Voss representation. An important research problem is therefore to quantify the sensitivity of these results to the choice of the symbolic to numerical map. In this article, a novel algebraic approach to the periodicity detection problem is presented and provides a natural framework for studying the role of the symbolic to numerical map in finding these repeats. More specifically, we derive a new matrix-based expression of the DNA spectrum that comprises most of the widely used mappings in the literature as special cases, show that the DNA spectrum is in fact invariant under all these mappings, and derive a necessary and sufficient condition for the invariance of the DNA spectrum to the symbolic to numerical map. Furthermore, the new algebraic framework decomposes the periodicity detection problem into several fundamental building blocks that are totally independent of each other. Sophisticated digital filters and/or alternate fast data transforms such as the discrete cosine and sine transforms can therefore always be incorporated in the periodicity detection scheme regardless of the choice of the symbolic to numerical map. Although the newly proposed framework is matrix based, identification of these periodicities can be achieved at a low computational cost. PMID:23067324
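The Voss representation mentioned above is easy to reproduce; the sketch below builds the four indicator sequences for a placeholder random sequence, sums their squared DFT magnitudes and inspects the period-3 bin often associated with coding regions.

```python
import numpy as np

# Voss (indicator) representation plus DFT; the sequence here is random and
# purely a placeholder, so no strong period-3 peak is expected.
rng = np.random.default_rng(0)
seq = "".join(rng.choice(list("ACGT"), size=900))

N = len(seq)
spectrum = np.zeros(N)
for base in "ACGT":
    x = np.array([1.0 if s == base else 0.0 for s in seq])   # Voss indicator sequence
    spectrum += np.abs(np.fft.fft(x)) ** 2                    # sum of |DFT|^2 over the four bases

k3 = N // 3                                                   # bin corresponding to period 3
print("relative strength at period 3:", spectrum[k3] / spectrum[1:N // 2].mean())
```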
Prediction of coefficients of thermal expansion for unidirectional composites
NASA Technical Reports Server (NTRS)
Bowles, David E.; Tompkins, Stephen S.
1989-01-01
Several analyses for predicting the longitudinal, alpha(1), and transverse, alpha(2), coefficients of thermal expansion of unidirectional composites were compared with each other, and with experimental data on different graphite fiber reinforced resin, metal, and ceramic matrix composites. Analytical and numerical analyses that accurately accounted for Poisson restraining effects in the transverse direction were in consistently better agreement with experimental data for alpha(2), than the less rigorous analyses. All of the analyses predicted similar values of alpha(1), and were in good agreement with the experimental data. A sensitivity analysis was conducted to determine the relative influence of constituent properties on the predicted values of alpha(1), and alpha(2). As would be expected, the prediction of alpha(1) was most sensitive to longitudinal fiber properties and the prediction of alpha(2) was most sensitive to matrix properties.
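A common closed-form micromechanics estimate (a Schapery-type rule of mixtures) is given below purely for illustration; it is not claimed to be one of the specific analyses compared in the paper, and the constituent properties are assumed values.

```python
# Longitudinal and transverse CTE estimates for a unidirectional ply
# (illustrative constituent properties, not the paper's data).
E_f, alpha_f, V_f = 230e9, -0.5e-6, 0.6      # fiber modulus [Pa], CTE [1/K], volume fraction
E_m, alpha_m = 3.5e9, 55e-6                  # matrix modulus [Pa], CTE [1/K]
nu_f, nu_m = 0.2, 0.35                       # assumed Poisson ratios
V_m = 1.0 - V_f

# alpha_1 is fiber-dominated (stiffness-weighted rule of mixtures)
alpha_1 = (E_f * alpha_f * V_f + E_m * alpha_m * V_m) / (E_f * V_f + E_m * V_m)

# alpha_2 includes a Poisson correction and is matrix-dominated
nu_12 = nu_f * V_f + nu_m * V_m
alpha_2 = (1 + nu_m) * alpha_m * V_m + (1 + nu_f) * alpha_f * V_f - alpha_1 * nu_12

print(f"alpha_1 = {alpha_1:.3e} 1/K, alpha_2 = {alpha_2:.3e} 1/K")
```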
Analysis of mixed traffic flow with human-driving and autonomous cars based on car-following model
NASA Astrophysics Data System (ADS)
Zhu, Wen-Xing; Zhang, H. M.
2018-04-01
We investigated mixed traffic flow with human-driven and autonomous cars. A new mathematical model with adjustable sensitivity and smooth factor was proposed to describe the autonomous cars' moving behavior, in which the smooth factor is used to balance the front and back headways in a flow. A lemma and a theorem were proved to support the stability criteria of the traffic flow. A series of simulations were carried out to analyze the mixed traffic flow, and fundamental diagrams were obtained from the numerical simulation results. Varying the sensitivity and smooth factor of the autonomous cars affects the traffic flux, which exhibits opposite trends with increasing parameter values before and after the critical density. Moreover, the sensitivity of the sensors and the smooth factor play an important role in stabilizing the mixed traffic flow and suppressing traffic jams.
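The abstract does not give the model equations, so the sketch below uses a generic optimal-velocity-type car-following law on a ring road, in which a smooth factor p weights the front headway against the back headway and a sensitivity a sets the relaxation rate; all functional forms and parameter values are assumptions made purely to illustrate the kind of simulation described, not the authors' model.

```python
import numpy as np

def optimal_velocity(h, v_max=30.0, h_c=25.0):
    # generic optimal velocity function (illustrative form)
    return 0.5 * v_max * (np.tanh(0.1 * (h - h_c)) + np.tanh(0.1 * h_c))

def step(x, v, dt, a, p, L):
    """One explicit time step on a ring road of length L.
    a: sensitivity of the autonomous cars, p: smooth factor weighting the
    front headway against the back headway."""
    h_front = (np.roll(x, -1) - x) % L
    h_back = (x - np.roll(x, 1)) % L
    h_eff = p * h_front + (1.0 - p) * h_back
    dv = a * (optimal_velocity(h_eff) - v)
    return (x + v * dt) % L, v + dv * dt

n, L = 50, 1500.0
rng = np.random.default_rng(0)
x = (np.linspace(0.0, L, n, endpoint=False) + rng.normal(0.0, 0.5, n)) % L
v = np.full(n, 15.0)
for _ in range(20000):
    x, v = step(x, v, dt=0.05, a=1.2, p=0.9, L=L)
print("velocity spread after relaxation:", v.std())
```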
Yang, Zemao; Lu, Ruike; Dai, Zhigang; Yan, An; Tang, Qing; Cheng, Chaohua; Xu, Ying; Yang, Wenting; Su, Jianguang
2017-01-01
High salinity is a major environmental stressor for crops. To understand the regulatory mechanisms underlying salt tolerance, we conducted a comparative transcriptome analysis between salt-tolerant and salt-sensitive jute (Corchorus spp.) genotypes in leaf and root tissues under salt stress and control conditions. In total, 68,961 unigenes were identified. Additionally, 11,100 unigenes (including 385 transcription factors (TFs)) exhibited significant differential expression in salt-tolerant or salt-sensitive genotypes. Numerous common and unique differentially expressed unigenes (DEGs) between the two genotypes were discovered. Fewer DEGs were observed in salt-tolerant jute genotypes whether in root or leaf tissues. These DEGs were involved in various pathways, such as ABA signaling, amino acid metabolism, etc. Among the enriched pathways, plant hormone signal transduction (ko04075) and cysteine/methionine metabolism (ko00270) were the most notable. Eight common DEGs across both tissues and genotypes with similar expression profiles were part of the PYL-ABA-PP2C (pyrabactin resistant-like/regulatory components of ABA receptors-abscisic acid-protein phosphatase 2C). The methionine metabolism pathway was only enriched in salt-tolerant jute root tissue. Twenty-three DEGs were involved in methionine metabolism. Overall, numerous common and unique salt-stress response DEGs and pathways between salt-tolerant and salt-sensitive jute have been discovered, which will provide valuable information regarding salt-stress response mechanisms and help improve salt-resistance molecular breeding in jute. PMID:28927022
Novel Array-Based Target Identification for Synergistic Sensitization of Breast Cancer to Herceptin
2010-05-01
Tatsuya Azum, Eileen Adamson, Ryan Alipio, Becky Pio, Frank Jones, Dan Mercola. Chip-on-chip analysis of mechanism of action of HER2 inhibition in...
Munawar, Kutbuddin S. Doctor, Michael Birrer, Michael McClelland, Eileen Adamson, Dan Mercola. Egr1 regulates the coordinated expression of numerous...
Kemal Korkmaz, Mashide Ohmichi, Eileen Adamson, Michael McClelland, Dan Mercola. Identification of genes bound and regulated by ATF2/c-Jun
Pricing policy for declining demand using item preservation technology.
Khedlekar, Uttam Kumar; Shukla, Diwakar; Namdeo, Anubhav
2016-01-01
We have designed an inventory model for seasonal products in which deterioration can be controlled by investment in item preservation technology. Demand for the product is considered price-sensitive and decreases linearly. This study has shown that the profit is a concave function of the optimal selling price, replenishment time and preservation cost parameter. We simultaneously determined the optimal selling price of the product, the replenishment cycle and the cost of item preservation technology. Additionally, this study has shown that there exist an optimal selling price and an optimal preservation investment that maximize the profit for every business set-up. Finally, the model is illustrated by numerical examples and a sensitivity analysis of the optimal solution with respect to the major parameters.
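An illustrative profit maximisation in the spirit of the model can be set up as below; the linear demand, the exponential effect of preservation spending on the deterioration rate, and all parameter values are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative set-up: demand falls linearly in price, deterioration is reduced
# by preservation spending u, and profit is maximised over selling price s and
# u for a fixed cycle length (all values assumed).
a, b = 200.0, 4.0          # linear demand D(s) = a - b*s
c, T = 10.0, 1.0           # unit purchase cost, replenishment cycle length
theta0, k = 0.15, 2.0      # base deterioration rate, preservation effectiveness

def profit(z):
    s, u = z
    D = max(a - b * s, 0.0)
    theta = theta0 * np.exp(-k * u)                 # deterioration reduced by investment
    deterioration_cost = c * theta * D * T / 2.0
    return (s - c) * D * T - deterioration_cost - u * T

res = minimize(lambda z: -profit(z), x0=[30.0, 0.5],
               bounds=[(c, a / b), (0.0, 10.0)])
print("optimal price and preservation spend:", res.x, " profit:", -res.fun)
```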
Xia, Yun; Yan, Shuangqian; Zhang, Xian; Ma, Peng; Du, Wei; Feng, Xiaojun; Liu, Bi-Feng
2017-03-21
Digital loop-mediated isothermal amplification (dLAMP) is an attractive approach for absolute quantification of nucleic acids with high sensitivity and selectivity. Theoretical and numerical analysis of dLAMP provides necessary guidance for the design and analysis of dLAMP devices. In this work, a mathematical model was proposed on the basis of the Monte Carlo method and the theories of Poisson statistics and chemometrics. To examine the established model, we fabricated a spiral chip with 1200 uniform and discrete reaction chambers (9.6 nL) for absolute quantification of pathogenic DNA samples by dLAMP. Under the optimized conditions, dLAMP analysis on the spiral chip realized quantification of nucleic acids spanning over 4 orders of magnitude in concentration with sensitivity as low as 8.7 × 10 -2 copies/μL in 40 min. The experimental results were consistent with the proposed mathematical model, which could provide useful guideline for future development of dLAMP devices.
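The Poisson correction that underlies absolute quantification in digital assays can be sketched as follows, using the chamber count and volume quoted in the abstract; the binomial confidence interval is a simplifying assumption rather than the paper's full chemometric treatment.

```python
import numpy as np

n_chambers = 1200            # reaction chambers on the spiral chip (from the abstract)
v_chamber_nl = 9.6           # chamber volume in nanolitres (from the abstract)

def concentration(n_positive):
    """Absolute concentration [copies/uL] from the number of positive chambers."""
    p = n_positive / n_chambers
    lam = -np.log(1.0 - p)                          # mean copies per chamber (Poisson)
    copies_per_ul = lam / (v_chamber_nl * 1e-3)
    # simple 95% interval from the binomial uncertainty on p (an assumption)
    se_p = np.sqrt(p * (1.0 - p) / n_chambers)
    lo = -np.log(1.0 - max(p - 1.96 * se_p, 0.0)) / (v_chamber_nl * 1e-3)
    hi = -np.log(1.0 - min(p + 1.96 * se_p, 1.0 - 1e-9)) / (v_chamber_nl * 1e-3)
    return copies_per_ul, (lo, hi)

print(concentration(300))     # e.g. 300 positive chambers out of 1200
```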
NASA Astrophysics Data System (ADS)
Judson, Richard S.; Rabitz, Herschel
1987-04-01
The relationship between structure in the potential surface and classical mechanical observables is examined by means of functional sensitivity analysis. Functional sensitivities provide maps of the potential surface, highlighting those regions that play the greatest role in determining the behavior of observables. A set of differential equations for the sensitivities of the trajectory components is derived. These are then solved using a Green's function method. It is found that the sensitivities become singular at the trajectory turning points, with the singularities going as η^(-3/2), where η is the distance from the nearest turning point. The sensitivities are zero outside of the energetically and dynamically allowed region of phase space. A second set of equations is derived from which the sensitivities of observables can be directly calculated. An adjoint Green's function technique is employed, providing an efficient method for numerically calculating these quantities. Sensitivity maps are presented for a simple collinear atom-diatom inelastic scattering problem and for two Henon-Heiles type Hamiltonians modeling intramolecular processes. It is found that the positions of the trajectory caustics in the bound state problem determine the regions of highest potential surface sensitivity. In the scattering problem (which is impulsive, so that "sticky" collisions did not occur), the positions of the turning points of the individual trajectory components determine the regions of high sensitivity. In both cases, these lines of singularities are superimposed on a rich background structure. Most interesting is the appearance of classical interference effects. The interference features in the sensitivity maps occur most noticeably where two or more lines of turning points cross. The important practical motivation for calculating the sensitivities derives from the fact that the potential is a function, implying that any direct attempt to understand how local potential regions affect the behavior of the observables by repeatedly and systematically altering the potential will be prohibitively expensive. The functional sensitivity method enables one to perform this analysis at a fraction of the computational labor required for the direct method.
Self-consistent adjoint analysis for topology optimization of electromagnetic waves
NASA Astrophysics Data System (ADS)
Deng, Yongbo; Korvink, Jan G.
2018-05-01
In topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator with respect to the complex field variable results in a complex-valued adjoint sensitivity, which causes the originally real-valued design variable to become complex during the iterative solution procedure. The adjoint sensitivity is therefore self-inconsistent. To enforce self-consistency, the real part operator has been used to extract the real part of the sensitivity and keep the design variable real-valued. However, this enforced self-consistency can make the derived structural topology depend unreasonably on the phase of the incident wave. To solve this problem, this article focuses on a self-consistent adjoint analysis of the topology optimization problems for electromagnetic waves. The self-consistent adjoint analysis is implemented by splitting the complex variables of the wave equations into their real and imaginary parts and substituting the split variables into the wave equations to derive coupled equations equivalent to the original wave equations, with the infinite free space truncated by perfectly matched layers. The topology optimization problems for electromagnetic waves are thereby transformed into forms defined on real functional spaces instead of complex functional spaces; the adjoint analysis is then carried out on real functional spaces, removing the variation of the conjugate operator; the self-consistent adjoint sensitivity is derived, and the phase-dependence problem of the derived structural topology is avoided. Several numerical examples are presented to demonstrate the robustness of the derived self-consistent adjoint analysis.
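The real/imaginary splitting can be illustrated on a generic complex linear system standing in for the discretised wave operator; the block structure below is the standard equivalent real formulation and is given only as an illustration, not as the article's implementation.

```python
import numpy as np

# A complex system A z = b is rewritten as an equivalent real block system,
# so that all subsequent quantities (including adjoint variables) stay real.
rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
b = rng.normal(size=n) + 1j * rng.normal(size=n)

Ar, Ai = A.real, A.imag
K = np.block([[Ar, -Ai],
              [Ai,  Ar]])                     # real-valued coupled operator
rhs = np.concatenate([b.real, b.imag])

zr = np.linalg.solve(K, rhs)                  # [Re z; Im z]
z = zr[:n] + 1j * zr[n:]
print("max difference vs direct complex solve:", np.abs(z - np.linalg.solve(A, b)).max())
```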
NASA Astrophysics Data System (ADS)
Fan, Shuping; Liu, Baohua; Ye, Minyou; Luo, Jiarong
1992-12-01
The idea of improving the plateau with a ZnO varistor (voltage-sensitive resistor) is presented. The results of an experiment tailoring V_L and I_p on the HT-6M tokamak are introduced. An improved plateau lasting tens of milliseconds was achieved (ΔV_L/V_L less than 5%, ΔI_p/I_p less than 5%, ΔN_e/N_e less than 10%). Obviously, a constant distribution of temperature and density is of great importance for many diagnostic measurements and further physics experiments. A simplified analysis of the actual poloidal circuit of HT-6M is given. The numerical simulation and the experimental results are compared. The operating principle of the varistor and its application on iron-core transformer tokamaks in the plateau and rising phases are discussed.
Polarization-independent beam focusing by high-contrast grating reflectors
NASA Astrophysics Data System (ADS)
Su, Wei; Zheng, Gaige; Jiang, Liyong; Li, Xiangyin
2014-08-01
A kind of high-contrast grating (HCG) reflector for beam focusing has been proposed. We design a planar grating structure with a parabolic surface profile and perform numerical simulations using the finite-difference time-domain (FDTD) method to verify that the structure is capable of focusing both transverse-magnetic (TM) and transverse-electric (TE) polarized light. Finally, we extend the design to a three-dimensional (3D) case. Numerical results demonstrate that the power intensities at the focal point are all more than 8.5 dB above the incident intensity, which indicates a good focusing effect. Further analysis of the sensitivity to the incident wavelength (1.55, 1.79 and 2 μm) reveals that the proposed structure has a wide working wavelength range.
A mathematical model for CTL effect on a latently infected cell inclusive HIV dynamics and treatment
NASA Astrophysics Data System (ADS)
Tarfulea, N. E.
2017-10-01
This paper investigates theoretically and numerically the effect of immune effectors, such as cytotoxic lymphocytes (CTLs), in modeling HIV pathogenesis via a newly developed mathematical model; our results suggest a significant impact of the immune response on the control of the virus during primary infection. Qualitative aspects (including positivity, boundedness, stability, uncertainty, and sensitivity analysis) are addressed. Additionally, by introducing drug therapy, we numerically analyze the model to assess the effect of treatment consisting of a combination of several antiretroviral drugs. Our results show that the inclusion of the CTL compartment produces a higher rebound of an individual's healthy helper T-cell compartment than drug therapy alone. Furthermore, we quantitatively characterize successful drugs or drug combination scenarios.
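A standard target-cell/infected-cell/virus/CTL system in the spirit of the model described can be written as below; the paper's model additionally includes latently infected cells and treatment terms, and all parameter values here are illustrative assumptions only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative HIV dynamics with a CTL compartment (assumed parameter values).
lam, d, beta = 10.0, 0.01, 2.4e-5      # T-cell production, death, infection rates
delta, k = 0.5, 1e-3                   # infected-cell death, CTL killing rate
p_v, c = 300.0, 3.0                    # virion production and clearance
a_z, b_z = 2e-4, 0.1                   # CTL stimulation and decay
eps = 0.0                              # drug efficacy (0 = no treatment)

def rhs(t, y):
    T, I, V, Z = y
    dT = lam - d * T - (1 - eps) * beta * T * V
    dI = (1 - eps) * beta * T * V - delta * I - k * I * Z
    dV = p_v * I - c * V
    dZ = a_z * I * Z - b_z * Z          # CTL expansion driven by infected cells
    return [dT, dI, dV, dZ]

sol = solve_ivp(rhs, (0.0, 400.0), [1000.0, 0.0, 1e-3, 1.0], max_step=0.1)
print("final healthy T cells:", sol.y[0, -1], " viral load:", sol.y[2, -1])
```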
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should therefore be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed.
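Lagrangian stress accumulation along a pathline is often implemented with a power-law damage model; one commonly used linearized form is sketched below with assumed constants and a placeholder stress history, and it is not claimed to be the formulation evaluated in this paper.

```python
import numpy as np

# One commonly used linearized accumulation: D = sum_i C * sigma_i^a * dt^b,
# summed along a particle trajectory (constants and data are assumptions).
a_exp, b_exp, C = 2.416, 0.785, 3.62e-7

def damage_index(sigma, dt):
    """Single-passage damage accumulation for one pathline; sigma is the
    scalar stress history [Pa] sampled at constant time step dt [s]."""
    return np.sum(C * sigma ** a_exp * dt ** b_exp)

rng = np.random.default_rng(0)
dt = 1e-3                                           # assumed pathline time step
sigma = np.abs(rng.normal(20.0, 10.0, size=500))    # placeholder stress history
print("damage index for one pathline:", damage_index(sigma, dt))
```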
Open pit mining profit maximization considering selling stage and waste rehabilitation cost
NASA Astrophysics Data System (ADS)
Muttaqin, B. I. A.; Rosyidi, C. N.
2017-11-01
In open pit mining activities, determination of the cut-off grade is crucial, since the cut-off grade affects how much profit the mining company will earn. In this study, we developed a cut-off grade determination model for the open pit mining industry considering the cost of mining, waste removal (rehabilitation) cost, processing cost, fixed cost, and selling stage cost. The main goal of this study is to develop a model of cut-off grade determination that maximizes the total profit. Secondly, a sensitivity analysis is carried out to observe how the model responds to changes in the cost components. The optimization results show that the model can help mining company managers determine the optimal cut-off grade and also estimate how much profit the mining company can earn. To illustrate the application of the model, a numerical example and a set of sensitivity analyses are presented. From the results of the sensitivity analysis, we conclude that changes in the sales price greatly affect the optimal cut-off value and the total profit.
Effects of CO addition on the characteristics of laminar premixed CH{sub 4}/air opposed-jet flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, C.-Y.; Chao, Y.-C.; Chen, C.-P.
2009-02-15
The effects of CO addition on the characteristics of premixed CH4/air opposed-jet flames are investigated experimentally and numerically. Experimental measurements and numerical simulations of the flame front position, temperature, and velocity are performed in stoichiometric CH4/CO/air opposed-jet flames with various CO contents in the fuel. A thermocouple is used for the determination of flame temperature, velocity measurement is made using particle image velocimetry (PIV), and the flame front position is measured by direct photography as well as with laser-induced predissociative fluorescence (LIPF) imaging of OH. The laminar burning velocity is calculated using the PREMIX code of the Chemkin collection 3.5. The flame structures of the premixed stoichiometric CH4/CO/air opposed-jet flames are simulated using the OPPDIF package with the GRI-Mech 3.0 chemical kinetic mechanism and detailed transport properties. The measured flame front position, temperature, and velocity of the stoichiometric CH4/CO/air flames are closely predicted by the numerical calculations. Detailed analysis of the calculated chemical kinetic structures reveals that as the CO content in the fuel is increased from 0% to 80%, CO oxidation (R99) increases significantly and contributes a significant level of heat-release rate. It is also shown that the laminar burning velocity reaches a maximum value (57.5 cm/s) at the condition of 80% CO in the fuel. Based on the results of sensitivity analysis, the chemistry of CO consumption shifts to the dry oxidation kinetics when the CO content is further increased over 80%. Comparison between the results of computed laminar burning velocity, flame temperature, CO consumption rate, and sensitivity analysis reveals that the effect of CO addition on the laminar burning velocity of the stoichiometric CH4/CO/air flames is due mostly to the transition of the dominant chemical kinetic steps.
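A brute-force sensitivity check of the laminar burning velocity to individual reaction rates can be sketched with Cantera and GRI-Mech 3.0 (the study itself used PREMIX/OPPDIF with Chemkin); the fuel split, the perturbation size and the reaction indices below are assumptions, and attribute names may differ slightly across Cantera versions.

```python
import cantera as ct

# Stoichiometric CH4/CO blend with 80% CO in the fuel, as in the abstract
gas = ct.Solution("gri30.yaml")
fuel = {"CH4": 0.2, "CO": 0.8}
gas.set_equivalence_ratio(1.0, fuel, {"O2": 1.0, "N2": 3.76})
gas.TP = 300.0, ct.one_atm

flame = ct.FreeFlame(gas, width=0.03)
flame.set_refine_criteria(ratio=3, slope=0.07, curve=0.14)
flame.solve(loglevel=0, auto=True)
su0 = flame.velocity[0]                       # flame.u[0] in older Cantera versions
print(f"laminar burning velocity: {su0 * 100:.1f} cm/s")

# Perturb a few reaction rate multipliers and re-solve (coarse sensitivity)
for i in [37, 98]:                            # hypothetical reaction indices
    gas.set_multiplier(1.05, i)
    flame.solve(loglevel=0, refine_grid=False)
    print(i, gas.reaction(i).equation, (flame.velocity[0] - su0) / (0.05 * su0))
    gas.set_multiplier(1.0, i)                # restore the nominal rate
```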
Temporal model of an optically pumped co-doped solid state laser
NASA Technical Reports Server (NTRS)
Wangler, T. G.; Swetits, J. J.; Buoncristiani, A. M.
1993-01-01
Currently, research is being conducted on the optical properties of materials associated with the development of solid state lasers in the two micron region. In support of this effort, a mathematical model describing the energy transfer in a holmium laser sensitized with thulium is developed. In this paper, we establish some qualitative properties of the solution of the model, such as non-negativity, boundedness, and integrability. A local stability analysis is then performed from which conditions for asymptotic stability are attained. Finally, we report on our numerical analysis of the system and how it compares with experimental results.
NASA Astrophysics Data System (ADS)
Chaljub, Emmanuel; Maufroy, Emeline; deMartin, Florent; Hollender, Fabrice; Guyonnet-Benaize, Cédric; Manakou, Maria; Savvaidis, Alexandros; Kiratzi, Anastasia; Roumelioti, Zaferia; Theodoulidis, Nikos
2014-05-01
Understanding the origin of the variability of earthquake ground motion is critical for seismic hazard assessment. Here we present the results of a numerical analysis of the sensitivity of earthquake ground motion to seismic source parameters, focusing on the Mygdonian basin near Thessaloniki (Greece). We use an extended model of the basin (65 km [EW] x 50 km [NS]) which has been elaborated during the Euroseistest Verification and Validation Project. The numerical simulations are performed with two independent codes, both implementing the Spectral Element Method. They rely on a robust, semi-automated, mesh design strategy together with a simple homogenization procedure to define a smooth velocity model of the basin. Our simulations are accurate up to 4 Hz, and include the effects of surface topography and of intrinsic attenuation. Two kinds of simulations are performed: (1) direct simulations of the surface ground motion for real regional events having various back azimuths with respect to the center of the basin; (2) reciprocity-based calculations where the ground motion due to 980 different seismic sources is computed at a few stations in the basin. In the reciprocity-based calculations, we consider epicentral distances varying from 2.5 km to 40 km, source depths from 1 km to 15 km, and we span the range of possible back-azimuths with a 10 degree bin. We present results showing (1) the sensitivity of ground motion parameters to the location and focal mechanism of the seismic sources and (2) the variability, with respect to the source characteristics, of the amplification caused by site effects, as measured by standard spectral ratios.
Williams, R D; Measures, R; Hicks, D M; Brasington, J
2016-08-01
Numerical morphological modeling of braided rivers, using a physics-based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth-averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high-flow event. Evaluation of model performance primarily focused upon using high-resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach-scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers.
Nestorov, I A; Aarons, L J; Rowland, M
1997-08-01
Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as the sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, and the permeability surface area product of the brain, has been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. The following general conclusions were drawn: (i) the overall sensitivity of the system to all parameters involved is small due to the weak connectivity of the system structure; (ii) the time course of both the auto- and cross-sensitivity functions for all tissues depends on the dynamics of the tissues themselves, e.g., the higher the perfusion of a tissue, the higher are both its cross-sensitivity to other tissues' parameters and the cross-sensitivities of other tissues to its parameters; and (iii) with a few exceptions, there is not a marked influence of the lipophilicity of the homologues on either the pattern or the values of the sensitivity functions. The estimates of the sensitivity and the subsequent tissue and parameter rankings may be extended to other drugs sharing the same common structure of the whole-body PBPK model and having similar model parameters. The results also show that the computationally simple Matrix Perturbation Analysis should be used only when an initial idea about the sensitivity of a system is required. If comprehensive information regarding the sensitivity is needed, the numerically expensive Direct Sensitivity Analysis should be used.
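The second (traditional) sensitivity definition above lends itself to a short numerical illustration. The sketch below evaluates the normalized sensitivity S(t) = (∂C/∂p)(p/C) by central finite differences on a toy one-compartment model; the model and parameter values are assumptions for illustration and are not the authors' whole-body PBPK model.

```python
# Minimal sketch: normalized sensitivity of a tissue/blood concentration to a parameter.
import numpy as np

def concentration(t, clearance, volume, dose=1.0):
    # Toy i.v.-bolus one-compartment model: C(t) = (dose/V) * exp(-(CL/V) * t)
    return (dose / volume) * np.exp(-(clearance / volume) * t)

def normalized_sensitivity(t, name, params, rel_step=1e-4):
    hi = {**params, name: params[name] * (1 + rel_step)}
    lo = {**params, name: params[name] * (1 - rel_step)}
    dC_dp = (concentration(t, **hi) - concentration(t, **lo)) / (2 * rel_step * params[name])
    return dC_dp * params[name] / concentration(t, **params)

params = {"clearance": 0.5, "volume": 2.0}   # illustrative values
t = np.linspace(0.5, 12.0, 4)
for name in params:
    print(name, np.round(normalized_sensitivity(t, name, params), 3))
```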
Statistical characterization of planar two-dimensional Rayleigh-Taylor mixing layers
NASA Astrophysics Data System (ADS)
Sendersky, Dmitry
2000-10-01
The statistical evolution of a planar, randomly perturbed fluid interface subject to Rayleigh-Taylor instability is explored through numerical simulation in two space dimensions. The data set, generated by the front-tracking code FronTier, is highly resolved and covers a large ensemble of initial perturbations, allowing a more refined analysis of closure issues pertinent to the stochastic modeling of chaotic fluid mixing. We closely approach a two-fold convergence of the mean two-phase flow: convergence of the numerical solution under computational mesh refinement, and statistical convergence under increasing ensemble size. Quantities that appear in the two-phase averaged Euler equations are computed directly and analyzed for numerical and statistical convergence. Bulk averages show a high degree of convergence, while interfacial averages are convergent only in the outer portions of the mixing zone, where there is a coherent array of bubble and spike tips. Comparison with the familiar bubble/spike penetration law h = αAgt² is complicated by the lack of scale invariance, inability to carry the simulations to late time, the increasing Mach numbers of the bubble/spike tips, and sensitivity to the method of data analysis. Finally, we use the simulation data to analyze some constitutive properties of the mixing process.
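For readers wanting to connect simulation output to the penetration law quoted above, the sketch below fits the growth coefficient α in h = αAgt² to synthetic bubble-front data by linear least squares; the Atwood number, gravity, and noise level are assumed values, not quantities from the FronTier data set.

```python
# Minimal sketch: fitting alpha in h = alpha * A * g * t^2 from front-position data.
import numpy as np

A, g = 0.5, 9.81                                   # Atwood number and gravity (illustrative)
t = np.linspace(0.0, 1.0, 50)
noise = 0.05 * np.random.default_rng(1).normal(size=t.size)
h_obs = 0.06 * A * g * t**2 * (1 + noise)          # synthetic "measured" bubble heights

x = A * g * t**2                                   # h is linear in alpha, so least squares is direct
alpha = np.sum(h_obs * x) / np.sum(x**2)
print(f"fitted alpha = {alpha:.3f}")
```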
Space station integrated wall design and penetration damage control
NASA Technical Reports Server (NTRS)
Coronado, A. R.; Gibbins, M. N.; Wright, M. A.; Stern, P. H.
1987-01-01
The analysis code BUMPER executes a numerical solution to the problem of calculating the probability of no penetration (PNP) of a spacecraft subject to man-made orbital debris or meteoroid impact. The codes were developed on a DEC VAX 11/780 computer running the Virtual Memory System (VMS) operating system and are written in FORTRAN 77 with no VAX extensions. To help illustrate the steps involved, a single sample analysis is performed. The example used is the space station reference configuration. The finite element model (FEM) of this configuration is relatively complex but demonstrates many BUMPER features. The computer tools and guidelines are described for constructing a FEM for the space station under consideration. The methods used to analyze the sensitivity of PNP to variations in design are described. Ways are suggested for developing contour plots of the sensitivity study data. Additional BUMPER analysis examples are provided, including FEMs, command inputs, and data outputs. The mathematical theory used as the basis for the code is described, and the data flow within the analysis is illustrated.
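BUMPER's element-by-element formulation is not reproduced in the abstract, but the underlying idea of a probability of no penetration can be illustrated with the standard Poisson assumption PNP = exp(−N), where N is the expected number of penetrating impacts. The sketch below is a hedged stand-in with assumed flux, area, and penetration-fraction values; it is not the BUMPER algorithm.

```python
# Hedged sketch: Poisson-based probability of no penetration over a mission duration.
import math

def probability_no_penetration(flux_per_m2_yr, exposed_area_m2, years, penetration_fraction):
    """flux_per_m2_yr: impacts per m^2 per year above some size threshold;
    penetration_fraction: fraction of those impacts assumed to defeat the wall."""
    expected_penetrations = flux_per_m2_yr * exposed_area_m2 * years * penetration_fraction
    return math.exp(-expected_penetrations)

print(probability_no_penetration(flux_per_m2_yr=1e-5, exposed_area_m2=500.0,
                                 years=10.0, penetration_fraction=0.2))
```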
Internal and forced eddy variability in the Labrador Sea
NASA Astrophysics Data System (ADS)
Bracco, A.; Luo, H.; Zhong, Y.; Lilly, J.
2009-04-01
Water mass transformation in the Labrador Sea, widely believed to be one of the key regions in the Atlantic Meridional Overturning Circulation (AMOC), now appears to be strongly impacted by vortex dynamics of the unstable boundary current. Large interannual variations in both eddy shedding and buoyancy transport from the boundary current have been observed but not explained, and are apparently sensitive to the state of the inflowing current. Heat and salinity fluxes associated with the eddies drive ventilation changes not accounted for by changes in local surface forcing, particularly during occasional years of extreme eddy activity, and constitute a predominant source of "internal" oceanic variability. The nature of this variable eddy-driven restratification is one of the outstanding questions along the northern transformation pathway. Here we investigate the eddy generation mechanism and the associated buoyancy fluxes by combining realistic and idealized numerical modeling, data analysis, and theory. Theory, supported by idealized experiments, provides criteria to test hypotheses as to the vortex formation process (by baroclinic instability linked to the bottom topography). Ensembles of numerical experiments with a high-resolution regional model (ROMS) allow for quantifying the sensitivity of eddy generation and property transport to variations in local and external forcing parameters. For the first time, we reproduce with a numerical simulation the observed interannual variability in the eddy kinetic energy in the convective region of the Labrador Basin and along the West Greenland Current.
Kairisto, V; Poola, A
1995-01-01
GraphROC for Windows is a program for clinical test evaluation. It was designed for the handling of large datasets obtained from clinical laboratory databases. In the user interface, graphical and numerical presentations are combined. For simplicity, numerical data is not shown unless requested. Relevant numbers can be "picked up" from the graph by simple mouse operations. Reference distributions can be displayed by using automatically optimized bin widths. Any percentile of the distribution with corresponding confidence limits can be chosen for display. In sensitivity-specificity analysis, both illness- and health-related distributions are shown in the same graph. The following data for any cutoff limit can be shown in a separate click window: clinical sensitivity and specificity with corresponding confidence limits, positive and negative likelihood ratios, positive and negative predictive values and efficiency. Predictive values and clinical efficiency of the cutoff limit can be updated for any prior probability of disease. Receiver Operating Characteristics (ROC) curves can be generated and combined into the same graph for comparison of several different tests. The area under the curve with corresponding confidence interval is calculated for each ROC curve. Numerical results of analyses and graphs can be printed or exported to other Microsoft Windows programs. GraphROC for Windows also employs a new method, developed by us, for the indirect estimation of health-related limits and change limits from mixed distributions of clinical laboratory data.
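A minimal sketch of the cutoff statistics described above (sensitivity, specificity, likelihood ratios, and predictive values updated for an arbitrary prior probability of disease) is given below; the synthetic test-value distributions, the cutoff, and the prior are assumptions, and the code is not part of GraphROC.

```python
# Minimal sketch: cutoff-level diagnostic statistics from two value distributions.
import numpy as np

def cutoff_stats(diseased, healthy, cutoff, prior):
    diseased, healthy = np.asarray(diseased), np.asarray(healthy)
    sens = np.mean(diseased >= cutoff)
    spec = np.mean(healthy < cutoff)
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec if spec > 0 else float("inf")
    # Predictive values for the chosen prior (pretest) probability of disease
    ppv = sens * prior / (sens * prior + (1 - spec) * (1 - prior))
    npv = spec * (1 - prior) / (spec * (1 - prior) + (1 - sens) * prior)
    return dict(sensitivity=sens, specificity=spec,
                lr_pos=lr_pos, lr_neg=lr_neg, ppv=ppv, npv=npv)

rng = np.random.default_rng(2)
print(cutoff_stats(rng.normal(8, 2, 300), rng.normal(5, 2, 3000), cutoff=7.5, prior=0.05))
```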
NASA Technical Reports Server (NTRS)
Towner, Robert L.; Band, Jonathan L.
2012-01-01
An analysis technique was developed to compare and track mode shapes for different Finite Element Models. The technique may be applied to a variety of structural dynamics analyses, including model reduction validation (comparing unreduced and reduced models), mode tracking for various parametric analyses (e.g., launch vehicle model dispersion analysis to identify sensitivities to modal gain for Guidance, Navigation, and Control), comparing models of different mesh fidelity (e.g., a coarse model for a preliminary analysis compared to a higher-fidelity model for a detailed analysis) and mode tracking for a structure with properties that change over time (e.g., a launch vehicle from liftoff through end-of-burn, with propellant being expended during the flight). Mode shapes for different models are compared and tracked using several numerical indicators, including traditional Cross-Orthogonality and Modal Assurance Criteria approaches, as well as numerical indicators obtained by comparing modal strain energy and kinetic energy distributions. This analysis technique has been used to reliably identify correlated mode shapes for complex Finite Element Models that would otherwise be difficult to compare using traditional techniques. This improved approach also utilizes an adaptive mode tracking algorithm that allows for automated tracking when working with complex models and/or comparing a large group of models.
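One of the numerical indicators mentioned above, the Modal Assurance Criterion, is easy to state concretely. The sketch below computes a MAC matrix between two mode-shape sets; the random mode shapes are placeholders for finite element output, not data from the technique described.

```python
# Minimal sketch: Modal Assurance Criterion between two sets of real mode shapes.
import numpy as np

def mac(phi_a, phi_b):
    """MAC matrix between mode-shape sets phi_a (n x ma) and phi_b (n x mb)."""
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.sum(phi_a * phi_a, axis=0), np.sum(phi_b * phi_b, axis=0))
    return num / den

rng = np.random.default_rng(3)
modes_full = rng.normal(size=(100, 4))
modes_reduced = modes_full + 0.05 * rng.normal(size=(100, 4))
print(np.round(mac(modes_full, modes_reduced), 3))   # near-identity => well-paired modes
```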
Actinic Flux Calculations: A Model Sensitivity Study
NASA Technical Reports Server (NTRS)
Krotkov, Nickolay A.; Flittner, D.; Ahmad, Z.; Herman, J. R.; Einaudi, Franco (Technical Monitor)
2000-01-01
calculate direct and diffuse surface irradiance and actinic flux (downwelling (2π) and total (4π)) for the reference model. Sensitivity analysis has shown that the accuracy of the radiative transfer flux calculations for a unit ETS (i.e. atmospheric transmittance) together with a numerical interpolation technique for the constituents' vertical profiles is better than 1% for SZA less than 70° and wavelengths longer than 310 nm. The differences increase for shorter wavelengths and larger SZA, due to the differences in pseudo-spherical correction techniques and vertical discretization among the codes. Our sensitivity study includes variation of ozone cross-sections, ETS spectra and the effects of wavelength shifts between vacuum and air scales. We also investigate the effects of aerosols on the spectral flux components in the UV and visible spectral regions. The "aerosol correction factors" (ACFs) were calculated at discrete wavelengths and different SZAs for each flux component (direct, diffuse, reflected) and prescribed IPMMI aerosol parameters. Finally, the sensitivity study was extended to the calculation of selected photolysis rate coefficients.
Liu, C Carrie; Jethwa, Ashok R; Khariwala, Samir S; Johnson, Jonas; Shin, Jennifer J
2016-01-01
(1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of nondiagnostic and indeterminate cytology with parotid FNA. Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I(2) statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I(2) point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509-0.982) and a specificity of 0.995 (95% CI, 0.960-0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030-0.075) and 0.147 (95% CI, 0.106-0.188), respectively. FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. © American Academy of Otolaryngology-Head and Neck Surgery Foundation 2015.
Liu, C. Carrie; Jethwa, Ashok R.; Khariwala, Samir S.; Johnson, Jonas; Shin, Jennifer J.
2016-01-01
Objectives (1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of non-diagnostic and indeterminate cytology with parotid FNA. Data Sources Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Review Methods Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I2 statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. Results The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I2 point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509–0.982) and a specificity of 0.995 (95% CI, 0.960–0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030–0.075) and 0.147 (95% CI, 0.106–0.188), respectively. Conclusion FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. PMID:26428476
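The posttest probabilities read off a Bayesian nomogram can also be computed directly from the pooled estimates, as sketched below; the 20% pretest probability is an assumed example value, not a figure from the meta-analysis.

```python
# Minimal sketch: posttest probability of malignancy from pooled sensitivity/specificity.
def posttest_probability(pretest, sensitivity, specificity, test_positive=True):
    lr = sensitivity / (1 - specificity) if test_positive else (1 - sensitivity) / specificity
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Pooled prospective-subgroup estimates quoted in the abstract
sens, spec = 0.882, 0.995
print(posttest_probability(0.20, sens, spec, test_positive=True))   # malignant cytology
print(posttest_probability(0.20, sens, spec, test_positive=False))  # benign cytology
```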
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
Application of Methods of Numerical Analysis to Physical and Engineering Data.
1980-10-15
directed algorithm would seem to be called for. However, I(0) is itself a random process, making its gradient too unreliable for such a sensitive algorithm ... radiation energy on the detector. Active laser systems, on the other hand, have now created the possibility for extremely narrow pass band systems ... emitted by the earth and its atmosphere. The broad spectral range was selected so that the field of view of the detector could be narrowed to obtain
2013-09-30
transiting whales in the Southern California Bight, b) the use of passive underwater acoustic techniques for improved habitat assessment in biologically ... sensitive areas and improved ecosystem modeling, and c) the application of the physics of excitable media to numerical modeling of biological choruses ... was on the potential impact of man-made sounds on the calling behavior of transiting humpback whales in the Southern California Bight. The main
Stability and bifurcation for an SEIS epidemic model with the impact of media
NASA Astrophysics Data System (ADS)
Huo, Hai-Feng; Yang, Peng; Xiang, Hong
2018-01-01
A novel SEIS epidemic model with the impact of media is introduced. By analyzing the characteristic equation of the equilibrium, the basic reproduction number is obtained and the stability of the steady states is proved. The occurrence of forward, backward and Hopf bifurcations is derived. Numerical simulations and sensitivity analysis are performed. Our results show that media coverage can be regarded as a good indicator in controlling the emergence and spread of the epidemic disease.
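As a hedged illustration of this class of model, the sketch below integrates a generic SEIS system in which media coverage damps the transmission rate through an exponential factor β0·exp(−mI); this functional form and all parameter values are assumptions chosen for illustration, not necessarily those used by the authors.

```python
# Hedged sketch: SEIS dynamics with a media-dependent transmission rate.
import numpy as np
from scipy.integrate import solve_ivp

def seis(t, y, beta0, m, sigma, gamma, mu, Lambda):
    S, E, I = y
    beta = beta0 * np.exp(-m * I)             # media coverage dampens transmission as I grows
    dS = Lambda - beta * S * I - mu * S + gamma * I   # recovered return to the susceptible class
    dE = beta * S * I - (sigma + mu) * E
    dI = sigma * E - (gamma + mu) * I
    return [dS, dE, dI]

sol = solve_ivp(seis, (0, 400), [990.0, 5.0, 5.0],
                args=(5e-4, 0.01, 0.2, 0.1, 0.01, 10.0))
print(sol.y[:, -1])   # approach to the endemic steady state
```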
Extensions and applications of a second-order landsurface parameterization
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1983-01-01
Extensions and applications of a second-order land surface parameterization, proposed by Andreou and Eagleson, are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested by using the model. Sensitivity analysis with respect to the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also incorporated.
[Interpretation of false positive results of biochemical prenatal tests].
Sieroszewski, Piotr; Słowakiewicz, Katarzyna; Perenc, Małgorzata
2010-03-01
Modern, non-invasive prenatal diagnostics based on biochemical and ultrasonographic markers of fetal defects allows us to calculate the risk of fetal chromosomal aneuploidies with high sensitivity and specificity. The introduction of biochemical, non-invasive prenatal tests turned out to result in frequent false positive results of these tests in cases where invasive diagnostics does not confirm fetal defects. However, prospective analysis of these cases showed numerous complications in the third trimester of the pregnancies.
NASA Astrophysics Data System (ADS)
Telle, H. H.; Beddows, D. C. S.; Morris, G. W.; Samek, O.
2001-06-01
In order to improve on analytical selectivity and sensitivity, the technique of laser-induced fluorescence spectroscopy (LIFS) was combined with laser-induced breakdown spectroscopy (LIBS). The main thrust of this investigation was to address analytical scenarios in which the measurement site may be difficult to access. Hence, a remote LIBS+LIFS arrangement was set up, and the experiments were carried out on samples surrounded by air at atmospheric pressure, rather than in a controlled buffer gas environment at reduced pressure. As a proof of principle, the detection of aluminium, chromium, iron and silicon at trace-level concentrations was pursued. These elements are of importance in numerous chemical, medical and industrial applications, and they exhibit suitable resonance transitions, accessible by radiation from a pulsed Ti:sapphire laser system (its 2nd and 3rd harmonic outputs). All investigated elements have an energy level structure in which the laser-excited level is a member of a group of closely-spaced energy levels; thus, this allowed for easy off-resonant fluorescence detection (collisional energy transfer processes). Since many of the relevant transition wavelengths lie within a narrow spectral interval, this opens the possibility of multi-element analysis; this was demonstrated here for Cr and Fe, which were accessed by rapidly changing the tuneable laser wavelength.
NASA Astrophysics Data System (ADS)
Springer, H. Keo
2017-06-01
Advanced manufacturing techniques offer the control of explosive mesostructures necessary to tailor shock sensitivity. However, structure-property relationships are not well established for explosives, so there is little material design guidance for these techniques. The objective of this numerical study is to demonstrate how TATB-based explosives can be sensitized to shocks using mesostructural features. For this study, we use LX-17 (92.5 wt% TATB, 7.5 wt% Kel-F 800) as the prototypical TATB-based explosive. We employ features with different geometries and materials. HMX-based explosive features, high shock impedance features, and pores are used to sensitize the LX-17. Simulations are performed in the multi-physics hydrocode ALE3D. A reactive flow model is used to simulate the shock initiation response of the explosives. Our metric for shock sensitivity in this study is run distance to detonation as a function of applied pressure. These numerical studies are important because they guide the design of novel energetic materials. This work was performed under the auspices of the United States Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-724986.
NASA Astrophysics Data System (ADS)
Grossir, Guillaume; Van Hove, Bart; Paris, Sébastien; Rambaud, Patrick; Chazot, Olivier
2016-05-01
The performance of fast-response slender static pressure probes is evaluated in the short-duration, cold-gas, VKI Longshot hypersonic wind tunnel. Free-stream Mach numbers range between 9.5 and 12, and unit Reynolds numbers are within 3-10 × 10⁶/m. Absolute pressure sensors are fitted within the probes, and an inexpensive calibration method, suited to low static pressure environments (200-1000 Pa), is described. Transfer functions relating the probe measurements p_w to the free-stream static pressure p_∞ are established for the Longshot flow conditions based on numerical simulations. The pressure ratios p_w/p_∞ are found to be close to unity for both laminar and turbulent boundary layers. Weak viscous effects characterized by small viscous interaction parameters χ̄ < 1.5 are confirmed experimentally for probe aspect ratios of L/D > 16.5 by installing multiple pressure sensors in a single probe. The effect of pressure orifice geometry is also evaluated experimentally and found to be negligible for either straight or chamfered holes, 0.6-1 mm in diameter. No sensitivity to probe angle of attack could be evidenced for α < 0.33°. Pressure measurements are compared to theoretical predictions assuming an isentropic nozzle flow expansion. Significant deviations from this ideal case and the Mach 14 contoured nozzle design are uncovered. Validation of the static pressure measurements is obtained by comparing shock wave locations on Schlieren photographs to numerical predictions using free-stream properties derived from the static pressure probes. While these results apply to the Longshot wind tunnel, the present methodology and sensitivity analysis can guide similar investigations for other hypersonic test facilities.
Research on the ϕ-OTDR fiber sensor sensitive for all of the distance
NASA Astrophysics Data System (ADS)
Kong, Yong; Liu, Yang; Shi, Yi; Ansari, Farhad; Taylor, Todd
2018-01-01
In this paper, a modified construction for the traditional ϕ-OTDR fiber sensor that is sensitive over the entire sensing distance is presented. The related numerical simulation and experimental analysis results show that this construction can reduce the gain imbalance along the full length of the fiber caused by the Rayleigh scattering loss of the fiber and the gain imbalance of the Raman fiber amplifier in this fiber sensor system. In order to further improve the vibration sensitivity of this system, possible methods to restrain the influence of the modulation instability effect and the stimulated Brillouin effect, and to reduce the amplified spontaneous emission (ASE) noise of the Raman laser (RL) and Erbium3+-doped fiber amplifiers (EDFA) as well as the double Rayleigh backscattering noise in this system, are discussed. We believe this offers a useful reference for scientific research and engineering applications in the field of fiber sensing.
Probing CP violation in $$h\\rightarrow\\gamma\\gamma$$ with converted photons
Bishara, Fady; Grossman, Yuval; Harnik, Roni; ...
2014-04-11
We study Higgs diphoton decays, in which both photons undergo nuclear conversion to electron-positron pairs. The kinematic distribution of the two electron-positron pairs may be used to probe the CP violating (CPV) coupling of the Higgs to photons, which may be produced by new physics. Detecting CPV in this manner requires interference between the spin-polarized helicity amplitudes for both conversions. We derive leading order, analytic forms for these amplitudes. In turn, we obtain compact, leading-order expressions for the full process rate. While performing experiments involving photon conversions may be challenging, we use the results of our analysis to construct experimental cuts on certain observables that may enhance sensitivity to CPV. We show that there exist regions of phase space on which sensitivity to CPV is of order unity. The statistical sensitivity of these cuts is verified numerically, using dedicated Monte-Carlo simulations.
Securing Sensitive Flight and Engine Simulation Data Using Smart Card Technology
NASA Technical Reports Server (NTRS)
Blaser, Tammy M.
2003-01-01
NASA Glenn Research Center has developed a smart card prototype capable of encrypting and decrypting disk files required to run a distributed aerospace propulsion simulation. Triple Data Encryption Standard (3DES) encryption is used to secure the sensitive intellectual property on disk before, during, and after simulation execution. The prototype operates as a secure system and maintains its authorized state by safely storing and permanently retaining the encryption keys only on the smart card. The prototype is capable of authenticating a single smart card user and includes pre-simulation and post-simulation tools for analysis and training purposes. The prototype's design is highly generic and can be used to protect any sensitive disk files, with growth capability to run multiple simulations. The NASA computer engineer developed the prototype in an interoperable programming environment to enable porting to other Numerical Propulsion System Simulation (NPSS) capable operating system environments.
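For readers unfamiliar with the encryption step, the sketch below shows 3DES-CBC file encryption using the pyca/cryptography package; it is illustrative only (3DES is considered legacy today), it is not NASA's prototype code, and the smart-card key handling that is central to the prototype is outside its scope.

```python
# Hedged sketch: encrypt a disk file with 3DES in CBC mode (PKCS7 padding).
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_file_3des(path_in, path_out, key, iv):
    padder = padding.PKCS7(64).padder()               # 3DES block size is 64 bits
    encryptor = Cipher(algorithms.TripleDES(key), modes.CBC(iv)).encryptor()
    with open(path_in, "rb") as f:
        padded = padder.update(f.read()) + padder.finalize()
    with open(path_out, "wb") as f:
        f.write(encryptor.update(padded) + encryptor.finalize())

key, iv = os.urandom(24), os.urandom(8)               # in the prototype, keys live on the card
# encrypt_file_3des("model.dat", "model.dat.enc", key, iv)
```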
The Impact of Temperatures on the Stability of Rocks Surrounding a Single Fracture
NASA Astrophysics Data System (ADS)
Zhang, Yan; Li, Ning; Dai, Jun
2018-05-01
Research on the influence of temperature and the accompanying stress on the stability of the rocks surrounding an underground tunnel has become ever more important. This paper constructs a geometric model of a single-fracture tunnel, taking a high-temperature underground tunnel as the object of study and using a high-temperature segment of the water diversion tunnel of a hydropower station in Xinjiang as an example. Based on the relevant theoretical analysis, and considering different working conditions, a numerical experimental analysis was conducted to determine the two-dimensional transient temperature field distribution of the tunnel rock mass using numerical analysis software. The computed results were consistent with the measured data. The calculated results show the following: a. when the temperature difference is greater, the stress concentration is higher near the fracture of the surrounding rock; b. the degree of the stress concentration in the crack tip region is not positively correlated with the distance, and there is a sensitive region where the stress varies.
Simulation of Shear and Bending Cracking in RC Beam: Material Model and its Application to Impact
NASA Astrophysics Data System (ADS)
Mokhatar, S. N.; Sonoda, Y.; Zuki, S. S. M.; Kamarudin, A. F.; Noh, M. S. Md
2018-04-01
This paper presents a simple and reliable non-linear numerical analysis incorporating a fully Lagrangian method, namely Smoothed Particle Hydrodynamics (SPH), to predict the impact response of a reinforced concrete (RC) beam under impact loading. The analysis includes the simulation of the effects of a high-mass, low-velocity impact load falling on beam structures. Three basic ideas are used to represent the localized failure of structural elements: (1) for the strength of concrete and steel reinforcement over the short (dynamic) loading period, a Dynamic Increase Factor (DIF) is employed to account for the effect of strain rate on the compressive and tensile strength; (2) a linear pressure-sensitive yield criterion (Drucker-Prager type) with a new volume-dependent Plane-Cap (PC) hardening in the pre-peak regime is assumed for the concrete, while a shear-strain energy criterion (von Mises) is applied to the steel reinforcement; (3) two kinds of constitutive equation are introduced to simulate the crushing and bending cracking of the beam elements. These numerical analysis results were then compared with the experimental test results.
NASA Astrophysics Data System (ADS)
Samper, J.; Dewonck, S.; Zheng, L.; Yang, Q.; Naves, A.
Diffusion of inert and reactive tracers (DIR) is an experimental program performed by ANDRA at Bure underground research laboratory in Meuse/Haute Marne (France) to characterize diffusion and retention of radionuclides in Callovo-Oxfordian (C-Ox) argillite. In situ diffusion experiments were performed in vertical boreholes to determine diffusion and retention parameters of selected radionuclides. C-Ox clay exhibits a mild diffusion anisotropy due to stratification. Interpretation of in situ diffusion experiments is complicated by several non-ideal effects caused by the presence of a sintered filter, a gap between the filter and borehole wall and an excavation disturbed zone (EdZ). The relevance of such non-ideal effects and their impact on estimated clay parameters have been evaluated with numerical sensitivity analyses and synthetic experiments having similar parameters and geometric characteristics as real DIR experiments. Normalized dimensionless sensitivities of tracer concentrations at the test interval have been computed numerically. Tracer concentrations are found to be sensitive to all key parameters. Sensitivities are tracer dependent and vary with time. These sensitivities are useful to identify which are the parameters that can be estimated with less uncertainty and find the times at which tracer concentrations begin to be sensitive to each parameter. Synthetic experiments generated with prescribed known parameters have been interpreted automatically with INVERSE-CORE 2D and used to evaluate the relevance of non-ideal effects and ascertain parameter identifiability in the presence of random measurement errors. Identifiability analysis of synthetic experiments reveals that data noise makes difficult the estimation of clay parameters. Parameters of clay and EdZ cannot be estimated simultaneously from noisy data. Models without an EdZ fail to reproduce synthetic data. Proper interpretation of in situ diffusion experiments requires accounting for filter, gap and EdZ. Estimates of the effective diffusion coefficient and the porosity of clay are highly correlated, indicating that these parameters cannot be estimated simultaneously. Accurate estimation of De and porosities of clay and EdZ is only possible when the standard deviation of random noise is less than 0.01. Small errors in the volume of the circulation system do not affect clay parameter estimates. Normalized sensitivities as well as the identifiability analysis of synthetic experiments provide additional insight on inverse estimation of in situ diffusion experiments and will be of great benefit for the interpretation of real DIR in situ diffusion experiments.
NASA Astrophysics Data System (ADS)
Meliga, Philippe
2017-07-01
We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to which relevant information can be gained from a hybrid modeling computing self-consistent sensitivities from the postprocessing of DNS data. Application to alternative control objectives such as increasing the lift and alleviating the fluctuating drag and lift is also discussed.
Sensitivity Analysis Applied to Atomic Data Used for X-ray Spectrum Synthesis
NASA Technical Reports Server (NTRS)
Kallman, Tim
2006-01-01
A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in analysis of astrophysical spectra. But in many situations of interest the interpretation of a quantity which is observed, such as a line flux, depends on the results of a modeling- or spectrum synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear and if so cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focussing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
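The "numerical experiment" strategy described here, perturbing rates by random but known factors and observing how the output responds, can be sketched in a few lines; the two-rate toy model and the lognormal perturbation factors below are illustrative assumptions, not the spectrum synthesis code.

```python
# Minimal sketch: Monte Carlo sensitivity of a toy observable to perturbed atomic rates.
import numpy as np

rng = np.random.default_rng(4)

def line_flux(rates):
    # Toy observable: emissivity proportional to an ion fraction set by two rates
    ionization, recombination = rates
    ion_fraction = ionization / (ionization + recombination)
    return ion_fraction * 1e-3

base = np.array([2.0e-11, 5.0e-12])
factors = rng.lognormal(mean=0.0, sigma=0.2, size=(200, 2))   # known perturbation factors
outputs = np.array([line_flux(base * f) for f in factors])

# Rank the rates by how strongly the observable correlates with their perturbations
for name, col in zip(["ionization", "recombination"], factors.T):
    print(name, np.corrcoef(np.log(col), outputs)[0, 1])
```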
Accuracy of the domain method for the material derivative approach to shape design sensitivities
NASA Technical Reports Server (NTRS)
Yang, R. J.; Botkin, M. E.
1987-01-01
Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.
Aerodynamic parameter studies and sensitivity analysis for rotor blades in axial flight
NASA Technical Reports Server (NTRS)
Chiu, Y. Danny; Peters, David A.
1991-01-01
An analytical capability is offered for aerodynamic parametric studies and sensitivity analyses of rotary wings in axial flight by using a 3-D undistorted wake model in curved lifting line theory. The governing equations are solved by both the Multhopp Interpolation technique and the Vortex Lattice method. The singularity from the bound vortices is eliminated through Hadamard's finite part concept. Good numerical agreement is found between both analytical methods and finite difference methods. Parametric studies were made to assess the effects of several shape variables on aerodynamic loads. It is found, e.g., that a rotor blade with out-of-plane and in-plane curvature can theoretically increase lift in the inboard and outboard regions, respectively, without introducing additional induced drag.
Reducing microwave absorption with fast frequency modulation.
Qin, Juehang; Hubler, A
2017-05-01
We study the response of a two-level quantum system to a chirp signal, using both numerical and analytical methods. The numerical method is based on numerical solutions of the Schrödinger equation of the two-level system, while the analytical method is based on an approximate solution of the same equation. We find that when two-level systems are perturbed by a chirp signal, the peak population of the initially unpopulated state exhibits a high sensitivity to the frequency modulation rate. We also find that this sensitivity depends on the strength of the forcing: weaker forcings result in a higher sensitivity, in that the frequency modulation rate required to produce the same reduction in peak population is lower. We discuss potential applications of this result in the field of microwave power transmission, as it shows that applying fast frequency modulation to microwaves used for power transmission could decrease unintended absorption of microwaves by organic tissue.
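A hedged sketch of this kind of calculation is given below: it integrates the Schrödinger equation for a two-level system under a linearly chirped drive in the rotating-wave approximation (ħ = 1) and reports the peak population of the initially empty level versus chirp rate; the Hamiltonian form and parameter values are assumptions for illustration, not the authors' exact model.

```python
# Hedged sketch: peak excited-state population of a chirped two-level system.
import numpy as np
from scipy.integrate import solve_ivp

def peak_population(rabi, chirp_rate, sweep=60.0):
    """Peak population of the initially empty level during a linear frequency sweep."""
    def rhs(t, y):
        c = y[:2] + 1j * y[2:]                                   # complex amplitudes
        h = np.array([[0.0, rabi / 2.0],
                      [rabi / 2.0, chirp_rate * t]])             # detuning swept linearly
        dc = -1j * (h @ c)
        return np.concatenate([dc.real, dc.imag])
    sol = solve_ivp(rhs, (-sweep / 2, sweep / 2), [1.0, 0.0, 0.0, 0.0],
                    rtol=1e-8, atol=1e-10)
    return float(np.max(sol.y[1] ** 2 + sol.y[3] ** 2))

for rate in (0.2, 1.0, 5.0):   # faster chirp -> lower peak absorption
    print(rate, peak_population(rabi=0.2, chirp_rate=rate))
```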
NASA Astrophysics Data System (ADS)
Hrubesova, E.; Lahuta, H.; Mohyla, M.; Quang, T. B.; Phi, N. D.
2018-04-01
The paper is focused on a sensitivity analysis of the behaviour of the subsoil-foundation system with respect to the varied properties of a fibre-concrete slab, which result in different relative stiffness of the whole cooperating system. The character of the slab and its properties are very important for how the external load is transferred, but the character of the subsoil cannot be neglected either, because it determines the stress-strain behaviour of the whole system and consequently the bearing capacity of the structure. The sensitivity analysis was carried out on experimental results, which include both the stress values in the soil below the foundation structure and the settlements of structures characterized by different quantities of fibres. Flat GEOKON dynamometers were used for the stress measurements below the observed slab, the strains inside the slab were registered by tensometers, and the settlements were monitored geodetically. The paper focuses on the comparison of soil stresses below the slab for different quantities of fibres in the structure. The results obtained from the experimental stand can contribute to more objective knowledge of soil-slab interaction, to the evaluation of the real carrying capacity of the slab, to the calibration of corresponding numerical models, to the optimization of the quantity of fibres in the slab, and finally, to safer and more economical slab design.
Digression and Value Concatenation to Enable Privacy-Preserving Regression.
Li, Xiao-Bai; Sarkar, Sumit
2014-09-01
Regression techniques can be used not only for legitimate data analysis, but also to infer private information about individuals. In this paper, we demonstrate that regression trees, a popular data-analysis and data-mining technique, can be used to effectively reveal individuals' sensitive data. This problem, which we call a "regression attack," has not been addressed in the data privacy literature, and existing privacy-preserving techniques are not appropriate in coping with this problem. We propose a new approach to counter regression attacks. To protect against privacy disclosure, our approach introduces a novel measure, called digression, which assesses the sensitive value disclosure risk in the process of building a regression tree model. Specifically, we develop an algorithm that uses the measure for pruning the tree to limit disclosure of sensitive data. We also propose a dynamic value-concatenation method for anonymizing data, which better preserves data utility than a user-defined generalization scheme commonly used in existing approaches. Our approach can be used for anonymizing both numeric and categorical data. An experimental study is conducted using real-world financial, economic and healthcare data. The results of the experiments demonstrate that the proposed approach is very effective in protecting data privacy while preserving data quality for research and analysis.
NASA Astrophysics Data System (ADS)
Rana, Sachin; Ertekin, Turgay; King, Gregory R.
2018-05-01
Reservoir history matching is frequently viewed as an optimization problem which involves minimizing the misfit between simulated and observed data. Many gradient and evolutionary strategy based optimization algorithms have been proposed to solve this problem, which typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study which uses forward and inverse Gaussian process (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found via Bayesian optimization (BO) on forward GP models and via predictions of the inverse GP model in an iterative manner. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented in which it is shown that GP-VARS provides history match solutions with approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.
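The forward-proxy idea can be sketched compactly: fit a Gaussian-process surrogate to a handful of simulator runs and query it cheaply thereafter. In the sketch below, scikit-learn's GaussianProcessRegressor and a synthetic misfit function stand in for the authors' GP implementation and the reservoir simulator; they are assumptions for illustration, not GP-VARS itself.

```python
# Hedged sketch: a GP proxy for an expensive (parameters -> misfit) simulator mapping.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator_misfit(x):                 # stand-in for an expensive reservoir simulation
    return np.sum((x - 0.3) ** 2, axis=1)

rng = np.random.default_rng(5)
X_train = rng.uniform(0, 1, size=(30, 4))          # 4 uncertain reservoir parameters
y_train = simulator_misfit(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                              normalize_y=True).fit(X_train, y_train)

X_query = rng.uniform(0, 1, size=(5, 4))
mean, std = gp.predict(X_query, return_std=True)   # cheap proxy prediction with uncertainty
print(np.round(mean, 3), np.round(std, 3))
```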
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao Yang; Luo, Gang; Jiang, Fangming
2010-05-01
Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1994-01-01
LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
NASA Astrophysics Data System (ADS)
Mishra, Vinod Kumar
2017-09-01
In this paper we develop an inventory model to determine the optimal ordering quantities for a set of two substitutable deteriorating items. In this inventory model the inventory level of both items is depleted due to demand and deterioration, and when an item is out of stock, its demand is partially fulfilled by the other item and all unsatisfied demand is lost. Each substituted item incurs a cost of substitution, and the demand and deterioration are considered to be deterministic and constant. Items are ordered jointly in each ordering cycle to take advantage of joint replenishment. The problem is formulated and a solution procedure is developed to determine the optimal ordering quantities that minimize the total inventory cost. We provide an extensive numerical and sensitivity analysis to illustrate the effect of the different parameters on the model. The key observation from the numerical analysis is that there is a substantial improvement in the optimal total cost of the inventory model with substitution over that without substitution.
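The spirit of the solution procedure, minimizing a joint total-cost function over the two order quantities, can be sketched numerically. The cost expression below is a simplified EOQ-style stand-in (joint ordering, holding, and deterioration costs) with assumed parameter values; it is not the authors' model and omits the substitution terms.

```python
# Hedged sketch: numerical minimization of a joint total-cost-per-unit-time function.
import numpy as np
from scipy.optimize import minimize

d1, d2 = 100.0, 80.0        # annual demand rates (assumed)
theta = 0.05                # deterioration rate (fraction of stock per year)
K = 250.0                   # joint ordering cost per cycle
h1, h2 = 2.0, 2.5           # holding costs per unit per year
c_det = 4.0                 # cost per deteriorated unit

def total_cost_rate(q):
    q1, q2 = q
    cycle = min(q1 / d1, q2 / d2)                   # joint replenishment cycle length
    holding = 0.5 * (h1 * q1 + h2 * q2) * cycle
    deterioration = c_det * theta * 0.5 * (q1 + q2) * cycle
    return (K + holding + deterioration) / cycle    # cost per unit time

res = minimize(total_cost_rate, x0=[50.0, 40.0], bounds=[(1, None), (1, None)])
print(np.round(res.x, 1), round(res.fun, 2))        # optimal (q1, q2) and minimum cost rate
```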
Jeong, Yoseok; Lee, Jaeha; Kim, WooSeok
2015-01-29
This paper aims at presenting the effects of short-term sustained load and temperature on time-dependent deformation of carbon fiber-reinforced polymer (CFRP) bonded to concrete and pull-off strength at room temperature after the sustained loading period. The approach involves experimental and numerical analysis. Single-lap shear specimens were used to evaluate temperature and short-term sustained loading effects on time-dependent behavior under sustained loading and debonding behavior under pull-off loading after a sustained loading period. The numerical model was parameterized with experiments on the concrete, FRP, and epoxy. Good correlation was seen between the numerical results and single-lap shear experiments. Sensitivity studies shed light on the influence of temperature, epoxy modulus, and epoxy thickness on the redistribution of interfacial shear stress during sustained loading. This investigation confirms the hypothesis that interfacial stress redistribution can occur due to sustained load and elevated temperature and its effect can be significant.
Finite Element Based Optimization of Material Parameters for Enhanced Ballistic Protection
NASA Astrophysics Data System (ADS)
Ramezani, Arash; Huber, Daniel; Rothe, Hendrik
2013-06-01
The threat imposed by terrorist attacks is a major hazard for military installations, vehicles and other items. The large amounts of firearms and projectiles that are available pose serious threats to military forces and even civilian facilities. An important task for international research and development is to avert danger to life and limb. This work will evaluate the effect of modern armor with numerical simulations. It will also provide a brief overview of ballistic tests in order to offer some basic knowledge of the subject, serving as a basis for the comparison of simulation results. The objective of this work is to develop and improve the modern armor used in the security sector. Numerical simulations should replace the expensive ballistic tests and find vulnerabilities of items and structures. By progressively changing the material parameters, the armor is to be optimized. A sensitivity analysis yields information regarding the decisive variables, so that vulnerabilities can be readily identified and eliminated. To facilitate the simulation, advanced numerical techniques have been employed in the analyses.
Structured Overlapping Grid Simulations of Contra-rotating Open Rotor Noise
NASA Technical Reports Server (NTRS)
Housman, Jeffrey A.; Kiris, Cetin C.
2015-01-01
Computational simulations using structured overlapping grids with the Launch Ascent and Vehicle Aerodynamics (LAVA) solver framework are presented for predicting tonal noise generated by a contra-rotating open rotor (CROR) propulsion system. A coupled Computational Fluid Dynamics (CFD) and Computational AeroAcoustics (CAA) numerical approach is applied. Three-dimensional time-accurate hybrid Reynolds Averaged Navier-Stokes/Large Eddy Simulation (RANS/LES) CFD simulations are performed in the inertial frame, including dynamic moving grids, using a higher-order accurate finite difference discretization on structured overlapping grids. A higher-order accurate free-stream preserving metric discretization with discrete enforcement of the Geometric Conservation Law (GCL) on moving curvilinear grids is used to create an accurate, efficient, and stable numerical scheme. The aeroacoustic analysis is based on a permeable surface Ffowcs Williams-Hawkings (FW-H) approach, evaluated in the frequency domain. A time-step sensitivity study was performed using only the forward row of blades to determine an adequate time-step. The numerical approach is validated against existing wind tunnel measurements.
Numerical algorithms for computations of feedback laws arising in control of flexible systems
NASA Technical Reports Server (NTRS)
Lasiecka, Irena
1989-01-01
Several continuous models will be examined, which describe flexible structures with boundary or point control/observation. Issues related to the computation of feedback laws (particularly stabilizing feedbacks) are examined, with sensors and actuators located either on the boundary or at specific point locations of the structure. One of the main difficulties is due to the great sensitivity of the system (hyperbolic systems with unbounded control actions) with respect to perturbations caused either by uncertainty of the model or by the errors introduced in implementing numerical algorithms. Thus, special care must be taken in the choice of the appropriate numerical schemes which eventually lead to implementable finite dimensional solutions. Finite dimensional algorithms are constructed on the basis of an a priori analysis of the properties of the original, continuous (infinite dimensional) systems with the following criteria in mind: (1) convergence and stability of the algorithms and (2) robustness (reasonable insensitivity with respect to the unknown parameters of the systems). Examples with mixed finite element methods and spectral methods are provided.
Jeong, Yoseok; Lee, Jaeha; Kim, WooSeok
2015-01-01
This paper aims at presenting the effects of short-term sustained load and temperature on time-dependent deformation of carbon fiber-reinforced polymer (CFRP) bonded to concrete and pull-off strength at room temperature after the sustained loading period. The approach involves experimental and numerical analysis. Single-lap shear specimens were used to evaluate temperature and short-term sustained loading effects on time-dependent behavior under sustained loading and debonding behavior under pull-off loading after a sustained loading period. The numerical model was parameterized with experiments on the concrete, FRP, and epoxy. Good correlation was seen between the numerical results and single-lap shear experiments. Sensitivity studies shed light on the influence of temperature, epoxy modulus, and epoxy thickness on the redistribution of interfacial shear stress during sustained loading. This investigation confirms the hypothesis that interfacial stress redistribution can occur due to sustained load and elevated temperature and its effect can be significant. PMID:28787948
Flexible aircraft dynamic modeling for dynamic analysis and control synthesis
NASA Technical Reports Server (NTRS)
Schmidt, David K.
1989-01-01
The linearization and simplification of a nonlinear, literal model for flexible aircraft is highlighted. Areas of model fidelity that are critical if the model is to be used for control system synthesis are developed and several simplification techniques that can deliver the necessary model fidelity are discussed. These techniques include both numerical and analytical approaches. An analytical approach, based on first-order sensitivity theory is shown to lead not only to excellent numerical results, but also to closed-form analytical expressions for key system dynamic properties such as the pole/zero factors of the vehicle transfer-function matrix. The analytical results are expressed in terms of vehicle mass properties, vibrational characteristics, and rigid-body and aeroelastic stability derivatives, thus leading to the underlying causes for critical dynamic characteristics.
NASA Technical Reports Server (NTRS)
Leser, William P.; Yuan, Fuh-Gwo; Leser, William P.
2013-01-01
A method of numerically estimating dynamic Green's functions using the finite element method is proposed. These Green's functions are accurate in a limited frequency range dependent on the mesh size used to generate them. This range can often match or exceed the frequency sensitivity of the traditional acoustic emission sensors. An algorithm is also developed to characterize an acoustic emission source by obtaining information about its strength and temporal dependence. This information can then be used to reproduce the source in a finite element model for further analysis. Numerical examples are presented that demonstrate the ability of the band-limited Green's functions approach to determine the moment tensor coefficients of several reference signals to within seven percent, as well as accurately reproduce the source-time function.
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj; Nystrom, G. A.; Bardina, J.; Lombard, C. K.
1987-01-01
This paper describes the application of the conservative supra characteristic method (CSCM) to predict the flow around two-dimensional slot injection cooled cavities in hypersonic flow. Seven different numerical solutions are presented that model three different experimental designs. The calculations manifest outer flow conditions including the effects of nozzle/lip geometry, angle of attack, nozzle inlet conditions, boundary and shear layer growth and turbulence on the surrounding flow. The calculations were performed for analysis prior to wind tunnel testing for sensitivity studies early in the design process. Qualitative and quantitative understanding of the flows for each of the cavity designs and design recommendations are provided. The present paper demonstrates the ability of numerical schemes, such as the CSCM method, to play a significant role in the design process.
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, the stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing options that were evaluated included stress accumulation over single and repeated passages and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should therefore be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed. PMID:26679833
Non-robust numerical simulations of analogue extension experiments
NASA Astrophysics Data System (ADS)
Naliboff, John; Buiter, Susanne
2016-04-01
Numerical and analogue models of lithospheric deformation provide significant insight into the tectonic processes that lead to specific structural and geophysical observations. As these two types of models contain distinct assumptions and tradeoffs, investigations drawing conclusions from both can reveal robust links between first-order processes and observations. Recent studies have focused on detailed comparisons between numerical and analogue experiments in both compressional and extensional tectonics, sometimes involving multiple lithospheric deformation codes and analogue setups. While such comparisons often show good agreement on first-order deformation styles, results frequently diverge on second-order structures, such as shear zone dip angles or spacing, and in certain cases even on first-order structures. Here, we present finite-element experiments that are designed to directly reproduce analogue "sandbox" extension experiments at the cm-scale. We use material properties and boundary conditions that are directly taken from analogue experiments and use a Drucker-Prager failure model to simulate shear zone formation in sand. We find that our numerical experiments are highly sensitive to numerous numerical parameters. For example, changes to the numerical resolution, velocity convergence parameters and elemental viscosity averaging commonly produce significant changes in first- and second-order structures accommodating deformation. The sensitivity of the numerical simulations to small parameter changes likely reflects a number of factors, including, but not limited to, high angles of internal friction assigned to sand, complex, unknown interactions between the brittle sand (used as an upper crust equivalent) and viscous silicone (lower crust), highly non-linear strain weakening processes and poor constraints on the cohesion of sand. Our numerical-analogue comparison is hampered by (a) an incomplete knowledge of the fine details of sand failure and sand properties, and (b) likely limitations to the use of a continuum Drucker-Prager model for representing shear zone formation in sand. In some cases our numerical experiments provide reasonable fits to first-order structures observed in the analogue experiments, but the numerical sensitivity to small parameter variations leads us to conclude that the numerical experiments are not robust.
NASA Astrophysics Data System (ADS)
Roustan, Yelva; Duhanyan, Nora; Bocquet, Marc; Winiarek, Victor
2013-04-01
A sensitivity study of the numerical model, as well as an inverse modelling approach, applied to the atmospheric dispersion issues after the Chernobyl disaster are both presented in this paper. On the one hand, the robustness of the source term reconstruction through advanced data assimilation techniques was tested. On the other hand, the classical approaches for sensitivity analysis were enhanced by the use of an optimised forcing field which otherwise is known to be strongly uncertain. The POLYPHEMUS air quality system was used to perform the simulations of radionuclide dispersion. Activity concentrations in air and deposited to the ground of iodine-131, caesium-137 and caesium-134 were considered. The impact of the implemented parameterizations of the physical processes (dry and wet deposition, vertical turbulent diffusion), of the forcing fields (meteorology and source terms) and of the numerical configuration (horizontal resolution) was investigated in the sensitivity study of the model. A four-dimensional variational scheme (4D-Var) based on the approximate adjoint of the chemistry transport model was used to invert the source term. The data assimilation is performed with measurements of activity concentrations in air extracted from the Radioactivity Environmental Monitoring (REM) database. For most of the investigated configurations (sensitivity study), the statistics comparing the model results to the field measurements of concentrations in air are clearly improved when a reconstructed source term is used. As regards the ground-deposited concentrations, an improvement can only be seen for a satisfactorily modelled episode. Through these studies, the source term and the meteorological fields are shown to have a major impact on the activity concentrations in air. These studies also support the use of a reconstructed source term instead of the usual estimated one. A more detailed parameterization of the deposition process also seems able to improve the simulation results. For deposited activities the results are more complex, probably owing to a strong sensitivity to some of the meteorological fields, which remain quite uncertain.
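The source-term inversion step described above can be illustrated with a much smaller stand-in problem. The sketch below minimizes a 4D-Var-style cost function (observation misfit plus background term) for a linear source-receptor relationship; the matrix H, the covariances, and the synthetic measurements are all illustrative assumptions and have nothing to do with the POLYPHEMUS or REM data used in the study.

```python
# Minimal sketch of a variational source-term inversion, in the spirit of the
# 4D-Var approach described above. The source-receptor matrix H, the error
# covariances R and B, and the synthetic "measurements" are illustrative
# assumptions, not the POLYPHEMUS/REM setup used in the paper.
import numpy as np

rng = np.random.default_rng(0)

n_src, n_obs = 24, 120          # hourly source rates, air-concentration samples
H = rng.exponential(1.0, (n_obs, n_src))            # hypothetical source-receptor sensitivities
s_true = np.where(np.arange(n_src) < 10, 5.0, 0.5)  # assumed "true" release profile
y = H @ s_true + rng.normal(0.0, 0.5, n_obs)        # synthetic observations

s_b = np.full(n_src, 1.0)       # first-guess (background) source term
B = np.eye(n_src) * 4.0         # background-error covariance (assumed)
R = np.eye(n_obs) * 0.25        # observation-error covariance (assumed)

# J(s) = 1/2 (Hs-y)^T R^-1 (Hs-y) + 1/2 (s-s_b)^T B^-1 (s-s_b)
# grad J = H^T R^-1 (Hs-y) + B^-1 (s-s_b); setting it to zero gives the
# linear system below (the adjoint H^T plays the role of the adjoint model).
A = H.T @ np.linalg.solve(R, H) + np.linalg.inv(B)
b = H.T @ np.linalg.solve(R, y) + np.linalg.solve(B, s_b)
s_hat = np.linalg.solve(A, b)

print("reconstructed vs true (first 5 sources):", np.round(s_hat[:5], 2), s_true[:5])
```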
Analysis of all-optical temporal integrator employing phased-shifted DFB-SOA.
Jia, Xin-Hong; Ji, Xiao-Ling; Xu, Cong; Wang, Zi-Nan; Zhang, Wei-Li
2014-11-17
An all-optical temporal integrator using a phase-shifted distributed-feedback semiconductor optical amplifier (DFB-SOA) is investigated. The influences of the system parameters on its energy transmittance and integration error are explored in detail. The numerical analysis shows that enhanced energy transmittance and an enlarged integration time window can be achieved simultaneously by increasing the injected current in the vicinity of the lasing threshold. We find that the range of input pulse widths with low integration error is highly sensitive to the injected optical power, owing to gain saturation and the induced detuning-deviation mechanism. The initial frequency detuning should also be chosen carefully to suppress the deviation of the integrated output from the ideal waveform.
Local numerical modelling of ultrasonic guided waves in linear and nonlinear media
NASA Astrophysics Data System (ADS)
Packo, Pawel; Radecki, Rafal; Kijanka, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz; Leamy, Michael J.
2017-04-01
Nonlinear ultrasonic techniques provide improved damage sensitivity compared to linear approaches. The combination of attractive properties of guided waves, such as Lamb waves, with unique features of higher harmonic generation provides great potential for characterization of incipient damage, particularly in plate-like structures. Nonlinear ultrasonic structural health monitoring techniques use interrogation signals at frequencies other than the excitation frequency to detect changes in structural integrity. Signal processing techniques used in non-destructive evaluation are frequently supported by modeling and numerical simulations in order to facilitate problem solution. This paper discusses known and newly-developed local computational strategies for simulating elastic waves, and attempts characterization of their numerical properties in the context of linear and nonlinear media. A hybrid numerical approach combining advantages of the Local Interaction Simulation Approach (LISA) and Cellular Automata for Elastodynamics (CAFE) is proposed for unique treatment of arbitrary strain-stress relations. The iteration equations of the method are derived directly from physical principles employing stress and displacement continuity, leading to an accurate description of the propagation in arbitrarily complex media. Numerical analysis of guided wave propagation, based on the newly developed hybrid approach, is presented and discussed in the paper for linear and nonlinear media. Comparisons to Finite Elements (FE) are also discussed.
NASA Astrophysics Data System (ADS)
Vijayashree, M.; Uthayakumar, R.
2017-09-01
Lead time is one of the major factors that affect planning at every stage of a supply chain system. In this paper, we study a continuous review inventory model in which ordering cost reductions depend on lead time. The study addresses a two-echelon supply chain problem consisting of a single vendor and a single buyer. Its main contribution is that the integrated total cost of the vendor-buyer system is analyzed with two different types of lead-time-dependent ordering cost reduction, linear and logarithmic. For both cases we develop effective solution procedures, with an algorithm for finding the optimal solution, and determine the optimal order quantity, ordering cost, lead time and number of deliveries from the vendor to the buyer in one production run so that the integrated total cost is minimized. Ordering cost reduction is the main aspect of the proposed model. The mathematical model is solved analytically by minimizing the integrated total cost, and numerical examples, solved using Matlab, are given to validate the model and illustrate the results. A sensitivity analysis with respect to the major parameters of the system is also included. The results reveal that the proposed integrated inventory model is well suited to supply chain manufacturing systems. Finally, graphical representations and a computer flowchart are provided for each model.
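To make the kind of optimization described above concrete, the toy sketch below grid-searches an integrated vendor-buyer cost with a logarithmic, lead-time-dependent ordering cost. The cost structure and every parameter value are illustrative assumptions rather than the paper's exact formulation, and the brute-force search simply stands in for the paper's analytical solution procedure.

```python
# Toy illustration of minimizing an integrated vendor-buyer cost when the
# buyer's ordering cost decreases with shortened lead time. The cost terms,
# the logarithmic reduction A(L), and every parameter value are assumptions
# made for demonstration only, not the paper's formulation.
import numpy as np

D, P = 600.0, 2000.0        # annual demand and production rate (assumed)
A0, F = 200.0, 25.0         # baseline ordering cost, fixed transport cost per delivery
h_b, h_v = 5.0, 4.0         # buyer / vendor holding cost rates
sigma, k = 7.0, 2.33        # demand variability and safety factor (assumed)
c_L = 30.0                  # cost per week of lead-time reduction (assumed)
L0 = 8.0                    # maximum lead time in weeks

def ordering_cost(L):
    # logarithmic reduction: ordering gets cheaper as lead time L shrinks below L0
    return max(A0 * (1.0 - 0.15 * np.log(L0 / L)), 10.0)

def integrated_cost(Q, L, m):
    buyer = (ordering_cost(L) * D / (m * Q) + h_b * (Q / 2 + k * sigma * np.sqrt(L))
             + c_L * (L0 - L))
    vendor = F * D / (m * Q) + h_v * Q / 2 * (m * (1 - D / P) - 1 + 2 * D / P)
    return buyer + vendor

best = min((integrated_cost(Q, L, m), Q, L, m)
           for Q in range(20, 401, 5)
           for L in (1, 2, 3, 4, 6, 8)
           for m in range(1, 8))
print("min total cost %.1f at Q=%d, L=%d weeks, m=%d deliveries" % best)
```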
NASA Astrophysics Data System (ADS)
Tang, Jinyun; Riley, William J.; Niu, Jie
2015-12-01
We implemented the Amenu-Kumar model in the Community Land Model (CLM4.5) to simulate plant Root Hydraulic Redistribution (RHR) and analyzed its influence on CLM hydrology from site to global scales. We evaluated two numerical implementations: the first solved the coupled equations of root and soil water transport concurrently, while the second solved the two equations sequentially. Through sensitivity analysis, we demonstrate that the sequentially coupled implementation (SCI) is numerically incorrect, whereas the tightly coupled implementation (TCI) is numerically robust with numerical time steps varying from 1 to 30 min. At the site-level, we found the SCI approach resulted in better agreement with measured evapotranspiration (ET) at the AmeriFlux Blodgett Forest site, California, whereas the two approaches resulted in equally poor agreement between predicted and measured ET at the LBA Tapajos KM67 Mature Forest site in Amazon, Brazil. Globally, the SCI approach overestimated annual land ET by as much as 3.5 mm d-1 in some grid cells when compared to the TCI estimates. These comparisons demonstrate that TCI is a more robust numerical implementation of RHR. However, we found, even with TCI, that incorporating RHR resulted in worse agreement with measured soil moisture at both the Blodgett Forest and Tapajos sites and degraded the agreement between simulated terrestrial water storage anomaly and Gravity Recovery and Climate Experiment (GRACE) observations. We find including RHR in CLM4.5 improved ET predictions compared with the FLUXNET-MTE estimates north of 20° N but led to poorer predictions in the tropics. The biases in ET were robust and significant regardless of the four different pedotransfer functions or of the two meteorological forcing data sets we applied. We also found that the simulated water table was unrealistically sensitive to RHR. Therefore, we contend that further structural and data improvements are warranted to improve the hydrological dynamics in CLM4.5.
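The contrast between a sequentially coupled and a tightly coupled update can be illustrated on a generic two-compartment exchange problem; the sketch below is not CLM4.5 code, and the exchange coefficient and time steps are arbitrary. It shows a monolithic backward-Euler solve staying accurate (and mass-conserving) at large time steps while a sequential, operator-split update drifts.

```python
# Toy illustration of why a sequentially coupled update can misbehave while a
# tightly (monolithically) coupled implicit solve stays robust at large time
# steps. The two-compartment exchange system is a generic stand-in, not the
# CLM4.5 root/soil water equations.
import numpy as np

k = 2.0                      # exchange coefficient between the two stores (assumed)
A = np.array([[-k,  k],
              [ k, -k]])     # dx/dt = A x (exchange conserves the total)

def tightly_coupled(x, dt):            # backward Euler on the full coupled system
    return np.linalg.solve(np.eye(2) - dt * A, x)

def sequentially_coupled(x, dt):       # update store 1 using old store 2, then store 2
    x1 = (x[0] + dt * k * x[1]) / (1 + dt * k)
    x2 = (x[1] + dt * k * x1) / (1 + dt * k)
    return np.array([x1, x2])

def exact(t, x0):                      # analytic solution of the linear system
    mean, diff = 0.5 * x0.sum(), 0.5 * (x0[0] - x0[1]) * np.exp(-2 * k * t)
    return np.array([mean + diff, mean - diff])

x0 = np.array([1.0, 0.0])
for dt in (0.05, 0.5, 2.0):            # time-step sensitivity, as in the study
    xt, xs = x0.copy(), x0.copy()
    n = int(10.0 / dt)
    for _ in range(n):
        xt, xs = tightly_coupled(xt, dt), sequentially_coupled(xs, dt)
    ref = exact(n * dt, x0)
    print(f"dt={dt:4.2f}  tight err={abs(xt - ref).max():.2e}  sequential err={abs(xs - ref).max():.2e}")
```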
A FEM-based method to determine the complex material properties of piezoelectric disks.
Pérez, N; Carbonari, R C; Andrade, M A B; Buiochi, F; Adamowski, J C
2014-08-01
Numerical simulations allow modeling of piezoelectric devices and ultrasonic transducers. However, the accuracy of the results is limited by how precisely the elastic, dielectric and piezoelectric properties of the piezoelectric material are known. To introduce the energy losses, these properties can be represented by complex numbers, where the real part essentially determines the resonance frequencies and the imaginary part determines the amplitude of each resonant mode. In this work, a method based on the Finite Element Method (FEM) is modified to obtain the imaginary material properties of piezoelectric disks. The material properties are determined from the electrical impedance curve of the disk, which is measured by an impedance analyzer. The method consists in obtaining the material properties that minimize the error between the experimental and numerical impedance curves over a wide range of frequencies. The proposed methodology starts with a sensitivity analysis of each parameter, determining its influence over a set of resonant modes. The sensitivity results are used to implement a preliminary algorithm that approaches the solution in order to avoid the search becoming trapped in a local minimum. The method is applied to determine the material properties of a Pz27 disk sample from Ferroperm. The obtained properties are used to calculate the electrical impedance curve of the disk with a Finite Element algorithm, which is compared with the experimental electrical impedance curve. Additionally, the results were validated by comparing the numerical displacement profile with the displacements measured by a laser Doppler vibrometer. The comparison between the numerical and experimental results shows excellent agreement for both the electrical impedance curve and the displacement profile over the disk surface. The agreement between numerical and experimental displacement profiles shows that, although only the electrical impedance curve is considered in the adjustment procedure, the obtained material properties allow the displacement amplitude to be simulated accurately.
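The core idea, adjusting model parameters until a simulated impedance curve matches a measured one, can be sketched without a FEM solver. In the hedged example below, a simple Butterworth-Van Dyke equivalent circuit stands in for the finite element model, the "measured" curve is synthetic, and a crude one-at-a-time sensitivity scan mimics the preliminary sensitivity analysis; none of the parameter values relate to the Pz27 sample of the paper.

```python
# Hedged sketch of the fit-to-impedance idea: a forward model predicts an
# electrical impedance curve from a parameter vector, and the parameters are
# adjusted to minimize the misfit to a measured curve. A lumped equivalent
# circuit replaces the FEM solver of the paper; the "measured" data are synthetic.
import numpy as np
from scipy.optimize import least_squares

f = np.linspace(50e3, 500e3, 400)          # frequency sweep [Hz]
w = 2 * np.pi * f

def impedance(params, w):
    R, L, C, C0 = params                   # motional branch + clamped capacitance
    z_motional = R + 1j * w * L + 1.0 / (1j * w * C)
    z_c0 = 1.0 / (1j * w * C0)
    return 1.0 / (1.0 / z_motional + 1.0 / z_c0)

true = np.array([50.0, 20e-3, 40e-12, 1.2e-9])     # assumed "material" parameters
z_meas = impedance(true, w) * (1 + 0.01 * np.random.default_rng(1).normal(size=w.size))

x0 = true * np.array([1.5, 0.8, 1.3, 0.9])         # perturbed initial guess

def residual(m):
    z = impedance(x0 * m, w)                        # m are dimensionless multipliers
    return np.log10(np.abs(z)) - np.log10(np.abs(z_meas))   # fit |Z| on a log scale

fit = least_squares(residual, np.ones(4))
print("recovered parameters:", x0 * fit.x)

# crude sensitivity scan: which parameter moves the impedance curve the most?
for i, name in enumerate(("R", "L", "C", "C0")):
    p = true.copy(); p[i] *= 1.01
    change = np.abs(impedance(p, w)) / np.abs(impedance(true, w)) - 1.0
    print(f"{name}: max relative |Z| change for a +1% perturbation: {np.max(np.abs(change)):.3f}")
```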
Analysis of the Characteristics of Inertia-Gravity Waves during an Orographic Precipitation Event
NASA Astrophysics Data System (ADS)
Liu, Lu; Ran, Lingkun; Gao, Shouting
2018-05-01
A numerical experiment was performed using the Weather Research and Forecasting (WRF) model to analyze the generation and propagation of inertia-gravity waves during an orographic rainstorm that occurred in the Sichuan area on 17 August 2014. To examine the spatial and temporal structures of the inertia-gravity waves and identify the wave types, three wavenumber-frequency spectral analysis methods (Fourier analysis, cross-spectral analysis, and wavelet cross-spectrum analysis) were applied. During the storm, inertia-gravity waves appeared at heights of 10-14 km, with periods of 80-100 min and wavelengths of 40-50 km. These waves were generated over a mountain and propagated eastward at an average speed of 15-20 m s-1. Meanwhile, comparison between the reconstructed inertia-gravity waves and accumulated precipitation showed there was a mutual promotion process between them. The Richardson number and Scorer parameter were used to demonstrate that the eastward-moving inertia-gravity waves were trapped in an effective atmospheric ducting zone with favorable reflector and critical level conditions, which were the primary causes of the long lives of the waves. Finally, numerical experiments to test the sensitivity to terrain and diabatic heating were conducted, and the results suggested a cooperative effect of terrain and diabatic heating contributed to the propagation and enhancement of the waves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, Troy; Bhat, Sham; Marcy, Peter
Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics (CCK)) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect the experimentally observed reactivity loss due to thermal annealing for a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO2, the development of a first-ever model of CO2 reactivity loss due to annealing is presented.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Geogdzhayev, Igor V.; Cairns, Brian; Rossow, William B.; Lacis, Andrew A.
1999-01-01
This paper outlines the methodology of interpreting channel 1 and 2 AVHRR radiance data over the oceans and describes a detailed analysis of the sensitivity of monthly averages of retrieved aerosol parameters to the assumptions made in different retrieval algorithms. The analysis is based on using real AVHRR data and exploiting accurate numerical techniques for computing single and multiple scattering and spectral absorption of light in the vertically inhomogeneous atmosphere-ocean system. We show that two-channel algorithms can be expected to provide significantly more accurate and less biased retrievals of the aerosol optical thickness than one-channel algorithms and that imperfect cloud screening and calibration uncertainties are by far the largest sources of errors in the retrieved aerosol parameters. Both underestimating and overestimating aerosol absorption as well as the potentially strong variability of the real part of the aerosol refractive index may lead to regional and/or seasonal biases in optical thickness retrievals. The Angstrom exponent appears to be the most invariant aerosol size characteristic and should be retrieved along with optical thickness as the second aerosol parameter.
Neural Tuning to Numerosity Relates to Perceptual Tuning in 3-6-Year-Old Children.
Kersey, Alyssa J; Cantlon, Jessica F
2017-01-18
Neural representations of approximate numerical value, or numerosity, have been observed in the intraparietal sulcus (IPS) in monkeys and humans, including children. Using functional magnetic resonance imaging, we show that children as young as 3-4 years old exhibit neural tuning to cardinal numerosities in the IPS and that their neural responses are accounted for by a model of numerosity coding that has been used to explain neural responses in the adult IPS. We also found that the sensitivity of children's neural tuning to number in the right IPS was comparable to their numerical discrimination sensitivity observed behaviorally, outside of the scanner. Children's neural tuning curves in the right IPS were significantly sharper than in the left IPS, indicating that numerical representations are more precise and mature more rapidly in the right hemisphere than in the left. Further, we show that children's perceptual sensitivity to numerosity can be predicted by the development of their neural sensitivity to numerosity. This research provides novel evidence of developmental continuity in the neural code underlying numerical representation and demonstrates that children's neural sensitivity to numerosity is related to their cognitive development. Here we test for the existence of neural tuning to numerosity in the developing brain in the youngest sample of children tested with fMRI to date. Although previous research shows evidence of numerical distance effects in the intraparietal sulcus of the developing brain, those effects could be explained by patterns of neural activity that do not represent neural tuning to numerosity. These data provide the first robust evidence that from as early as 3-4 years of age there is developmental continuity in how the intraparietal sulcus represents the values of numerosities. Moreover, the study goes beyond previous research by examining the relation between neural tuning and perceptual tuning in children.
Hoffman, Jessica M; Soltow, Quinlyn A; Li, Shuzhao; Sidik, Alfire; Jones, Dean P; Promislow, Daniel E L
2014-01-01
Researchers have used whole-genome sequencing and gene expression profiling to identify genes associated with age, in the hope of understanding the underlying mechanisms of senescence. But there is a substantial gap from variation in gene sequences and expression levels to variation in age or life expectancy. In an attempt to bridge this gap, here we describe the effects of age, sex, genotype, and their interactions on high-sensitivity metabolomic profiles in the fruit fly, Drosophila melanogaster. Among the 6800 features analyzed, we found that over one-quarter of all metabolites were significantly associated with age, sex, genotype, or their interactions, and multivariate analysis shows that individual metabolomic profiles are highly predictive of these traits. Using a metabolomic equivalent of gene set enrichment analysis, we identified numerous metabolic pathways that were enriched among metabolites associated with age, sex, and genotype, including pathways involving sugar and glycerophospholipid metabolism, neurotransmitters, amino acids, and the carnitine shuttle. Our results suggest that high-sensitivity metabolomic studies have excellent potential not only to reveal mechanisms that lead to senescence, but also to help us understand differences in patterns of aging among genotypes and between males and females. PMID:24636523
Glycoprotein Enrichment Analytical Techniques: Advantages and Disadvantages.
Zhu, R; Zacharias, L; Wooding, K M; Peng, W; Mechref, Y
2017-01-01
Protein glycosylation is one of the most important posttranslational modifications. Numerous biological functions are related to protein glycosylation. However, analytical challenges remain in the glycoprotein analysis. To overcome the challenges associated with glycoprotein analysis, many analytical techniques were developed in recent years. Enrichment methods were used to improve the sensitivity of detection, while HPLC and mass spectrometry methods were developed to facilitate the separation of glycopeptides/proteins and enhance detection, respectively. Fragmentation techniques applied in modern mass spectrometers allow the structural interpretation of glycopeptides/proteins, while automated software tools started replacing manual processing to improve the reliability and throughput of the analysis. In this chapter, the current methodologies of glycoprotein analysis were discussed. Multiple analytical techniques are compared, and advantages and disadvantages of each technique are highlighted.
NASA Technical Reports Server (NTRS)
Hoffman, R. N.; Leidner, S. M.; Henderson, J. M.; Atlas, R.; Ardizzone, J. V.; Bloom, S. C.; Atlas, Robert (Technical Monitor)
2001-01-01
In this study, we apply a two-dimensional variational analysis method (2d-VAR) to select a wind solution from NASA Scatterometer (NSCAT) ambiguous winds. 2d-VAR determines a "best" gridded surface wind analysis by minimizing a cost function. The cost function measures the misfit to the observations, the background, and the filtering and dynamical constraints. The ambiguity closest in direction to the minimizing analysis is selected. The 2d-VAR method, its sensitivity, and its numerical behavior are described. 2d-VAR is compared to statistical interpolation (OI) by examining the response of both systems to a single ship observation and to a swath of unique scatterometer winds. 2d-VAR is used with both NSCAT ambiguities and NSCAT backscatter values. Results are roughly comparable. When the background field is poor, 2d-VAR ambiguity removal often selects low-probability ambiguities. To avoid this behavior, an initial 2d-VAR analysis, using only the two most likely ambiguities, provides the first guess for an analysis using all the ambiguities or the backscatter data. 2d-VAR and median-filter selected ambiguities usually agree. Both methods require horizontal consistency, so disagreements occur in clumps or as linear features. In these cases, 2d-VAR ambiguities are often more meteorologically reasonable and more consistent with satellite imagery.
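The cost-function idea behind the variational ambiguity removal can be sketched on a tiny 1-D grid. In the toy below, the analysis minimizes observation, background, and smoothness terms (using only the rank-1 ambiguity as the observation, a simplification of the two-ambiguity first pass described above), and each cell then keeps the ambiguity closest in direction to the analysis; the grid size, weights, and synthetic ambiguities are assumptions, not the NSCAT/2d-VAR configuration.

```python
# Minimal 1-D sketch of variational ambiguity removal: build a wind analysis by
# minimizing a cost function with observation, background, and smoothness terms,
# then pick in each cell the ambiguity whose direction is closest to the analysis.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 40
truth = np.stack([6 + 2 * np.sin(np.linspace(0, 2 * np.pi, n)),
                  3 * np.cos(np.linspace(0, 2 * np.pi, n))], axis=1)   # (u, v) per cell
background = truth + rng.normal(0, 1.5, truth.shape)                   # prior wind field
# two ambiguities per cell: roughly the true wind and its ~180-degree alias
ambiguities = np.stack([truth + rng.normal(0, 0.8, truth.shape),
                        -truth + rng.normal(0, 0.8, truth.shape)], axis=1)
rank1 = ambiguities[:, 0]                                              # "most likely" one

def cost(x):
    w = x.reshape(n, 2)
    j_obs = np.sum((w - rank1) ** 2) / 0.8**2           # misfit to (rank-1) observations
    j_bg = np.sum((w - background) ** 2) / 1.5**2       # misfit to background
    j_smooth = 10.0 * np.sum(np.diff(w, axis=0) ** 2)   # smoothness (filtering) term
    return j_obs + j_bg + j_smooth

analysis = minimize(cost, background.ravel(), method="L-BFGS-B").x.reshape(n, 2)

# ambiguity removal: keep, per cell, the ambiguity closest in direction to the analysis
def angle(a, b):
    cosang = np.sum(a * b, -1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

chosen = np.argmin(np.stack([angle(ambiguities[:, k], analysis) for k in range(2)]), axis=0)
print("cells where the rank-1 ambiguity was kept:", int(np.sum(chosen == 0)), "of", n)
```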
Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...
Development of techniques for the analysis of isoflavones in soy foods and nutraceuticals.
Dentith, Susan; Lockwood, Brian
2008-05-01
For over 20 years, soy isoflavones have been investigated for their ability to prevent a wide range of cancers and cardiovascular problems, and numerous other disease states. This research is underpinned by the ability of researchers to analyse isoflavones in various forms in a range of raw materials and biological fluids. This review summarizes the techniques recently used in their analysis. The speed of high-performance liquid chromatography analysis has been improved, allowing analysis of more samples, and the increasing sensitivity of detection techniques allows quantification of isoflavones down to nanomoles per litre levels in biological fluids. The combination of high-performance liquid chromatography with immunoassay has allowed identification and estimation of low-level soy isoflavones. The use of soy isoflavone supplements has been shown to increase their circulating levels in plasma and urine, aiding investigation of their biological effects. The significance of the metabolite equol has spurred research into new areas, and recently the specific enantiomers have been studied. High-performance liquid chromatography, capillary electrophoresis and gas chromatography are widely used with a range of detection systems. Increasingly, immunoassay is being used because of its high sensitivity and low cost.
Synoptic analysis and hindcast of an intense bow echo in Western Europe: The 09 June 2014 storm
NASA Astrophysics Data System (ADS)
Mathias, Luca; Ermert, Volker; Kelemen, Fanni D.; Ludwig, Patrick; Pinto, Joaquim G.
2017-04-01
On Pentecost Monday, 09 June 2014, a severe mesoscale convective system (MCS) hit Belgium and Western Germany. This storm was one of the most severe thunderstorms in Germany for decades. The synoptic-scale and mesoscale characteristics of this storm are analyzed based on remote sensing data and in-situ measurements. Moreover, the forecast potential of the storm is evaluated using sensitivity experiments with a regional climate model. The key ingredients for the development of the Pentecost storm were the concurrent presence of low-level moisture, atmospheric conditional instability and wind shear. The synoptic and mesoscale analysis shows that the outflow of a decaying MCS above northern France triggered the storm, which exhibited the typical features of a bow echo, such as a mesovortex and a rear-inflow jet. This resulted in hurricane-force wind gusts (reaching 40 m/s) along a narrow swath in the Rhine-Ruhr region, leading to substantial damage. Operational numerical weather prediction models mostly failed to forecast the storm, but high-resolution regional model hindcasts enable a realistic simulation of it. The model experiments reveal that the development of the bow echo is particularly sensitive to the initial wind field and the lower tropospheric moisture content. Correct initial and boundary conditions are therefore necessary for realistic numerical forecasts of such a bow echo event. We conclude that the Pentecost storm exhibited a structure and intensity comparable to bow echo systems observed in the United States.
NASA Technical Reports Server (NTRS)
Elishakoff, Isaac
1998-01-01
Ten papers, published in various publications, on buckling and the effects of imperfections on various structures are presented. These papers are: (1) Buckling mode localization in elastic plates due to misplacement in the stiffener location; (2) On vibrational imperfection sensitivity of Augusti's model structure in the vicinity of a non-linear static state; (3) Imperfection sensitivity due to elastic moduli in the Roorda-Koiter frame; (4) Buckling mode localization in a multi-span periodic structure with a disorder in a single span; (5) Prediction of natural frequency and buckling load variability due to uncertainty in material properties by convex modeling; (6) Derivation of multi-dimensional ellipsoidal convex model for experimental data; (7) Passive control of buckling deformation via Anderson localization phenomenon; (8) Effect of the thickness and initial imperfection on buckling of composite cylindrical shells: asymptotic analysis and numerical results by BOSOR4 and PANDA2; (9) Worst case estimation of homology design by convex analysis; (10) Buckling of structures with uncertain imperfections - personal perspective.
NASA Astrophysics Data System (ADS)
Demir, Alper
2005-08-01
Oscillators are key components of many kinds of systems, particularly electronic and opto-electronic systems. Undesired perturbations, i.e. noise, that exist in practical systems adversely affect the spectral and timing properties of the signals generated by oscillators resulting in phase noise and timing jitter. These are key performance limiting factors, being major contributors to bit-error-rate (BER) of RF and optical communication systems, and creating synchronization problems in clocked and sampled-data electronic systems. In noise analysis for oscillators, the key is figuring out how the various disturbances and noise sources in the oscillator end up as phase fluctuations. In doing so, one first computes transfer functions from the noise sources to the oscillator phase, or the sensitivity of the oscillator phase to these noise sources. In this paper, we first provide a discussion explaining the origins and the proper definition of this transfer or sensitivity function, followed by a critical review of the various numerical techniques for its computation that have been proposed by various authors over the past fifteen years.
NASA Astrophysics Data System (ADS)
Wang, Ting; Plecháč, Petr
2017-12-01
Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
NASA Astrophysics Data System (ADS)
Terrien, Soizic; Krauskopf, Bernd; Broderick, Neil G. R.; Andréoli, Louis; Selmi, Foued; Braive, Rémy; Beaudoin, Grégoire; Sagnes, Isabelle; Barbay, Sylvain
2017-10-01
A semiconductor micropillar laser with delayed optical feedback is considered. In the excitable regime, we show that a single optical perturbation can trigger a train of pulses that is sustained for a finite duration. The distribution of the pulse train duration exhibits an exponential behavior characteristic of a noise-induced process driven by uncorrelated white noise present in the system. The comparison of experimental observations with theoretical and numerical analysis of a minimal model yields excellent agreement. Importantly, the random switch-off process takes place between two attractors of different nature: an equilibrium and a periodic orbit. Our analysis shows that there is a small time window during which the pulsations are very sensitive to noise, and this explains the observed strong bias toward switch-off. These results raise the possibility of all optical control of the pulse train duration that may have an impact for practical applications in photonics and may also apply to the dynamics of other noise-driven excitable systems with delayed feedback.
An adjoint-based sensitivity analysis of thermoacoustic network models
NASA Astrophysics Data System (ADS)
Sogaro, Francesca; Morgans, Aimee; Schmid, Peter
2017-11-01
Thermoacoustic instability is a phenomenon that occurs in numerous combustion systems, from rockets to land-based gas turbines. The acoustic oscillations of these systems are of significant importance as they can result in severe vibrations, thrust oscillations, thermal stresses and mechanical loads that lead to fatigue or even failure. In this work we use a low-order network model representation of a combustor system where linear acoustics are solved together with the appropriate boundary conditions, area change jump conditions, acoustic dampers and an appropriate flame transfer function. Special emphasis is directed towards the interaction between acoustically driven instabilities and flame-intrinsic modes. Adjoint methods are used to perform a sensitivity analysis of the spectral properties of the system to changes in the parameters involved. An exchange of modal identity between acoustic and intrinsic modes will be demonstrated and analyzed. The results provide insight into the interplay between various mode types and build a quantitative foundation for the design of combustors.
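The adjoint route to such sensitivities can be illustrated with the standard first-order eigenvalue-perturbation formula, dλ/dp = wᴴ(∂A/∂p)v / (wᴴv), where v and w are the right and left (adjoint) eigenvectors of the system operator. The sketch below applies it to a small random matrix standing in for a thermoacoustic network operator and checks it against a finite difference; the operator and its parameter dependence are toy assumptions, not an actual combustor model.

```python
# Hedged sketch of adjoint-based eigenvalue sensitivity for a small matrix A(p)
# standing in for a thermoacoustic network operator. First-order theory gives
# dlambda/dp = w^H (dA/dp) v / (w^H v), with v, w the right and left eigenvectors.
import numpy as np

rng = np.random.default_rng(7)
A0 = rng.normal(size=(6, 6))
dAdp = rng.normal(size=(6, 6))          # sensitivity of the operator to a parameter p
A = lambda p: A0 + p * dAdp

p0 = 0.3
evals, V = np.linalg.eig(A(p0))
k = np.argmax(evals.real)               # track the least stable mode
lam, v = evals[k], V[:, k]

wl, W = np.linalg.eig(A(p0).conj().T)   # adjoint problem: left eigenvectors of A
w = W[:, np.argmin(np.abs(wl - lam.conj()))]

adjoint_sens = (w.conj() @ dAdp @ v) / (w.conj() @ v)

eps = 1e-6                              # finite-difference check
lam_p = np.linalg.eigvals(A(p0 + eps))
fd_sens = (lam_p[np.argmin(np.abs(lam_p - lam))] - lam) / eps

print("adjoint sensitivity:", adjoint_sens, " finite difference:", fd_sens)
```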
Probabilistic analysis of a materially nonlinear structure
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.; Fossum, A. F.
1990-01-01
A probabilistic finite element program is used to perform probabilistic analysis of a materially nonlinear structure. The program used in this study is NESSUS (Numerical Evaluation of Stochastic Structure Under Stress), under development at Southwest Research Institute. The cumulative distribution function (CDF) of the radial stress of a thick-walled cylinder under internal pressure is computed and compared with the analytical solution. In addition, sensitivity factors showing the relative importance of the input random variables are calculated. Significant plasticity is present in this problem and has a pronounced effect on the probabilistic results. The random input variables are the material yield stress and internal pressure with Weibull and normal distributions, respectively. The results verify the ability of NESSUS to compute the CDF and sensitivity factors of a materially nonlinear structure. In addition, the ability of the Advanced Mean Value (AMV) procedure to assess the probabilistic behavior of structures which exhibit a highly nonlinear response is shown. Thus, the AMV procedure can be applied with confidence to other structures which exhibit nonlinear behavior.
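For comparison, the same kind of output, a CDF of the radial stress under random inputs, can be produced by brute-force Monte Carlo, which is exactly what methods such as AMV are designed to avoid. The sketch below uses only the elastic Lamé solution, so the random yield stress that matters once plasticity develops plays no role here; the geometry and distribution parameters are assumed for illustration.

```python
# Brute-force Monte Carlo sketch of the CDF of the radial stress in a
# thick-walled cylinder under random internal pressure (elastic Lame solution
# only; the paper's plasticity and Weibull yield stress are omitted).
import numpy as np

a, b, r = 0.10, 0.20, 0.15                      # inner, outer, evaluation radii [m] (assumed)
rng = np.random.default_rng(42)
p = rng.normal(100e6, 10e6, 200_000)            # internal pressure [Pa], normal (assumed)

# Lame radial stress at radius r for internal pressure only (compressive, so negative)
sigma_r = p * a**2 / (b**2 - a**2) * (1.0 - b**2 / r**2)

levels = np.percentile(sigma_r, [1, 10, 50, 90, 99])
for q, s in zip((1, 10, 50, 90, 99), levels):
    print(f"P(sigma_r <= {s / 1e6:7.2f} MPa) = {q / 100:.2f}")
```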
Modeling CO2 degassing and pH in a stream-aquifer system
Choi, J.; Hulseapple, S.M.; Conklin, M.H.; Harvey, J.W.
1998-01-01
Pinal Creek, Arizona receives an inflow of ground water with high dissolved inorganic carbon (57-75 mg/l) and low pH (5.8-6.3). There is an observed increase of in-stream pH from approximately 6.0 to 7.8 over the 3 km downstream of the point of groundwater inflow. We hypothesized that CO2 gas exchange was the most important factor causing the pH increase in this stream-aquifer system. An existing transport model for coupled ground water-surface water systems (OTIS) was modified to include carbonate equilibria and CO2 degassing, and used to simulate alkalinity, total dissolved inorganic carbon (C(T)), and pH in Pinal Creek. Because of the non-linear relation between pH and C(T), the modified transport model used a numerical iteration method to resolve the non-linearity. The transport model parameters were determined by the injection of two tracers, bromide and propane. The resulting simulations of alkalinity, C(T) and pH reproduced, without fitting, the overall trends in downstream concentrations. A multi-parametric sensitivity analysis (MPSA) was used to identify the relative sensitivities of the predictions to six of the physical and chemical parameters used in the transport model. MPSA results implied that C(T) and pH in stream water were controlled by the mixing of ground water with stream water and by CO2 degassing. The relative importance of these two processes varied spatially depending on the hydrologic conditions, such as stream flow velocity and whether a reach gained or lost stream water through interaction with the ground water. The coupled transport model with CO2 degassing and the generalized sensitivity analysis presented in this study can be applied to evaluate carbon transport and pH in other coupled stream-ground water systems.
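The carbonate-equilibrium step that makes the problem nonlinear can be sketched in a few lines: given total dissolved inorganic carbon and alkalinity, solve for pH by bisection. The constants below are generic freshwater values at 25 °C, the alkalinity expression is the usual carbonate approximation, and the illustrative concentrations are only loosely inspired by the ranges quoted above; transport and the degassing flux itself are omitted.

```python
# Hedged sketch of the carbonate-equilibrium step: given total dissolved
# inorganic carbon C_T and alkalinity, solve for pH by bisection on the
# alkalinity balance. Generic freshwater constants at 25 C are used
# (K1 = 10^-6.35, K2 = 10^-10.33, Kw = 10^-14).
K1, K2, KW = 10**-6.35, 10**-10.33, 1e-14   # mol/L units throughout

def alkalinity(pH, CT):
    h = 10**-pH
    denom = h * h + K1 * h + K1 * K2
    hco3 = CT * K1 * h / denom
    co3 = CT * K1 * K2 / denom
    return hco3 + 2 * co3 + KW / h - h      # carbonate alkalinity approximation

def solve_pH(CT, alk, lo=2.0, hi=12.0, tol=1e-10):
    # alkalinity increases monotonically with pH, so bisection brackets the root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if alkalinity(mid, CT) < alk:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# e.g. a high-C_T inflow at modest alkalinity gives low pH; lowering C_T at
# (nearly) constant alkalinity, as CO2 degassing does, raises the pH
CT_in, alk = 5.5e-3, 2.0e-3                 # mol/L, illustrative values only
print("inflow pH   :", round(solve_pH(CT_in, alk), 2))
print("degassed pH :", round(solve_pH(2.2e-3, alk), 2))
```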
Aviation noise overload in the immediate proximity of the Warsaw-Okecie airport
NASA Astrophysics Data System (ADS)
Koszarny, Z.; Maziarka, S.
1981-05-01
The results are presented for investigations on noise overload around the Warszawa-Okecie airport on persons inhabiting the area where it exceeds 100 dB for a single aircraft flight. Of 256 subjects, 91.1 percent complained about aircraft noise overload. In the population studied considerable differences were noted respecting the subjective sensitivity scale. Statistical analysis showed numerous correlations between the individual noise sensitivity threshold and the subject's state of health, age, sex, type of work, etc. At the same time investigations demonstrated various forms and levels of disturbance in the organism for individual subjects and groups. The most frequent complaint was chronic fatigue (68.1 percent), followed by nervousness (36.6 percent), frequent headaches (36.2 percent), hearing disturbances (30.0 percent) and sleep disorders (23.9 percent).
Linear Strength Vortex Panel Method for NACA 4412 Airfoil
NASA Astrophysics Data System (ADS)
Liu, Han
2018-03-01
The objective of this article is to formulate numerical models for two-dimensional potential flow over the NACA 4412 airfoil using the linear-strength vortex panel method. By satisfying the no-penetration boundary condition and the Kutta condition, the circulation density at each boundary point (the end point of every panel) is obtained, from which the surface pressure distribution and lift coefficient of the airfoil are predicted and validated against Xfoil, an interactive program for the design and analysis of airfoils. The sensitivity of the results to the number of panels is also investigated, which shows that the results are sensitive to the number of panels when the panel number ranges from 10 to 160. With increasing panel number (N > 160), the results become relatively insensitive to it.
NASA Astrophysics Data System (ADS)
Liu, Chao; Wang, Famei; Zheng, Shijie; Sun, Tao; Lv, Jingwei; Liu, Qiang; Yang, Lin; Mu, Haiwei; Chu, Paul K.
2016-07-01
A surface plasmon resonance sensor based on a highly birefringent photonic crystal fibre is proposed and characterized. The birefringence of the sensor is analyzed numerically by the finite-element method. In the numerical simulation, the resonance wavelength can be located directly at the abrupt change point of the birefringence, and the depth of this abrupt change reflects the intensity of the excited surface plasmon. Consequently, the approach can accurately locate the resonance peak of the system without analyzing the loss spectrum. The simulated average sensitivity is as high as 1131 nm/RIU, corresponding to a resolution of 1 × 10-4 RIU for this sensor. The results obtained with this approach therefore not only show polarization independence and lower noble-metal consumption, but also reveal better performance in terms of accuracy and computational efficiency.
Accuracy of i-Scan for Optical Diagnosis of Colonic Polyps: A Meta-Analysis
Guo, Chuan-Guo; Ji, Rui; Li, Yan-Qing
2015-01-01
Background: i-Scan is a novel virtual chromoendoscopy system designed to enhance surface and vascular patterns to improve optical diagnostic performance. Numerous prospective studies have been done to evaluate the accuracy of i-Scan in differentiating colonic neoplasms from non-neoplasms. i-Scan could be an effective endoscopic technique for optical diagnosis of colonic polyps. Objective: Our aim in this study was to perform a meta-analysis of published data to establish the diagnostic accuracy of i-Scan for optical diagnosis of colonic polyps. Methods: We searched the PubMed, Medline, Elsevier ScienceDirect and Cochrane Library databases. We used a bivariate meta-analysis following a random effects model to summarize the data and plotted hierarchical summary receiver-operating characteristic (HSROC) curves. The area under the HSROC curve (AUC) serves as an indicator of the diagnostic accuracy. Results: The meta-analysis included a total of 925 patients and 2312 polyps. For the overall studies, the area under the HSROC curve was 0.96. The summary sensitivity was 90.4% (95%CI 85%-94.1%) and specificity was 90.9% (95%CI 84.3%-94.9%). In 11 studies predicting polyp histology in real time, the summary sensitivity and specificity were 91.5% (95%CI 85.7%-95.1%) and 92.1% (95%CI 84.5%-96.1%), respectively, with an AUC of 0.97. For three different diagnostic criteria (Kudo, NICE, others), the sensitivity was 86.3%, 93.0% and 85.0%, respectively, and the specificity was 84.8%, 94.4% and 91.8%, respectively. Conclusions: Endoscopic diagnosis with i-Scan has accurate optical diagnostic performance in differentiating neoplastic from non-neoplastic polyps, with an area under the HSROC curve exceeding 0.90. Both the sensitivity and specificity for diagnosing colonic polyps are over 90%. PMID:25978459
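As a simplified illustration of the pooling step, the sketch below applies a DerSimonian-Laird random-effects model to logit-transformed per-study sensitivities. This univariate approach is only a stand-in for the bivariate model and HSROC analysis actually used in the meta-analysis, and the per-study counts are made up.

```python
# Simplified sketch of pooling diagnostic sensitivity across studies with a
# DerSimonian-Laird random-effects model on the logit scale (a univariate
# stand-in for the bivariate/HSROC analysis of the paper; counts are made up).
import numpy as np

# (true positives, false negatives) for each hypothetical study
tp = np.array([45, 88, 30, 61, 120])
fn = np.array([ 5,  9,  4,  8,  11])

logit = np.log((tp + 0.5) / (fn + 0.5))            # continuity-corrected logit(sensitivity)
var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)          # approximate within-study variance

w = 1.0 / var                                      # fixed-effect weights
theta_fe = np.sum(w * logit) / np.sum(w)
Q = np.sum(w * (logit - theta_fe) ** 2)            # Cochran's Q heterogeneity statistic
k = len(tp)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (var + tau2)                          # random-effects weights
theta = np.sum(w_re * logit) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
to_prob = lambda x: 1.0 / (1.0 + np.exp(-x))
print(f"pooled sensitivity {to_prob(theta):.3f} "
      f"(95% CI {to_prob(theta - 1.96 * se):.3f}-{to_prob(theta + 1.96 * se):.3f}), tau^2={tau2:.3f}")
```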
Xi, Jinxiang; Yuan, Jiayao Eddie; Si, Xiuhua April
2016-05-01
Despite the high prevalence of rhinosinusitis, current inhalation therapy shows limited efficacy due to extremely low drug delivery efficiency to the paranasal sinuses. Novel intranasal delivery systems are needed to enhance targeted delivery to the sinus with therapeutic dosages. An optimization framework for intranasal drug delivery was developed to target polydisperse charged aerosols to the ostiomeatal complex (OMC) with electric guidance. The delivery efficiency of a group of charged aerosols recently reported in the literature was numerically assessed and optimized in an anatomically accurate nose-sinus model. Key design variables included particle charge number, particle size and distribution, electrode strength, and inhalation velocity. Both monodisperse and polydisperse aerosol profiles were considered. Results showed that the OMC delivery efficiency was highly sensitive to the applied electric field and the electrostatic charges carried by the particles. Through the synthesis of electric guidance and point drug release, focused deposition with significantly enhanced dosage in the OMC can be achieved. For 0.4 µm charged aerosols, an OMC delivery efficiency of 51.6% was predicted for monodisperse aerosols and 34.4% for polydisperse aerosols. This difference suggests that the aerosol profile exerts a notable effect on intranasal deliveries. Sensitivity analysis indicated that the OMC deposition fraction was highly sensitive to the charge and size of the particles and less sensitive to the inhalation velocity considered in this study. Experimental studies are needed to validate the numerically optimized designs. Further studies are warranted to investigate targeted OMC delivery with both electric and acoustic controls, the latter of which has the potential to further deliver the drug particles into the sinus cavity.
Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.
Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa
2010-01-21
Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
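The following sketch illustrates the common random number idea on a toy birth-death network (production rate k, degradation rate g), estimating dE[X(T)]/dg by finite differences with Gillespie's algorithm. The reaction network and parameter values are invented for illustration and the code is not the authors' implementation; it only shows why correlating the nominal and perturbed paths shrinks the estimator variance.

```python
# CRN finite-difference sensitivity sketch with Gillespie's SSA on a birth-death process.
import random

def ssa_birth_death(k, g, x0, t_end, rng):
    """Gillespie SSA; returns the population at time t_end."""
    t, x = 0.0, x0
    while True:
        a1, a2 = k, g * x          # propensities: birth, death
        a0 = a1 + a2
        if a0 <= 0.0:
            return x
        t += rng.expovariate(a0)   # exponential waiting time to the next reaction
        if t > t_end:
            return x
        if rng.random() * a0 < a1:
            x += 1
        else:
            x -= 1

def fd_sensitivity(k, g, dg, n, common):
    """Finite-difference estimate of d E[X(T)] / d g, with or without common RNs."""
    diffs = []
    for i in range(n):
        seed = i if common else None
        r1 = random.Random(seed)
        r2 = random.Random(seed) if common else random.Random()
        x_nom = ssa_birth_death(k, g, 0, 10.0, r1)
        x_pert = ssa_birth_death(k, g + dg, 0, 10.0, r2)
        diffs.append((x_pert - x_nom) / dg)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean, var

for common in (False, True):
    m, v = fd_sensitivity(k=10.0, g=1.0, dg=0.05, n=2000, common=common)
    print(f"common RNs={common}: estimate={m:.2f}, sample variance={v:.1f}")
```

For this toy model the steady-state mean is k/g, so the sensitivity should be near -k/g² = -10; the run with common random numbers reaches a comparable estimate with a much smaller sample variance.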
Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?
Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel; ...
2016-10-20
Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud–aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities, and resolved vertical velocities in climate models are scale dependent; this behavior has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.
NASA Astrophysics Data System (ADS)
Rice, A. K.; McCray, J. E.; Singha, K.
2016-12-01
The development of directional drilling and stimulation of reservoirs by hydraulic fracturing has transformed the energy landscape in the U.S. by making recovery of hydrocarbons from shale formations not only possible but economically viable. Activities associated with hydraulic fracturing present a set of water-quality challenges, including the potential for impaired groundwater quality. In this project, we use a three-dimensional, multiphase, multicomponent numerical model to investigate hydrogeologic conditions that could lead to groundwater contamination from natural gas wellbore leakage. This work explores the fate of methane that enters a well annulus, possibly from an intermediate formation or from the production zone via a flawed cement seal, and leaves the annulus at one of two depths: at the elevation of groundwater or below a freshwater aquifer. The latter leakage scenario is largely ignored in the current scientific literature, where focus has been on leakage directly into freshwater aquifers, despite modern regulations requiring steel casings and cement sheaths at these depths. We perform a three-stage sensitivity analysis, examining (1) hydrogeologic parameters of media surrounding a methane leakage source zone, (2) geostatistical variations in intrinsic permeability, and (3) methane source zone pressurization. Results indicate that in all cases methane reaches groundwater within the first year of leakage. To our knowledge, this is the first study to consider natural gas wellbore leakage in the context of multiphase flow through heterogeneous permeable media; advantages of multiphase modeling include more realistic analysis of methane vapor-phase relative permeability as compared to single-phase models. These results can be used to inform assessment of aquifer vulnerability to hydrocarbon wellbore leakage at varying depths.
FTIR gas chromatographic analysis of perfumes
NASA Astrophysics Data System (ADS)
Diederich, H.; Stout, Phillip J.; Hill, Stephen L.; Krishnan, K.
1992-03-01
Perfumes, natural or synthetic, are complex mixtures consisting of numerous components. Gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS) techniques have been extensively utilized for the analysis of perfumes and essential oils. A limited number of perfume samples have also been analyzed by FT-IR gas chromatographic (GC-FTIR) techniques. Most of the latter studies have been performed using the conventional light pipe (LP) based GC-FTIR systems. In recent years, cold-trapping (in a matrix or neat) GC-FTIR systems have become available. The cold-trapping systems are capable of sub-nanogram sensitivities. In this paper, comparison data between the LP and the neat cold-trapping GC-FTIR systems are presented. The neat cold-trapping interface is known as Tracer. The results of GC-FTIR analysis of some commercial perfumes are also presented. For comparison of LP and Tracer GC-FTIR systems, a reference (synthetic) mixture containing 16 major and numerous minor constituents was used. The components of the mixture are the compounds commonly encountered in commercial perfumes. The GC-FTIR spectra of the reference mixture were obtained under identical chromatographic conditions from an LP and a Tracer system. A comparison of the two sets of data thus generated does indeed show the enhanced sensitivity level of the Tracer system. The comparison also shows that some of the major components detected by the Tracer system were absent from the LP data. Closer examination reveals that these compounds undergo thermal decomposition on contact with the hot gold surface that is part of the LP system. GC-FTIR data were obtained for three commercial perfume samples. The major components of these samples could easily be identified by spectral search against a digitized spectral library created using the Tracer data from the reference mixture.
NASA Astrophysics Data System (ADS)
Bianchi Janetti, Emanuela; Riva, Monica; Guadagnini, Alberto
2017-04-01
We perform a variance-based global sensitivity analysis to assess the impact of the uncertainty associated with (a) the spatial distribution of hydraulic parameters, e.g., hydraulic conductivity, and (b) the conceptual model adopted to describe the system on the characterization of a regional-scale aquifer. We do so in the context of inverse modeling of the groundwater flow system. The study aquifer lies within the provinces of Bergamo and Cremona (Italy) and covers a planar extent of approximately 785 km². Analysis of available sedimentological information allows identifying a set of main geo-materials (facies/phases) which constitute the geological makeup of the subsurface system. We parameterize the conductivity field following two different conceptual schemes. The first one is based on the representation of the aquifer as a Composite Medium. In this conceptualization the system is composed of distinct (five, in our case) lithological units. Hydraulic properties (such as conductivity) in each unit are assumed to be uniform. The second approach assumes that the system can be modeled as a collection of media coexisting in space to form an Overlapping Continuum. A key point in this model is that each point in the domain represents a finite volume within which each of the (five) identified lithofacies can be found with a certain volumetric percentage. Groundwater flow is simulated with the numerical code MODFLOW-2005 for each of the adopted conceptual models. We then quantify the relative contribution of the considered uncertain parameters, including boundary conditions, to the total variability of the piezometric level recorded in a set of 40 monitoring wells by relying on the variance-based Sobol indices. The latter are derived numerically for the investigated settings through the use of a model-order reduction technique based on the polynomial chaos expansion approach.
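To make the target quantity concrete, the sketch below estimates first-order Sobol indices by brute-force Monte Carlo (a Saltelli-type pick-freeze estimator) on an invented toy response standing in for the simulated head. It does not reproduce the polynomial chaos surrogate or the MODFLOW-2005 model used in the study; the toy function, parameter names and ranges are assumptions for illustration only.

```python
# Brute-force Monte Carlo estimate of first-order Sobol indices on a toy model.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # toy response: columns are log-conductivities of two units and a boundary head
    k1, k2, bc = x[:, 0], x[:, 1], x[:, 2]
    return 0.7 * k1 + 0.2 * k2 + 0.5 * bc + 0.1 * k1 * bc

n, d = 100_000, 3
A = rng.uniform(-1.0, 1.0, size=(n, d))
B = rng.uniform(-1.0, 1.0, size=(n, d))
fA, fB = model(A), model(B)
var_total = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # "pick-freeze": replace column i only
    fABi = model(ABi)
    # Saltelli-type estimator of the first-order index S_i
    S_i = np.mean(fB * (fABi - fA)) / var_total
    print(f"parameter {i}: first-order Sobol index ~ {S_i:.3f}")
```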
Rahbari, A; Montazerian, H; Davoodi, E; Homayoonfar, S
2017-02-01
The main aim of this research is to numerically obtain the permeability coefficient of cylindrical scaffolds. For this purpose, a mathematical analysis was performed to derive an equation for the desired porosity in terms of morphological parameters. Then, the considered cylindrical geometries were modeled and the permeability coefficient was calculated according to the velocity and pressure-drop values based on Darcy's law. In order to validate the accuracy of the present numerical solution, the obtained permeability coefficient was compared with published experimental data. It was observed that this model can predict permeability with high accuracy. Then, the effect of geometrical parameters including porosity, scaffold pore structure, unit cell size, and length of the scaffolds as well as entrance mass flow rate on the permeability of porous structures was studied. Furthermore, a parametric study, together with a scaling-law analysis of the effects of sample length and mass flow rate on permeability, showed a good fit to the obtained data. It can be concluded that the sensitivity of permeability is more noticeable at higher porosities. The present approach can be used to characterize and optimize the scaffold microstructure in light of cell growth and mass transfer requirements.
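A minimal sketch of the Darcy back-calculation referred to above follows; the viscosity, sample length, velocity and pressure drop are assumed placeholder values, not data from the study.

```python
# Darcy's-law back-calculation of permeability from a simulated pressure drop:
# k = mu * u * L / dP
mu = 1.0e-3      # dynamic viscosity of the perfusing fluid, Pa.s (water, assumed)
L = 5.0e-3       # scaffold length in the flow direction, m (assumed)
u = 2.0e-4       # superficial (Darcy) velocity, m/s (assumed simulation output)
dP = 40.0        # pressure drop across the scaffold, Pa (assumed simulation output)

k = mu * u * L / dP
print(f"permeability k = {k:.3e} m^2")
```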
Melkonian, D; Korner, A; Meares, R; Bahramali, H
2012-10-01
A novel method for the time-frequency analysis of non-stationary heart rate variability (HRV) is developed, which introduces the fragmentary spectrum as a measure that brings together the frequency content, timing and duration of HRV segments. The fragmentary spectrum is calculated by the similar basis function algorithm. This numerical tool for time-to-frequency and frequency-to-time Fourier transformations accepts both uniform and non-uniform sampling intervals, and is applicable to signal segments of arbitrary length. Once the fragmentary spectrum is calculated, the inverse transform recovers the original signal and reveals the accuracy of the spectral estimates. Numerical experiments show that discontinuities at the boundaries of the succession of inter-beat intervals can cause unacceptable distortions of the spectral estimates. We have developed a measure that we call the "RR deltagram" as a form of the HRV data that minimises spectral errors. The analysis of experimental HRV data from real-life and controlled breathing conditions suggests transient oscillatory components as functionally meaningful elements of highly complex and irregular patterns of HRV. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Nondestructive surface analysis for material research using fiber optic vibrational spectroscopy
NASA Astrophysics Data System (ADS)
Afanasyeva, Natalia I.
2001-11-01
Advanced methods of fiber-optic vibrational spectroscopy (FOVS) have been developed in conjunction with an interferometer and low-loss, flexible, and nontoxic optical fibers, sensors, and probes. The combination of optical fibers and sensors with a Fourier transform (FT) spectrometer has been used in the range from 2.5 to 12 micrometers. This technique serves as an ideal diagnostic tool for surface analysis of diverse materials such as complex structured materials, fluids, coatings, implants, living cells, plants, and tissue. Such surfaces, as well as living tissue or plants, are very difficult to investigate in vivo by traditional FT infrared or Raman spectroscopy methods. The FOVS technique is nondestructive, noninvasive, fast (15 sec) and capable of operating in a remote sampling regime (up to a fiber length of 3 m). Fourier transform infrared (FTIR) and Raman fiber-optic spectroscopy have been suggested as new, powerful tools. These are highly sensitive techniques for structural studies in materials research and various applications during process analysis to determine molecular composition, chemical bonds, and molecular conformations. These techniques could be developed as a new tool for quality control of numerous materials as well as noninvasive biopsy.
Thermal analysis of a conceptual design for a 250 We GPHS/FPSE space power system
NASA Technical Reports Server (NTRS)
Mccomas, Thomas J.; Dugan, Edward T.
1991-01-01
A thermal analysis has been performed for a 250-We space nuclear power system which combines the US Department of Energy's general purpose heat source (GPHS) modules with a state-of-the-art free-piston Stirling engine (FPSE). The focus of the analysis is on the temperature of the iridium fuel clad within the GPHS modules. The thermal analysis results indicate fuel clad temperatures slightly higher than the design goal temperature of 1573 K. The results are considered favorable due to the numerous conservative assumptions used. To demonstrate the effects of the conservatism, a brief sensitivity analysis is performed in which a few of the key system parameters are varied to determine their effect on the fuel clad temperatures. It is shown that thermal analysis with a more detailed thermal model should yield fuel clad temperatures below 1573 K.
ZHAO, Bin; BASTON, David S.; KHAN, Elaine; SORRENTINO, Claudio; DENISON, Michael S.
2011-01-01
Reporter genes produce a protein product in transfected cells that can be easily measured in intact or lysed cells and they have been extensively used in numerous basic and applied research applications. Over the past 10 years, reporter gene assays have been widely accepted and used for analysis of 2,3,7,8-tetrachlorodibenzo-p-dioxin and related dioxin-like compounds in various types of matrices, such as biological, environmental, food and feed samples, given that high-resolution instrumental analysis techniques are impractical for large-scale screening analysis. The most sensitive cell-based reporter gene bioassay systems developed are the mechanism-based CALUX (Chemically Activated Luciferase Expression) and CAFLUX (Chemically Activated Fluorescent Expression) bioassays, which utilize recombinant cell lines containing stably transfected dioxin (AhR)-responsive firefly luciferase or enhanced green fluorescent protein (EGFP) reporter genes, respectively. While the current CALUX and CAFLUX bioassays are very sensitive, increasing their lower limit of sensitivity, magnitude of response and dynamic range for chemical detection would significantly increase their utility, particularly for those samples that contain low levels of dioxin-like HAHs (i.e., serum). In this study, we report that the addition of modulators of cell signaling pathways or modification of cell culture conditions results in significant improvement in the magnitude and overall responsiveness of the existing CALUX and CAFLUX cell bioassays. PMID:21394221
Modeling and Analysis of Actinide Diffusion Behavior in Irradiated Metal Fuel
NASA Astrophysics Data System (ADS)
Edelmann, Paul G.
There have been numerous attempts to model fast reactor fuel behavior in the last 40 years. The US currently does not have a fully reliable tool to simulate the behavior of metal fuels in fast reactors. The experimental database necessary to validate the codes is also very limited. The DOE-sponsored Advanced Fuels Campaign (AFC) has performed various experiments that are ready for analysis. Current metal fuel performance codes are either not available to the AFC or have limitations and deficiencies in predicting AFC fuel performance. A modified version of a new fuel performance code, FEAST-Metal, was employed in this investigation with useful results. This work explores the modeling and analysis of AFC metallic fuels using FEAST-Metal, particularly in the area of constituent actinide diffusion behavior. The FEAST-Metal code calculations for this work were conducted at Los Alamos National Laboratory (LANL) in support of on-going activities related to sensitivity analysis of fuel performance codes. A sensitivity analysis of FEAST-Metal was completed to identify important macroscopic parameters of interest to modeling and simulation of metallic fuel performance. A modification was made to the FEAST-Metal constituent redistribution model to enable accommodation of newer AFC metal fuel compositions with verified results. Applicability of this modified model for sodium fast reactor metal fuel design is demonstrated.
Ignition sensitivity study of an energetic train configuration using experiments and simulation
NASA Astrophysics Data System (ADS)
Kim, Bohoon; Yu, Hyeonju; Yoh, Jack J.
2018-06-01
A full scale hydrodynamic simulation intended for the accurate description of shock-induced detonation transition was conducted as a part of an ignition sensitivity analysis of an energetic component system. The system is composed of an exploding foil initiator (EFI), a donor explosive unit, a stainless steel gap, and an acceptor explosive. A series of velocity interferometer system for any reflector (VISAR) measurements was used to validate the hydrodynamic simulations based on the reactive flow model that describes the initiation of energetic materials arranged in a train configuration. A numerical methodology with ignition and growth mechanisms for tracking multi-material boundary interactions as well as severely transient fluid-structure coupling between high explosive charges and the metal gap is described. The free surface velocity measurement is used to evaluate the sensitivity of energetic components that are subjected to strong pressure waves. Then, the full scale hydrodynamic simulation is performed on the flyer-impacted initiation of an EFI-driven pyrotechnical system.
ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers
Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.
2009-01-01
Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211
Theoretical study of surface plasmon resonance sensors based on 2D bimetallic alloy grating
NASA Astrophysics Data System (ADS)
Dhibi, Abdelhak; Khemiri, Mehdi; Oumezzine, Mohamed
2016-11-01
A surface plasmon resonance (SPR) sensor based on a 2D alloy grating with high performance is proposed. The grating consists of homogeneous alloys of formula MxAg1-x, where M is gold, copper, platinum or palladium. Compared to SPR sensors based on a pure metal, the sensor based on angular interrogation with silver exhibits a sharper (i.e. larger depth-to-width ratio) reflectivity dip, which provides high detection accuracy, whereas the sensor based on gold exhibits the broadest dips and the highest sensitivity. The detection accuracy of the SPR sensor based on a metal alloy is enhanced by increasing the silver composition. In addition, a silver composition of around 0.8 improves the sensitivity and quality relative to the pure-metal SPR sensors. Numerical simulations based on rigorous coupled wave analysis (RCWA) show that the sensor based on a metal alloy not only has high sensitivity and high detection accuracy, but also exhibits good linearity and good quality.
NASA Astrophysics Data System (ADS)
Shah, Nita H.; Shah, Arpan D.
2014-04-01
The article analyzes the economic order quantity for a retailer who has to handle imperfect product quality and units that deteriorate at a constant rate. To control deterioration of the units in inventory, the retailer has to deploy advanced preservation technology. Another challenge for the retailer is to ensure perfect product quality, which requires mandatory inspection during the production process. The model is developed with a random fraction of defective items. It is assumed that after inspection, the screened defective items are sold instantly at a discounted rate. Demand is considered to be price-sensitive and stock-dependent. The model incorporates the effect of inflation, which is a critical factor globally. The objective is to maximize the profit of the retailer with respect to preservation technology investment, order quantity and cycle time. A numerical example is given to validate the proposed model. Sensitivity analysis is carried out to draw managerial insights.
NASA Technical Reports Server (NTRS)
Schnepf, N. R.; Kuvshinov, A.; Sabaka, T.
2015-01-01
A few studies convincingly demonstrated that the magnetic fields induced by the lunar semidiurnal (M2) ocean flow can be identified in satellite observations. This result encourages using M2 satellite magnetic data to constrain subsurface electrical conductivity in oceanic regions. Traditional satellite-based induction studies using signals of magnetospheric origin are mostly sensitive to conducting structures because of the inductive coupling between primary and induced sources. In contrast, galvanic coupling from the oceanic tidal signal allows for studying less conductive, shallower structures. We perform global 3-D electromagnetic numerical simulations to investigate the sensitivity of M2 signals to conductivity distributions at different depths. The results of our sensitivity analysis suggest it will be promising to use M2 oceanic signals detected at satellite altitude for probing lithospheric and upper mantle conductivity. Our simulations also suggest that M2 seafloor electric and magnetic field data may provide complementary details to better constrain lithospheric conductivity.
NASA Astrophysics Data System (ADS)
Shah, Nita H.; Shah, Digeshkumar B.; Patel, Dushyantkumar G.
2015-07-01
This study aims at formulating an integrated supplier-buyer inventory model when market demand is variable, price-sensitive and trapezoidal, and the supplier offers a choice between a discount in the unit price and a permissible delay period for settling the accounts due against the purchases made. This type of trade credit is termed 'net credit'. In this policy, if the buyer pays within the offered time M1, then the buyer is entitled to a cash discount; otherwise the full account must be settled by time M2, where M2 > M1 ⩾ 0. The goal is to determine the optimal selling price, procurement quantity, number of transfers from the supplier to the buyer and payment time to maximise the joint profit per unit time. An algorithm is worked out to obtain the optimal solution. A numerical example is given to validate the proposed model. Managerial insights based on sensitivity analysis are deduced.
NASA Astrophysics Data System (ADS)
Tian, Ying; Hu, Sen; Huang, Xiaojun; Yu, Zetai; Lin, Hai; Yang, Helin
2017-10-01
A low-loss, high-transmission electromagnetically induced transparency-like (EIT-like) structure is experimentally and numerically demonstrated in this paper. The proposed planar structure based on an EIT-like metamaterial consists of two separate split-ring resonators, and its transmission level can reach a maximum of 0.89 with significant suppression of radiation loss. According to the effective medium theory, the imaginary parts of the effective permittivity and permeability of the metamaterial are used as evidence of the low loss. In the analysis, the simulated surface current, magnetic field distribution and coupled oscillator model reveal the principle of the high-transmittance EIT effect. Furthermore, the transparency peak frequency is highly sensitive to variations of the refractive index of the background medium. The sensor based on the proposed EIT structure can achieve a sensitivity of 1.69 GHz/RIU (refractive index unit) and a figure of merit of 11.66. Such metamaterials have potential applications in sensing and chiral slow-light devices.
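For orientation, the following sketch shows how the two quoted figures of merit are defined, using invented peak frequencies and linewidth rather than the paper's measured values.

```python
# Sensitivity S = df_peak/dn (GHz per refractive-index unit) and FOM = |S| / FWHM.
f1, n1 = 9.80, 1.00     # transparency-peak frequency (GHz) at background index n1 (assumed)
f2, n2 = 9.63, 1.10     # peak frequency (GHz) after the background index shifts to n2 (assumed)
fwhm = 0.15             # full width at half maximum of the transparency peak, GHz (assumed)

S = (f2 - f1) / (n2 - n1)        # GHz / RIU (sign indicates shift direction)
fom = abs(S) / fwhm              # figure of merit, 1/RIU
print(f"sensitivity = {S:.2f} GHz/RIU, FOM = {fom:.2f} RIU^-1")
```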
NASA Astrophysics Data System (ADS)
Mahata, Puspita; Mahata, Gour Chandra; Kumar De, Sujit
2018-03-01
Traditional supply chain inventory models with trade credit usually assume only that the upstream suppliers offer the downstream retailers a fixed credit period. However, in practice the retailers will also provide a credit period to customers to promote market competition. In this paper, we formulate an optimal supply chain inventory model under a two-level trade credit policy with default risk consideration. Here, the demand is assumed to be credit-sensitive and an increasing function of time. The major objective is to determine the retailer's optimal credit period and cycle time such that the total profit per unit time is maximized. The existence and uniqueness of the optimal solution to the presented model are examined, and an easy method is also shown to find the optimal inventory policies of the considered problem. Finally, numerical examples and a sensitivity analysis are presented to illustrate the developed model and to provide some managerial insights.
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian
2018-01-01
In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
A heavy sea fog event over the Yellow Sea in March 2005: Analysis and numerical modeling
NASA Astrophysics Data System (ADS)
Gao, Shanhong; Lin, Hang; Shen, Biao; Fu, Gang
2007-02-01
In this paper, a heavy sea fog episode that occurred over the Yellow Sea on 9 March 2005 is investigated. The sea fog patch, with a spatial scale of several hundred kilometers at its mature stage, reduced visibility along the Shandong Peninsula coast to 100 m or much less at some sites. Satellite images, surface observations and soundings at islands and coasts, and analyses from the Japan Meteorological Agency (JMA) are used to describe and analyze this event. The analysis indicates that this sea fog can be categorized as advection cooling fog. The main features of this sea fog, including the fog area and its movement, are reasonably reproduced by the Fifth-generation Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5). Model results suggest that the formation and evolution of this event can be outlined as: (1) southerly warm/moist advection of low-level air resulted in a strong sea-surface-based inversion with a thickness of about 600 m; (2) when the inversion moved from the warmer East Sea to the colder Yellow Sea, a thermal internal boundary layer (TIBL) gradually formed at the base of the inversion while the sea fog grew in response to cooling and moistening by turbulence mixing; (3) the sea fog developed as the TIBL moved northward, and (4) strong northerly cold and dry wind destroyed the TIBL and dissipated the sea fog. The principal findings of this study are that sea fog forms in response to relatively persistent southerly warm/moist wind and a cold sea surface, and that turbulence mixing by wind shear is the primary mechanism for cooling and moistening the marine layer. In addition, the study of sensitivity experiments indicates that deterministic numerical modeling offers a promising approach to the prediction of sea fog over the Yellow Sea, but it may be more efficient to consider ensemble numerical modeling because of the extreme sensitivity to model input.
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than when using prior information for the input data, meaning that the variation of the uncertain parameters is decreased and the probability of the observed data improves as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
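A conceptual sketch of the Bayesian estimation step is given below, with a cheap analytic forward model standing in for Delft3D and a single hypothetical friction parameter inferred from synthetic observations by a random-walk Metropolis sampler; all names and numbers are illustrative assumptions, not the study's configuration.

```python
# Random-walk Metropolis sketch for estimating one "friction" parameter from
# synthetic wave-height observations produced by a toy forward model.
import numpy as np

rng = np.random.default_rng(1)

def forward(c_f):
    # stand-in forward model: predicted wave height at 5 gauges for friction c_f
    x = np.linspace(0.0, 1.0, 5)
    return 1.2 * np.exp(-c_f * x)

c_true, sigma = 0.8, 0.02
obs = forward(c_true) + rng.normal(0.0, sigma, 5)   # synthetic observations

def log_post(c_f):
    if not 0.0 < c_f < 5.0:                 # uniform prior on (0, 5)
        return -np.inf
    r = obs - forward(c_f)
    return -0.5 * np.sum((r / sigma) ** 2)  # Gaussian likelihood

chain, c = [], 1.5
lp = log_post(c)
for _ in range(20_000):
    prop = c + rng.normal(0.0, 0.05)        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        c, lp = prop, lp_prop
    chain.append(c)

burned = np.array(chain[5_000:])            # discard burn-in
print(f"posterior mean ~ {burned.mean():.3f}, 95% interval ~ "
      f"({np.percentile(burned, 2.5):.3f}, {np.percentile(burned, 97.5):.3f})")
```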
NASA Astrophysics Data System (ADS)
Salha, A. A.; Stevens, D. K.
2013-12-01
This study presents a numerical application and statistical development of Stream Water Quality Modeling (SWQM) as a tool to investigate, manage, and research the transport and fate of water pollutants in the Lower Bear River, Box Elder County, Utah. The segment under study is the Bear River from Cutler Dam to its confluence with the Malad River (Subbasin HUC 16010204). Water quality problems arise primarily from high phosphorus and total suspended sediment concentrations caused by five permitted point-source discharges and a complex network of canals and ducts of varying sizes and carrying capacities that transport water (for farming and agricultural uses) from the Bear River and then back to it. The Utah Department of Environmental Quality (DEQ) has designated the entire reach of the Bear River between Cutler Reservoir and the Great Salt Lake as impaired. SWQM requires specification of an appropriate model structure and process formulation according to the nature of the study area and the purpose of the investigation. The current model is i) one-dimensional (1D), ii) numerical, iii) unsteady, iv) mechanistic, v) dynamic, and vi) spatial (distributed). The basic principle of the study is the use of mass balance equations and numerical methods (a Fickian advection-dispersion approach) for solving the related partial differential equations. Because model error decreases and sensitivity increases as a model becomes more complex, both i) uncertainty (in parameters, data input and model structure) and ii) model complexity will be under investigation. Watershed data (water quality parameters together with stream flow, seasonal variations, surrounding landscape, stream temperature, and point/nonpoint sources) were obtained mainly using HydroDesktop, a free and open-source GIS-enabled desktop application to find, download, visualize, and analyze time series of water and climate data registered with the CUAHSI Hydrologic Information System. Processing, assessment of validity, and distribution of time-series data were explored using the GNU R language (a statistical computing and graphics environment). Equations for physical, chemical, and biological processes were written in FORTRAN (High Performance Fortran) in order to solve the resulting hyperbolic and parabolic systems. Post-analysis of results was conducted using the GNU R language. High performance computing (HPC) will be introduced to expedite complex computations using parallel programming. It is expected that the model will assess nonpoint-source and specific point-source data to understand pollutants' causes, transfer, dispersion, and concentration at different locations along the Bear River. The impact on Bear River water quality management of reducing or removing nonpoint nutrient loading could also be addressed. Keywords: computer modeling; numerical solutions; sensitivity analysis; uncertainty analysis; ecosystem processes; high performance computing; water quality.
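As a minimal stand-in for the Fickian advection-dispersion computation described above, the sketch below integrates a 1D advection-dispersion equation with an explicit upwind/central finite-difference scheme in Python (rather than the study's FORTRAN); the reach length, velocity, dispersion coefficient and boundary values are invented.

```python
# Explicit finite-difference solution of dc/dt = -u dc/dx + D d2c/dx2 on a 1D reach.
import numpy as np

L, nx = 1000.0, 201          # reach length (m) and number of nodes (assumed)
dx = L / (nx - 1)
u, D = 0.1, 0.5              # velocity (m/s) and dispersion coefficient (m^2/s), assumed
dt = 0.4 * min(dx / u, dx**2 / (2 * D))   # time step respecting advection/diffusion limits

c = np.zeros(nx)
c[0] = 1.0                   # constant-concentration upstream boundary (point source)

t = 0.0
while t < 3600.0:            # simulate one hour
    adv = -u * (c[1:-1] - c[:-2]) / dx                    # upwind advection
    dis = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2      # central dispersion
    c[1:-1] += dt * (adv + dis)
    c[-1] = c[-2]            # zero-gradient downstream boundary
    c[0] = 1.0
    t += dt

print("concentration at 100 m, 300 m, 500 m:",
      [round(c[int(x / dx)], 3) for x in (100, 300, 500)])
```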
Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems.
Wolf, Elizabeth Skubak; Anderson, David F
2015-01-21
Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.
NASA Astrophysics Data System (ADS)
Putri, Y. E.; Rozi, S.; Tasman, H.; Aldila, D.
2017-03-01
A mathematical model of dengue disease transmission involving the Extrinsic Incubation Period (EIP) effect, as a consequence of Wolbachia introduction into the mosquito population, is discussed in this article. A mathematical analysis was performed to find the equilibrium points, the basic reproductive ratio (ℛ0), and the criteria for endemic occurrence, which depend on some of the parameters. From the analytical results, we find that ℛ0 plays an important role in determining the existence and local stability of the equilibrium points. From the sensitivity analysis of ℛ0 and numerical simulation, we conclude that prolongation of the EIP through Wolbachia intervention succeeds in reducing the number of infected humans and mosquitoes significantly.
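The sensitivity analysis of ℛ0 can be illustrated with normalized forward sensitivity indices, as sketched below for a generic Ross-Macdonald-type ℛ0 that includes an extrinsic incubation period; the formula and parameter values are illustrative assumptions, not the specific model of the paper.

```python
# Normalized forward sensitivity indices Upsilon_p = (p / R0) * dR0/dp by central differences.
import math

params = dict(a=0.5,      # mosquito biting rate (1/day), assumed
              b_h=0.4,    # mosquito-to-human transmission probability, assumed
              b_m=0.4,    # human-to-mosquito transmission probability, assumed
              m=2.0,      # mosquitoes per human, assumed
              mu=0.1,     # mosquito mortality rate (1/day), assumed
              gamma=0.14, # human recovery rate (1/day), assumed
              tau=10.0)   # extrinsic incubation period (days), assumed

def R0(p):
    # generic vector-borne R0 with survival through the EIP, exp(-mu*tau)
    return math.sqrt(p["a"]**2 * p["b_h"] * p["b_m"] * p["m"]
                     * math.exp(-p["mu"] * p["tau"]) / (p["mu"] * p["gamma"]))

base = R0(params)
print(f"R0 = {base:.3f}")
for name in params:
    h = 1e-6 * params[name]
    up, dn = dict(params), dict(params)
    up[name] += h
    dn[name] -= h
    dR0 = (R0(up) - R0(dn)) / (2 * h)
    print(f"  sensitivity index for {name}: {params[name] / base * dR0:+.3f}")
```

The negative index for tau in this toy formula mirrors the qualitative conclusion above: lengthening the EIP lowers ℛ0.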
Process modelling for Space Station experiments
NASA Technical Reports Server (NTRS)
Alexander, J. Iwan D.; Rosenberger, Franz; Nadarajah, Arunan; Ouazzani, Jalil; Amiroudine, Sakir
1990-01-01
Examined here is the sensitivity of a variety of space experiments to residual accelerations. In all the cases discussed the sensitivity is related to the dynamic response of a fluid. In some cases the sensitivity can be defined by the magnitude of the response of the velocity field. This response may involve motion of the fluid associated with internal density gradients, or the motion of a free liquid surface. For fluids with internal density gradients, the type of acceleration to which the experiment is sensitive will depend on whether buoyancy driven convection must be small in comparison to other types of fluid motion, or fluid motion must be suppressed or eliminated. In the latter case, the experiments are sensitive to steady and low frequency accelerations. For experiments such as the directional solidification of melts with two or more components, determination of the velocity response alone is insufficient to assess the sensitivity. The effect of the velocity on the composition and temperature field must be considered, particularly in the vicinity of the melt-crystal interface. As far as the response to transient disturbances is concerned, the sensitivity is determined by both the magnitude and frequency of the acceleration and the characteristic momentum and solute diffusion times. The microgravity environment, a numerical analysis of low gravity tolerance of the Bridgman-Stockbarger technique, and modeling crystal growth by physical vapor transport in closed ampoules are discussed.
A two-scale model for dynamic damage evolution
NASA Astrophysics Data System (ADS)
Keita, Oumar; Dascalu, Cristian; François, Bertrand
2014-03-01
This paper presents a new micro-mechanical damage model accounting for inertial effect. The two-scale damage model is fully deduced from small-scale descriptions of dynamic micro-crack propagation under tensile loading (mode I). An appropriate micro-mechanical energy analysis is combined with homogenization based on asymptotic developments in order to obtain the macroscopic evolution law for damage. Numerical simulations are presented in order to illustrate the ability of the model to describe known behaviors like size effects for the structural response, strain-rate sensitivity, brittle-ductile transition and wave dispersion.
An EOQ model for Weibull distribution deterioration with time-dependent cubic demand and backlogging
NASA Astrophysics Data System (ADS)
Santhi, G.; Karthikeyan, K.
2017-11-01
In this article we introduce an economic order quantity model with Weibull deterioration and a time-dependent cubic demand rate, where the holding cost is a linear function of time. Shortages are allowed in the inventory system and are partially or fully backlogged. The objective of this model is to minimize the total inventory cost through the optimal order quantity and cycle length. The proposed model is illustrated by numerical examples, and a sensitivity analysis is performed to study the effect of changes in parameters on the optimum solutions.
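A drastically simplified numerical sketch in the same spirit is given below: constant demand, no deterioration or backlogging, but a holding cost rising linearly with time, with the optimal cycle length found by grid search and a crude parameter-sensitivity check; all cost figures are invented.

```python
# Grid-search minimization of average cost per unit time for an EOQ-style model
# with time-dependent holding cost h(t) = h0 + h1*t (no deterioration/backlogging).
import numpy as np

D, K = 1200.0, 150.0          # annual demand (units/yr) and ordering cost per order (assumed)
h0, h1 = 0.8, 0.4             # holding cost $/unit/yr at t=0 and its linear growth rate (assumed)

def cost_rate(T):
    # per-cycle holding cost: integral of (h0 + h1*t) * D*(T - t) dt over [0, T]
    holding = D * (h0 * T**2 / 2.0 + h1 * T**3 / 6.0)
    return (K + holding) / T                 # average cost per unit time

T_grid = np.linspace(0.05, 2.0, 4000)
costs = np.array([cost_rate(T) for T in T_grid])
T_opt = T_grid[costs.argmin()]
print(f"optimal cycle length ~ {T_opt:.3f} yr, order quantity ~ {D * T_opt:.1f} units,"
      f" minimum cost rate ~ {costs.min():.2f}")

# crude sensitivity check: how the optimum shifts when h1 changes by +/-20%
for scale in (0.8, 1.2):
    h1_s = h1 * scale
    costs_s = [(K + D * (h0 * T**2 / 2 + h1_s * T**3 / 6)) / T for T in T_grid]
    print(f"h1 x{scale}: optimal T ~ {T_grid[int(np.argmin(costs_s))]:.3f} yr")
```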
NASA Astrophysics Data System (ADS)
Frolov, Nikita S.; Goremyko, Mikhail V.; Makarov, Vladimir V.; Maksimenko, Vladimir A.; Hramov, Alexander E.
2017-03-01
In this paper we study the conditions for the excitation of chimera states in an ensemble of non-locally coupled Kuramoto-Sakaguchi (KS) oscillators. In the framework of the current research we analyze the dynamics of a homogeneous network containing identical oscillators. We show that the chimera-state formation process is sensitive to the parameters of the coupling kernel and to the initial state of the KS network. To perform the analysis we have used the Ott-Antonsen (OA) ansatz to consider the behavior of an infinitely large KS network.
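A minimal simulation of the kind of system studied here is sketched below: a ring of non-locally coupled identical Kuramoto-Sakaguchi oscillators with an exponential coupling kernel, integrated with an explicit Euler step. The kernel constant, phase lag and initial condition are illustrative choices, not the paper's settings.

```python
# Ring of N non-locally coupled Kuramoto-Sakaguchi oscillators:
# d(theta_i)/dt = omega - sum_j G_ij * sin(theta_i - theta_j + alpha)
import numpy as np

N, alpha, kappa = 128, 1.45, 4.0          # network size, phase lag, kernel constant (assumed)
omega = 0.0                               # identical oscillators
x = np.arange(N) / N
dist = np.minimum(np.abs(x[:, None] - x[None, :]),
                  1.0 - np.abs(x[:, None] - x[None, :]))   # distance on the ring
G = kappa * np.exp(-kappa * dist)
G /= G.sum(axis=1, keepdims=True)         # normalized non-local coupling kernel

rng = np.random.default_rng(2)
# localized random initial phases (partially coherent seed often used for chimeras)
theta = 6.0 * np.exp(-30.0 * (x - 0.5) ** 2) * rng.uniform(-0.5, 0.5, N)

dt = 0.025
for _ in range(20_000):
    phase_diff = theta[:, None] - theta[None, :] + alpha
    theta += dt * (omega - np.sum(G * np.sin(phase_diff), axis=1))

# local order parameter over a sliding window (ignoring ring wrap-around at the edges)
z = np.exp(1j * theta)
window = 16
R = np.abs(np.convolve(z, np.ones(window) / window, mode="same"))
print("local coherence: min %.2f, max %.2f" % (R.min(), R.max()))
```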
Active control of helicopter air resonance in hover and forward flight
NASA Technical Reports Server (NTRS)
Takahashi, M. D.; Friedman, P. P.
1988-01-01
A coupled rotor/fuselage helicopter analysis is presented. The accuracy of the model is illustrated by comparing it with experimental data. The sensitivity of the open loop damping of the unstable resonance mode to such modeling effects as blade torsional flexibility, unsteady aerodynamics, forward flight, periodic terms, and trim solution is illustrated by numerous examples. Subsequently, the model is used in conjunction with linear optimal control theory to stabilize the air resonance mode. The influence of the modeling effects mentioned before on active resonance control is then investigated.
Observer-Pattern Modeling and Slow-Scale Bifurcation Analysis of Two-Stage Boost Inverters
NASA Astrophysics Data System (ADS)
Zhang, Hao; Wan, Xiaojin; Li, Weijie; Ding, Honghui; Yi, Chuanzhi
2017-06-01
This paper deals with modeling and bifurcation analysis of two-stage Boost inverters. Since the nonlinear interaction between the source-stage converter and the load-stage inverter causes a “hidden” second-harmonic current at the input of the downstream H-bridge inverter, an observer-pattern modeling method is proposed by removing the time variance originating from both the fundamental frequency and the hidden second harmonics in the derived averaged equations. Based on the proposed observer-pattern model, the underlying mechanism of the slow-scale instability behavior is uncovered with the help of the eigenvalue analysis method. Then eigenvalue sensitivity analysis is used to select some key system parameters of the two-stage Boost inverter, and some behavior boundaries are given to provide design-oriented information for optimizing the circuit. Finally, these theoretical results are verified by numerical simulations and a circuit experiment.
Numerical modeling of the transmission dynamics of drug-sensitive and drug-resistant HSV-2
NASA Astrophysics Data System (ADS)
Gumel, A. B.
2001-03-01
A competitive finite-difference method will be constructed and used to solve a modified deterministic model for the spread of herpes simplex virus type-2 (HSV-2) within a given population. The model monitors the transmission dynamics and control of drug-sensitive and drug-resistant HSV-2. Unlike the fourth-order Runge-Kutta method (RK4), which fails when the discretization parameters exceed certain values, the novel numerical method to be developed in this paper gives convergent results for all parameter values.
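The flavor of such nonstandard (Mickens-type) finite-difference constructions can be shown on a much simpler equation than the HSV-2 model; the sketch below compares RK4 with an exact-denominator nonstandard scheme on the logistic equation, where a large step size makes RK4 lose positivity and diverge while the nonstandard update converges. The equation, step size and rate constant are illustrative assumptions, not the scheme of the paper.

```python
# RK4 versus a Mickens-type nonstandard finite-difference (NSFD) scheme on
# dI/dt = r*I*(1 - I), using a deliberately large step size h.
import math

r, h, steps, I0 = 5.0, 1.0, 30, 0.01

def f(I):
    return r * I * (1.0 - I)

def rk4_step(I):
    k1 = f(I)
    k2 = f(I + 0.5 * h * k1)
    k3 = f(I + 0.5 * h * k2)
    k4 = f(I + h * k3)
    return I + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

phi = (math.exp(r * h) - 1.0) / r          # nonstandard denominator function
def nsfd_step(I):
    # nonlocal approximation I*(1-I) ~ I_n - I_{n+1}*I_n gives an explicit, positive update
    return I * (1.0 + phi * r) / (1.0 + phi * r * I)

I_rk4, I_nsfd = I0, I0
for _ in range(steps):
    I_rk4, I_nsfd = rk4_step(I_rk4), nsfd_step(I_nsfd)
print(f"after {steps} steps of size h={h}: RK4 -> {I_rk4:.3e}, NSFD -> {I_nsfd:.6f}")
```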
Numerical parametric studies of spray combustion instability
NASA Technical Reports Server (NTRS)
Pindera, M. Z.
1993-01-01
A coupled numerical algorithm has been developed for studies of combustion instabilities in spray-driven liquid rocket engines. The model couples gas and liquid phase physics using the method of fractional steps. Also introduced is a novel, efficient methodology for accounting for spray formation through direct solution of liquid phase equations. Preliminary parametric studies show marked sensitivity of spray penetration and geometry to droplet diameter, considerations of liquid core, and acoustic interactions. Less sensitivity was shown to the combustion model type although more rigorous (multi-step) formulations may be needed for the differences to become apparent.
Detection of magnetic moment in thin films with a home-made vibrating sample magnetometer
NASA Astrophysics Data System (ADS)
Jordán, D.; González-Chávez, D.; Laura, D.; León Hilario, L. M.; Monteblanco, E.; Gutarra, A.; Avilés-Félix, L.
2018-06-01
This paper explores the optimization of an array of pick-up coils in a home-made vibrating sample magnetometer for the detection of magnetic moment in thin films. The sensitivity function of a 4-coil Mallinson configuration was numerically studied to determine the physical dimensions that enhance the sensitivity of the magnetometer. By performing numerical simulations using the Biot-Savart law combined with the principle of reciprocity, we were able to determine the maximum values of the sensitivity and the influence of the separation of the coils on the sensitivity function. After the optimization of the pick-up coils, the vibrating sample magnetometer was able to detect the magnetic moment of a 100 nm-thick Fe19Ni81 magnetic thin film along and perpendicular to the in-plane anisotropy easy axis. The implemented vibrating sample magnetometer is able to detect changes in the magnetic moment of ∼2 × 10⁻⁴ emu.
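As a simplified stand-in for the reciprocity-based optimization described above, the sketch below evaluates the axial sensitivity (dBz/dz per unit current) of a coaxial series-opposition pair of pick-up loops as a function of coil separation; the coaxial two-loop geometry and dimensions are assumptions for illustration and not the 4-coil Mallinson arrangement of the paper.

```python
# By reciprocity, for a sample vibrating along the common axis the pick-up signal is
# proportional to the axial field gradient per unit current at the sample position.
import numpy as np

MU0 = 4e-7 * np.pi
R = 0.01                       # loop radius, m (assumed)

def bz_loop(z, z0, sign):
    """On-axis field per unit current of a loop at z0 with winding sense 'sign'."""
    return sign * MU0 * R**2 / (2.0 * (R**2 + (z - z0) ** 2) ** 1.5)

def sensitivity(separation, z_sample=0.0, dz=1e-6):
    """dBz/dz per unit current at the sample for a series-opposition pair at +/- separation/2."""
    def bz(z):
        return bz_loop(z, +separation / 2, +1) + bz_loop(z, -separation / 2, -1)
    return (bz(z_sample + dz) - bz(z_sample - dz)) / (2 * dz)

for d in (0.005, 0.010, 0.015, 0.020, 0.030):
    print(f"coil separation {d*1e3:4.1f} mm -> |dBz/dz| per amp = {abs(sensitivity(d)):.3e} T/m")
```

For this simplified pair the gradient peaks when the separation equals the loop radius, which illustrates why the coil spacing enters the sensitivity function at all.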
Computation of the stability derivatives via CFD and the sensitivity equations
NASA Astrophysics Data System (ADS)
Lei, Guo-Dong; Ren, Yu-Xin
2011-04-01
The method to calculate the aerodynamic stability derivatives of aircraft by using the sensitivity equations is extended to flows with shock waves in this paper. Using the newly developed second-order cell-centered finite volume scheme on the unstructured grid, the unsteady Euler equations and sensitivity equations are solved simultaneously in a non-inertial frame of reference, so that the aerodynamic stability derivatives can be calculated for aircraft with complex geometries. Based on the numerical results, the behavior of the aerodynamic sensitivity parameters near the shock wave is discussed. Furthermore, the stability derivatives are analyzed for supersonic and hypersonic flows. The numerical results for the stability derivatives are found to be in good agreement with theoretical results for supersonic flows, and variations of the aerodynamic force and moment predicted by the stability derivatives are very close to those obtained by CFD simulation for both supersonic and hypersonic flows.
A model for managing sources of groundwater pollution
Gorelick, Steven M.
1982-01-01
The waste disposal capacity of a groundwater system can be maximized while maintaining water quality at specified locations by using a groundwater pollutant source management model that is based upon linear programming and numerical simulation. The decision variables of the management model are solute waste disposal rates at various facilities distributed over space. A concentration response matrix is used in the management model to describe transient solute transport and is developed using the U.S. Geological Survey solute transport simulation model. The management model was applied to a complex hypothetical groundwater system. Large-scale management models were formulated as dual linear programming problems to reduce numerical difficulties and computation time. Linear programming problems were solved using a numerically stable, available code. Optimal solutions to problems with successively longer management time horizons indicated that disposal schedules at some sites are relatively independent of the number of disposal periods. Optimal waste disposal schedules exhibited pulsing rather than constant disposal rates. Sensitivity analysis using parametric linear programming showed that a sharp reduction in total waste disposal potential occurs if disposal rates at any site are increased beyond their optimal values.
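A toy version of this management formulation is sketched below, maximizing total disposal at three hypothetical sites subject to concentration limits at two observation points via a made-up unit response matrix and scipy's linprog; the real model builds its transient response matrix from a solute-transport simulation.

```python
# Linear-programming sketch: maximize total disposal subject to water-quality limits
# expressed through a (hypothetical) concentration response matrix.
import numpy as np
from scipy.optimize import linprog

# response[i, j] = concentration increase at observation point i per unit disposal rate at site j
response = np.array([[0.8, 0.3, 0.1],
                     [0.2, 0.6, 0.5]])
c_limit = np.array([10.0, 8.0])         # water-quality standards at the observation points (assumed)
site_cap = [(0.0, 12.0)] * 3            # capacity bounds on each disposal rate (assumed)

# linprog minimizes, so negate the objective to maximize total disposal
res = linprog(c=[-1.0, -1.0, -1.0],
              A_ub=response, b_ub=c_limit,
              bounds=site_cap, method="highs")

print("optimal disposal rates per site:", np.round(res.x, 3))
print("total disposal capacity:", round(-res.fun, 3))
print("resulting concentrations:", np.round(response @ res.x, 3))
```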
A Comparison of Three Algorithms for Orion Drogue Parachute Release
NASA Technical Reports Server (NTRS)
Matz, Daniel A.; Braun, Robert D.
2015-01-01
The Orion Multi-Purpose Crew Vehicle is susceptible to flipping apex forward between drogue parachute release and main parachute inflation. A smart drogue release algorithm is required to select a drogue release condition that will not result in an apex-forward main parachute deployment. The baseline algorithm is simple and elegant, but does not perform as well as desired in drogue failure cases. A simple modification to the baseline algorithm can improve performance, but can also sometimes fail to identify a good release condition. A new algorithm employing simplified rotational dynamics and a numeric predictor to minimize a rotational energy metric is proposed. A Monte Carlo analysis of a drogue failure scenario is used to compare the performance of the algorithms. The numeric predictor prevents more of the cases from flipping apex forward, and also results in an improvement in the capsule attitude at main bag extraction. The sensitivity of the numeric predictor to aerodynamic dispersions, errors in the navigated state, and execution rate is investigated, showing little degradation in performance.
On the precision of aero-thermal simulations for TMT
NASA Astrophysics Data System (ADS)
Vogiatzis, Konstantinos; Thompson, Hugh
2016-08-01
Environmental effects on the Image Quality (IQ) of the Thirty Meter Telescope (TMT) are estimated by aero-thermal numerical simulations. These simulations utilize Computational Fluid Dynamics (CFD) to estimate, among others, thermal (dome and mirror) seeing as well as wind jitter and blur. As the design matures, guidance obtained from these numerical experiments can influence significant cost-performance trade-offs and even component survivability. The stochastic nature of environmental conditions results in the generation of a large computational solution matrix in order to statistically predict Observatory Performance. Moreover, the relative contribution of selected key subcomponents to IQ increases the parameter space and thus computational cost, while dictating a reduced prediction error bar. The current study presents the strategy followed to minimize prediction time and computational resources, the subsequent physical and numerical limitations and finally the approach to mitigate the issues experienced. In particular, the paper describes a mesh-independence study, the effect of interpolation of CFD results on the TMT IQ metric, and an analysis of the sensitivity of IQ to certain important heat sources and geometric features.
Quantitative analysis of time-resolved microwave conductivity data
Reid, Obadiah G.; Moore, David T.; Li, Zhen; ...
2017-11-10
Flash-photolysis time-resolved microwave conductivity (fp-TRMC) is a versatile, highly sensitive technique for studying the complex photoconductivity of solution, solid, and gas-phase samples. The purpose of this paper is to provide a standard reference work for experimentalists interested in using microwave conductivity methods to study functional electronic materials, describing how to conduct and calibrate these experiments in order to obtain quantitative results. The main focus of the paper is on calculating the calibration factor, K, which is used to connect the measured change in microwave power absorption to the conductance of the sample. We describe the standard analytical formulae that have been used in the past, and compare them to numerical simulations. This comparison shows that the most widely used analytical analysis of fp-TRMC data systematically under-estimates the transient conductivity by ~60%. We suggest a more accurate semi-empirical way of calibrating these experiments. However, we emphasize that the full numerical calculation is necessary to quantify both transient and steady-state conductance for arbitrary sample properties and geometry.
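For readers new to the technique, the sketch below shows how the calibration factor K enters the data reduction, converting a fractional microwave power change to a transient photoconductance via ΔP/P = -K ΔG; the numerical values are placeholders, not measurements.

```python
# Convert a measured fractional power change to a photoconductance using an assumed K.
K = 2.0e4                 # calibration factor of the cavity/sample geometry, 1/S (assumed)
dP_over_P = -1.5e-4       # measured fractional change in microwave power (assumed)

delta_G = -dP_over_P / K  # transient photoconductance, siemens
print(f"transient photoconductance ~ {delta_G:.2e} S")
```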
Simulation and analysis of a geopotential research mission
NASA Technical Reports Server (NTRS)
Schutz, B. E.
1987-01-01
Computer simulations were performed for a Geopotential Research Mission (GRM) to enable the study of the gravitational sensitivity of the range-rate measurements between the two satellites and to provide a set of simulated measurements to assist in the evaluation of techniques developed for the determination of the gravity field. The simulations were conducted with two satellites in near-circular, frozen orbits at 160 km altitude separated by 300 km. High-precision numerical integration of the polar orbits was used with a gravitational field complete to degree and order 360. The set of simulated data for a mission duration of about 32 days was generated on a Cray X-MP computer. The results presented cover the most recent simulation, S8703, and include a summary of the numerical integration of the simulated trajectories, a summary of the requirements to compute nominal reference trajectories to meet the initial orbit determination requirements for the recovery of the geopotential, an analysis of the nature of the one-way integrated Doppler measurements associated with the simulation, and a discussion of the data set to be made available.
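The range-rate observable between the two satellites is simply their relative velocity projected onto the line-of-sight unit vector; the sketch below computes it for a pair of illustrative state vectors (the numbers are placeholders, not GRM simulation output).

```python
import numpy as np

def range_rate(r1, v1, r2, v2):
    """Inter-satellite range rate: relative velocity projected onto the
    line-of-sight unit vector between the two spacecraft."""
    dr = np.asarray(r2) - np.asarray(r1)
    dv = np.asarray(v2) - np.asarray(v1)
    return np.dot(dr, dv) / np.linalg.norm(dr)

# Illustrative position (km) and velocity (km/s) states, roughly 300 km apart.
r1 = np.array([6538.0, 0.0, 0.0]);   v1 = np.array([0.0, 7.81, 0.0])
r2 = np.array([6536.0, 300.0, 0.0]); v2 = np.array([-0.36, 7.80, 0.0])
print(range_rate(r1, v1, r2, v2))   # km/s
```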
Wang, John D.; Swain, Eric D.; Wolfert, Melinda A.; Langevin, Christian D.; James, Dawn E.; Telis, Pamela A.
2007-01-01
The Comprehensive Everglades Restoration Plan requires numerical modeling to achieve a sufficient understanding of coastal freshwater flows, nutrient sources, and the evaluation of management alternatives to restore the ecosystem of southern Florida. Numerical models include a regional water-management model to represent restoration changes to the hydrology of southern Florida and a hydrodynamic model to represent the southern and western offshore waters. The coastal interface between these two systems, however, has complex surface-water/ground-water and freshwater/saltwater interactions and requires a specialized modeling effort. The Flow and Transport in a Linked Overland/Aquifer Density Dependent System (FTLOADDS) code was developed to represent connected surface- and ground-water systems with variable-density flow. The first use of FTLOADDS is the Southern Inland and Coastal Systems (SICS) application to the southeastern part of the Everglades/Florida Bay coastal region. The need to (1) expand the domain of the numerical modeling into most of Everglades National Park and the western coastal area, and (2) better represent the effect of water-delivery control structures, led to the application of the FTLOADDS code to the Tides and Inflows in the Mangroves of the Everglades (TIME) domain. This application allows the model to address a broader range of hydrologic issues and incorporate new code modifications. The surface-water hydrology is of primary interest to water managers, and is the main focus of this study. The coupling to ground water, however, was necessary to accurately represent leakage exchange between the surface water and ground water, which transfers substantial volumes of water and salt. Initial calibration and analysis of the TIME application produced simulated results that compare well statistically with field-measured values. A comparison of TIME simulation results to previous SICS results shows improved capabilities, particularly in the representation of coastal flows. This improvement most likely is due to a more stable numerical representation of the coastal creek outlets. Sensitivity analyses were performed by varying frictional resistance, leakage, barriers to flow, and topography. Changing frictional resistance values in inland areas was shown to improve water-level representation locally, but to have a negligible effect on area-wide values. These changes have only local effects and are not physically based (as are the unchanged values), and thus have limited validity. Sensitivity tests indicate that the overall accuracy of the simulation is diminished if leakage between surface water and ground water is not simulated. The inclusion of a major road as a complete barrier to surface-water flow influenced the local distribution and timing of flow; however, the changes in total flow and individual creekflows were negligible. The model land-surface altitude was lowered by 0.1 meter to determine the sensitivity to topographic variation. This topographic sensitivity test produced mixed results in matching field data. Overall, the representation of stage did not improve definitively. A final calibration utilized the results of the sensitivity analysis to refine the TIME application. 
To accomplish this calibration, the friction coefficient was reduced at the northern boundary inflow and increased in the southwestern corner of the model, the evapotranspiration function was varied, additional data were used for the ground-water head boundary along the southeast, and the frictional resistance of the primary coastal creek outlet was increased. The calibration improved the match between measured and simulated total flows to Florida Bay and coastal salinities. Agreement also was improved at most of the water-level sites throughout the model domain.
Infants use relative numerical group size to infer social dominance
Pun, Anthea; Birch, Susan A. J.; Baron, Andrew Scott
2016-01-01
Detecting dominance relationships, within and across species, provides a clear fitness advantage because this ability helps individuals assess their potential risk of injury before engaging in a competition. Previous research has demonstrated that 10- to 13-mo-old infants can represent the dominance relationship between two agents in terms of their physical size (larger agent = more dominant), whereas younger infants fail to do so. It is unclear whether infants younger than 10 mo fail to represent dominance relationships in general, or whether they lack sensitivity to physical size as a cue to dominance. Two studies explored whether infants, like many species across the animal kingdom, use numerical group size to assess dominance relationships and whether this capacity emerges before their sensitivity to physical size. A third study ruled out an alternative explanation for our findings. Across these studies, we report that infants 6–12 mo of age use numerical group size to infer dominance relationships. Specifically, preverbal infants expect an agent from a numerically larger group to win in a right-of-way competition against an agent from a numerically smaller group. In addition, this is, to our knowledge, the first study to demonstrate that infants 6–9 mo of age are capable of understanding social dominance relations. These results demonstrate that infants’ understanding of social dominance relations may be based on evolutionarily relevant cues and reveal infants’ early sensitivity to an important adaptive function of social groups. PMID:26884199
NASA Astrophysics Data System (ADS)
West, Loyd Travis
Site characterization is an essential aspect of hazard analysis and the time-averaged shear-wave velocity to 30 m depth "Vs30" for site-class has become a critical parameter in site-specific and probabilistic hazard analysis. Yet, the general applicability of Vs30 can be ambiguous and much debate and research surround its application. In 2007, in part to mitigate the uncertainty associated with the use of Vs30 in Las Vegas Valley, the Clark County Building Department (CCBD) in collaboration with the Nevada System of Higher Education (NSHE) embarked on an endeavor to map Vs30 using a geophysical methods approach for a site-class microzonation map of over 500 square miles (1500 km2) in southern Nevada. The resulting dataset, described by Pancha et al. (2017), contains over 10,700 1D shear-wave-velocity-depth profiles (SWVP) that constitute a rich database of 3D shear-wave velocity structure that is both laterally and vertically heterogeneous. This study capitalizes on the uniquely detailed and spatially dense CCBD database to carry out sensitivity tests on the detailed shear-wave-velocity profiles and the Vs30 utilizing 1D and 3D site-response approaches. Sensitivity tests are derived from the 1D response of a single-degree-of-freedom oscillator and from 3D finite-difference deterministic simulations up to 15 Hz frequency using similar model parameters. Results demonstrate that the detailed SWVPs amplify ground motions by roughly 50% over the simple Vs30 models, above 4.6 Hz frequency. Numerical simulations also depict significant lateral resonance, focusing, and scattering of seismic energy attributable to the 3D small-scale heterogeneities of the shear-wave-velocity profiles that result in a 70% increase in peak ground velocity. Additionally, PGV ratio maps clearly establish that the increased amplification from the detailed SWVPs is consistent throughout the model space. As a corollary, this study demonstrates the use of finite-difference numerical methods to simulate ground motions at high frequencies, up to 15 Hz.
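For readers unfamiliar with the site-class parameter, Vs30 is the travel-time average of shear-wave velocity over the top 30 m, Vs30 = 30 / Σ(h_i/v_i); the sketch below evaluates it for a hypothetical layered profile (not a profile from the CCBD database).

```python
def vs30(thicknesses_m, velocities_mps):
    """Time-averaged shear-wave velocity over the top 30 m:
    Vs30 = 30 / sum(h_i / v_i), truncating the profile at 30 m depth."""
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_mps):
        if depth >= 30.0:
            break
        h_used = min(h, 30.0 - depth)
        travel_time += h_used / v
        depth += h_used
    return 30.0 / travel_time

# Hypothetical layered profile: thicknesses in m, velocities in m/s.
print(vs30([5.0, 10.0, 20.0], [250.0, 400.0, 700.0]))
```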
A Scalable Nonuniform Pointer Analysis for Embedded Programs
NASA Technical Reports Server (NTRS)
Venet, Arnaud
2004-01-01
In this paper we present a scalable pointer analysis for embedded applications that is able to distinguish between instances of recursively defined data structures and elements of arrays. The main contribution consists of an efficient yet precise algorithm that can handle multithreaded programs. We first perform an inexpensive flow-sensitive analysis of each function in the program that generates semantic equations describing the effect of the function on the memory graph. These equations bear numerical constraints that describe nonuniform points-to relationships. We then iteratively solve these equations in order to obtain an abstract storage graph that describes the shape of data structures at every point of the program for all possible thread interleavings. We bring experimental evidence that this approach is tractable and precise for real-size embedded applications.
Mirzazadeh, Ali; Mansournia, Mohammad-Ali; Nedjat, Saharnaz; Navadeh, Soodabeh; McFarland, Willi; Haghdoost, Ali Akbar; Mohammad, Kazem
2013-10-01
We present probabilistic and Bayesian techniques to correct for bias in categorical and numerical measures and empirically apply them to a recent survey of female sex workers (FSW) conducted in Iran. We used bias parameters from a previous validation study to correct estimates of behaviours reported by FSW. Monte-Carlo Sensitivity Analysis and Bayesian bias analysis produced point and simulation intervals (SI). The apparent and corrected prevalence differed by a minimum of 1% for the number of 'non-condom use sexual acts' (36.8% vs 35.8%) to a maximum of 33% for 'ever associated with a venue to sell sex' (35.5% vs 68.0%). The negative predictive value of the questionnaire for 'history of STI' and 'ever associated with a venue to sell sex' was 36.3% (95% SI 4.2% to 69.1%) and 46.9% (95% SI 6.3% to 79.1%), respectively. Bias-adjusted numerical measures of behaviours increased by 0.1 year for 'age at first sex act for money' to 1.5 for 'number of sexual contacts in last 7 days'. The 'true' estimates of most behaviours are considerably higher than those reported and the related SIs are wider than conventional CIs. Our analysis indicates the need for and applicability of bias analysis in surveys, particularly in stigmatised settings.
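A minimal sketch of the Monte-Carlo sensitivity analysis idea described above, assuming a Rogan-Gladen-type misclassification correction and illustrative beta distributions for the bias parameters; the distributions and the apparent prevalence are placeholders, not the validation-study values.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrected_prevalence(p_apparent, se, sp):
    """Rogan-Gladen-type misclassification correction:
    p_true = (p_apparent + sp - 1) / (se + sp - 1)."""
    return (p_apparent + sp - 1.0) / (se + sp - 1.0)

# Monte-Carlo sensitivity analysis with assumed bias-parameter distributions.
p_apparent = 0.355                    # apparent prevalence of a behaviour
se = rng.beta(40, 10, size=20000)     # sensitivity centred near 0.8
sp = rng.beta(45, 5, size=20000)      # specificity centred near 0.9
p_true = corrected_prevalence(p_apparent, se, sp)
p_true = p_true[(p_true > 0.0) & (p_true < 1.0)]   # keep admissible draws
print(np.percentile(p_true, [2.5, 50.0, 97.5]))    # simulation interval
```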
Nanosatellite constellation deployment using on-board magnetic torquer interaction with space plasma
NASA Astrophysics Data System (ADS)
Park, Ji Hyun; Matsuzawa, Shinji; Inamori, Takaya; Jeung, In-Seuck
2018-04-01
One of the advantages that drive nanosatellite development is the potential for multi-point observation through constellation operation. However, constellation deployment of nanosatellites has been a challenge, as thruster operations for orbit maneuvers are limited by mass, volume, and power constraints. Recently, a de-orbiting mechanism using magnetic torquer interaction with space plasma, so-called plasma drag, has been introduced. As neither additional hardware nor propellant is required, plasma drag has the potential to be used as a constellation deployment method. In this research, a novel constellation deployment method using plasma drag is proposed. The orbit decay rate of the satellites in a constellation is controlled using plasma drag in order to achieve a desired phase angle and phase angle rate. A simplified 1D problem is formulated for an elementary analysis of the constellation deployment time. Numerical simulations are further performed to assess the analytical results and to carry out a sensitivity analysis. The analytical analysis and numerical simulation results both agree that the constellation deployment time is proportional to the inverse square root of the magnetic moment, the square root of the desired phase angle, and the square root of the satellite mass. CubeSats ranging from 1 to 3 U (1-3 kg nanosatellites) are examined in order to investigate the feasibility of plasma drag constellation deployment on nanosatellite systems. The feasibility analysis results show that plasma drag constellation deployment is feasible on CubeSats, which opens up the possibility of CubeSat constellation missions.
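The reported scaling of deployment time with satellite mass, desired phase angle, and magnetic moment can be summarized as t ∝ √(m·Δφ/μ); the sketch below compares two hypothetical CubeSats under that scaling (the magnetic moments are assumed values, not taken from the paper).

```python
import math

def relative_deployment_time(mass_kg, phase_angle_rad, magnetic_moment_Am2):
    """Value proportional to deployment time under the reported scaling
    t ∝ sqrt(mass * phase_angle / magnetic_moment); units are arbitrary."""
    return math.sqrt(mass_kg * phase_angle_rad / magnetic_moment_Am2)

# Compare a 1U and a 3U CubeSat for the same target phase angle.
t_1u = relative_deployment_time(1.0, math.pi, 0.2)
t_3u = relative_deployment_time(3.0, math.pi, 0.8)
print(t_3u / t_1u)   # heavier satellite but stronger torquer
```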
Kheyfets, Vitaly O; Kieweg, Sarah L
2013-06-01
HIV/AIDS is a growing global pandemic. A microbicide is a formulation of a pharmaceutical agent suspended in a delivery vehicle, and can be used by women to protect themselves against HIV infection during intercourse. We have developed a three-dimensional (3D) computational model of a shear-thinning power-law fluid spreading under the influence of gravity to represent the distribution of a microbicide gel over the vaginal epithelium. This model, accompanied by a new experimental methodology, is a step in developing a tool for optimizing a delivery vehicle's structure/function relationship for clinical application. We compare our model with experiments in order to identify critical considerations for simulating 3D free-surface flows of shear-thinning fluids. Here we found that neglecting lateral spreading, when modeling gravity-induced flow, resulted in up to 47% overestimation of the experimental axial spreading after 90 s. In contrast, the inclusion of lateral spreading in 3D computational models resulted in rms errors in axial spreading under 7%. In addition, the choice of the initial condition for shape in the numerical simulation influences the model's ability to describe early-time spreading behavior. Finally, we present a parametric study and sensitivity analysis of the power-law parameters' influence on axial spreading and examine the impact of changing rheological properties as a result of dilution or formulation conditions. Both the shear-thinning index (n) and consistency (m) impacted the spreading length and deceleration of the moving front. The sensitivity analysis showed that gels with midrange m and n values (for the ranges in this study) would be most sensitive (over 8% changes in spreading length) to 10% changes (e.g., from dilution) in both rheological properties. This work is applicable to many industrial and geophysical thin-film flow applications of non-Newtonian fluids, in addition to biological applications in microbicide drug delivery.
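As a simplified stand-in for the sensitivity study described above, the sketch below perturbs the power-law parameters of the Ostwald-de Waele apparent viscosity, η = m·γ̇^(n−1), by 10% and reports the resulting percent change; this is only a local-sensitivity illustration on a proxy quantity, not the authors' 3D spreading model, and the parameter values are assumed.

```python
def apparent_viscosity(m, n, shear_rate):
    """Ostwald-de Waele (power-law) apparent viscosity: eta = m * gamma**(n - 1)."""
    return m * shear_rate ** (n - 1.0)

def percent_change(f, base, key, delta=0.10):
    """Percent change in f when one parameter is increased by delta (e.g. 10% dilution)."""
    perturbed = dict(base)
    perturbed[key] = base[key] * (1.0 + delta)
    return 100.0 * (f(**perturbed) - f(**base)) / f(**base)

base = dict(m=5.0, n=0.5, shear_rate=10.0)   # assumed mid-range gel parameters
for key in ("m", "n"):
    print(key, percent_change(apparent_viscosity, base, key))
```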
Inducer analysis/pump model development
NASA Astrophysics Data System (ADS)
Cheng, Gary C.
1994-03-01
Current design of high performance turbopumps for rocket engines requires effective and robust analytical tools to provide design information in a productive manner. The main goal of this study was to develop a robust and effective computational fluid dynamics (CFD) pump model for general turbopump design and analysis applications. A finite difference Navier-Stokes flow solver, FDNS, which includes an extended k-epsilon turbulence model and appropriate moving zonal interface boundary conditions, was developed to analyze turbulent flows in turbomachinery devices. In the present study, three key components of the turbopump, the inducer, impeller, and diffuser, were investigated by the proposed pump model, and the numerical results were benchmarked by the experimental data provided by Rocketdyne. For the numerical calculation of inducer flows with tip clearance, the turbulence model and grid spacing are very important. Meanwhile, the development of the cross-stream secondary flow, generated by curved blade passage and the flow through tip leakage, has a strong effect on the inducer flow. Hence, the prediction of the inducer performance critically depends on whether the numerical scheme of the pump model can simulate the secondary flow pattern accurately or not. The impeller and diffuser, however, are dominated by pressure-driven flows such that the effects of turbulence model and grid spacing (except near leading and trailing edges of blades) are less sensitive. The present CFD pump model has been proved to be an efficient and robust analytical tool for pump design due to its very compact numerical structure (requiring small memory), fast turnaround computing time, and versatility for different geometries.
NASA Astrophysics Data System (ADS)
Sanaga, S.; Vijay, S.; Kbvn, P.; Peddinti, S. R.; P S L, S.
2017-12-01
Fractured geologic media pose formidable challenges to hydrogeologists because of the difficulty of mapping the fracture-matrix system and quantifying flow and transport processes. In this research, we demonstrated the efficacy of tracer-ERT studies coupled with numerical simulations to delineate preferential flow paths in a fractured granite aquifer of the Deccan traps in India. A series of natural-gradient saline tracer experiments were conducted from a depth window of 18 to 22 m in an injection well located inside the IIT Hyderabad campus. Tracer migration was monitored in a time-lapse mode using two cross-sectional surface ERT profiles placed in the direction of the flow gradient. Dynamic changes in sub-surface electrical properties inferred via resistivity anomalies were used to highlight preferential flow paths of the study area. ERT-derived tracer breakthrough curves were in agreement with geochemical sample measurements (R2=0.74). Fracture geometry and hydraulic properties derived from ERT and pumping tests were then used to evaluate two mathematical conceptualizations that are relevant to fractured aquifers. Results of the numerical analysis indicate that a dual-continuum model that couples the matrix and fracture systems through a flow exchange term outperforms the equivalent-continuum model in reproducing tracer concentrations at the monitoring wells (evident from a decrease in RMSE from 199 mg/l to 65 mg/l). A sensitivity analysis of the model parameters reveals that spatial variability in hydraulic conductivity, local-scale dispersion, and flow exchange at the fracture-matrix interface have a profound effect on model simulations. Keywords: saline tracer, ERT, fractured granite, groundwater, preferential flow, numerical simulation
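A small helper of the kind used to compare the two conceptualizations: root-mean-square error between simulated and observed breakthrough concentrations (the concentration values below are hypothetical, not the field data).

```python
import numpy as np

def rmse(simulated, observed):
    """Root-mean-square error between simulated and observed concentrations."""
    simulated, observed = np.asarray(simulated, float), np.asarray(observed, float)
    return np.sqrt(np.mean((simulated - observed) ** 2))

# Hypothetical breakthrough concentrations (mg/l) at a monitoring well.
observed        = [10, 120, 480, 350, 180, 60]
equiv_continuum = [ 5,  60, 300, 420, 260, 150]
dual_continuum  = [12, 110, 450, 330, 200, 80]
print(rmse(equiv_continuum, observed), rmse(dual_continuum, observed))
```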
Powell, Brian S; Kerry, Colette G; Cornuelle, Bruce D
2013-10-01
Measurements of acoustic ray travel-times in the ocean provide synoptic integrals of the ocean state between source and receiver. It is known that the ray travel-time is sensitive to variations in the ocean at the transmission time, but the sensitivity of the travel-time to spatial variations in the ocean prior to the acoustic transmission has not been quantified. This study examines the sensitivity of ray travel-time to the temporally and spatially evolving ocean state in the Philippine Sea using the adjoint of a numerical model. A one-year series of five-day backward integrations of the adjoint model quantifies the sensitivity of travel-times to varying dynamics that can alter the travel-time of a 611 km ray by 200 ms. The early evolution of the sensitivities reveals high-mode internal waves that dissipate quickly, leaving the lowest three modes, providing a connection to variations in the internal tide generation prior to the sample time. The travel-times are also strongly sensitive to advective effects that alter density along the ray path. These sensitivities reveal how travel-time measurements are affected by both nearby and distant waters. Temporal nonlinearity of the sensitivities suggests that prior knowledge of the ocean state is necessary to exploit the travel-time observations.
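To first order, a ray travel time t = Σ ds_i/c_i responds to a sound-speed perturbation as δt ≈ −Σ ds_i·δc_i/c_i²; the sketch below evaluates this for an assumed 611 km path and an illustrative warm anomaly (the values are placeholders, not Philippine Sea fields).

```python
import numpy as np

def travel_time(ds, c):
    """Ray travel time along a discretized path: t = sum(ds_i / c_i)."""
    return np.sum(ds / c)

def travel_time_perturbation(ds, c, dc):
    """First-order sensitivity to a sound-speed perturbation: dt ≈ -sum(ds_i * dc_i / c_i**2)."""
    return -np.sum(ds * dc / c**2)

ds = np.full(611, 1000.0)                 # 611 segments of 1 km (m)
c = np.full(611, 1500.0)                  # reference sound speed (m/s)
dc = np.zeros(611); dc[300:350] = 0.5     # 0.5 m/s anomaly over 50 km
print(travel_time(ds, c), travel_time_perturbation(ds, c, dc))   # s, s
```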
NASA Astrophysics Data System (ADS)
Marquet, P.; Rothenfusser, K.; Rappaz, B.; Depeursinge, C.; Jourdain, P.; Magistretti, P. J.
2016-03-01
Quantitative phase microscopy (QPM) has recently emerged as a powerful label-free technique in the field of living cell imaging, allowing cell structure and dynamics to be measured non-invasively with nanometric axial sensitivity. Since the phase retardation of a light wave transmitted through the observed cells, namely the quantitative phase signal (QPS), is sensitive to both cellular thickness and the intracellular refractive index related to the cellular content, its accurate analysis allows various cell parameters to be derived and specific cell processes to be monitored, which is very likely to yield new cell biomarkers. Specifically, quantitative phase-digital holographic microscopy (QP-DHM), thanks to its numerical flexibility facilitating parallelization and automation processes, represents an appealing imaging modality both to identify original cellular biomarkers of disease and to explore the underlying pathophysiological processes.
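The quantitative phase signal is commonly written as φ = (2π/λ)(n_cell − n_medium)·h; assuming that standard relation, the sketch below inverts it to recover cell thickness from a measured phase (the wavelength and refractive indices are illustrative assumptions, not values from this work).

```python
import numpy as np

def cell_thickness_from_phase(phase_rad, wavelength_nm, n_cell, n_medium):
    """Invert phi = (2*pi/lambda) * (n_cell - n_medium) * h to recover
    cell thickness h, returned in micrometres."""
    h_nm = phase_rad * wavelength_nm / (2.0 * np.pi * (n_cell - n_medium))
    return h_nm / 1000.0

# Assumed values: 682.5 nm source, cytoplasmic index 1.375, medium index 1.3345.
print(cell_thickness_from_phase(1.2, 682.5, 1.375, 1.3345))   # ~3.2 um
```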
A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves
NASA Astrophysics Data System (ADS)
Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.
2012-04-01
The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which are the input data that most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually conducted with a Monte Carlo approach which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using a development of the model outputs in a neighborhood of the reference parameter values with a second-order approximation. The comparison of the three sensitivity indices proved that the approximation of the non-linear model with a second-order expansion is sufficient to show some differences between the local and the global indices. As a general result, the sensitivity analysis showed that most of the model outcomes are mainly sensitive to the present-day surface temperature and accumulation, which, in principle, can be measured more easily (e.g., with remote sensing techniques) than the other input parameters considered. On the other hand, the parameters to which the model resulted less sensitive are the basal sliding coefficient and the mean ice shelves viscosity.
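A minimal sketch of the local sensitivity indices mentioned above, estimated by central finite differences and normalized by the reference parameter and output values; the toy model merely stands in for an ice-sheet output and is purely illustrative.

```python
import numpy as np

def local_sensitivity(model, p0, rel_step=0.01):
    """Normalized local sensitivity indices S_i = (p_i / y0) * dy/dp_i,
    estimated with central finite differences around the reference point p0."""
    p0 = np.asarray(p0, dtype=float)
    y0 = model(p0)
    s = np.zeros_like(p0)
    for i in range(p0.size):
        dp = rel_step * p0[i]
        hi, lo = p0.copy(), p0.copy()
        hi[i] += dp
        lo[i] -= dp
        s[i] = (model(hi) - model(lo)) / (2.0 * dp) * p0[i] / y0
    return s

# Toy stand-in for an ice-sheet output (e.g. ice volume) as a function of
# [surface temperature scale, accumulation, sliding coefficient].
toy = lambda p: p[1] ** 1.5 / (1.0 + 0.05 * p[0]) + 0.01 * p[2]
print(local_sensitivity(toy, [10.0, 0.3, 2.0]))
```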
Navier-Stokes Analysis of the Flowfield Characteristics of an Ice Contaminated Aircraft Wing
NASA Technical Reports Server (NTRS)
Chung, J.; Choo, Y.; Reehorst, A.; Potapczuk, M.; Slater, J.
1999-01-01
An analytical study was performed as part of the NASA Lewis support of a National Transportation Safety Board (NTSB) aircraft accident investigation. The study was focused on the performance degradation associated with ice contamination on the wing of a commercial turbo-prop-powered aircraft. Based upon the results of an earlier numerical study conducted by the authors, a prominent ridged-ice formation on the subject aircraft wing was selected for detailed flow analysis using 2-dimensional (2-D) as well as 3-dimensional (3-D) Navier-Stokes computations. This configuration was selected because it caused the largest lift decrease and drag increase among all the ice shapes investigated in the earlier study. A grid sensitivity test was performed to determine the influence of grid spacing on the lift, drag, and associated angle-of-attack for the maximum lift (C(sub lmax)). This study showed that grid resolution is important and that a sensitivity analysis is an essential element of the process in order to assure that the final solution is independent of the grid. The 2-D results suggested that a severe stability and control difficulty could have occurred at a slightly higher angle-of-attack (AOA) than the one recorded by the Flight Data Recorder (FDR). This stability and control problem was thought to have resulted from a decreased differential lift on the wings with respect to the normal loading for the configuration. The analysis also indicated that this stability and control problem could have occurred whether or not natural ice shedding took place. Numerical results using an assumed 3-D ice shape showed that the angle at which this phenomenon occurred increased by about 4 degrees. As in the 2-D case, trailing-edge separation was observed, but it started only when the AOA was very close to the angle at which the maximum lift occurred.
Mathematical model for transmission of tuberculosis in badger population with vaccination
NASA Astrophysics Data System (ADS)
Tasmi, Aldila, D.; Soewono, E.; Nuraini, N.
2016-04-01
The badger was first identified as a carrier of bovine tuberculosis in England about 30 years ago. Bovine tuberculosis can be transmitted to other species through faeces, saliva, and breath. The control of tuberculosis in badgers is necessary to reduce the spread of the disease to other species. Many actions have been taken by the government to tackle the disease, such as culling badgers with cyanide gas, but this approach destroys the natural balance and disrupts the badger population. An alternative way to eliminate tuberculosis within the badger population is vaccination. In this paper, a model for the transmission of badger tuberculosis with vaccination is discussed. The existence of the endemic equilibrium, its stability, and the basic reproduction ratio are derived analytically. Numerical simulations show that with a proper vaccination level, the basic reproduction ratio can be reduced significantly. A sensitivity analysis with respect to parameter variations is presented numerically.
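A generic sketch, not the paper's badger model: an SIR-type system in which a fraction v of newborns is vaccinated, so that the effective reproduction ratio becomes R_eff = (1 − v)·β/(γ + μ); all rates below are assumed values chosen only for illustration.

```python
from scipy.integrate import solve_ivp

# Assumed rates (per year): transmission, recovery, birth/death, vaccinated fraction.
beta, gamma, mu, v = 0.8, 0.2, 0.05, 0.5

def rhs(t, y):
    S, I, R = y
    N = S + I + R
    dS = mu * N * (1.0 - v) - beta * S * I / N - mu * S   # unvaccinated births
    dI = beta * S * I / N - (gamma + mu) * I
    dR = mu * N * v + gamma * I - mu * R                  # vaccinated births + recovered
    return [dS, dI, dR]

sol = solve_ivp(rhs, (0.0, 100.0), [0.99, 0.01, 0.0])
print("R_eff =", (1.0 - v) * beta / (gamma + mu), "  I(t=100) =", sol.y[1, -1])
```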
NASA Astrophysics Data System (ADS)
de la Cruz, Javier; Cano, Ulises; Romero, Tatiana
2016-10-01
A critical parameter for a PEM fuel cell's electric contact is the nominal clamping pressure. Predicting the mechanical behavior of all components in a fuel cell stack is a very complex task due to the diversity of material properties. Prior to the integration of a 3 kW PEMFC power plant, a numerical simulation was performed in order to obtain the mechanical stress distribution for two of the most pressure-sensitive components of the stack: the membrane and the graphite plates. The stress distribution of the above-mentioned components was numerically simulated by finite element analysis, and the stress magnitude for the membrane was confirmed using pressure films. Stress values were found to lie within the elastic zone, which guarantees the mechanical integrity of the fuel cell components. These low stress levels, particularly for the membrane, will help prolong the life and integrity of the fuel cell stack according to its design specifications.
Anomalous-hydrodynamic analysis of charge-dependent elliptic flow in heavy-ion collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hongo, Masaru; Hirono, Yuji; Hirano, Tetsufumi
2017-12-10
Anomalous hydrodynamics is a low-energy effective theory that captures effects of quantum anomalies. We develop a numerical code of anomalous hydrodynamics and apply it to dynamics of heavy-ion collisions, where anomalous transports are expected to occur. This is the first attempt to perform fully non-linear numerical simulations of anomalous hydrodynamics. We discuss implications of the simulations for possible experimental observations of anomalous transport effects. From analyses of the charge-dependent elliptic flow parameters (v2^±) as a function of the net charge asymmetry A±, we find that the linear dependence of Δv2^± ≡ v2^− − v2^+ on the net charge asymmetry A± cannot be regarded as a robust signal of anomalous transports, contrary to previous studies. We, however, find that the intercept Δv2^±(A± = 0) is sensitive to anomalous transport effects.
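The quantity of interest above is the intercept of Δv2^± versus A± rather than the slope; a sketch of extracting it with a simple linear fit over synthetic points follows (the numbers are illustrative, not simulation output).

```python
import numpy as np

# Synthetic (A±, Δv2±) points; only the intercept at A± = 0 carries the
# anomalous-transport signal in the analysis described above.
A = np.array([-0.04, -0.02, 0.0, 0.02, 0.04])
dv2 = np.array([0.0010, 0.0016, 0.0021, 0.0027, 0.0032])

slope, intercept = np.polyfit(A, dv2, 1)   # degree-1 least-squares fit
print("slope =", slope, "  intercept dv2(A=0) =", intercept)
```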
Performance Optimization of Marine Science and Numerical Modeling on HPC Cluster
Yang, Dongdong; Yang, Hailong; Wang, Luming; Zhou, Yucong; Zhang, Zhiyuan; Wang, Rui; Liu, Yi
2017-01-01
Marine science and numerical modeling (MASNUM) is widely used in forecasting ocean wave movement by simulating the variation tendency of ocean waves. Although existing work has devoted effort to improving the performance of MASNUM from various aspects, there is still considerable room for further performance improvement. In this paper, we aim at improving the performance of the propagation solver and of data access during the simulation, in addition to the efficiency of output I/O and load balance. Our optimizations include several effective techniques such as algorithm redesign, load distribution optimization, parallel I/O, and data access optimization. The experimental results demonstrate that our approach achieves higher performance than the state-of-the-art work, with about a 3.5x speedup and no degradation of prediction accuracy. In addition, a parameter sensitivity analysis shows that our optimizations are effective under various topography resolutions and output frequencies. PMID:28045972
Molecular diagnostics for the detection and characterization of microbial pathogens.
Procop, Gary W
2007-09-01
New and advanced methods of molecular diagnostics are changing the way we practice clinical microbiology, which affects the practice of medicine. Signal amplification and real-time nucleic acid amplification technologies offer a sensitive and specific result with a more rapid turnaround time than has ever before been possible. Numerous methods of postamplification analysis afford the simultaneous detection and differentiation of numerous microbial pathogens, their mechanisms of resistance, and the construction of disease-specific assays. The technical feasibility of these assays has already been demonstrated. How these new, often more expensive tests will be incorporated into routine practice and the impact they will have on patient care remain to be determined. One of the most attractive uses for such techniques is to achieve a more rapid characterization of the infectious agent so that a narrower-spectrum antimicrobial agent may be used, which should have an impact on resistance patterns.
NASA Astrophysics Data System (ADS)
Li, Hong; Peng, Wei; Wang, Yanjie; Hu, Lingling; Liang, Yuzhang; Zhang, Xinpu; Yao, Wenjuan; Yu, Qi; Zhou, Xinlei
2011-12-01
Optical sensors based on nanoparticle-induced Localized Surface Plasmon Resonance (LSPR) are highly sensitive for real-time chemical and biological sensing and have attracted intensive attention in many fields. In this paper, we establish a simulation model based on a nanoparticle-imprinted polymer to increase the sensitivity of the LSPR sensor by detecting changes in the Surface Plasmon Resonance signals. Theoretical analysis and numerical simulation of the effects of parameters on the absorption peak and light field distribution are highlighted. Two-dimensional simulated color maps show that LSPR concentrates the light energy around the gold nanoparticles; the Transverse Magnetic wave and total reflection become the important factors enhancing the light field in our simulated structure. Fast Fourier Transform analysis shows that the absorption peak of the surface plasmon resonance signal arising from gold nanoparticles is sharper and occurs at a longer wavelength than that of silver nanoparticles; a double-chain structure makes the amplitude of the signals smaller and the absorption wavelength longer; and the enhancement peak arising from nanopore arrays has a shorter wavelength and weaker amplitude than that from nanoparticles. These simulation results show that Localized Surface Plasmon Resonance can be used as an enhanced transduction mechanism to improve sensitivity in the recognition and sensing of target analytes according to different requirements.
Thiruppathiraja, Chinnasamy; Kamatchiammal, Senthilkumar; Adaikkappan, Periyakaruppan; Santhosh, Devakirubakaran Jayakar; Alagar, Muthukaruppan
2011-10-01
The present study was aimed at the development and evaluation of a DNA electrochemical biosensor for Mycobacterium sp. genomic DNA detection in clinical specimens using dual-labeled AuNPs as a signal amplifier. The DNA electrochemical biosensors were fabricated using a sandwich detection strategy involving two kinds of DNA probes specific to Mycobacterium sp. genomic DNA. The enzyme (ALP) probe and the detector probe were both conjugated onto the AuNPs and subsequently hybridized with target DNA immobilized on a SAM/ITO electrode, followed by characterization with CV, EIS, and DPV analysis using the electroactive species para-nitrophenol generated by ALP through hydrolysis of para-nitrophenol phosphate. Enhanced sensitivity was obtained because the AuNPs carry numerous ALP molecules per hybridization event, and a detection limit of 1.25 ng/ml genomic DNA was determined under optimized conditions. The dual-labeled AuNP-facilitated electrochemical sensor was also evaluated with clinical sputum samples, showing high sensitivity and specificity, and the outcome was in agreement with the PCR analysis. In conclusion, the developed electrochemical sensor demonstrated unique sensitivity and specificity for both genomic DNA and sputum samples and can be employed as a regular diagnostics tool for Mycobacterium sp. monitoring in clinical samples. Copyright © 2011 Elsevier Inc. All rights reserved.
Guo, Yingkun; Zheng, Hairong; Sun, Phillip Zhe
2015-01-01
Chemical exchange saturation transfer (CEST) MRI is a versatile imaging method that probes the chemical exchange between bulk water and exchangeable protons. CEST imaging indirectly detects dilute labile protons via bulk water signal changes following selective saturation of exchangeable protons, which offers substantial sensitivity enhancement and has sparked numerous biomedical applications. Over the past decade, CEST imaging techniques have rapidly evolved due to contributions from multiple domains, including the development of CEST mathematical models, innovative contrast agent designs, sensitive data acquisition schemes, efficient field inhomogeneity correction algorithms, and quantitative CEST (qCEST) analysis. The CEST system that underlies the apparent CEST-weighted effect, however, is complex. The experimentally measurable CEST effect depends not only on parameters such as CEST agent concentration, pH and temperature, but also on relaxation rate, magnetic field strength and more importantly, experimental parameters including repetition time, RF irradiation amplitude and scheme, and image readout. Thorough understanding of the underlying CEST system using qCEST analysis may augment the diagnostic capability of conventional imaging. In this review, we provide a concise explanation of CEST acquisition methods and processing algorithms, including their advantages and limitations, for optimization and quantification of CEST MRI experiments. PMID:25641791
NASA Astrophysics Data System (ADS)
König, Diethard; Mahmoudi, Elham; Khaledi, Kavan; von Blumenthal, Achim; Schanz, Tom
2016-04-01
The excess electricity produced by renewable energy sources available during off-peak periods of consumption can be used e.g. to produce and compress hydrogen or to compress air. Afterwards the pressurized gas is stored in the rock salt cavities. During this process, thermo-mechanical cyclic loading is applied to the rock salt surrounding the cavern. Compared to the operation of conventional storage caverns in rock salt the frequencies of filling and discharging cycles and therefore the thermo-mechanical loading cycles are much higher, e.g. daily or weekly compared to seasonally or yearly. The stress strain behavior of rock salt as well as the deformation behavior and the stability of caverns in rock salt under such loading conditions are unknown. To overcome this, existing experimental studies have to be supplemented by exploring the behavior of rock salt under combined thermo-mechanical cyclic loading. Existing constitutive relations have to be extended to cover degradation of rock salt under thermo-mechanical cyclic loading. At least the complex system of a cavern in rock salt under these loading conditions has to be analyzed by numerical modeling taking into account the uncertainties due to limited access in large depth to investigate material composition and properties. An interactive evolution concept is presented to link the different components of such a study - experimental modeling, constitutive modeling and numerical modeling. A triaxial experimental setup is designed to characterize the cyclic thermo-mechanical behavior of rock salt. The imposed boundary conditions in the experimental setup are assumed to be similar to the stress state obtained from a full-scale numerical simulation. The computational model relies primarily on the governing constitutive model for predicting the behavior of rock salt cavity. Hence, a sophisticated elasto-viscoplastic creep constitutive model is developed to take into account the dilatancy and damage progress, as well as the temperature effects. The contributed input parameters in the constitutive model are calibrated using the experimental measurements. In the following, the initial numerical simulation is modified based on the introduced constitutive model implemented in a finite element code. However, because of the significant levels of uncertainties involved in the design procedure of such structures, a reliable design can be achieved by employing probabilistic approaches. Therefore, the numerical calculation is extended by statistical tools such as sensitivity analysis, probabilistic analysis and robust reliability-based design. Uncertainties e.g. due to limited site investigation, which is always fragmentary within these depths, can be compensated by using data sets of field measurements for back calculation of input parameters with the developed numerical model. Monitoring concepts can be optimized by identifying sensor localizations e.g. using sensitivity analyses.
Sample preparation: a critical step in the analysis of cholesterol oxidation products.
Georgiou, Christiana A; Constantinou, Michalis S; Kapnissi-Christodoulou, Constantina P
2014-02-15
In recent years, cholesterol oxidation products (COPs) have drawn scientific interest, particularly due to their implications for human health. A large number of these compounds have been demonstrated to be cytotoxic, mutagenic, and carcinogenic. The main source of COPs is the diet, particularly the consumption of cholesterol-rich foods. This raises questions about consumer safety and suggests the need to develop a sensitive and reliable analytical method in order to identify and quantify these components in food samples. Sample preparation is a necessary step in the analysis of COPs in order to eliminate interferences and increase sensitivity. Numerous publications have, over the years, reported the use of different methods for the extraction and purification of COPs. However, no method has, so far, been established as a routine method for the analysis of COPs in foods. Therefore, it was considered important to review different sample preparation procedures and evaluate the different preparative parameters, such as the time of saponification, the type of organic solvents for fat extraction, the stationary phase in solid-phase extraction, etc., according to recovery, precision and simplicity. Copyright © 2013 Elsevier Ltd. All rights reserved.
Development of a new semi-analytical model for cross-borehole flow experiments in fractured media
Roubinet, Delphine; Irving, James; Day-Lewis, Frederick D.
2015-01-01
Analysis of borehole flow logs is a valuable technique for identifying the presence of fractures in the subsurface and estimating properties such as fracture connectivity, transmissivity and storativity. However, such estimation requires the development of analytical and/or numerical modeling tools that are well adapted to the complexity of the problem. In this paper, we present a new semi-analytical formulation for cross-borehole flow in fractured media that links transient vertical-flow velocities measured in one or a series of observation wells during hydraulic forcing to the transmissivity and storativity of the fractures intersected by these wells. In comparison with existing models, our approach presents major improvements in terms of computational expense and potential adaptation to a variety of fracture and experimental configurations. After derivation of the formulation, we demonstrate its application in the context of sensitivity analysis for a relatively simple two-fracture synthetic problem, as well as for field-data analysis to investigate fracture connectivity and estimate fracture hydraulic properties. These applications provide important insights regarding (i) the strong sensitivity of fracture property estimates to the overall connectivity of the system; and (ii) the non-uniqueness of the corresponding inverse problem for realistic fracture configurations.
Measurement Consistency from Magnetic Resonance Images
Chung, Dongjun; Chung, Moo K.; Durtschi, Reid B.; Lindell, R. Gentry; Vorperian, Houri K.
2010-01-01
Rationale and Objectives In quantifying medical images, length-based measurements are still obtained manually. Due to possible human error, a measurement protocol is required to guarantee the consistency of measurements. In this paper, we review various statistical techniques that can be used in determining measurement consistency. The focus is on detecting a possible measurement bias and determining the robustness of the procedures to outliers. Materials and Methods We review correlation analysis, linear regression, Bland-Altman method, paired t-test, and analysis of variance (ANOVA). These techniques were applied to measurements, obtained by two raters, of head and neck structures from magnetic resonance images (MRI). Results The correlation analysis and the linear regression were shown to be insufficient for detecting measurement inconsistency. They are also very sensitive to outliers. The widely used Bland-Altman method is a visualization technique so it lacks the numerical quantification. The paired t-test tends to be sensitive to small measurement bias. On the other hand, ANOVA performs well even under small measurement bias. Conclusion In almost all cases, using only one method is insufficient and it is recommended to use several methods simultaneously. In general, ANOVA performs the best. PMID:18790405
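A short sketch of the kind of comparison discussed above, applying a paired t-test, Bland-Altman limits of agreement, and a correlation coefficient to two raters' measurements; the values are illustrative, not the study's MRI data.

```python
import numpy as np
from scipy import stats

# Two raters' measurements (mm) of the same structures (illustrative values).
rater1 = np.array([12.1, 15.4, 9.8, 20.3, 14.7, 11.2, 18.9, 16.5])
rater2 = np.array([12.4, 15.1, 10.2, 20.8, 14.9, 11.0, 19.4, 16.2])

diff = rater1 - rater2
t, p = stats.ttest_rel(rater1, rater2)        # paired t-test for measurement bias
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1),        # Bland-Altman 95% limits of agreement
       bias + 1.96 * diff.std(ddof=1))
r, _ = stats.pearsonr(rater1, rater2)         # correlation alone cannot detect bias
print(p, bias, loa, r)
```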
Complex blood flow patterns in an idealized left ventricle: A numerical study
NASA Astrophysics Data System (ADS)
Tagliabue, Anna; Dedè, Luca; Quarteroni, Alfio
2017-09-01
In this paper, we study the blood flow dynamics in a three-dimensional (3D) idealized left ventricle of the human heart whose deformation is driven by muscle contraction and relaxation in coordination with the action of the mitral and aortic valves. We propose a simplified but realistic mathematical treatment of the valves' function based on mixed time-varying boundary conditions (BCs) for the Navier-Stokes equations modeling the flow. These switches in time of the BCs, from natural to essential and vice versa, model the open and closed configurations of the valves. At the numerical level, these BCs are enforced by means of the extended Nitsche's method (Tagliabue et al., Int. J. Numer. Methods Fluids, 2017). Numerical results for the 3D idealized left ventricle obtained by means of Isogeometric Analysis are presented, discussed in terms of both instantaneous and phase-averaged quantities of interest, and validated against those available in the literature, both experimental and computational. The complex blood flow patterns are analysed to describe the characteristic fluid properties, to show the transitional nature of the flow, and to highlight its main features inside the left ventricle. The sensitivity of the intraventricular flow patterns to the mitral valve properties is also investigated.
The precipitation forecast sensitivity to data assimilation on a very high resolution domain
NASA Astrophysics Data System (ADS)
Palamarchuk, Iuliia; Ivanov, Sergiy; Ruban, Igor
2016-04-01
Recent developments in computing technologies allow the implementation of very high resolution in numerical weather prediction models. As a result, simulation and quantitative analysis of mesoscale processes with a horizontal scale of a few kilometers have become feasible. This is crucially important for studies of precipitation, including its life cycle. However, these new opportunities also require revisiting existing knowledge, both in meteorology and in numerics. The latter is associated, in particular, with the formulation of the initial conditions through data assimilation. Precipitation prediction appears quite sensitive to the assimilation techniques applied, the types of observational data, and the spatial resolution. The impact of data assimilation on the resulting fields is presented using the Harmonie-38h1.2 model with the AROME physical package. The numerical experiments were performed for the Finland domain with a horizontal grid of 2.5 km and 65 vertical levels for the August 2010 period covering the BaltRad experiment. The formulation of the initial conditions included downscaling from the MARS archive and incorporating observations through 3DVAR data assimilation. Both conventional and radar observations were treated in the numerical experiments. The former included the SYNOP, SHIP, PILOT, TEMP, AIREP and DRIBU types. The background error covariances required for the variational assimilation had already been computed from the ensemble perturbed analysis with the purely statistical balance by the HIRLAM community. Deviations among the model runs initialized from the MARS archive and from conventional and radar data assimilation were complex. The focus is therefore on how the model system reacts to the inclusion of observations. The contribution from observed variables included in the control vector, such as humidity and temperature, was expected to be the largest. Nevertheless, revealing such an impact is not a straightforward task. Major changes occur within the lower 3-km layer of the atmosphere for all predicted variables. However, those changes were not directly associated with observation locations, as single-observation experiments often suggest. Moreover, the model response to observations with lead time produces weak mesoscale spots of opposite signs. Special attention is paid to the precipitation, cloud and rain water, and vertical velocity fields. A complex chain of interactions among radiation, temperature, humidity, stratification and other atmospheric characteristics results in changes of local updraft and downdraft flows and in the subsequent cloud formation processes and precipitation release. One can assume that these features arise from both atmospheric physics and numerical effects. The latter become more evident in simulations on very high resolution domains.
Lens-free microscopy of cerebrospinal fluid for the laboratory diagnosis of meningitis
NASA Astrophysics Data System (ADS)
Delacroix, Robin; Morel, Sophie Nhu An; Hervé, Lionel; Bordy, Thomas; Blandin, Pierre; Dinten, Jean-Marc; Drancourt, Michel; Allier, Cédric
2018-02-01
The cytology of the cerebrospinal fluid is traditionally performed by an operator (physician, biologist) by means of a conventional light microscope. The operator visually counts the leukocytes (white blood cells) present in a sample of cerebrospinal fluid (10 μl). It is a tedious job and the result is operator-dependent. Here, in order to circumvent the limitations of manual counting, we approach the question of the numeration of erythrocytes and leukocytes for the cytological diagnosis of meningitis by means of lens-free microscopy. In a first step, a prospective count of leukocytes was performed by five different operators using conventional optical microscopy. The visual counting yielded an overall 16.7% misclassification of 72 cerebrospinal fluid specimens in meningitis/non-meningitis categories using a 10 leukocyte/μL cut-off. In a second step, the lens-free microscopy algorithm was adapted step-by-step for counting cerebrospinal fluid cells and discriminating leukocytes from erythrocytes. The optimization of the automatic lens-free counting was based on the prospective analysis of 215 cerebrospinal fluid specimens. The optimized algorithm yielded a 100% sensitivity and an 86% specificity compared to confirmed diagnoses. In a third step, a blind lens-free microscopic analysis of 116 cerebrospinal fluid specimens, including six cases of microbiologically confirmed infectious meningitis, yielded a 100% sensitivity and a 79% specificity. Adapted lens-free microscopy is thus emerging as an operator-independent technique for the rapid numeration of leukocytes and erythrocytes in cerebrospinal fluid. In particular, this technique is well suited to the rapid diagnosis of meningitis at point-of-care laboratories.
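A minimal sketch of how the reported sensitivity and specificity follow from a leukocyte-count cut-off classification against a reference diagnosis; the counts and labels below are hypothetical, not the study specimens.

```python
def sensitivity_specificity(counts_cells_per_uL, truth_meningitis, cutoff=10):
    """Classify each CSF specimen as meningitis when the leukocyte count reaches
    the cutoff (cells/uL) and compare against the reference diagnosis."""
    tp = fp = tn = fn = 0
    for count, is_meningitis in zip(counts_cells_per_uL, truth_meningitis):
        predicted = count >= cutoff
        if predicted and is_meningitis:
            tp += 1
        elif predicted and not is_meningitis:
            fp += 1
        elif not predicted and is_meningitis:
            fn += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts (cells/uL) and reference labels.
print(sensitivity_specificity([2, 55, 8, 130, 4, 12], [False, True, False, True, False, False]))
```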
NASA Astrophysics Data System (ADS)
Srivastava, S. K., Sr.; Sharma, D. A.; Sachdeva, K.
2017-12-01
The Indo-Gangetic plains of India experience severe fog conditions during the peak winter months of December and January every year. In this paper an attempt has been made to analyze the spatial and temporal variability of winter fog over the Indo-Gangetic plains. Further, an attempt has also been made to configure an efficient meso-scale numerical weather prediction model using different parameterization schemes and to develop a forecasting tool for the prediction of fog during the winter months over the Indo-Gangetic plains. The study revealed that an alarming increasing trend of fog frequency prevails over many locations of the IGP. Hot-spot and cluster analyses were conducted to identify the zones most prone to fog using GIS and inferential statistical tools, respectively. Hot spots on average experience fog on 68.27% of days, followed by moderate and cold spots with 48.03% and 21.79%, respectively. The study proposes a new FASP (Fog Analysis, Sensitivity and Prediction) model for the overall analysis and prediction of fog at a particular location and period over the IGP. In the first phase of this model, long-term climatological fog data for a location are analyzed to determine their characteristics and prevailing trend using various advanced statistical techniques. During the second phase, a sensitivity test is conducted with different combinations of parameterization schemes to determine the most suitable combination for fog simulation over a particular location and period. In the third and final phase, an ARIMA model is first used to predict the number of fog days in the future; thereafter, the numerical model is used to predict the various meteorological parameters favourable for fog, and finally a hybrid model is used for the fog forecast over the study location. The results of the FASP model are validated against actual ground-based fog data using statistical tools. The forecast fog-gram generated using the hybrid model during January 2017 shows highly encouraging results for fog occurrence/non-occurrence at forecast lead times of 25 to 72 hours. The model predicted fog occurrence/non-occurrence with more than 85% accuracy over most of the locations across the study area. The minimum visibility departure is within 500 m on 90% of occasions over the central IGP and within 1000 m on more than 80% of occasions over most locations across the Indo-Gangetic plains.
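A sketch of the ARIMA step in the final phase described above, using statsmodels on a synthetic series of winter fog-day counts; the series and the model order are assumptions for illustration, not the station climatology or the paper's configuration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic yearly counts of December-January fog days at one station.
fog_days = np.array([38, 41, 35, 44, 47, 43, 50, 48, 52, 49, 55, 53], dtype=float)

model = ARIMA(fog_days, order=(1, 1, 1))   # assumed low-order model
fit = model.fit()
print(fit.forecast(steps=2))               # predicted fog days for the next two winters
```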
Zhai, Min; Li, Bing; Li, Dehua
2017-09-01
Resonance frequency analysis (RFA) methods are widely used to assess implant stability, particularly the Osstell ® device. The potential effects associated with this method have been discussed in the literature. Torsional RFA (T-RFA), mentioned in our previous study, could represent a new measurement method. The purpose of this study was to simulate T-shaped and Osstell ® transducer-implant-bone system models; compare their vibration modes and corresponding resonance frequencies; and investigate the effects of their parameters, such as the effective implant length (EIL), bone quality, and osseointegration level, on the torsional resonance frequency (TRF) and bending resonance frequency (BRF) using three-dimensional finite element analysis. Following the finite element model validation, the TRFs and BRFs for three different EILs and four types of bone quality were obtained, and the change rates during 25 degrees of osseointegration were observed. The analysis showed that an increase in the EIL and a decrease in bone quality have less effect on the declination rate of TRFs than on that of BRFs. TRFs are highly sensitive to the stiffness of the implant-bone interface during the healing period. It was concluded that T-RFA has better sensitivity and specificity.
Numerosity processing in early visual cortex.
Fornaciai, Michele; Brannon, Elizabeth M; Woldorff, Marty G; Park, Joonkoo
2017-08-15
While parietal cortex is thought to be critical for representing numerical magnitudes, we recently reported an event-related potential (ERP) study demonstrating selective neural sensitivity to numerosity over midline occipital sites very early in the time course, suggesting the involvement of early visual cortex in numerosity processing. However, which specific brain area underlies such early activation is not known. Here, we tested whether numerosity-sensitive neural signatures arise specifically from the initial stages of visual cortex, aiming to localize the generator of these signals by taking advantage of the distinctive folding pattern of early occipital cortices around the calcarine sulcus, which predicts an inversion of polarity of ERPs arising from these areas when stimuli are presented in the upper versus lower visual field. Dot arrays, including 8-32 dots constructed systematically across various numerical and non-numerical visual attributes, were presented randomly in either the upper or lower visual hemifields. Our results show that neural responses at about 90 ms post-stimulus were robustly sensitive to numerosity. Moreover, the peculiar pattern of polarity inversion of numerosity-sensitive activity at this stage suggested its generation primarily in V2 and V3. In contrast, numerosity-sensitive ERP activity at occipito-parietal channels later in the time course (210-230 ms) did not show polarity inversion, indicating a subsequent processing stage in the dorsal stream. Overall, these results demonstrate that numerosity processing begins in one of the earliest stages of the cortical visual stream. Copyright © 2017 Elsevier Inc. All rights reserved.
Sensitivity and rapidity of vegetational response to abrupt climate change
NASA Technical Reports Server (NTRS)
Peteet, D.
2000-01-01
Rapid climate change characterizes numerous terrestrial sediment records during and since the last glaciation. Vegetational response is best expressed in terrestrial records near ecotones, where sensitivity to climate change is greatest, and response times are as short as decades.
Nanoparticles doped film sensing based on terahertz metamaterials
NASA Astrophysics Data System (ADS)
Liu, Weimin; Fan, Fei; Chang, Shengjiang; Hou, Jiaqing; Chen, Meng; Wang, Xianghui; Bai, Jinjun
2017-12-01
A nanoparticle concentration sensor based on a doped film and a terahertz (THz) metamaterial has been proposed. By coating a nanoparticle-doped polyvinyl alcohol (PVA) film on the surface of the THz metamaterial, the effects of nanoparticle concentration on the metamaterial resonances are investigated through experiments and numerical simulations. Results show that the resonant frequency of the metamaterial decreases linearly with increasing doping concentration. Furthermore, numerical simulations illustrate that the redshift of the resonance results from the change in refractive index of the doped film. The concentration sensitivity of this sensor is 3.12 GHz/0.1%, and the refractive index sensitivity reaches 53.33 GHz/RIU. This work provides a non-contact, nondestructive and sensitive method for the detection of nanoparticle concentration and opens up a new application of THz film metamaterial sensing.
Tang, Songsong; Gu, Yuan; Lu, Huiting; Dong, Haifeng; Zhang, Kai; Dai, Wenhao; Meng, Xiangdan; Yang, Fan; Zhang, Xueji
2018-04-03
Herein, a highly sensitive microRNA (miRNA) detection strategy was developed by combining the bio-bar-code assay (BBA) with catalytic hairpin assembly (CHA). In the proposed system, two nanoprobes were designed: magnetic nanoparticles functionalized with DNA probes (MNPs-DNA) and gold nanoparticles carrying numerous barcode DNA strands (AuNPs-DNA). In the presence of target miRNA, the MNPs-DNA and AuNPs-DNA hybridized with the target miRNA to form a "sandwich" structure. After the "sandwich" structures were separated from the solution by a magnetic field and dehybridized at high temperature, the barcode DNA sequences were released by dissolving the AuNPs. The released barcode DNA sequences triggered the toehold strand displacement assembly of two hairpin probes, leading to recycling of the barcode DNA sequences and producing numerous fluorescent CHA products for miRNA detection. Under the optimal experimental conditions, the proposed two-stage amplification system could sensitively detect target miRNA ranging from 10 pM to 10 aM with a limit of detection (LOD) down to 97.9 zM. It displayed good capability to discriminate single-base and three-base mismatches due to the unique sandwich structure. Notably, it presented good feasibility for selective multiplexed detection of various combinations of synthetic miRNA sequences and miRNAs extracted from different cell lysates, and the results were in agreement with traditional polymerase chain reaction analysis. The two-stage amplification strategy may have significant implications for biological detection and clinical diagnosis. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Barthelemy, J. F. M.
1983-01-01
A general algorithm is proposed which carries out the design process iteratively, starting at the top of the hierarchy and proceeding downward. Each subproblem is optimized separately for fixed controls from higher level subproblems. An optimum sensitivity analysis is then performed which determines the sensitivity of the subproblem design to changes in higher level subproblem controls. The resulting sensitivity derivatives are used to construct constraints which force the controlling subproblems into choosing their own designs so as to improve the lower-level subproblem designs while satisfying their own constraints. The applicability of the proposed algorithm is demonstrated by devising a four-level hierarchy to perform the simultaneous aerodynamic and structural design of a high-performance sailplane wing for maximum cross-country speed. Finally, the concepts discussed are applied to the two-level minimum weight structural design of the sailplane wing. The numerical experiments show that discontinuities in the sensitivity derivatives may delay convergence, but that the algorithm is robust enough to overcome these discontinuities and produce low-weight feasible designs, regardless of whether the optimization is started from the feasible space or the infeasible one.
Tricoli, Ugo; Macdonald, Callum M; Durduran, Turgut; Da Silva, Anabela; Markel, Vadim A
2018-02-01
Diffuse correlation tomography (DCT) uses the electric-field temporal autocorrelation function to measure the mean-square displacement of light-scattering particles in a turbid medium over a given exposure time. The movement of blood particles is here estimated through a Brownian-motion-like model in contrast to ordered motion as in blood flow. The sensitivity kernel relating the measurable field correlation function to the mean-square displacement of the particles can be derived by applying a perturbative analysis to the correlation transport equation (CTE). We derive an analytical expression for the CTE sensitivity kernel in terms of the Green's function of the radiative transport equation, which describes the propagation of the intensity. We then evaluate the kernel numerically. The simulations demonstrate that, in the transport regime, the sensitivity kernel provides sharper spatial information about the medium as compared with the correlation diffusion approximation. Also, the use of the CTE allows one to explore some additional degrees of freedom in the data such as the collimation direction of sources and detectors. Our results can be used to improve the spatial resolution of DCT, in particular, with applications to blood flow imaging in regions where the Brownian motion is dominant.
Coupling influence on the sensitivity of microfiber resonator sensors
NASA Astrophysics Data System (ADS)
Guo, Wei; Chen, Ye; Kou, Jun-long; Xu, Fei; Lu, Yan-qing
2011-12-01
By modifying the resonant condition of microfiber resonator sensors while taking the coupling effect into account, we theoretically investigate the influence of coupling on the resonant wavelength and sensitivity. Numerical calculations show significant differences in resonant wavelength and sensitivity for different coupling strengths. Tuning the coupling can shift the resonant position by as much as several nanometers and change the sensitivity by as much as 30 nm/RIU in an all-coupling microfiber coil resonator.
X-ray fluorescence holography studies for a Cu3Au crystal
NASA Astrophysics Data System (ADS)
Dąbrowski, K. M.; Dul, D. T.; Jaworska-Gołąb, T.; Rysz, J.; Korecki, P.
2015-12-01
In this work we show that performing a numerical correction for beam attenuation and indirect excitation allows one to fully restore element sensitivity in the three-dimensional reconstruction of the atomic structure. This is exemplified by a comparison of atomic images reconstructed from holograms measured for ordered and disordered phases of a Cu3Au crystal that clearly show sensitivity to changes in occupancy of the atomic sites. Moreover, the numerical correction, which is based on quantitative methods of X-ray fluorescence spectroscopy, was extended to take into account the influence of a disturbed overlayer in the sample.
Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine
2016-01-01
A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼ 82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of +/- 20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of the APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement.
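For readers unfamiliar with the Morris screening step described above, the sketch below shows the generic workflow with the SALib package on a toy yield surrogate; the parameter names, ranges and model are placeholders, not APSIM-Wheat.

```python
# Generic Morris elementary-effects screening with SALib on a toy model.
# Parameter names and the "yield" surrogate are placeholders, not APSIM-Wheat.
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

problem = {
    "num_vars": 3,
    "names": ["phenology_rate", "radiation_use_eff", "root_depth"],
    "bounds": [[0.8, 1.2], [0.8, 1.2], [0.8, 1.2]],  # +/-20% of a reference
}

X = morris_sample.sample(problem, N=100, num_levels=4)
# Toy surrogate for simulated yield (nonlinear, with an interaction term).
Y = 3.0 * X[:, 1] + 1.5 * X[:, 0] * X[:, 2] + 0.2 * np.sin(5 * X[:, 2])

res = morris_analyze.analyze(problem, X, Y, num_levels=4)
for name, mu_star, sigma in zip(problem["names"], res["mu_star"], res["sigma"]):
    print(f"{name}: mu*={mu_star:.2f}, sigma={sigma:.2f}")
```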
Global sensitivity analysis of water age and temperature for informing salmonid disease management
NASA Astrophysics Data System (ADS)
Javaheri, Amir; Babbar-Sebens, Meghna; Alexander, Julie; Bartholomew, Jerri; Hallett, Sascha
2018-06-01
Many rivers in the Pacific Northwest region of North America are anthropogenically manipulated via dam operations, leading to system-wide impacts on hydrodynamic conditions and aquatic communities. Understanding how dam operations alter abiotic and biotic variables is important for designing management actions. For example, in the Klamath River, dam outflows could be manipulated to alter water age and temperature to reduce the risk of parasite infections in salmon by diluting or altering the viability of parasite spores. However, the sensitivity of water age and temperature to riverine conditions such as bathymetry can affect outcomes from dam operations. To examine this issue in detail, we conducted a global sensitivity analysis of water age and temperature to a comprehensive set of hydraulic and meteorological parameters in the Klamath River, California, where management of salmonid disease is a high priority. We applied an analysis technique that combined Latin-hypercube and one-at-a-time sampling methods and included simulation runs with the hydrodynamic numerical model of the Lower Klamath. We found that flow rate and bottom roughness were the two most important parameters influencing water age. Water temperature was most sensitive to inflow temperature, followed by air temperature, solar radiation, wind speed, flow rate, and wet-bulb temperature. Our results are relevant for managers because they provide a framework for predicting how water within 'high infection risk' sections of the river will respond to dam water (low infection risk) input. Moreover, these data will be useful for prioritizing the use of water age (dilution) versus temperature (spore viability) under certain contexts when considering flow manipulation as a method to reduce the risk of infection and disease in Klamath River salmon.
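A minimal sketch of the combined sampling idea described above (Latin-hypercube points for the global design plus one-at-a-time perturbations around a baseline) is given below; the parameter names and ranges are illustrative assumptions, not the Klamath model setup.

```python
# Sketch of a combined Latin-hypercube / one-at-a-time (OAT) sampling design.
# Parameter names and ranges are illustrative, not the Klamath model setup.
import numpy as np
from scipy.stats import qmc

names = ["flow_rate", "bottom_roughness", "air_temperature", "wind_speed"]
lower = np.array([50.0, 0.01, 5.0, 0.0])
upper = np.array([500.0, 0.08, 30.0, 10.0])

# Global design: Latin-hypercube samples over the full parameter ranges.
sampler = qmc.LatinHypercube(d=len(names), seed=1)
lhs_points = qmc.scale(sampler.random(n=50), lower, upper)

# Local design: one-at-a-time +/-10% perturbations around a baseline point.
baseline = 0.5 * (lower + upper)
oat_points = []
for i in range(len(names)):
    for factor in (0.9, 1.1):
        p = baseline.copy()
        p[i] *= factor
        oat_points.append(p)
oat_points = np.array(oat_points)

print(lhs_points.shape, oat_points.shape)  # (50, 4) (8, 4)
```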
Ford, W; King, K; Williams, M; Williams, J; Fausey, N
2015-07-01
Numerical modeling is an economical and feasible approach for quantifying the effects of best management practices on dissolved reactive phosphorus (DRP) loadings from agricultural fields. However, tools that simulate both surface and subsurface DRP pathways are limited and have not been robustly evaluated in tile-drained landscapes. The objectives of this study were to test the ability of the Agricultural Policy/Environmental eXtender (APEX), a widely used field-scale model, to simulate surface and tile P loadings over management, hydrologic, biologic, tile, and soil gradients and to better understand the behavior of P delivery at the edge-of-field in tile-drained midwestern landscapes. To do this, a global, variance-based sensitivity analysis was performed, and model outputs were compared with measured P loads obtained from 14 surface and subsurface edge-of-field sites across central and northwestern Ohio. Results of the sensitivity analysis showed that response variables for DRP were highly sensitive to coupled interactions between presumed important parameters, suggesting nonlinearity of DRP delivery at the edge-of-field. Comparison of model results to edge-of-field data showcased the ability of APEX to simulate surface and subsurface runoff and the associated DRP loading at monthly to annual timescales; however, some high DRP concentrations and fluxes were not reflected in the model, suggesting the presence of preferential flow. Results from this study provide new insights into baseline tile DRP loadings that exceed thresholds for algal proliferation. Further, negative feedbacks between surface and subsurface DRP delivery suggest caution is needed when implementing DRP-based best management practices designed for a specific flow pathway. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Elizabeth Skubak, E-mail: ewolf@saintmarys.edu; Anderson, David F., E-mail: anderson@math.wisc.edu
2015-01-21
Stochastic models are often used to help understand the behavior of intracellular biochemical processes. The most common such models are continuous time Markov chains (CTMCs). Parametric sensitivities, which are derivatives of expectations of model output quantities with respect to model parameters, are useful in this setting for a variety of applications. In this paper, we introduce a class of hybrid pathwise differentiation methods for the numerical estimation of parametric sensitivities. The new hybrid methods combine elements from the three main classes of procedures for sensitivity estimation and have a number of desirable qualities. First, the new methods are unbiased for a broad class of problems. Second, the methods are applicable to nearly any physically relevant biochemical CTMC model. Third, and as we demonstrate on several numerical examples, the new methods are quite efficient, particularly if one wishes to estimate the full gradient of parametric sensitivities. The methods are rather intuitive and utilize the multilevel Monte Carlo philosophy of splitting an expectation into separate parts and handling each in an efficient manner.
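The sketch below is not the authors' hybrid pathwise estimator; it illustrates the baseline task such methods improve upon, namely estimating a parametric sensitivity of a simple birth-death CTMC by finite differences with common random numbers.

```python
# Not the hybrid pathwise estimator from the paper: a baseline illustration of
# estimating d E[X(T)]/dk for a birth-death CTMC (birth rate k, death rate
# gamma*X) by finite differences with common random numbers.
import numpy as np

def ssa_birth_death(k, gamma, x0, T, rng):
    """Gillespie simulation: birth at rate k, death at rate gamma*X."""
    t, x = 0.0, x0
    while True:
        rates = np.array([k, gamma * x])
        total = rates.sum()
        if total == 0.0:
            return x
        t += rng.exponential(1.0 / total)
        if t > T:
            return x
        x += 1 if rng.random() < rates[0] / total else -1

def sensitivity_fd(k, gamma, x0, T, h=0.05, n_paths=5000, seed=0):
    diffs = []
    for i in range(n_paths):
        # Common random numbers: same seed for nominal and perturbed paths.
        x_plus = ssa_birth_death(k + h, gamma, x0, T, np.random.default_rng(seed + i))
        x_minus = ssa_birth_death(k - h, gamma, x0, T, np.random.default_rng(seed + i))
        diffs.append((x_plus - x_minus) / (2 * h))
    return np.mean(diffs), np.std(diffs) / np.sqrt(n_paths)

est, err = sensitivity_fd(k=2.0, gamma=0.5, x0=0, T=5.0)
exact = (1 - np.exp(-0.5 * 5.0)) / 0.5   # analytical d E[X(T)]/dk for this model
print(f"d E[X(T)]/dk ~ {est:.2f} +/- {err:.2f}  (exact: {exact:.2f})")
```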
Low-sensitivity H ∞ filter design for linear delta operator systems with sampling time jitter
NASA Astrophysics Data System (ADS)
Guo, Xiang-Gui; Yang, Guang-Hong
2012-04-01
This article is concerned with the problem of designing H ∞ filters for a class of linear discrete-time systems with low-sensitivity to sampling time jitter via delta operator approach. Delta-domain model is used to avoid the inherent numerical ill-condition resulting from the use of the standard shift-domain model at high sampling rates. Based on projection lemma in combination with the descriptor system approach often used to solve problems related to delay, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. Then, the problem of designing a low-sensitivity filter can be reduced to a convex optimisation problem. An important consideration in the design of correlation filters is the optimal trade-off between the standard H ∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.
Potential diagnostic value of serum p53 antibody for detecting colorectal cancer: A meta-analysis.
Meng, Rongqin; Wang, Yang; He, Liang; He, Yuanqing; Du, Zedong
2018-04-01
Numerous studies have assessed the diagnostic value of serum p53 (s-p53) antibody in patients with colorectal cancer (CRC); however, results remain controversial. The present study aimed to comprehensively and quantitatively summarize the potential diagnostic value of the s-p53 antibody in CRC. Databases including PubMed and EmBase were systematically searched for studies of s-p53 antibody diagnosis in CRC published on or before 31 July 2016. The quality of all the included studies was assessed using the quality assessment of studies of diagnostic accuracy (QUADAS) tool. Pooled sensitivity, pooled specificity, positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were calculated, and overall accuracy was assessed using the diagnostic odds ratio (DOR) and area under the curve (AUC). Publication bias and heterogeneity were also assessed. A total of 11 trials that enrolled a combined 3,392 participants were included in the meta-analysis. Approximately 72.73% (8/11) of the included studies were of high quality (QUADAS score >7), and all were retrospective case-control studies. The pooled sensitivity was 0.19 [95% confidence interval (CI), 0.18-0.21] and the pooled specificity was 0.93 (95% CI, 0.92-0.94). Results also demonstrated a PLR of 4.56 (95% CI, 3.27-6.34), an NLR of 0.78 (95% CI, 0.71-0.85) and a DOR of 6.70 (95% CI, 4.59-9.76). The area under the symmetrical summary receiver operating characteristic curve was 0.73. Furthermore, no evidence of publication bias or heterogeneity was observed in the meta-analysis. The meta-analysis data indicated that the s-p53 antibody possesses potential diagnostic value for CRC. However, its discrimination power is somewhat limited due to the low sensitivity.
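For reference, the relationship between sensitivity/specificity and the likelihood ratios and DOR used above is reproduced in the short sketch below, plugging in the pooled point estimates from the abstract (the published pooled PLR, NLR and DOR differ because they come from the meta-analytic model rather than from this direct substitution).

```python
# Relationship between sensitivity/specificity and PLR, NLR and DOR, using the
# pooled point estimates quoted in the abstract (no confidence intervals).
def likelihood_ratios(sens, spec):
    plr = sens / (1.0 - spec)          # positive likelihood ratio
    nlr = (1.0 - sens) / spec          # negative likelihood ratio
    dor = plr / nlr                    # diagnostic odds ratio
    return plr, nlr, dor

plr, nlr, dor = likelihood_ratios(sens=0.19, spec=0.93)
print(f"PLR={plr:.2f}, NLR={nlr:.2f}, DOR={dor:.2f}")
# Direct substitution gives PLR ~ 2.7, NLR ~ 0.87, DOR ~ 3.1; the pooled values
# reported in the abstract differ because they are estimated by the
# meta-analytic model across studies, not by plugging in pooled estimates.
```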
Methods of recording and analysing cough sounds.
Subburaj, S; Parvez, L; Rajagopalan, T G
1996-01-01
Efforts have been directed at developing a computerized system for the acquisition and multi-dimensional analysis of the cough sound. The system consists of a PC-AT486 computer with an ADC board having 12-bit resolution. The audio cough sound is acquired using a sensitive miniature microphone at a sampling rate of 8 kHz in the computer and simultaneously recorded in real time using a digital audio tape recorder, which also serves as a backup. Analysis of the cough sound is done in the time and frequency domains using the digitized data, which provide numerical values for key parameters such as cough counts, bouts, their intensity and latency. In addition, the duration of each event and the cough patterns provide a unique tool which allows objective evaluation of antitussive and expectorant drugs. Both on-line and off-line checks ensure error-free performance over long periods of time. The entire system has been evaluated for sensitivity, accuracy, precision and reliability. Successful use of this system in clinical studies has established what is perhaps the first integrated approach for the objective evaluation of cough.
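A minimal sketch of the kind of time-domain processing described above, counting cough events from an 8 kHz recording by envelope thresholding, is shown below; the threshold, window and minimum-gap values are arbitrary placeholders, not the system's calibrated settings.

```python
# Sketch of time-domain cough counting from an 8 kHz recording: rectify,
# smooth into an energy envelope, threshold, and count separated bursts.
# Threshold, window and minimum-gap values are placeholders, not calibrated.
import numpy as np

FS = 8000  # sampling rate (Hz)

def count_cough_events(signal, threshold=0.1, min_gap_s=0.25, win_s=0.02):
    win = int(win_s * FS)
    envelope = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")
    active = envelope > threshold
    events, last_end, i = 0, -np.inf, 0
    while i < len(active):
        if active[i]:
            start = i
            while i < len(active) and active[i]:
                i += 1
            if (start - last_end) / FS >= min_gap_s:
                events += 1
            last_end = i
        else:
            i += 1
    return events

# Synthetic test: three short noise bursts ("coughs") in 4 s of quiet signal.
rng = np.random.default_rng(0)
sig = 0.01 * rng.normal(size=4 * FS)
for t0 in (0.5, 1.8, 3.0):
    i0 = int(t0 * FS)
    sig[i0:i0 + int(0.15 * FS)] += 0.5 * rng.normal(size=int(0.15 * FS))
print(count_cough_events(sig))  # expected: 3
```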
Numerical Analysis of the Trailblazer Inlet Flowfield for Hypersonic Mach Numbers
NASA Technical Reports Server (NTRS)
Steffen, C. J., Jr.; DeBonis, J. R.
1999-01-01
A study of the Trailblazer vehicle inlet was conducted using the Global Air Sampling Program (GASP) code for flight Mach numbers ranging from 4 to 12. Both perfect gas and finite rate chemistry analyses were performed with the intention of making detailed comparisons between the two results. Inlet performance was assessed using total pressure recovery and kinetic energy efficiency. These assessments were based upon a one-dimensional stream-thrust average of the axisymmetric flowfield. Flow visualization was utilized to examine the detailed shock structures internal to this mixed-compression inlet. Kinetic energy efficiency appeared to be the least sensitive to differences between the perfect gas and finite rate chemistry results. Total pressure recovery appeared to be the most sensitive discriminator between the perfect gas and finite rate chemistry results for flight Mach numbers above Mach 6. Adiabatic wall temperature was consistently overpredicted by the perfect gas model for flight Mach numbers above Mach 4. The predicted shock structures were noticeably different for Mach numbers from 6 to 12. At Mach 4, the perfect gas and finite rate chemistry models collapse to the same result.
Building a maintenance policy through a multi-criterion decision-making model
NASA Astrophysics Data System (ADS)
Faghihinia, Elahe; Mollaverdi, Naser
2012-08-01
A major competitive advantage of production and service systems is establishing a proper maintenance policy. Therefore, maintenance managers should make maintenance decisions that best fit their systems. Multi-criterion decision-making methods can take into account a number of aspects associated with the competitiveness factors of a system. This paper presents a multi-criterion decision-aided maintenance model with three criteria that have the greatest influence on decision making: reliability, maintenance cost, and maintenance downtime. The Bayesian approach has been applied to address the shortage of maintenance failure data. The model therefore seeks the best compromise between these three criteria and establishes replacement intervals using the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE II), integrating the Bayesian approach with regard to the preferences of the decision maker. Finally, the model is illustrated using a numerical application, and PROMETHEE GAIA (the visual interactive module) is used for visualization and an illustrative sensitivity analysis. PROMETHEE II and PROMETHEE GAIA were applied with the Decision Lab software. A sensitivity analysis has been made to verify the robustness of certain parameters of the model.
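For readers unfamiliar with PROMETHEE II, the sketch below implements its core net-outranking-flow ranking on a toy table of replacement intervals scored on the three criteria named above; the alternative data, weights and the linear preference function are illustrative assumptions, not values from the paper.

```python
# Core of PROMETHEE II: pairwise preference degrees, positive/negative
# outranking flows, and ranking by net flow. The maintenance alternatives,
# weights and the simple linear preference function are illustrative only.
import numpy as np

# Rows: candidate replacement intervals; columns: reliability (max),
# maintenance cost (min), maintenance downtime (min).
alternatives = ["T=6 months", "T=12 months", "T=18 months"]
data = np.array([
    [0.97, 120.0, 10.0],
    [0.93,  80.0,  7.0],
    [0.88,  60.0,  9.0],
])
weights = np.array([0.5, 0.3, 0.2])
maximize = np.array([True, False, False])
p = np.array([0.05, 40.0, 4.0])   # preference thresholds (linear function)

n = len(alternatives)
pi = np.zeros((n, n))             # aggregated preference of a over b
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        d = np.where(maximize, data[a] - data[b], data[b] - data[a])
        pref = np.clip(d / p, 0.0, 1.0)   # 0 below 0, linear up to p, then 1
        pi[a, b] = np.dot(weights, pref)

phi_plus = pi.sum(axis=1) / (n - 1)       # positive outranking flow
phi_minus = pi.sum(axis=0) / (n - 1)      # negative outranking flow
net_flow = phi_plus - phi_minus
for name, phi in sorted(zip(alternatives, net_flow), key=lambda t: -t[1]):
    print(f"{name}: net flow = {phi:+.3f}")
```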
Analysis of airfoil transitional separation bubbles
NASA Technical Reports Server (NTRS)
Davis, R. L.; Carter, J. E.
1984-01-01
A previously developed local inviscid-viscous interaction technique for the analysis of airfoil transitional separation bubbles, ALESEP (Airfoil Leading Edge Separation) has been modified to utilize a more accurate windward finite difference procedure in the reversed flow region, and a natural transition/turbulence model has been incorporated for the prediction of transition within the separation bubble. Numerous calculations and experimental comparisons are presented to demonstrate the effects of the windward differencing scheme and the natural transition/turbulence model. Grid sensitivity and convergence capabilities of this inviscid-viscous interaction technique are briefly addressed. A major contribution of this report is that with the use of windward differencing, a second, counter-rotating eddy has been found to exist in the wall layer of the primary separation bubble.
Fast principal component analysis for stacking seismic data
NASA Astrophysics Data System (ADS)
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot obtain optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is insensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm to make the proposed method readily applicable for industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
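A compact sketch of PCA-based stacking is given below: aligned noisy traces are arranged in a matrix, the first principal component is extracted with a randomized (hence fast) SVD, and the stacked trace is reconstructed from it. The synthetic data and the use of scikit-learn's randomized solver are illustrative choices, not the authors' specific algorithm.

```python
# PCA-based stacking sketch: take the rank-1 (first principal component)
# approximation of a matrix of aligned noisy traces and reconstruct the
# stacked trace from it; a randomized SVD keeps the cost low. Illustrative
# only, not the paper's algorithm.
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(0)
n_traces, n_samples = 60, 500
t = np.linspace(0, 1, n_samples)
signal = np.exp(-((t - 0.4) / 0.02) ** 2) * np.sin(2 * np.pi * 40 * t)   # common wavelet
traces = signal[None, :] + 0.3 * rng.normal(size=(n_traces, n_samples))  # noisy copies

U, S, Vt = randomized_svd(traces, n_components=1, random_state=0)
pca_stack = U[:, 0].mean() * S[0] * Vt[0]   # signal estimate from the rank-1 term

mean_stack = traces.mean(axis=0)            # conventional average-based stack
for name, stack in (("mean", mean_stack), ("pca", pca_stack)):
    err = np.linalg.norm(stack - signal) / np.linalg.norm(signal)
    print(f"{name} stack relative error: {err:.3f}")
```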
Analysis of sensitivity to different parameterization schemes for a subtropical cyclone
NASA Astrophysics Data System (ADS)
Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.
2018-05-01
A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out during the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation there. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed using the WRF model to carry out a sensitivity analysis of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus schemes had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes had stabilized. Furthermore, to identify the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space was performed. Again, the combinations of parameterizations including the Tiedtke cumulus schemes were the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.
King, Jonathan M.; Hurwitz, Shaul; Lowenstern, Jacob B.; Nordstrom, D. Kirk; McCleskey, R. Blaine
2016-01-01
A multireaction chemical equilibria geothermometry (MEG) model applicable to high-temperature geothermal systems has been developed over the past three decades. Given sufficient data, this model provides more constraint on calculated reservoir temperatures than classical chemical geothermometers that are based on either the concentration of silica (SiO2), or the ratios of cation concentrations. A set of 23 chemical analyses from Ojo Caliente Spring and 22 analyses from other thermal features in the Lower Geyser Basin of Yellowstone National Park are used to examine the sensitivity of calculated reservoir temperatures using the GeoT MEG code (Spycher et al. 2013, 2014) to quantify the effects of solute concentrations, degassing, and mineral assemblages on calculated reservoir temperatures. Results of our analysis demonstrate that the MEG model can resolve reservoir temperatures within approximately ±15°C, and that natural variation in fluid compositions represents a greater source of variance in calculated reservoir temperatures than variations caused by analytical uncertainty (assuming ~5% for major elements). The analysis also suggests that MEG calculations are particularly sensitive to variations in silica concentration, the concentrations of the redox species Fe(II) and H2S, and that the parameters defining steam separation and CO2 degassing from the liquid may be adequately determined by numerical optimization. Results from this study can provide guidance for future applications of MEG models, and thus provide more reliable information on geothermal energy resources during exploration.
Feng, Xiao; Peng, Li; Chang-Quan, Long; Yi, Lei; Hong, Li
2014-09-01
Most previous studies investigating relational reasoning have used visuo-spatial materials. This fMRI study aimed to determine how relational complexity affects brain activity during inductive reasoning, using numerical materials. Three numerical relational levels of the number series completion task were adopted for use: 0-relational (e.g., "23 23 23"), 1-relational ("32 30 28") and 2-relational ("12 13 15") problems. The fMRI results revealed that the bilateral dorsolateral prefrontal cortex (DLPFC) showed enhanced activity associated with relational complexity. Bilateral inferior parietal lobule (IPL) activity was greater during the 1- and 2-relational level problems than during the 0-relational level problems. In addition, the left fronto-polar cortex (FPC) showed selective activity during the 2-relational level problems. The bilateral DLPFC may be involved in the process of hypothesis generation, whereas the bilateral IPL may be sensitive to calculation demands. Moreover, the sensitivity of the left FPC to the multiple relational problems may be related to the integration of numerical relations. The present study extends our knowledge of the prefrontal activity pattern underlying numerical relational processing. Copyright © 2014 Elsevier B.V. All rights reserved.
Numerical Modelling and Prediction of Erosion Induced by Hydrodynamic Cavitation
NASA Astrophysics Data System (ADS)
Peters, A.; Lantermann, U.; el Moctar, O.
2015-12-01
The present work aims to predict cavitation erosion using a numerical flow solver together with a newly developed erosion model. The erosion model is based on the hypothesis that collapses of single cavitation bubbles near solid boundaries form high-velocity microjets, which cause sonic impacts with high pressure amplitudes that damage the surface. The erosion model uses information from a numerical Euler-Euler flow simulation to predict erosion-sensitive areas and assess the erosion aggressiveness of the flow. The obtained numerical results were compared to experimental results from tests of an axisymmetric nozzle.
Soil type influences the sensitivity of nutrient dynamics to changes in atmospheric CO2
USDA-ARS?s Scientific Manuscript database
Numerous studies have indicated that increases in atmospheric CO2 have the potential to decrease nitrogen availability through the process of progressive nitrogen limitation (PNL). The timing and magnitude of PNL in field experiments is varied due to numerous ecosystem processes. Here we examined ...
A Numerical Study of Hypersonic Forebody/Inlet Integration Problem
NASA Technical Reports Server (NTRS)
Kumar, Ajay
1991-01-01
A numerical study of the hypersonic forebody/inlet integration problem is presented in the form of view-graphs. The following topics are covered: physical/chemical modeling; solution procedure; flow conditions; mass flow rate at the inlet face; heating and skin friction loads; 3-D forebody/inlet integration model; and sensitivity studies.
NASA Astrophysics Data System (ADS)
Li, Y.; Kinzelbach, W.; Zhou, J.; Cheng, G. D.; Li, X.
2012-05-01
The hydrologic model HYDRUS-1-D and the crop growth model WOFOST are coupled to efficiently manage water resources in agriculture and improve the prediction of crop production. The results of the coupled model are validated by experimental studies of irrigated maize conducted in the middle reaches of northwest China's Heihe River, a semi-arid to arid region. Good agreement is achieved between the simulated evapotranspiration, soil moisture and crop production and their respective field measurements made under current maize irrigation and fertilization. Based on the calibrated model, the scenario analysis reveals that the optimal amount of irrigation is 500-600 mm in this region. However, for regions without detailed observations, the results of the numerical simulation can be unreliable for irrigation decision making owing to the shortage of calibrated model boundary conditions and parameters. We therefore develop a method combining model ensemble simulations with uncertainty/sensitivity analysis to estimate the probability distribution of crop production. In our studies, the uncertainty analysis is used to reveal the risk of a loss of crop production as irrigation decreases. The global sensitivity analysis is used to test the coupled model and further quantitatively analyse the impact of the uncertainty of the coupled model parameters and environmental scenarios on crop production. This method can be used for estimation in regions with no or reduced data availability.
New technologies for advanced three-dimensional optimum shape design in aeronautics
NASA Astrophysics Data System (ADS)
Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno
1999-05-01
The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes in a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are the gradient-based ones, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are helping to make such an ambitious project, namely including a state-of-the-art flow analysis code in an optimisation loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrization, automated differentiation and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.
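The general idea of obtaining the sensitivity code by automated differentiation, rather than hand-coding adjoints or using finite differences, can be illustrated with a present-day AD tool as below; this is a generic illustration of the principle, not the source-transformation tool used in the paper, and the toy cost function merely stands in for a flow-solver output such as drag.

```python
# Generic illustration of obtaining sensitivities by automatic differentiation:
# the toy "cost" below stands in for a flow-solver output (e.g. drag) as a
# function of a few shape design variables. This shows the AD principle only;
# it is not the source-transformation tool discussed in the paper.
import jax
import jax.numpy as jnp

def cost(design_vars):
    # Placeholder smooth functional of the design variables.
    thickness, camber, twist = design_vars
    return thickness**2 + jnp.sin(camber) * twist + 0.1 * jnp.exp(-twist)

grad_cost = jax.grad(cost)            # exact gradient, no finite-difference step
x0 = jnp.array([0.12, 0.04, 2.0])
print(cost(x0), grad_cost(x0))        # objective value and its sensitivities
```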
A direct method for nonlinear ill-posed problems
NASA Astrophysics Data System (ADS)
Lakhal, A.
2018-02-01
We propose a direct method for solving nonlinear ill-posed problems in Banach-spaces. The method is based on a stable inversion formula we explicitly compute by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.
A gradient based algorithm to solve inverse plane bimodular problems of identification
NASA Astrophysics Data System (ADS)
Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing
2018-02-01
This paper presents a gradient-based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient-based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.
Buckling analysis of variable thickness nanoplates using nonlocal continuum mechanics
NASA Astrophysics Data System (ADS)
Farajpour, Ali; Danesh, Mohammad; Mohammadi, Moslem
2011-12-01
This paper presents an investigation of the buckling characteristics of nanoscale rectangular plates under bi-axial compression considering non-uniformity in the thickness. Based on nonlocal continuum mechanics, the governing differential equations are derived. Numerical solutions for the buckling loads are obtained using the Galerkin method. The present study shows that the buckling behaviors of single-layered graphene sheets (SLGSs) are strongly sensitive to the nonlocal and non-uniformity parameters. The influence of the percentage change of thickness on the stability of SLGSs is more significant in strip-type nanoplates (nanoribbons) than in square-type nanoplates.
Modelling Accuracy of a Car Steering Mechanism with Rack and Pinion and McPherson Suspension
NASA Astrophysics Data System (ADS)
Knapczyk, J.; Kucybała, P.
2016-08-01
The modelling accuracy of a car steering mechanism with a rack and pinion and McPherson suspension is analyzed. The geometrical parameters of the model are described using the coordinates of the centers of the spherical joints, and the directional unit vectors and axis points of the revolute, cylindrical and prismatic joints. Modelling accuracy is defined as the differences between the values of the wheel knuckle position and orientation coordinates obtained using the simulation model and the corresponding measured values. The sensitivity of the model accuracy to these parameters is illustrated by two numerical examples.
A model for active control of helicopter air resonance in hover and forward flight
NASA Technical Reports Server (NTRS)
Takahashi, M. D.; Friedmann, P. P.
1988-01-01
A coupled rotor/fuselage helicopter analysis is presented. The accuracy of the model is verified by comparing it with the experimental data. The sensitivity of the open loop damping of the unstable air resonance mode to such modeling effects as blade torsional flexibility, unsteady aerodynamics, forward flight, periodic terms, and trim solution is illustrated by numerous examples. Subsequently, the model is used in conjunction with linear optimal control theory to stabilize the air resonance mode. The influence of the modeling effects mentioned before on active air resonance control is then investigated.
Use of SSDs in the USA – endangered species and water quality criteria
Species sensitivity distributions (SSDs) are used in the United States (US) in the development of national ambient water quality criteria (AWQC), with site-specific and numeric modifications to protect sensitive taxa including threatened and endangered species. The US Environment...
Numerical study of time domain analogy applied to noise prediction from rotating blades
NASA Astrophysics Data System (ADS)
Fedala, D.; Kouidri, S.; Rey, R.
2009-04-01
Aeroacoustic formulations in the time domain are frequently used to model the aerodynamic sound of airfoils, the time data being more accessible. Formulation 1A developed by Farassat, an integral solution of the Ffowcs Williams and Hawkings equation, holds great interest because of its ability to handle surfaces in arbitrary motion. The aim of this work is to study the numerical sensitivity of this model to specified parameters used in the calculation. The numerical algorithms, spatial and time discretizations, and approximations used for far-field acoustic simulation are presented. An approach for quantifying the numerical errors resulting from the implementation of formulation 1A is carried out based on Isom's and Tam's test cases. A helicopter blade airfoil, as defined by Farassat to investigate Isom's case, is used in this work. According to Isom, the acoustic response of a dipole source with a constant aerodynamic load, ρ0c0², is equal to the thickness noise contribution. Discrepancies are observed when the two contributions are computed numerically. In this work, variations of these errors, which depend on the temporal resolution, Mach number, source-observer distance, and interpolation algorithm type, are investigated. The results show that the spline interpolation algorithm gives the minimum error. The analysis is then extended to Tam's test case. Tam's test case has the advantage of providing an analytical solution for the first harmonic of the noise produced by a specific force distribution.
Characteristic analysis of surface waves in a sensitive plasma absorption probe
NASA Astrophysics Data System (ADS)
You, Wei; Li, Hong; Tan, Mingsheng; Liu, Wandong
2018-01-01
With features that are simple to construct and a symmetric configuration, the sensitive plasma absorption probe (SPAP) is a dependable probe for industrial plasma diagnostics. The minimum peak in the characteristic curve of the reflection coefficient stems from surface wave resonance in the plasma. We use numerical simulation methods to analyse the details of the excitation and propagation of these surface waves. With this approach, the electromagnetic field structure and the resonance and propagation characteristics of the surface wave are analyzed simultaneously. For this SPAP structure, there are three different propagation paths for the plasma surface wave. The propagation characteristic of the surface wave along each path is presented, and its dispersion relation is also calculated. The objective is to complete the relevant theory of the SPAP as well as to clarify the propagation process of the plasma surface wave.
Analysis of Photothermal Characterization of Layered Materials: Design of Optimal Experiments
NASA Technical Reports Server (NTRS)
Cole, Kevin D.
2003-01-01
In this paper numerical calculations are presented for the steady-periodic temperature in layered materials and functionally-graded materials to simulate photothermal methods for the measurement of thermal properties. No laboratory experiments were performed. The temperature is found from a new Green's function formulation which is particularly well-suited to machine calculation. The simulation method is verified by comparison with literature data for a layered material. The method is applied to a class of two-component functionally-graded materials, and results for temperature and sensitivity coefficients are presented. An optimality criterion, based on the sensitivity coefficients, is used for choosing the experimental conditions needed for photothermal measurements to determine the spatial distribution of thermal properties. This method for optimal experiment design is completely general and may be applied to any photothermal technique and to any functionally-graded material.
NASA Astrophysics Data System (ADS)
Deng, Yan; Cao, Guangtao; Yang, Hui
2018-02-01
An actively tunable sharp asymmetric line shape and a high-sensitivity sensor with a high figure of merit (FOM) are analytically and numerically demonstrated in plasmonic coupled cavities. The Fano resonance, originating from the interference between different light pathways, is realized and effectively tuned in an on-chip nanostructure composed of a metal-dielectric-metal (MDM) waveguide and a pair of cavities. To investigate the Fano line shape in detail, the coupled cavities are treated as a composite cavity, and a dynamic theory is proposed, which agrees well with the numerical simulations. Subsequently, the sensing performance of the plasmonic structure is discussed, and its detection sensitivity reaches 1.103 × 10⁸. Moreover, the FOM of the plasmonic sensor can approach 2.33 × 10⁴. These findings hold potential for applications of on-chip nano-sensors in highly integrated photonic devices.
Wang, Li; Wang, Xiaochun; Li, Yuting; Han, Shichao; Zhu, Jinming; Wang, Xiaofang; Molkentine, David P; Blanchard, Pierre; Yang, Yining; Zhang, Ruiping; Sahoo, Narayan; Gillin, Michael; Zhu, Xiaorong Ronald; Zhang, Xiaodong; Myers, Jeffrey N; Frank, Steven J
2017-04-01
Human papillomavirus (HPV)-positive oropharyngeal carcinomas respond better to X-ray therapy (XRT) than HPV-negative disease. Whether HPV status influences the sensitivity of head and neck cancer cells to proton therapy or the relative biological effectiveness (RBE) of protons versus XRT is unknown. Clonogenic survival was used to calculate the RBE; immunocytochemical analysis and the neutral comet assay were used to evaluate unrepaired DNA double-strand breaks. HPV-positive cells were more sensitive to protons, and unrepaired double-strand breaks were more numerous in HPV-positive cells than in HPV-negative cells (p < .001). Protons killed more cells than did XRT at all fraction sizes (all RBEs > 1.06). Cell line type and radiation fraction size influenced the RBE. HPV-positive cells were more sensitive to protons than HPV-negative cells, possibly through the effects of HPV on DNA damage and repair. The RBE for protons depends more on cell type and fraction size than on HPV status. © 2016 Wiley Periodicals, Inc. Head Neck 39: 708-715, 2017.
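The RBE quoted above is, in general terms, the ratio of the X-ray dose to the proton dose producing the same clonogenic survival; the sketch below shows that calculation for hypothetical linear-quadratic survival parameters, which are not the fitted values from this study.

```python
# RBE as the ratio of X-ray dose to proton dose giving equal clonogenic
# survival, computed from linear-quadratic fits S(D) = exp(-a*D - b*D^2).
# The alpha/beta values below are hypothetical, not fits from this study.
import numpy as np

def dose_for_survival(surv, alpha, beta):
    """Invert S = exp(-alpha*D - beta*D^2) for the dose D (positive root)."""
    if beta == 0.0:
        return -np.log(surv) / alpha
    return (-alpha + np.sqrt(alpha**2 - 4.0 * beta * np.log(surv))) / (2.0 * beta)

# Hypothetical LQ parameters (per Gy and per Gy^2).
alpha_x, beta_x = 0.25, 0.03     # X-ray therapy
alpha_p, beta_p = 0.30, 0.03     # protons (more effective per unit dose)

surv_level = 0.1                 # survival fraction at which RBE is evaluated
rbe = dose_for_survival(surv_level, alpha_x, beta_x) / dose_for_survival(surv_level, alpha_p, beta_p)
print(f"RBE at {surv_level:.0%} survival: {rbe:.2f}")
```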
NASA Astrophysics Data System (ADS)
Sévellec, Florian; Fedorov, Alexey V.
2016-09-01
Oceanic northward heat transport is commonly assumed to be positively correlated with the Atlantic meridional overturning circulation (AMOC). For example, in numerical "water-hosing" experiments, imposing anomalous freshwater fluxes in the northern Atlantic leads to a slow-down of the AMOC and the corresponding reduction of oceanic northward heat transport. Here, we study the sensitivity of the ocean heat and volume transports to surface heat and freshwater fluxes using a generalized stability analysis. For the sensitivity to surface freshwater fluxes, we find that, while the direct relationship between the AMOC volume and heat transports holds on shorter time scales, it can reverse on timescales longer than 500 years or so. That is, depending on the model surface boundary conditions, reduction in the AMOC volume transport can potentially lead to a stronger heat transport on long timescales, resulting from the gradual increase in ocean thermal stratification. We discuss the implications of these results for the problem of steady state (statistical equilibrium) in ocean and climate GCM as well as paleoclimate problems including millennial climate variability.
Specific T-cell activation in an unspecific T-cell repertoire.
Van Den Berg, Hugo A; Molina-París, Carmen; Sewell, Andrew K
2011-01-01
T-cells are a vital type of white blood cell that circulate around our bodies, scanning for cellular abnormalities and infections. They recognise disease-associated antigens via a surface receptor called the T-cell antigen receptor (TCR). If there were a specific TCR for every single antigen, no mammal could possibly contain all the T-cells it needs. This is clearly absurd and suggests that T-cell recognition must, to the contrary, be highly degenerate. Yet highly promiscuous TCRs would appear to be equally impossible: they are bound to recognise self as well as non-self antigens. We review how contributions from mathematical analysis have helped to resolve the paradox of the promiscuous TCR. Combined experimental and theoretical work shows that TCR degeneracy is essentially dynamical in nature, and that the T-cell can differentially adjust its functional sensitivity to the salient epitope, "tuning up" sensitivity to the antigen associated with disease and "tuning down" sensitivity to antigens associated with healthy conditions. This paradigm of continual modulation affords the TCR repertoire, despite its limited numerical diversity, the flexibility to respond to almost any antigenic challenge while avoiding autoimmunity.
Evaluation of a strain-sensitive transport model in LES of turbulent nonpremixed sooting flames
NASA Astrophysics Data System (ADS)
Lew, Jeffry K.; Yang, Suo; Mueller, Michael E.
2017-11-01
Direct Numerical Simulations (DNS) of turbulent nonpremixed jet flames have revealed that Polycyclic Aromatic Hydrocarbons (PAH) are confined to spatially intermittent regions of low scalar dissipation rate due to their slow formation chemistry. The length scales of these regions are on the order of the Kolmogorov scale or smaller, where molecular diffusion effects dominate over turbulent transport effects irrespective of the large-scale turbulent Reynolds number. A strain-sensitive transport model has been developed to identify such species whose slow chemistry, relative to local mixing rates, confines them to these small length scales. In a conventional nonpremixed ``flamelet'' approach, these species are then modeled with their molecular Lewis numbers, while remaining species are modeled with an effective unity Lewis number. A priori analysis indicates that this strain-sensitive transport model significantly affects PAH yield in nonpremixed flames with essentially no impact on temperature and major species. The model is applied with Large Eddy Simulation (LES) to a series of turbulent nonpremixed sooting jet flames and validated via comparisons with experimental measurements of soot volume fraction.
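As a rough illustration of the kind of timescale comparison such a strain-sensitive classification implies (the criterion and names below are a hypothetical sketch, not the authors' actual model), a species could be flagged for molecular-diffusivity treatment when its formation timescale is long compared with the local mixing timescale set by the scalar dissipation rate:

    # Hypothetical sketch: flag species whose chemistry is slow relative to mixing.
    # tau_chem: estimated formation timescale of each species [s] (placeholder values)
    # chi_st:   stoichiometric scalar dissipation rate [1/s]; 1/chi_st ~ mixing timescale
    def select_transport_model(tau_chem, chi_st, threshold=1.0):
        """Return a dict mapping species name -> 'molecular' or 'unity' Lewis number."""
        tau_mix = 1.0 / chi_st
        return {
            name: ("molecular" if tau / tau_mix > threshold else "unity")
            for name, tau in tau_chem.items()
        }

    # Example: a PAH-like species with slow chemistry gets its molecular Lewis number.
    print(select_transport_model({"A4": 5e-3, "CO2": 1e-5}, chi_st=100.0))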
A Numerical Estimate of the Impact of Saharan Dust on the Mediterranean Trophic Web
NASA Astrophysics Data System (ADS)
Crise, A.; Crispi, G.
A first estimate of the importance of Saharan dust as an input of macronutrients to the phytoplankton standing crop and primary production at basin scale is presented here, using a three-dimensional numerical model of the Mediterranean Sea. The numerical scheme adopted is a 1/4-degree resolution, 31-level, MOM-based eco-hydrodynamical model with climatological ('perpetual year') forcings, coupled on-line with an ecosystem structure including multiple nutrients, size-fractionated phytoplankton functional groups, herbivores and a parametrized recycling detritus submodel, so as to include (explicitly or implicitly) the major energy pathways of the upper-layer Mediterranean ecosystem. The model takes into account, among other potential limiting factors, nitrogen (in its oxidized and reduced forms) and phosphorus. A gridded data set of (wet and dry) dust deposition over the Mediterranean, derived from the SKIRON operational model, is used to identify statistically the areas affected and the duration and intensity of the events. Starting from this averaging process, experiments are carried out to study dust-induced episodes of release of bioavailable phosphorus, which is assumed to be the limiting factor in the oligotrophic surface waters of the Mediterranean Sea. The metrics chosen to evaluate the impact of deposition are the phytoplankton standing crop, primary and export production, and switching in food-web functioning. These global parameters, even if they cannot exhaust the wealth of information provided by the model, can help discriminate the sensitivity of the food web to the nutrient pulses induced by deposition. First results of a scenario analysis of typical atmospheric input events show the response of the upper-layer ecosystem and allow the sensitivity of the model predictions to the integrated intensity of the external input to be assessed.
NASA Astrophysics Data System (ADS)
Papán, Daniel; Valašková, Veronika; Demeterová, Katarína
2016-10-01
Combined numerical and experimental approaches to structural dynamics problems are increasingly common and are applied in many research and development institutions around the world. Vibrations caused by passing trains can affect the quality of production in manufacturing facilities. The problem can be addressed numerically or experimentally; a numerical solution is less demanding in cost and time, but the main aim of this article is the experimental measurement of the problem. The paper presents a case study, measured in situ under cramped conditions, of a facility located close to a railway line. The effect of vibrations caused by passing trains on the high-sensitivity machinery housed in this building was observed; the structure of interest was a high-sensitivity machine being installed during the construction process. High-sensitivity standard vibration equipment was used for the measurements. The measurement results were assessed against both the technological conditions specified for the machinery and the Slovak Standard Criteria, in the amplitude and frequency domains. The amplitude criterion is further divided into peak particle velocity and RMS (root mean square) values. The frequency-domain assessment was carried out using frequency response curves obtained from the manufacturer of the high-sensitivity machinery, with frequency limits established for each axis of the triaxial system. The measurements carried out in the production hall provide the data needed to determine the seismic loading and the response of the production machinery caused by technical seismicity, and to decide whether the vibrations have to be reduced.
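As an illustration of the two amplitude criteria mentioned (peak particle velocity and RMS), a minimal sketch of how they might be computed from a measured velocity record is given below; the signal and sampling rate are placeholders, not the measured data from the case study.

    import numpy as np

    def ppv_and_rms(velocity, fs):
        """Peak particle velocity and RMS of a measured velocity trace.

        velocity : 1-D array of particle velocity samples [mm/s]
        fs       : sampling frequency [Hz] (not needed for these two scalars,
                   kept for symmetry with frequency-domain assessments)
        """
        ppv = np.max(np.abs(velocity))          # peak particle velocity
        rms = np.sqrt(np.mean(velocity ** 2))   # root mean square value
        return ppv, rms

    # Placeholder example with a synthetic 20 Hz vibration record.
    fs = 1000.0
    t = np.arange(0, 2.0, 1.0 / fs)
    v = 0.5 * np.sin(2 * np.pi * 20 * t)
    print(ppv_and_rms(v, fs))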
Flexible Environmental Modeling with Python and Open - GIS
NASA Astrophysics Data System (ADS)
Pryet, Alexandre; Atteia, Olivier; Delottier, Hugo; Cousquer, Yohann
2015-04-01
Numerical modeling now represents a prominent task in environmental studies. During the last decades, numerous commercial programs have been made available to environmental modelers. These software applications offer user-friendly graphical user interfaces that allow an efficient management of many case studies. However, they suffer from a lack of flexibility, and closed-source policies impede source code reviewing and enhancement for original studies. Advanced modeling studies require flexible tools capable of managing thousands of model runs for parameter optimization, uncertainty and sensitivity analysis. In addition, there is a growing need for the coupling of various numerical models associating, for instance, groundwater flow modeling with multi-species geochemical reactions. Researchers have produced hundreds of powerful open-source command-line programs. However, there is a need for a flexible graphical user interface allowing efficient processing of the geospatial data that accompanies any environmental study. Here, we present the advantages of using the free and open-source QGIS platform and the Python scripting language for conducting environmental modeling studies. The interactive graphical user interface is first used for the visualization and pre-processing of input geospatial datasets. The Python scripting language is then employed for further input data processing, calls to one or several models, and post-processing of model outputs. Model results are eventually sent back to the GIS program, processed and visualized. This approach combines the advantages of interactive graphical interfaces and the flexibility of the Python scripting language for data processing and model calls. The numerous Python modules available facilitate geospatial data processing and numerical analysis of model outputs. Once input data have been prepared with the graphical user interface, models may be run thousands of times from the command line with sequential or parallel calls. We illustrate this approach with several case studies in groundwater hydrology and geochemistry and provide links to several Python libraries that facilitate pre- and post-processing operations.
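The scripted batch execution described above might look like the following minimal sketch; the executable name, input-file format, and output file are hypothetical placeholders, not part of the authors' workflow.

    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    def run_model(param):
        """Write an input file for one parameter set, call the model, parse one result."""
        infile = f"run_{param:.3e}.in"
        with open(infile, "w") as f:
            f.write(f"hydraulic_conductivity = {param}\n")   # hypothetical input format
        subprocess.run(["mymodel", infile], check=True)       # hypothetical command-line model
        with open(infile.replace(".in", ".out")) as f:        # hypothetical output file
            return float(f.readline())

    if __name__ == "__main__":
        params = [1e-5 * (1 + 0.1 * i) for i in range(100)]
        with ProcessPoolExecutor() as pool:                    # parallel model calls
            results = list(pool.map(run_model, params))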
Numerical modeling of the dynamic response of a bioluminescent bacterial biosensor.
Affi, Mahmoud; Solliec, Camille; Legentilhomme, Patrick; Comiti, Jacques; Legrand, Jack; Jouanneau, Sulivan; Thouand, Gérald
2016-12-01
Water quality and water management are worldwide issues. The analysis of pollutants, and in particular heavy metals, is generally conducted by sensitive but expensive physicochemical methods. Alternative methods of analysis, such as microbial biosensors, have been developed for their potential simplicity and expected moderate cost. Using a biosensor over a long period generates many changes in the growth of the immobilized bacteria and consequently alters the robustness of the detection. This work simulated the operation of a biosensor for the long-term detection of cadmium and improved our understanding of the bioluminescence reaction dynamics of bioreporter bacteria inside an agarose matrix. The choice of numerical tools is justified by the difficulty of experimentally characterizing the functioning of the biosensor under every condition over a long period (several days). The biomass profile is simulated numerically by coupling the diffusion equation with the consumption/reaction of the nutrients by the bacteria. The numerical results show very good agreement with the experimental profiles. The growth model verified that bacterial growth is conditioned by both the diffusion and the consumption of the nutrients; thus, there is a high bacterial density in the first millimeter of the immobilization matrix. The growth model has been very useful for the development of the bioluminescence model inside the gel and shows that an oxygen concentration greater than or equal to 22 % of saturation is required to maintain a significant level of bioluminescence. A continuous feeding of nutrients during the cadmium detection process leads to a biofilm that reduces the diffusion of nutrients, confines the available oxygen to the first layer of the agarose (1 mm) and affects the intensity of the bioluminescent reaction. The main advantage of this work is to link experimental work with numerical models of growth and bioluminescence in order to provide a general-purpose model to understand, anticipate, or predict the dysfunction of a biosensor based on bioluminescent bioreporters immobilized in a matrix.
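A minimal sketch of the kind of coupled diffusion-consumption balance described above (explicit finite differences in one dimension with Monod-type uptake; all coefficients are illustrative placeholders, not the calibrated values of the study):

    import numpy as np

    # 1-D nutrient diffusion through an agarose layer with Monod-type consumption.
    L, nx = 3e-3, 60                 # layer thickness [m], grid points (placeholders)
    dx = L / (nx - 1)
    D = 5e-10                        # nutrient diffusivity [m^2/s] (placeholder)
    mu_max, Ks, Y = 1e-4, 0.05, 0.5  # max uptake rate [1/s], half-saturation, yield (placeholders)
    dt = 0.4 * dx**2 / D             # explicit stability limit
    S = np.full(nx, 1.0)             # nutrient concentration [g/L]
    X = np.full(nx, 0.01)            # biomass [g/L]

    for _ in range(20000):
        lap = (np.roll(S, -1) - 2 * S + np.roll(S, 1)) / dx**2
        uptake = mu_max * X * S / (Ks + S)
        S[1:-1] += dt * (D * lap[1:-1] - uptake[1:-1] / Y)
        X += dt * mu_max * X * S / (Ks + S)     # growth follows the local nutrient level
        S[0] = 1.0                               # nutrient-fed boundary
        S[-1] = S[-2]                            # no-flux boundary on the far side

The resulting biomass profile concentrates near the fed boundary, which is the qualitative behavior (high bacterial density in the first millimeter) reported in the abstract.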
Computational modeling of mediator oxidation by oxygen in an amperometric glucose biosensor.
Simelevičius, Dainius; Petrauskas, Karolis; Baronas, Romas; Razumienė, Julija
2014-02-07
In this paper, an amperometric glucose biosensor is modeled numerically. The model is based on non-stationary reaction-diffusion type equations. The model consists of four layers. An enzyme layer lies directly on a working electrode surface. The enzyme layer is attached to an electrode by a polyvinyl alcohol (PVA) coated terylene membrane. This membrane is modeled as a PVA layer and a terylene layer, which have different diffusivities. The fourth layer of the model is the diffusion layer, which is modeled using the Nernst approach. The system of partial differential equations is solved numerically using the finite difference technique. The operation of the biosensor was analyzed computationally with special emphasis on the biosensor response sensitivity to oxygen when the experiment was carried out in aerobic conditions. Particularly, numerical experiments show that the overall biosensor response sensitivity to oxygen is insignificant. The simulation results qualitatively explain and confirm the experimentally observed biosensor behavior.
Computational Modeling of Mediator Oxidation by Oxygen in an Amperometric Glucose Biosensor
Šimelevičius, Dainius; Petrauskas, Karolis; Baronas, Romas; Julija, Razumienė
2014-01-01
In this paper, an amperometric glucose biosensor is modeled numerically. The model is based on non-stationary reaction-diffusion type equations. The model consists of four layers. An enzyme layer lies directly on a working electrode surface. The enzyme layer is attached to an electrode by a polyvinyl alcohol (PVA) coated terylene membrane. This membrane is modeled as a PVA layer and a terylene layer, which have different diffusivities. The fourth layer of the model is the diffusion layer, which is modeled using the Nernst approach. The system of partial differential equations is solved numerically using the finite difference technique. The operation of the biosensor was analyzed computationally with special emphasis on the biosensor response sensitivity to oxygen when the experiment was carried out in aerobic conditions. Particularly, numerical experiments show that the overall biosensor response sensitivity to oxygen is insignificant. The simulation results qualitatively explain and confirm the experimentally observed biosensor behavior.
NASA Astrophysics Data System (ADS)
WANG, J.; Kim, J.
2014-12-01
In this study, the sensitivity of pollutant dispersion to the turbulent Schmidt number (Sct) was investigated in a street canyon using a computational fluid dynamics (CFD) model. For this, numerical simulations with systematically varied Sct were performed and the CFD model results were validated against wind-tunnel measurement data. The results showed that the root mean square error (RMSE) was quite dependent on Sct, and the dispersion patterns of a non-reactive scalar pollutant differed considerably among the simulations with different Sct. The RMSE was lowest for Sct = 0.35, and the simulated dispersion pattern was most similar to the wind-tunnel data in that case. Additional simulations using a spatially weighted Sct were also performed in order to best reproduce the wind-tunnel data. The detailed method and procedure used to find the best reproduction will be presented.
Making and Testing Hybrid Gravitational Waves from Colliding Black Holes and Neutron Stars
NASA Astrophysics Data System (ADS)
Garcia, Alyssa; Lovelace, Geoffrey; SXS Collaboration
2016-03-01
The Laser Interferometer Gravitational-wave Observatory (LIGO) is a detector that is currently working to observe gravitational waves (GW) from astronomical sources, such as colliding black holes and neutron stars, which are among LIGO's most promising sources. Observing as many waves as possible requires accurate predictions of what the waves look like, which are only possible with numerical simulations. In this poster, I will present results from new simulations of colliding black holes made using the Spectral Einstein Code (SpEC). In particular, I will present results for extending new and existing waveforms and using an open-source library. To construct a waveform that spans the frequency range where LIGO is most sensitive, we combine inexpensive, post-Newtonian approximate waveforms (valid far from merger) and numerical relativity waveforms (valid near the time of merger, when all approximations fail), making a hybrid GW. This work is one part of a new prototype framework for Numerical INJection Analysis with Matter (Matter NINJA). The complete Matter NINJA prototype will test GW search pipelines' abilities to find hybrid waveforms, from simulations containing matter (such as black hole-neutron star binaries), hidden in simulated detector noise.
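A toy illustration of the blending step in hybrid-waveform construction (not the SpEC/NINJA implementation): the post-Newtonian and numerical-relativity strains are smoothly cross-faded over a matching window after being aligned in time and phase.

    import numpy as np

    def hybridize(h_pn, h_nr, t, t1, t2):
        """Cross-fade an aligned PN strain into an NR strain over the window [t1, t2].

        h_pn, h_nr : complex strain arrays on the common time grid t,
                     already aligned in time and phase over the window.
        """
        w = np.clip((t - t1) / (t2 - t1), 0.0, 1.0)      # linear blending weight
        w = 0.5 * (1 - np.cos(np.pi * w))                # smooth (Hann-like) taper
        return (1 - w) * h_pn + w * h_nr                 # PN before t1, NR after t2

    # Placeholder example on a synthetic grid.
    t = np.linspace(-1000.0, 100.0, 4096)
    h_pn = np.exp(1j * 0.02 * t)          # stand-ins for aligned PN and NR waveforms
    h_nr = np.exp(1j * 0.02 * t)
    h_hyb = hybridize(h_pn, h_nr, t, t1=-600.0, t2=-400.0)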
Full-degrees-of-freedom frequency based substructuring
NASA Astrophysics Data System (ADS)
Drozg, Armin; Čepon, Gregor; Boltežar, Miha
2018-01-01
Dividing the whole system into multiple subsystems and a separate dynamic analysis is common practice in the field of structural dynamics. The substructuring process improves the computational efficiency and enables an effective realization of the local optimization, modal updating and sensitivity analyses. This paper focuses on frequency-based substructuring methods using experimentally obtained data. An efficient substructuring process has already been demonstrated using numerically obtained frequency-response functions (FRFs). However, the experimental process suffers from several difficulties, among which, many of them are related to the rotational degrees of freedom. Thus, several attempts have been made to measure, expand or combine numerical correction methods in order to obtain a complete response model. The proposed methods have numerous limitations and are not yet generally applicable. Therefore, in this paper an alternative approach based on experimentally obtained data only, is proposed. The force-excited part of the FRF matrix is measured with piezoelectric translational and rotational direct accelerometers. The incomplete moment-excited part of the FRF matrix is expanded, based on the modal model. The proposed procedure is integrated in a Lagrange Multiplier Frequency Based Substructuring method and demonstrated on a simple beam structure, where the connection coordinates are mainly associated with the rotational degrees of freedom.
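For context, the expanded FRF matrix feeds into the coupling step of the Lagrange Multiplier Frequency Based Substructuring method, which in its standard dual form reads

\[
\mathbf{Y}^{\mathrm{coupled}}(\omega) \;=\; \mathbf{Y}(\omega)\;-\;\mathbf{Y}(\omega)\,\mathbf{B}^{\mathrm T}\bigl[\mathbf{B}\,\mathbf{Y}(\omega)\,\mathbf{B}^{\mathrm T}\bigr]^{-1}\mathbf{B}\,\mathbf{Y}(\omega),
\]

where \(\mathbf{Y}(\omega)\) is the block-diagonal admittance (FRF) matrix of the uncoupled substructures and \(\mathbf{B}\) is the signed Boolean matrix enforcing compatibility at the interface degrees of freedom. This is the textbook form of the method, quoted here for orientation rather than taken from the paper.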
Numerical analysis of installation damage of a pre-damaged geogrid with rectangular apertures
NASA Astrophysics Data System (ADS)
Dong, Yan-li; Guo, Hui-juan; Han, Jie; Zhang, Jun
2018-06-01
Geogrids can be damaged during installation or construction if sufficient care is not exercised. In this study, the numerical software FLAC was adopted to investigate the responses of pre-damaged geogrids with rectangular apertures subjected, in air, to a uniaxial tensile load applied at different directions relative to the orientations of the ribs. To simulate the combined loss of rib and junction strength, specimens were pre-damaged by reducing the stiffness of the geogrid ribs by a certain amount. The geogrid ribs were modeled using beam elements joined rigidly at nodes and subjected to tension in one direction. The numerical study demonstrated that the pre-damaged geogrid with rectangular apertures had similar responses when subjected to tension in the different loading directions. The pre-damaged geogrids under 30° tension were the most sensitive to the damage: with an increasing degree of damage, their tensile strength decreased relatively quickly. An increase in the degree of installation damage of the ribs decreased the tensile strength/stiffness of the geogrid with rectangular apertures. A higher reduction factor RFID due to installation damage is therefore suggested when the geogrid is subjected to 30° tension relative to the orientation of the ribs.
Hydrothermal fluid flow and deformation in large calderas: Inferences from numerical simulations
Hurwitz, S.; Christiansen, L.B.; Hsieh, P.A.
2007-01-01
Inflation and deflation of large calderas is traditionally interpreted as being induced by volume change of a discrete source embedded in an elastic or viscoelastic half-space, though it has also been suggested that hydrothermal fluids may play a role. To test the latter hypothesis, we carry out numerical simulations of hydrothermal fluid flow and poroelastic deformation in calderas by coupling two numerical codes: (1) TOUGH2 [Pruess et al., 1999], which simulates flow in porous or fractured media, and (2) BIOT2 [Hsieh, 1996], which simulates fluid flow and deformation in a linearly elastic porous medium. In the simulations, high-temperature water (350 °C) is injected at variable rates into a cylinder (radius 50 km, height 3-5 km). A sensitivity analysis indicates that small differences in the values of permeability and its anisotropy, the depth and rate of hydrothermal injection, and the values of the shear modulus may lead to significant variations in the magnitude, rate, and geometry of ground surface displacement, or uplift. Some of the simulated uplift rates are similar to observed uplift rates in large calderas, suggesting that the injection of aqueous fluids into the shallow crust may explain some of the deformation observed in calderas.
Numerosity Discrimination in Preschool Children
ERIC Educational Resources Information Center
Almeida, Alzira; Arantes, Joana; Machado, Armando
2007-01-01
We used a numerical bisection procedure to examine preschool children's sensitivity to the numerical attributes of stimuli. In Experiment 1 children performed two tasks. In the Cups Task they earned coins for choosing a green cup after two drumbeats and a blue cup after eight drumbeats. In the Gloves Task they earned coins for raising a red glove…
Dual-band plasmonic resonator based on Jerusalem cross-shaped nanoapertures
NASA Astrophysics Data System (ADS)
Cetin, Arif E.; Kaya, Sabri; Mertiri, Alket; Aslan, Ekin; Erramilli, Shyamsunder; Altug, Hatice; Turkmen, Mustafa
2015-06-01
In this paper, we both experimentally and numerically introduce a dual-resonant metamaterial based on subwavelength Jerusalem cross-shaped apertures. We numerically investigate the physical origin of the dual-resonant behavior, originating from the constituent aperture elements, through finite-difference time-domain calculations. Our numerical calculations show that at the dual resonances, the aperture system supports large and easily accessible local electromagnetic fields. In order to experimentally realize the aperture system, we utilize a high-precision and lift-off-free fabrication method based on electron-beam lithography. We also introduce a fine-tuning mechanism for controlling the dual-resonant spectral response through the geometrical device parameters. Finally, we show the aperture system's highly advantageous far- and near-field characteristics through numerical calculations of the refractive index sensitivity. Quantitative analyses of the availability of the local fields supported by the aperture system are employed to explain the grounds behind the sensitivity of each spectral feature within the dual-resonant behavior. Possessing dual resonances with large and accessible electromagnetic fields, Jerusalem cross-shaped apertures can be highly advantageous for a wide range of applications demanding multiple spectral features with strong near-field characteristics.
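For reference, refractive index sensitivity analyses of this kind are commonly quantified by the bulk sensitivity and figure of merit of each resonance,

\[
S \;=\; \frac{\Delta\lambda_{\mathrm{res}}}{\Delta n} \;\;[\mathrm{nm/RIU}], \qquad \mathrm{FOM} \;=\; \frac{S}{\mathrm{FWHM}},
\]

where \(\Delta\lambda_{\mathrm{res}}\) is the shift of the resonance wavelength per unit change of the surrounding refractive index and FWHM is the linewidth of the resonance. These are generic definitions quoted for orientation, not values reported in the abstract.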
NASA Astrophysics Data System (ADS)
Long, Kai; Wang, Xuan; Gu, Xianguang
2017-09-01
The present work introduces a novel concurrent optimization formulation to meet the requirements of lightweight design and various constraints simultaneously. The nodal displacement of the macrostructure and the effective thermal conductivity of the microstructure are regarded as the constraint functions, which means that both the load-carrying capability and the thermal insulation properties are taken into account. The effective properties of the porous material derived from numerical homogenization are used for the macrostructural analysis. Meanwhile, displacement vectors of the macrostructure from the original and adjoint load cases are used for the sensitivity analysis of the microstructure. Design variables in the form of reciprocal functions of the relative densities are introduced and used for linearization of the constraint functions. The objective function, the total mass, is approximated by a second-order Taylor series expansion. The proposed concurrent optimization problem is then solved using a sequential quadratic programming algorithm, by splitting it into a series of sub-problems in the form of quadratic programs. Finally, several numerical examples are presented to validate the effectiveness of the proposed optimization method. The effects of the initial designs, the prescribed limits on nodal displacement, and the effective thermal conductivity on the optimized designs are also investigated. A number of optimized macrostructures and their corresponding microstructures are obtained.
NASA Astrophysics Data System (ADS)
Prévereaud, Y.; Vérant, J.-L.; Balat-Pichelin, M.; Moschetta, J.-M.
2016-05-01
To answer the question of space debris survivability during atmospheric entry, ONERA uses its software MUSIC/FAST. The first part of this paper is therefore dedicated to the presentation of the ONERA tool and its validation by comparison with flight data and CFD computations. However, the influence of oxidation on the thermal degradation process and on material properties under atmospheric entry conditions is still unknown. A second part is then devoted to the presentation of an experimental campaign investigating TA6V oxidation under atmospheric entry conditions, as most of the debris found on the ground is made of this material. Experiments were carried out using the MESOX facility installed at the 6 kW solar furnace of the PROMES-CNRS laboratory. Finally, an application of MUSIC/FAST is proposed for the atmospheric re-entry of a generic TA6V tank. Aiming at degradation assessment, a sensitivity study on the initial conditions is conducted. To complete the computational analysis of the degradation process by melting, a numerical analysis of the influence of oxidation on the thermal wall degradation during the tank's atmospheric re-entry is presented as well.
NASA Astrophysics Data System (ADS)
Premraj, D.; Suresh, K.; Palanivel, J.; Thamilmaran, K.
2017-09-01
A periodically forced series LCR circuit with a Chua's diode as the nonlinear element exhibits slow passage through a Hopf bifurcation. This slow passage leads to a delay in the Hopf bifurcation. The delay in this bifurcation is a well-defined quantity and can be predicted using various numerical analyses. We find that when an additional periodic force is added to the system, the delay in bifurcation becomes chaotic, which leads to unpredictability in the bifurcation delay. Further, we study the transition from periodic delay to chaotic delay in the slow passage effect through strange nonchaotic delay. We also report the occurrence of strange nonchaotic dynamics while varying the parameters of the additional force included in the system. We observe that the system exhibits a hitherto unknown dynamical transition to a strange nonchaotic attractor (SNA). With the help of Lyapunov exponents, we explain the new transition to the strange nonchaotic attractor, and its mechanism is studied by making use of rational approximation theory. The birth of the SNA has also been confirmed numerically using Poincaré maps, the phase sensitivity exponent, the distribution of finite-time Lyapunov exponents and singular continuous spectrum analysis.
Lorentz force electrical impedance tomography using magnetic field measurements.
Zengin, Reyhan; Gençer, Nevzat Güneri
2016-08-21
In this study, a magnetic field measurement technique is investigated to image the electrical conductivity properties of biological tissues using Lorentz forces. This technique is based on electrical current induction using ultrasound together with an applied static magnetic field. The magnetic field intensity generated due to the induced currents is measured using two coil configurations, namely, a rectangular loop coil and a novel xy coil pair. A time-varying voltage is picked up and recorded while the acoustic wave propagates along its path. The forward problem of this imaging modality is defined as the calculation of the pick-up voltages due to a given acoustic excitation and known body properties. Firstly, the feasibility of the proposed technique is investigated analytically. The basic field equations governing the behaviour of time-varying electromagnetic fields are presented. Secondly, the general formulation of the partial differential equations for the scalar and magnetic vector potentials is derived. To investigate the feasibility of this technique, numerical studies are conducted using finite element method based software. To sense the pick-up voltages, a novel coil configuration (xy coil pairs) is proposed. A two-dimensional numerical geometry with a 16-element linear phased array (LPA) ultrasonic transducer (1 MHz) and a conductive body (breast fat) with five tumorous tissues is modeled. The static magnetic field is assumed to be 4 Tesla. To understand the performance of the imaging system, the sensitivity matrix is analyzed. The sensitivity matrix is obtained for two different locations of the LPA transducer with eleven steering angles from -25° to 25° at intervals of 5°. The characteristics of the imaging system are shown with the singular value decomposition (SVD) of the sensitivity matrix. The images are reconstructed with the truncated SVD algorithm. The signal-to-noise ratio in the measurements is assumed to be 80 dB. Simulation studies based on the sensitivity matrix analysis reveal that perturbations of 5 mm × 5 mm size can be detected up to a depth of 3.5 cm.
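A minimal sketch of the truncated-SVD reconstruction step described above, written for a generic linear inverse problem (not the authors' code; the truncation level and data are placeholders):

    import numpy as np

    def tsvd_reconstruct(S_matrix, v, k):
        """Reconstruct a conductivity perturbation image from pick-up voltages.

        S_matrix : sensitivity matrix (n_measurements x n_pixels)
        v        : measured pick-up voltages (n_measurements,)
        k        : number of singular values retained (regularization level)
        """
        U, s, Vt = np.linalg.svd(S_matrix, full_matrices=False)
        s_inv = np.zeros_like(s)
        s_inv[:k] = 1.0 / s[:k]                  # discard small, noise-amplifying singular values
        return Vt.T @ (s_inv * (U.T @ v))        # truncated pseudo-inverse applied to the data

    # Placeholder example with a random operator and noisy synthetic data.
    rng = np.random.default_rng(0)
    S_matrix = rng.standard_normal((200, 400))
    x_true = np.zeros(400); x_true[150:160] = 1.0
    v = S_matrix @ x_true + 1e-4 * rng.standard_normal(200)
    x_rec = tsvd_reconstruct(S_matrix, v, k=100)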
Lorentz force electrical impedance tomography using magnetic field measurements
NASA Astrophysics Data System (ADS)
Zengin, Reyhan; Güneri Gençer, Nevzat
2016-08-01
In this study, a magnetic field measurement technique is investigated to image the electrical conductivity properties of biological tissues using Lorentz forces. This technique is based on electrical current induction using ultrasound together with an applied static magnetic field. The magnetic field intensity generated due to the induced currents is measured using two coil configurations, namely, a rectangular loop coil and a novel xy coil pair. A time-varying voltage is picked up and recorded while the acoustic wave propagates along its path. The forward problem of this imaging modality is defined as the calculation of the pick-up voltages due to a given acoustic excitation and known body properties. Firstly, the feasibility of the proposed technique is investigated analytically. The basic field equations governing the behaviour of time-varying electromagnetic fields are presented. Secondly, the general formulation of the partial differential equations for the scalar and magnetic vector potentials is derived. To investigate the feasibility of this technique, numerical studies are conducted using finite element method based software. To sense the pick-up voltages, a novel coil configuration (xy coil pairs) is proposed. A two-dimensional numerical geometry with a 16-element linear phased array (LPA) ultrasonic transducer (1 MHz) and a conductive body (breast fat) with five tumorous tissues is modeled. The static magnetic field is assumed to be 4 Tesla. To understand the performance of the imaging system, the sensitivity matrix is analyzed. The sensitivity matrix is obtained for two different locations of the LPA transducer with eleven steering angles from -25° to 25° at intervals of 5°. The characteristics of the imaging system are shown with the singular value decomposition (SVD) of the sensitivity matrix. The images are reconstructed with the truncated SVD algorithm. The signal-to-noise ratio in the measurements is assumed to be 80 dB. Simulation studies based on the sensitivity matrix analysis reveal that perturbations of 5 mm × 5 mm size can be detected up to a depth of 3.5 cm.
Airplane numerical simulation for the rapid prototyping process
NASA Astrophysics Data System (ADS)
Roysdon, Paul F.
Airplane Numerical Simulation for the Rapid Prototyping Process is a comprehensive research investigation into the most up-to-date methods for airplane development and design. Uses of modern engineering software tools, like MatLab and Excel, are presented, with examples of batch and optimization algorithms that combine the computing power of MatLab with robust aerodynamic tools like XFOIL and AVL. The resulting data are demonstrated in the development and use of a full nonlinear six-degrees-of-freedom simulator. The applications of this numerical toolbox vary from unmanned aerial vehicles to first-order analysis of manned aircraft. A blended-wing-body airplane is used for the analysis to demonstrate the flexibility of the code, from classic wing-and-tail configurations to less common configurations like the blended wing body. This configuration has been shown to have superior aerodynamic performance relative to its classic wing-and-tube-fuselage counterparts, to have reduced sensitivity to aerodynamic flutter, and to offer potential for increased engine noise abatement. Of course, without a classic tail elevator to damp the nose-up pitching moment, and a vertical tail rudder to damp the yaw and possible rolling dynamics, the challenges in lateral roll and yaw stability, as well as in pitching moment, are not insignificant. This thesis applies the tools necessary to perform airplane development and optimization on a rapid basis, demonstrating the strength of the toolbox through examples and comparison of the results to similar airplane performance characteristics published in the literature.
NASA Astrophysics Data System (ADS)
Kang, Dong-Keun; Kim, Chang-Wan; Yang, Hyun-Ik
2017-01-01
In the present study we carried out a dynamic analysis of a CNT-based mass sensor by using a finite element method (FEM)-based nonlinear analysis model of the CNT resonator to elucidate the combined effects of thermal effects and nonlinear oscillation behavior upon the overall mass detection sensitivity. Mass sensors using carbon nanotube (CNT) resonators provide very high sensing performance. Because CNT-based resonators can have high aspect ratios, they can easily exhibit nonlinear oscillation behavior due to large displacements. Also, CNT-based devices may experience high temperatures during their manufacture and operation. These geometrical nonlinearities and temperature changes affect the sensing performance of CNT-based mass sensors. However, it is very hard to find previous literature addressing the detection sensitivity of CNT-based mass sensors including considerations of both these nonlinear behaviors and thermal effects. We modeled the nonlinear equation of motion by using the von Karman nonlinear strain-displacement relation, taking into account the additional axial force associated with the thermal effect. The FEM was employed to solve the nonlinear equation of motion because it can effortlessly handle the more complex geometries and boundary conditions. A doubly clamped CNT resonator actuated by distributed electrostatic force was the configuration subjected to the numerical experiments. Thermal effects upon the fundamental resonance behavior and the shift of resonance frequency due to attached mass, i.e., the mass detection sensitivity, were examined in environments of both high and low (or room) temperature. The fundamental resonance frequency increased with decreasing temperature in the high temperature environment, and increased with increasing temperature in the low temperature environment. The magnitude of the shift in resonance frequency caused by an attached mass represents the sensing performance of a mass sensor, i.e., its mass detection sensitivity, and it can be seen that this shift is affected by the temperature change and the amount of electrostatic force. The thermal effects on the mass detection sensitivity are intensified in the linear oscillation regime and increase with increasing CNT length; this intensification can either improve or worsen the detection sensitivity.
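For orientation, the mass detection sensitivity discussed above is usually expressed through the first-order shift of the resonance frequency caused by a small added mass; a generic linear-resonator form (not the specific nonlinear, thermally coupled FEM result of this study) is

\[
\Delta f \;\approx\; -\,\frac{f_0}{2}\,\frac{\Delta m}{m_{\mathrm{eff}}},
\]

where \(f_0\) is the unloaded resonance frequency, \(m_{\mathrm{eff}}\) the effective modal mass of the resonator, and \(\Delta m\) the attached mass, assumed small and located at the point of maximum modal displacement. The study above quantifies how geometric nonlinearity and temperature change modify this shift.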
NASA Astrophysics Data System (ADS)
Stoll, Heather
2013-04-01
A computer modeling exercise was created to allow students to investigate the consequences of fossil fuel burning and land use change on the amount of carbon dioxide in the atmosphere. Students work with a simple numerical model of the carbon cycle rendered in Excel and conduct a set of sensitivity tests with different amounts and rates of carbon addition, then graph and discuss their results. In the recommended approach, the model is provided to students without the biosphere, and in class the formulas needed to integrate this module are typed into Excel simultaneously by instructor and students, helping students understand how the larger model is set up. In terms of content, students learn to recognize the redistribution of fossil fuel carbon between the ocean and atmosphere, and to distinguish the consequences of rapid versus slow rates of addition of fossil fuel CO2 and the reasons for this difference. Students become familiar with the use of formulas in Excel, work with a large (300 rows, 20 columns) worksheet, and gain competence in the graphical representation of multiple scenarios. Students learn to appreciate the power and limitations of numerical models of complex cycles, the concept of inverse and forward models, and sensitivity tests. Finally, students learn that a reasonable hypothesis may be "reasonable" but still not quantitatively sufficient - in this case, that the "Industrial Revolution" was not the source of increasing atmospheric CO2 from 1750-1900. The described activity is available to educators on the Teach the Earth portal of the Science Education Research Center (SERC): http://serc.carleton.edu/quantskills/activities/68751.html.
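The Excel exercise itself is not reproduced here, but a comparable two-box atmosphere-ocean sketch in Python conveys the kind of calculation the students perform; the reservoir sizes, exchange rate, and emissions scenario below are illustrative placeholders, not the values used in the classroom model.

    # Two-box (atmosphere-ocean) carbon sketch: a rough analogue of the classroom exercise.
    atm, ocean = 600.0, 38000.0      # reservoir sizes [GtC], illustrative values
    k_exchange = 0.1                 # fraction of the atmosphere-ocean imbalance equilibrated per year
    eq_ratio = atm / ocean           # take the preindustrial partitioning as the equilibrium target

    years = range(1750, 1901)
    emissions = {yr: (0.1 if yr >= 1850 else 0.02) for yr in years}   # GtC/yr, placeholder scenario

    history = []
    for yr in years:
        atm += emissions[yr]                          # add fossil-fuel / land-use carbon
        flux = k_exchange * (atm - eq_ratio * ocean)  # net uptake toward the equilibrium partitioning
        atm -= flux
        ocean += flux
        history.append((yr, atm * 0.47))              # ~0.47 ppm of CO2 per GtC in the atmosphere

    print(history[-1])   # atmospheric CO2 in 1900 for this placeholder scenario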
Four-Dimensional Data Assimilation Using the Adjoint Method
NASA Astrophysics Data System (ADS)
Bao, Jian-Wen
The calculus of variations is used to confirm that variational four-dimensional data assimilation (FDDA) using the adjoint method can be implemented when the numerical model equations have a finite number of first-order discontinuous points. These points represent the on/off switches associated with physical processes, for which the Jacobian matrix of the model equation does not exist. Numerical evidence suggests that, in some situations when the adjoint method is used for FDDA, the temperature field retrieved using horizontal wind data is numerically not unique. A physical interpretation of this type of non-uniqueness of the retrieval is proposed in terms of energetics. The adjoint equations of a numerical model can also be used for model-parameter estimation. A general computational procedure is developed to determine the size and distribution of any internal model parameter. The procedure is then applied to a one-dimensional shallow-fluid model in the context of analysis-nudging FDDA: the weighting coefficients used by the Newtonian nudging technique are determined. The sensitivity of these nudging coefficients to the optimal objectives and constraints is investigated. Experiments of FDDA using the adjoint method are conducted using the dry version of the hydrostatic Penn State/NCAR mesoscale model (MM4) and its adjoint. The minimization procedure converges and the initialization experiment is successful. Temperature-retrieval experiments involving an assimilation of the horizontal wind are also carried out using the adjoint of MM4.
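For reference, variational FDDA with the adjoint method minimizes a cost function that, in its generic form, can be written as

\[
J(\mathbf{x}_0) \;=\; \tfrac{1}{2}\,(\mathbf{x}_0-\mathbf{x}_b)^{\mathrm T}\mathbf{B}^{-1}(\mathbf{x}_0-\mathbf{x}_b)
\;+\; \tfrac{1}{2}\sum_{i}\bigl[H_i(\mathbf{x}_i)-\mathbf{y}_i\bigr]^{\mathrm T}\mathbf{R}_i^{-1}\bigl[H_i(\mathbf{x}_i)-\mathbf{y}_i\bigr],
\qquad \mathbf{x}_i = M_{0\rightarrow i}(\mathbf{x}_0),
\]

where \(\mathbf{x}_b\) is a background state, \(\mathbf{y}_i\) the observations at time \(i\), \(H_i\) the observation operators, \(M\) the forecast model, and \(\mathbf{B}\), \(\mathbf{R}_i\) the background and observation error covariances; integrating the adjoint model backward in time supplies the gradient \(\nabla_{\mathbf{x}_0} J\) required by the minimization. This is the standard textbook form rather than the exact functional used with MM4 in the work above.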
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.; Bettner, James L.
1990-01-01
The time-dependent three-dimensional Euler equations of gas dynamics were solved numerically to study the steady compressible transonic flow about ducted propfan propulsion systems. Aerodynamic calculations were based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. An implicit residual smoothing operator was used to aid convergence. Two calculation grids were employed in this study. The first grid utilized an H-type mesh network with a branch cut opening to represent the axisymmetric cowl. The second grid utilized a multiple-block mesh system with a C-type grid about the cowl. The individual blocks were numerically coupled in the Euler solver. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were initially performed for unducted propfans to verify the accuracy of the three-dimensional Euler formulation. The Euler analyses were then applied for the calculation of ducted propfan flows, and predicted results were compared with experimental data for two cases. The three-dimensional Euler analyses displayed exceptional accuracy, although certain parameters were observed to be very sensitive to geometric deflections. Both solution schemes were found to be very robust and demonstrated nearly equal efficiency and accuracy, although it was observed that the multi-block C-grid formulation provided somewhat better resolution of the cowl leading edge region.
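The four-stage Runge-Kutta time marching referred to above is commonly written, for the semi-discrete system dU/dt = -R(U) obtained after the finite-volume spatial discretization, as

\[
\mathbf{U}^{(0)} = \mathbf{U}^{n}, \qquad
\mathbf{U}^{(k)} \;=\; \mathbf{U}^{n} \;-\; \alpha_k\,\Delta t\,\mathbf{R}\!\left(\mathbf{U}^{(k-1)}\right), \quad k=1,\dots,4, \qquad
\mathbf{U}^{n+1} = \mathbf{U}^{(4)},
\]

with a typical coefficient set \(\alpha = (1/4,\, 1/3,\, 1/2,\, 1)\). The precise coefficients and the treatment of the dissipation and residual-smoothing operators in the cited work may differ; this is quoted only to fix the generic form of the scheme.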
NASA Astrophysics Data System (ADS)
Ren, Baiyang; Lissenden, Cliff J.
2018-04-01
Guided waves have been extensively studied and widely used for structural health monitoring because of their large volumetric coverage and good sensitivity to defects. Effectively and preferentially exciting a desired wave mode having good sensitivity to a certain defect is of great practical importance. Piezoelectric discs and plates are the most common types of surface-mounted transducers for guided wave excitation and reception. Their geometry strongly influences the proportioning between excited modes as well as the total power of the excited modes. It is highly desirable to predominantly excite the selected mode while the total transduction power is maximized. In this work, a fully coupled multi-physics finite element analysis, which incorporates the driving circuit, the piezoelectric element and the wave guide, is combined with the normal mode expansion method to study both the mode tuning and total wave power. The excitation of circular crested waves in an aluminum plate with circular piezoelectric discs is numerically studied for different disc and adhesive thicknesses. Additionally, the excitation of plane waves in an aluminum plate, using a stripe piezoelectric element is studied both numerically and experimentally. It is difficult to achieve predominant single mode excitation as well as maximum power transmission simultaneously, especially for higher order modes. However, guidelines for designing the geometry of piezoelectric elements for optimal mode excitation are recommended.
Tricomi, Leonardo; Melchiori, Tommaso; Chiaramonti, David; Boulet, Micaël; Lavoie, Jean Michel
2017-01-01
Based upon two-fluid model (TFM) theory, a CFD model was implemented to investigate a cold multiphase bubbling fluidized bed reactor. The key variable used to characterize the fluid dynamics of the experimental system, and to compare it with model predictions, was the time series of the pressure drop induced by the bubble motion across the bed. This time signal was then processed to obtain the power spectral density (PSD) distribution of the pressure fluctuations. As an important aspect of this work, the effect of the sampling time scale on the empirical power spectral density (PSD) was investigated. A time scale of 40 s was found to be a good compromise, ensuring both simulation performance and consistency of the numerical validation. The CFD model was first verified numerically by a mesh refinement process, after which it was used to investigate the sensitivity to the minimum fluidization velocity (as a calibration point for the drag law), the restitution coefficient, and the solid pressure term, while assessing its accuracy in matching the empirical PSD. The 2D model provided a fair match with the empirical time-averaged pressure drop, the related fluctuation amplitude, and the signal's energy computed as the integral of the PSD. A 3D version of the TFM was also used, and it improved the match with the empirical PSD in the very first part of the frequency spectrum.
Tricomi, Leonardo; Melchiori, Tommaso; Chiaramonti, David; Boulet, Micaël; Lavoie, Jean Michel
2017-01-01
Based upon two-fluid model (TFM) theory, a CFD model was implemented to investigate a cold multiphase bubbling fluidized bed reactor. The key variable used to characterize the fluid dynamics of the experimental system, and to compare it with model predictions, was the time series of the pressure drop induced by the bubble motion across the bed. This time signal was then processed to obtain the power spectral density (PSD) distribution of the pressure fluctuations. As an important aspect of this work, the effect of the sampling time scale on the empirical power spectral density (PSD) was investigated. A time scale of 40 s was found to be a good compromise, ensuring both simulation performance and consistency of the numerical validation. The CFD model was first verified numerically by a mesh refinement process, after which it was used to investigate the sensitivity to the minimum fluidization velocity (as a calibration point for the drag law), the restitution coefficient, and the solid pressure term, while assessing its accuracy in matching the empirical PSD. The 2D model provided a fair match with the empirical time-averaged pressure drop, the related fluctuation amplitude, and the signal's energy computed as the integral of the PSD. A 3D version of the TFM was also used, and it improved the match with the empirical PSD in the very first part of the frequency spectrum.
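A minimal sketch of the pressure-signal processing described above (a Welch estimate of the power spectral density and its integral as a measure of fluctuation energy); the sampling rate, window length, and synthetic signal are placeholders, not the experimental settings.

    import numpy as np
    from scipy.signal import welch

    def pressure_psd(p, fs, nperseg=1024):
        """PSD of a bed pressure-drop signal and its integral (total fluctuation energy)."""
        p = p - np.mean(p)                    # work with fluctuations about the mean drop
        f, Pxx = welch(p, fs=fs, nperseg=nperseg)
        energy = np.trapz(Pxx, f)             # integral of the PSD ~ variance of the signal
        return f, Pxx, energy

    # Placeholder example: 40 s of synthetic pressure fluctuations sampled at 200 Hz.
    fs, T = 200.0, 40.0
    t = np.arange(0, T, 1 / fs)
    p = 50 * np.sin(2 * np.pi * 3.0 * t) + 5 * np.random.default_rng(0).standard_normal(t.size)
    f, Pxx, energy = pressure_psd(p, fs)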
A new methodology to determine kinetic parameters for one- and two-step chemical models
NASA Technical Reports Server (NTRS)
Mantel, T.; Egolfopoulos, F. N.; Bowman, C. T.
1996-01-01
In this paper, a new methodology is presented for determining kinetic parameters for the simple chemical models and simple transport properties classically used in DNS of premixed combustion. First, a one-dimensional code is used to compute steady, unstrained laminar methane-air flames in order to verify intrinsic features of laminar flames such as the burning velocity and the temperature and concentration profiles. Second, the flame response to steady and unsteady strain in the opposed-jet configuration is numerically investigated. It appears that, for a well-determined set of parameters, one- and two-step mechanisms reproduce the extinction limit of a laminar flame submitted to a steady strain. Computations with the GRI-Mech mechanism (177 reactions, 39 species) and multicomponent transport properties are used to validate these simplified models. A sensitivity analysis of the preferential diffusion of heat and reactants when the Lewis number is close to unity indicates that the response of the flame to an oscillating strain is very sensitive to this number. As an application of this methodology, the interaction between a two-dimensional vortex pair and a premixed laminar flame is simulated by Direct Numerical Simulation (DNS) using the one- and two-step mechanisms. Comparison with the experimental results of Samaniego et al. (1994) shows a significant improvement in the description of the interaction when the two-step model is used.
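For reference, one-step models of this kind typically use a global Arrhenius rate of the generic form

\[
\dot{\omega} \;=\; A\,[\mathrm{CH_4}]^{a}\,[\mathrm{O_2}]^{b}\,\exp\!\left(-\frac{E_a}{R\,T}\right),
\]

where the pre-exponential factor \(A\), the reaction orders \(a\) and \(b\), and the activation energy \(E_a\) are precisely the kind of kinetic parameters the proposed methodology seeks to determine; two-step variants typically add a CO oxidation/equilibration step. This is the generic form, not the specific parameter set reported in the paper.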
Data assimilation of GNSS zenith total delays from a Nordic processing centre
NASA Astrophysics Data System (ADS)
Lindskog, Magnus; Ridal, Martin; Thorsteinsson, Sigurdur; Ning, Tong
2017-11-01
Atmospheric moisture-related information estimated from Global Navigation Satellite System (GNSS) ground-based receiver stations by the Nordic GNSS Analysis Centre (NGAA) have been used within a state-of-the-art kilometre-scale numerical weather prediction system. Different processing techniques have been implemented to derive the moisture-related GNSS information in the form of zenith total delays (ZTDs) and these are described and compared. In addition full-scale data assimilation and modelling experiments have been carried out to investigate the impact of utilizing moisture-related GNSS data from the NGAA processing centre on a numerical weather prediction (NWP) model initial state and on the ensuing forecast quality. The sensitivity of results to aspects of the data processing, station density, bias-correction and data assimilation have been investigated. Results show benefits to forecast quality when using GNSS ZTD as an additional observation type. The results also show a sensitivity to thinning distance applied for GNSS ZTD observations but not to modifications to the number of predictors used in the variational bias correction applied. In addition, it is demonstrated that the assimilation of GNSS ZTD can benefit from more general data assimilation enhancements and that there is an interaction of GNSS ZTD with other types of observations used in the data assimilation. Future plans include further investigation of optimal thinning distances and application of more advanced data assimilation techniques.
NASA Astrophysics Data System (ADS)
Becker, Roland; Vexler, Boris
2005-06-01
We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach at hand of a parameter calibration problem for a model flow problem.
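The goal-oriented error estimate underlying this strategy can be summarized, in a simplified dual-weighted-residual form, as

\[
I(u) - I(u_h) \;\approx\; \eta \;=\; \rho(u_h)(z - z_h),
\]

where \(\rho(u_h)(\cdot)\) denotes the residual of the discrete solution \(u_h\) tested against the error in the adjoint (dual) solution \(z\) associated with the interest functional \(I\); localizing \(\eta\) over mesh cells drives the adaptive refinement, and the same auxiliary adjoint solution enters the relative condition numbers mentioned above. This is the generic form of such estimators rather than the precise estimator derived in the paper.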
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, WanYin; Zhang, Jie; Florita, Anthony
2015-12-08
Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate the theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts of all parameters with the smallest NRMSE. The NRMSE of the solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
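For reference, the normalized root mean squared error used as a metric here is commonly defined as

\[
\mathrm{NRMSE} \;=\; \frac{1}{P_{\mathrm{norm}}}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(P^{\mathrm{fc}}_{i}-P^{\mathrm{ref}}_{i}\right)^{2}},
\]

where \(P^{\mathrm{fc}}\) and \(P^{\mathrm{ref}}\) are the forecast and reference (observed or theoretical) power, and the normalization \(P_{\mathrm{norm}}\) is typically the plant capacity or the mean reference power; the specific normalization used in the report is not stated in the abstract.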
Rotem, Avital; Henik, Avishai
2015-01-01
The current study examined the development of two effects that have been found in single-digit multiplication errors: relatedness and distance. Typically achieving (TA) second, fourth, and sixth graders and adults, and sixth and eighth graders with a mathematics learning disability (MLD) performed a verification task. Relatedness was defined by a slow and inaccurate response to false results that were related to one of the operands via a shared multiplication row (e.g., 3 × 4 = 16). Distance was defined by a slow and inaccurate response to false results that were close in magnitude to the true result (e.g., 6 × 8 = 49). The presence of these effects indicates that participants are sensitive to numerical features of products. TA children demonstrated sensitivity to relatedness and distance from second grade onward. With age their sensitivity expanded from easy problems (e.g., 2 × 3) to difficult ones (e.g., 8 × 9). Children with MLD were sensitive to relatedness on easy problems. Their sensitivity to distance differed from the pattern seen in sixth grade and was partial in eighth grade. The presence of numerical sensitivity in children with MLD calls for instructional methods that would further develop their number sense.
Ultra-sensitive magnetic microscopy with an optically pumped magnetometer
Kim, Young Jin; Savukov, Igor Mykhaylovich
2016-04-22
Optically pumped magnetometers (OPMs) based on lasers and alkali-metal vapor cells are currently the most sensitive non-cryogenic magnetic field sensors. Many applications in neuroscience and other fields require high-resolution, high-sensitivity magnetic microscopic measurements. In order to meet this demand we combined a cm-size spin-exchange relaxation-free (SERF) OPM and flux guides (FGs) to realize an ultra-sensitive FG-OPM magnetic microscope. The FGs serve to transmit the target magnetic flux to the OPM thus improving both the resolution and sensitivity to small magnetic objects. We investigated the performance of the FG-OPM device using experimental and numerical methods, and demonstrated that an optimized device can achieve a unique combination of high resolution (80 μm) and high sensitivity (8.1 pT/). Additionally, we also performed numerical calculations of the magnetic field distribution in the FGs to estimate the magnetic noise originating from the domain fluctuations in the material of the FGs. We anticipate many applications of the FG-OPM device such as the detection of micro-biological magnetic fields; the detection of magnetic nano-particles; and non-destructive testing. From our theoretical estimate, an FG-OPM could detect the magnetic field of a single neuron, which would be an important milestone in neuroscience.
Ultra-sensitive Magnetic Microscopy with an Optically Pumped Magnetometer
NASA Astrophysics Data System (ADS)
Kim, Young Jin; Savukov, Igor
2016-04-01
Optically pumped magnetometers (OPMs) based on lasers and alkali-metal vapor cells are currently the most sensitive non-cryogenic magnetic field sensors. Many applications in neuroscience and other fields require high-resolution, high-sensitivity magnetic microscopic measurements. In order to meet this demand we combined a cm-size spin-exchange relaxation-free (SERF) OPM and flux guides (FGs) to realize an ultra-sensitive FG-OPM magnetic microscope. The FGs serve to transmit the target magnetic flux to the OPM thus improving both the resolution and sensitivity to small magnetic objects. We investigated the performance of the FG-OPM device using experimental and numerical methods, and demonstrated that an optimized device can achieve a unique combination of high resolution (80 μm) and high sensitivity (8.1 pT/). In addition, we also performed numerical calculations of the magnetic field distribution in the FGs to estimate the magnetic noise originating from the domain fluctuations in the material of the FGs. We anticipate many applications of the FG-OPM device such as the detection of micro-biological magnetic fields; the detection of magnetic nano-particles; and non-destructive testing. From our theoretical estimate, an FG-OPM could detect the magnetic field of a single neuron, which would be an important milestone in neuroscience.
Ultra-sensitive magnetic microscopy with an optically pumped magnetometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Young Jin; Savukov, Igor Mykhaylovich
Optically pumped magnetometers (OPMs) based on lasers and alkali-metal vapor cells are currently the most sensitive non-cryogenic magnetic field sensors. Many applications in neuroscience and other fields require high-resolution, high-sensitivity magnetic microscopic measurements. In order to meet this demand we combined a cm-size spin-exchange relaxation-free (SERF) OPM and flux guides (FGs) to realize an ultra-sensitive FG-OPM magnetic microscope. The FGs serve to transmit the target magnetic flux to the OPM thus improving both the resolution and sensitivity to small magnetic objects. We investigated the performance of the FG-OPM device using experimental and numerical methods, and demonstrated that an optimized device can achieve a unique combination of high resolution (80 μm) and high sensitivity (8.1 pT/). Additionally, we also performed numerical calculations of the magnetic field distribution in the FGs to estimate the magnetic noise originating from the domain fluctuations in the material of the FGs. We anticipate many applications of the FG-OPM device such as the detection of micro-biological magnetic fields; the detection of magnetic nano-particles; and non-destructive testing. From our theoretical estimate, an FG-OPM could detect the magnetic field of a single neuron, which would be an important milestone in neuroscience.
Spectro-spatial analysis of wave packet propagation in nonlinear acoustic metamaterials
NASA Astrophysics Data System (ADS)
Zhou, W. J.; Li, X. P.; Wang, Y. S.; Chen, W. Q.; Huang, G. L.
2018-01-01
The objective of this work is to analyze wave packet propagation in weakly nonlinear acoustic metamaterials and to reveal the interior nonlinear wave mechanism through spectro-spatial analysis. The spectro-spatial analysis is based on full-scale transient analysis of the finite system, by which dispersion curves are generated from the transmitted waves and also verified by the perturbation method (the L-P method). We found that the spectro-spatial analysis can provide detailed information about the solitary wave in the short-wavelength region which cannot be captured by the L-P method. It is also found that the optical wave modes in the nonlinear metamaterial are sensitive to the parameters of the nonlinear constitutive relation. Specifically, a significant frequency shift phenomenon is found in the middle-wavelength region of the optical wave branch, which makes this frequency region behave like a band gap for transient waves. This special frequency shift is then used to design a direction-biased waveguide device, and its efficiency is shown by numerical simulations.
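A minimal sketch of the spectro-spatial step, here taken to mean a two-dimensional Fourier transform of the space-time wavefield whose ridges in wavenumber-frequency space trace the numerically generated dispersion curves; the array sizes, sampling, and synthetic field are placeholders, not the authors' data.

    import numpy as np

    def spectro_spatial(u, dx, dt):
        """2-D FFT of a space-time wavefield u[n_x, n_t].

        Returns the wavenumber and frequency axes plus the magnitude spectrum,
        whose ridges trace the (numerical) dispersion curves.
        """
        U = np.fft.fftshift(np.fft.fft2(u))
        k = np.fft.fftshift(np.fft.fftfreq(u.shape[0], d=dx)) * 2 * np.pi   # rad/m
        f = np.fft.fftshift(np.fft.fftfreq(u.shape[1], d=dt))               # Hz
        return k, f, np.abs(U)

    # Placeholder example: a single propagating harmonic sampled on a 1-D chain.
    nx, nt, dx, dt = 256, 2048, 1e-2, 1e-5
    x = np.arange(nx)[:, None] * dx
    t = np.arange(nt)[None, :] * dt
    u = np.cos(2 * np.pi * (1.0e3 * t - 20.0 * x))     # 1 kHz wave, wavenumber 40*pi rad/m
    k, f, A = spectro_spatial(u, dx, dt)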
Wong, Vanessa K; Baker, Stephen; Pickard, Derek J; Parkhill, Julian; Page, Andrew J; Feasey, Nicholas A; Kingsley, Robert A; Thomson, Nicholas R; Keane, Jacqueline A; Weill, François-Xavier; Edwards, David J; Hawkey, Jane; Harris, Simon R; Mather, Alison E; Cain, Amy K; Hadfield, James; Hart, Peter J; Thieu, Nga Tran Vu; Klemm, Elizabeth J; Glinos, Dafni A; Breiman, Robert F; Watson, Conall H; Kariuki, Samuel; Gordon, Melita A; Heyderman, Robert S; Okoro, Chinyere; Jacobs, Jan; Lunguya, Octavie; Edmunds, W John; Msefula, Chisomo; Chabalgoity, Jose A; Kama, Mike; Jenkins, Kylie; Dutta, Shanta; Marks, Florian; Campos, Josefina; Thompson, Corinne; Obaro, Stephen; MacLennan, Calman A; Dolecek, Christiane; Keddy, Karen H; Smith, Anthony M; Parry, Christopher M; Karkey, Abhilasha; Mulholland, E Kim; Campbell, James I; Dongol, Sabina; Basnyat, Buddha; Dufour, Muriel; Bandaranayake, Don; Naseri, Take Toleafoa; Singh, Shalini Pravin; Hatta, Mochammad; Newton, Paul; Onsare, Robert S; Isaia, Lupeoletalalei; Dance, David; Davong, Viengmon; Thwaites, Guy; Wijedoru, Lalith; Crump, John A; De Pinna, Elizabeth; Nair, Satheesh; Nilles, Eric J; Thanh, Duy Pham; Turner, Paul; Soeng, Sona; Valcanis, Mary; Powling, Joan; Dimovski, Karolina; Hogg, Geoff; Farrar, Jeremy; Holt, Kathryn E; Dougan, Gordon
2015-06-01
The emergence of multidrug-resistant (MDR) typhoid is a major global health threat affecting many countries where the disease is endemic. Here whole-genome sequence analysis of 1,832 Salmonella enterica serovar Typhi (S. Typhi) identifies a single dominant MDR lineage, H58, that has emerged and spread throughout Asia and Africa over the last 30 years. Our analysis identifies numerous transmissions of H58, including multiple transfers from Asia to Africa and an ongoing, unrecognized MDR epidemic within Africa itself. Notably, our analysis indicates that H58 lineages are displacing antibiotic-sensitive isolates, transforming the global population structure of this pathogen. H58 isolates can harbor a complex MDR element residing either on transmissible IncHI1 plasmids or within multiple chromosomal integration sites. We also identify new mutations that define the H58 lineage. This phylogeographical analysis provides a framework to facilitate global management of MDR typhoid and is applicable to similar MDR lineages emerging in other bacterial species.
NASA Astrophysics Data System (ADS)
Schwarz, Massimiliano; Cohen, Denis
2017-04-01
The morphology and extent of hydrological pathways, in combination with the spatio-temporal variability of rainfall events and the heterogeneity of the hydro-mechanical properties of soils, have a major impact on the hydrological conditions that locally determine the triggering of shallow landslides. The coupling of these processes at different spatial scales is an enormous challenge for slope stability modeling at the catchment scale. In this work we present a sensitivity analysis of a new dual-porosity hydrological model implemented in the hydro-mechanical model SOSlope for the modeling of shallow landslides on vegetated hillslopes. The proposed model links the calculation of the saturation dynamics of preferential flow paths, based on hydrological and topographical characteristics of the landscape, to the hydro-mechanical behavior of the soil along a potential failure surface as the saturation of the soil matrix changes. Furthermore, the hydro-mechanical changes in soil conditions are linked to the local stress-strain properties of the (rooted) soil that ultimately determine the force redistribution and related deformations at the hillslope scale. The model considers forces to be redistributed through three modes of loading: tension, compression, and shearing. The present analysis shows how the conditions of deformation due to the passive earth pressure mobilized at the toe of the landslide are particularly important in defining the timing and extent of shallow landslides. The model also shows that, in densely rooted hillslopes, lateral force redistribution under tension through the root network may substantially contribute to stabilizing slopes, preventing crack formation and large deformations. The results of the sensitivity analysis are discussed in the context of protection forest management and bioengineering techniques.
Bile acids: analysis in biological fluids and tissues
Griffiths, William J.; Sjövall, Jan
2010-01-01
The formation of bile acids/bile alcohols is of major importance for the maintenance of cholesterol homeostasis. Besides their functions in lipid absorption, bile acids/bile alcohols are regulatory molecules for a number of metabolic processes. Their effects are structure-dependent, and numerous metabolic conversions result in a complex mixture of biologically active and inactive forms. Advanced methods are required to characterize and quantify individual bile acids in these mixtures. A combination of such analyses with analyses of the proteome will be required for a better understanding of mechanisms of action and the nature of endogenous ligands. Mass spectrometry is the basic detection technique for effluents from chromatographic columns. Capillary liquid chromatography-mass spectrometry with electrospray ionization provides the highest sensitivity in metabolome analysis. Classical gas chromatography-mass spectrometry is less sensitive but offers extensive structure-dependent fragmentation, increasing the specificity in analyses of isobaric isomers of unconjugated bile acids. Depending on the nature of the bile acid/bile alcohol mixture and the range of concentrations of its individual components, different sample preparation sequences, from simple extractions to group separations and derivatizations, are applicable. We review the methods currently available for the analysis of bile acids in biological fluids and tissues, with emphasis on the combination of liquid and gas phase chromatography with mass spectrometry. PMID:20008121
Evans, Alistair R.; McHenry, Colin R.
2015-01-01
The reliability of finite element analysis (FEA) in biomechanical investigations depends upon understanding the influence of model assumptions. In producing finite element models, surface mesh resolution is influenced by the resolution of the input geometry, and in turn influences the resolution of the ensuing solid mesh used for numerical analysis. Despite a large number of studies incorporating sensitivity analyses of the effects of solid mesh resolution, there has not yet been any investigation into the effect of surface mesh resolution upon results in a comparative context. Here we use a dataset of crocodile crania to examine the effects of surface resolution on FEA results in a comparative context. Seven high-resolution surface meshes were each down-sampled to varying degrees while keeping the resulting number of solid elements constant. These models were then subjected to bite and shake load cases using finite element analysis. The results show that incremental decreases in surface resolution can result in fluctuations in strain magnitudes, but that it is possible to obtain stable results using lower-resolution surfaces in a comparative FEA study. As surface mesh resolution links the input geometry with the resulting solid mesh, the implication of these results is that low-resolution input geometry and solid meshes may provide valid results in a comparative context. PMID:26056620
Dahl, Jeffrey H; van Breemen, Richard B
2010-09-15
A rapid liquid chromatography-tandem mass spectrometry (LC-MS/MS) assay was developed for the measurement of urinary 8-iso-prostaglandin F(2alpha) (8-iso-PGF(2alpha)), a biomarker of lipid peroxidation. Because urine contains numerous F(2) prostaglandin isomers, each with identical mass and similar mass spectrometric fragmentation patterns, chromatographic separation of 8-iso-PGF(2alpha) from its isomers is necessary for its quantitative analysis using MS/MS. We were able to achieve this separation using an isocratic LC method with a run time of less than 9 min, which is at least threefold faster than previous methods, while maintaining sensitivity, accuracy, precision, and reliability. The limits of detection and quantitation were 53 and 178 pg/ml of urine, respectively. We compared our method with a commercially available affinity purification and enzyme immunoassay kit and found both assays to be in agreement. Despite the high sensitivity of the enzyme immunoassay method, it is more expensive and has a narrower dynamic range than LC-MS/MS. Our method was optimized for rapid measurement of 8-iso-PGF(2alpha) in urine, and it is ideally suited for clinical sample analysis.
Temperature sensitivity of a numerical pollen forecast model
NASA Astrophysics Data System (ADS)
Scheifinger, Helfried; Meran, Ingrid; Szabo, Barbara; Gallaun, Heinz; Natali, Stefano; Mantovani, Simone
2016-04-01
Allergic rhinitis has become a global health problem, especially affecting children and adolescents. Timely and reliable warnings before an increase in the atmospheric pollen concentration provide substantial support for physicians and allergy sufferers. Recently developed numerical pollen forecast models have become a means to support the pollen forecast service, but they still require refinement. One of the problem areas concerns the correct timing of the beginning and end of the flowering period of the species under consideration, which coincides with the period of possible pollen emission. Both are governed essentially by the temperature accumulated before the onset of flowering and during flowering. Phenological models are sensitive to a bias in the temperature. A mean bias of -1°C in the input temperature can shift the predicted entry date of a phenological phase by about a week into the future. A bias of this order of magnitude is still possible in numerical weather forecast models. If the assimilation of additional temperature information (e.g. ground measurements as well as satellite-retrieved air/surface temperature fields) is able to reduce such systematic temperature deviations, the precision of the timing of phenological entry dates might be enhanced. With a number of sensitivity experiments, the effect of a possible temperature bias on the modelled phenology and the pollen concentration in the atmosphere is determined. The actual bias of the ECMWF IFS 2 m temperature will also be calculated and its effect on the numerical pollen forecast procedure presented.
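To make the temperature-bias effect concrete, here is a minimal thermal-time (degree-day) sketch; the base temperature, forcing threshold, and synthetic temperature series are illustrative assumptions, not the parameters of the pollen forecast model described above.

```python
import numpy as np

def flowering_onset(daily_mean_temp, base_temp=5.0, threshold=120.0):
    """Return the day index at which accumulated degree-days above base_temp
    first exceed the flowering threshold (None if never reached)."""
    gdd = np.cumsum(np.clip(daily_mean_temp - base_temp, 0.0, None))
    hits = np.nonzero(gdd >= threshold)[0]
    return int(hits[0]) if hits.size else None

# Synthetic spring temperature series (deg C), 120 days.
days = np.arange(120)
temps = 4.0 + 0.15 * days + 3.0 * np.sin(2 * np.pi * days / 30.0)

onset_unbiased = flowering_onset(temps)
onset_biased = flowering_onset(temps - 1.0)   # a -1 deg C bias in the forcing
print(onset_unbiased, onset_biased)           # the biased run flowers several days later
```

Running the sketch shows the biased series reaching the threshold noticeably later, which is the mechanism behind the roughly one-week shift quoted above.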
Martinek, Janna; Wendelin, Timothy; Ma, Zhiwen
2018-04-05
Concentrating solar power (CSP) plants can provide dispatchable power with a thermal energy storage capability for increased renewable-energy grid penetration. Particle-based CSP systems permit higher temperatures, and thus potentially higher solar-to-electric efficiency, than state-of-the-art molten-salt heat-transfer systems. This paper describes a detailed numerical analysis framework for estimating the performance of a novel, geometrically complex, enclosed particle receiver design. The receiver configuration uses arrays of small tubular absorbers to collect and subsequently transfer solar energy to a flowing particulate medium. The enclosed nature of the receiver design renders it amenable to either an inert heat-transfer medium or a reactive heat-transfer medium that requires a controllable ambient environment. The numerical analysis framework described in this study is demonstrated for the case of thermal reduction of CaCr0.1Mn0.9O3-δ for thermochemical energy storage. The modeling strategy consists of Monte Carlo ray tracing for absorbed solar-energy distributions from a surround heliostat field, computational fluid dynamics modeling of small-scale local tubular arrays, surrogate response surfaces that approximately capture simulated tubular array performance, a quasi-two-dimensional reduced-order description of counter-flow reactive solids and purge gas, and a radiative exchange model applied to embedded-cavity structures at the size scale of the full receiver. In this work we apply the numerical analysis strategy to a single receiver configuration, but the framework is generically applicable to alternative enclosed designs. Finally, we assess the sensitivity of receiver performance to surface optical properties, heat-transfer coefficients, solids outlet temperature, and purge-gas feed rates, and discuss the significance of model assumptions and results for future receiver development.
Mei, ShuYa; Jin, ShuQing; Chen, ZhiXia; Ding, XiBing; Zhao, Xiang; Li, Quan
2015-01-01
Patients frequently experience postoperative pain after a total knee arthroplasty; such pain is always challenging to treat and may delay the patient's recovery. It is unclear whether local infiltration or a femoral nerve block offers a better analgesic effect after total knee arthroplasty. We performed a systematic review and meta-analysis of randomized controlled trials to compare local infiltration with a femoral nerve block in patients who underwent a primary unilateral total knee arthroplasty. We searched Pubmed, EMBASE, and the Cochrane Library through December 2014. Two reviewers scanned abstracts and extracted data. The data collected included numeric rating scale values for pain at rest and pain upon movement and opioid consumption in the first 24 hours. Mean differences with 95% confidence intervals were calculated for each end point. A sensitivity analysis was conducted to evaluate potential sources of heterogeneity. While the numeric rating scale values for pain upon movement (MD-0.62; 95%CI: -1.13 to -0.12; p=0.02) in the first 24 hours differed significantly between the patients who received local infiltration and those who received a femoral nerve block, there were no differences in the numeric rating scale results for pain at rest (MD-0.42; 95%CI:-1.32 to 0.47; p=0.35) or opioid consumption (MD 2.92; 95%CI:-1.32 to 7.16; p=0.18) in the first 24 hours. Local infiltration and femoral nerve block showed no significant differences in pain intensity at rest or opioid consumption after total knee arthroplasty, but the femoral nerve block was associated with reduced pain upon movement. PMID:26375568
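For readers unfamiliar with the pooling step behind the reported mean differences, a minimal fixed-effect (inverse-variance) sketch follows; the per-trial values are placeholders, not the trial data summarized above.

```python
import numpy as np

# Per-trial mean differences and their standard errors (illustrative values).
md = np.array([-0.8, -0.4, -0.7, -0.3])
se = np.array([0.35, 0.30, 0.40, 0.25])

w = 1.0 / se**2                       # inverse-variance weights
pooled = np.sum(w * md) / np.sum(w)   # fixed-effect pooled mean difference
pooled_se = np.sqrt(1.0 / np.sum(w))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"MD {pooled:.2f}; 95% CI {ci_low:.2f} to {ci_high:.2f}")
```

A random-effects variant would additionally estimate between-trial heterogeneity before weighting, which is what the sensitivity analysis mentioned above probes.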
Application of Anaerobic Digestion Model No. 1 for simulating anaerobic mesophilic sludge digestion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendes, Carlos, E-mail: carllosmendez@gmail.com; Esquerre, Karla, E-mail: karlaesquerre@ufba.br; Matos Queiroz, Luciano, E-mail: lmqueiroz@ufba.br
2015-01-15
Highlights: • The behavior of an anaerobic reactor was evaluated through modeling. • Parametric sensitivity analysis was used to select the most sensitive parameters of the ADM1. • The results indicate that the ADM1 was able to predict the experimental results. • Organic loading rates above 35 kg/m³ day affect the performance of the process. - Abstract: Improving anaerobic digestion of sewage sludge by monitoring common indicators such as volatile fatty acids (VFAs), gas composition and pH is a suitable solution for better sludge management. Modeling is an important tool to assess and to predict process performance. The present study focuses on the application of the Anaerobic Digestion Model No. 1 (ADM1) to simulate the dynamic behavior of a reactor fed with sewage sludge under mesophilic conditions. Parametric sensitivity analysis is used to select the most sensitive ADM1 parameters for estimation using a numerical procedure, while other parameters are applied without any modification to the original values presented in the ADM1 report. The results indicate that the ADM1 model after parameter estimation was able to predict the experimental results of effluent acetate, propionate, composites and biogas flows and pH with reasonable accuracy. The simulation of the effect of organic shock loading clearly showed that an organic shock loading rate above 35 kg/m³ day affects the performance of the reactor. The results demonstrate that simulations can be helpful to support decisions on predicting the anaerobic digestion process of sewage sludge.
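As a generic illustration of the parametric sensitivity screening mentioned above, a minimal one-at-a-time (OAT) sketch follows; the toy model, parameter names, and perturbation size are assumptions for illustration and do not represent ADM1 itself.

```python
import numpy as np

def model_output(params):
    """Placeholder for a simulation run (e.g., predicted biogas flow);
    a real study would call the ADM1 implementation here."""
    k_dis, k_hyd, km_ac = params["k_dis"], params["k_hyd"], params["km_ac"]
    return 10.0 * km_ac * (1.0 - np.exp(-k_dis * k_hyd))

base = {"k_dis": 0.5, "k_hyd": 10.0, "km_ac": 8.0}
y0 = model_output(base)

# One-at-a-time relative sensitivities: perturb each parameter by +10 %
# and normalize the output change by the relative perturbation.
for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10
    s = ((model_output(perturbed) - y0) / y0) / 0.10
    print(f"{name:6s}: relative sensitivity ~ {s:+.3f}")
```

Parameters with the largest relative sensitivities would then be the ones retained for numerical estimation, as described in the abstract.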
Lorkova, Lucie; Scigelova, Michaela; Arrey, Tabiwang Ndipanquang; Vit, Ondrej; Pospisilova, Jana; Doktorova, Eliska; Klanova, Magdalena; Alam, Mahmudul; Vockova, Petra; Maswabi, Bokang
2015-01-01
Mantle cell lymphoma (MCL) is a chronically relapsing aggressive type of B-cell non-Hodgkin lymphoma considered incurable by currently used treatment approaches. Fludarabine is a purine analog still widely used clinically in the therapy of relapsed MCL. Molecular mechanisms of fludarabine resistance have not, however, been studied in the setting of MCL so far. We therefore derived fludarabine-resistant MCL cells (Mino/FR) and performed their detailed functional and proteomic characterization compared to the original fludarabine-sensitive cells (Mino). We demonstrated that Mino/FR were highly cross-resistant to other antinucleosides (cytarabine, cladribine, gemcitabine) and to the Bruton tyrosine kinase (BTK) inhibitor ibrutinib. Sensitivity to other types of anti-lymphoma agents was altered only mildly (methotrexate, doxorubicin, bortezomib) or remained unaffected (cisplatin, bendamustine). The detailed proteomic analysis of Mino/FR compared to Mino cells unveiled over 300 differentially expressed proteins. Mino/FR were characterized by the marked downregulation of deoxycytidine kinase (dCK) and BTK (thus explaining the observed cross-resistance to antinucleosides and ibrutinib), but also by the upregulation of several enzymes of de novo nucleotide synthesis, as well as the upregulation of numerous proteins of DNA repair and replication. The significant upregulation of the key antiapoptotic protein Bcl-2 in Mino/FR cells was associated with the markedly increased sensitivity of the fludarabine-resistant MCL cells to the Bcl-2-specific inhibitor ABT-199 compared to fludarabine-sensitive cells. Our data thus demonstrate that a detailed molecular analysis of drug-resistant tumor cells can indeed open a way to personalized therapy of resistant malignancies. PMID:26285204
NASA Astrophysics Data System (ADS)
Yang, Liang-Yi; Sun, Di-Hua; Zhao, Min; Cheng, Sen-Lin; Zhang, Geng; Liu, Hui
2018-03-01
In this paper, a new micro-cooperative driving car-following model is proposed to investigate the effect of continuous historical velocity-difference information on traffic stability. The linear stability criterion of the new model is derived with linear stability theory, and the results show that the unstable region in the headway-sensitivity space shrinks when the continuous historical velocity-difference information is taken into account. Through nonlinear analysis, the mKdV equation is derived to describe the traffic evolution behavior of the new model near the critical point. Via numerical simulations, the theoretical analysis results are verified, and the results indicate that continuous historical velocity-difference information can enhance the stability of traffic flow in the micro-cooperative driving process.
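For orientation, the classical linear stability condition of the basic optimal-velocity car-following model is reproduced below as a benchmark; the new model's criterion additionally involves the historical velocity-difference terms and is not restated here.

```latex
% Classical optimal-velocity result: uniform flow with headway h^* is linearly
% stable when the driver sensitivity a satisfies
a \;>\; 2\, V'(h^{*})
```

Here V(h) is the optimal velocity function and h* the steady-state headway; cooperative terms such as velocity differences typically enlarge the stable region, which is consistent with the shrinking of the unstable region reported above.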
Simulation of probabilistic wind loads and building analysis
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Chamis, Christos C.
1991-01-01
Probabilistic wind loads likely to occur on a structure during its design life are predicted. Described here is a suitable multifactor interactive equation (MFIE) model and its use in the Composite Load Spectra (CLS) computer program to simulate the wind pressure cumulative distribution functions on four sides of a building. The simulated probabilistic wind pressure load was applied to a building frame, and cumulative distribution functions of sway displacements and reliability against overturning were obtained using NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), a stochastic finite element computer code. The geometry of the building and the properties of building members were also treated as random in the NESSUS analysis. The uncertainties of wind pressure, building geometry, and member section properties were quantified in terms of their respective sensitivities on the structural response.
Numerical Simulation and Mechanical Design for TPS Electron Beam Position Monitors
NASA Astrophysics Data System (ADS)
Hsueh, H. P.; Kuan, C. K.; Ueng, T. S.; Hsiung, G. Y.; Chen, J. R.
2007-01-01
A comprehensive study of the mechanical design and numerical simulation of high-resolution electron beam position monitors is a key step in building the newly proposed 3rd-generation synchrotron radiation research facility, Taiwan Photon Source (TPS). With an advanced electromagnetic simulation tool such as MAFIA, tailored specifically for particle accelerators, the design of the high-resolution electron beam position monitors can be tested in simulation before being tested experimentally. The design goal of our high-resolution electron beam position monitors is to obtain the best resolution through sensitivity and signal optimization. The definitions of, and differences between, the resolution and sensitivity of electron beam position monitors are explained, as are the design considerations. A prototype design has been carried out and the related simulations were performed with MAFIA. The results are presented here. A sensitivity as high as 200 in the x direction has been achieved at 500 MHz.
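For context, the standard four-button difference-over-sum estimate that underlies typical BPM sensitivity figures is sketched below; this is a generic textbook relation, not necessarily the TPS pickup geometry or calibration.

```latex
x \;\approx\; \frac{1}{S_x}\,
\frac{(V_A + V_D) - (V_B + V_C)}{V_A + V_B + V_C + V_D},
\qquad
S_x \;\equiv\; \frac{\partial}{\partial x}\!\left(\frac{\Delta V}{\Sigma V}\right)
```

Here V_A through V_D are the button signal amplitudes and S_x is the horizontal sensitivity; resolution, by contrast, is limited by how precisely the ratio can be measured against electronic and beam noise.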
Preconditioned conjugate gradient technique for the analysis of symmetric anisotropic structures
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Peters, Jeanne M.
1987-01-01
An efficient preconditioned conjugate gradient (PCG) technique and a computational procedure are presented for the analysis of symmetric anisotropic structures. The technique is based on selecting the preconditioning matrix as the orthotropic part of the global stiffness matrix of the structure, with all the nonorthotropic terms set equal to zero. This particular choice of the preconditioning matrix reduces the size of the analysis model of the anisotropic structure to that of the corresponding orthotropic structure. The similarities between the proposed PCG technique and a reduction technique previously presented by the authors are identified and exploited to derive, from the PCG technique, direct measures of the sensitivity of the different response quantities to the nonorthotropic (anisotropic) material coefficients of the structure. The effectiveness of the PCG technique is demonstrated by means of a numerical example of an anisotropic cylindrical panel.
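As background for the technique named above, a minimal preconditioned conjugate gradient sketch follows; the preconditioner is passed in as a generic solve, so a choice such as the orthotropic part of the stiffness matrix could be plugged in, but the Jacobi (diagonal) example used here is only a stand-in.

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-8, max_iter=1000):
    """Generic preconditioned conjugate gradient solver for A x = b.

    A       : symmetric positive-definite matrix (ndarray)
    M_solve : callable applying the inverse of the preconditioner to a vector
    """
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example with Jacobi preconditioning as a stand-in for a structured choice.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(pcg(A, b, M_solve=lambda r: r / np.diag(A)))
```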
Sensitivity Analysis of Expected Wind Extremes over the Northwestern Sahara and High Atlas Region.
NASA Astrophysics Data System (ADS)
Garcia-Bustamante, E.; González-Rouco, F. J.; Navarro, J.
2017-12-01
A robust statistical framework in the scientific literature allows for the estimation of probabilities of occurrence of severe wind speeds and wind gusts, but it does not preclude large uncertainties in the particular numerical estimates. An analysis of such uncertainties is thus required. A large portion of this uncertainty arises from the fact that historical observations are inherently shorter than the timescales of interest for the analysis of return periods. Additional uncertainties stem from the different choices of probability distributions and from other aspects related to methodological issues or the physical processes involved. The present study focuses on historical observations over the Ouarzazate Valley (Morocco) and on a high-resolution regional simulation of the wind in the area of interest. The aim is to provide extreme wind speed and wind gust return values and confidence ranges based on a systematic sampling of the uncertainty space for return periods up to 120 years.
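As a hedged illustration of how return values of this kind are commonly obtained, the sketch below fits a generalized extreme value (GEV) distribution to annual maxima and evaluates return levels; the data are synthetic placeholders, and the study's actual estimator and uncertainty sampling may differ.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical annual maxima of wind gusts (m/s); a real analysis would use
# observed or simulated series over the region of interest.
annual_maxima = np.array([24.1, 27.3, 22.8, 30.5, 26.0, 29.2, 25.4, 31.8,
                          23.9, 28.7, 27.1, 33.0, 26.5, 24.8, 29.9])

# Fit a GEV distribution to the annual maxima.
shape, loc, scale = genextreme.fit(annual_maxima)

# Return level for a return period T years is the (1 - 1/T) quantile.
for T in (20, 50, 120):
    level = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T:>3}-year return level: {level:.1f} m/s")
```

Confidence ranges of the kind mentioned above could then be approximated, for example, by bootstrap resampling of the annual maxima and refitting.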
Dessimoz, Christophe; Boeckmann, Brigitte; Roth, Alexander C J; Gonnet, Gaston H
2006-01-01
Correct orthology assignment is a critical prerequisite of numerous comparative genomics procedures, such as function prediction, construction of phylogenetic species trees and genome rearrangement analysis. We present an algorithm for the detection of non-orthologs that arise by mistake in current orthology classification methods based on genome-specific best hits, such as the COGs database. The algorithm works with pairwise distance estimates, rather than computationally expensive and error-prone tree-building methods. The accuracy of the algorithm is evaluated through verification of the distribution of predicted cases, case-by-case phylogenetic analysis and comparisons with predictions from other projects using independent methods. Our results show that a very significant fraction of the COG groups include non-orthologs: using conservative parameters, the algorithm detects non-orthology in a third of all COG groups. Consequently, sequence analysis sensitive to correct orthology assignments will greatly benefit from these findings.