Optimum sensitivity derivatives of objective functions in nonlinear programming
NASA Technical Reports Server (NTRS)
Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.
1983-01-01
The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression for these derivatives is derived, and its application to linear programming is presented.
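The core idea, that first-order sensitivities of the optimal objective value need no second derivatives, is the envelope theorem: at the optimum, the total derivative of the optimal value with respect to a problem parameter reduces to the explicit partial derivative. A minimal sketch on a hypothetical one-parameter objective (not the paper's formulation):

```python
def f(x, p):
    # hypothetical objective with design variable x and problem parameter p
    return (x - p) ** 2 + p * x

def argmin_x(p):
    # minimizer of f over x; found analytically here (df/dx = 0 gives x = p/2),
    # any numerical optimizer would do
    return p / 2.0

def df_dp(x, p):
    # explicit partial derivative of f with respect to p, x held fixed
    return -2.0 * (x - p) + x

p = 1.3
x_star = argmin_x(p)
# Envelope theorem: d f*(p)/dp equals the partial derivative evaluated at x*
envelope = df_dp(x_star, p)
# Check against a central finite difference of the optimal value itself
h = 1e-6
fd = (f(argmin_x(p + h), p + h) - f(argmin_x(p - h), p - h)) / (2 * h)
print(abs(envelope - fd) < 1e-6)  # True
```

No second derivatives of f appear anywhere, which is precisely the simplification the abstract describes.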
Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide
Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...
2017-03-01
The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
Sensitivity analysis of periodic errors in heterodyne interferometry
NASA Astrophysics Data System (ADS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-03-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
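The variance-based step described above can be sketched generically. The snippet below is a pick-freeze Monte Carlo estimator of first-order Sobol' indices on a hypothetical two-input test function (not the interferometer model); for y = x1 + 2*x2 with uniform inputs, the exact indices are 0.2 and 0.8:

```python
import random
random.seed(0)

def model(x1, x2):
    # hypothetical test function: variance splits 1/5 vs 4/5 between the inputs
    return x1 + 2.0 * x2

N = 200_000
A = [(random.random(), random.random()) for _ in range(N)]
B = [(random.random(), random.random()) for _ in range(N)]

yA = [model(x1, x2) for x1, x2 in A]
mean = sum(yA) / N
var = sum((y - mean) ** 2 for y in yA) / N

def first_order(i):
    # "pick-freeze": keep input i from sample A, redraw the other input from B;
    # the covariance of the two outputs isolates input i's first-order variance
    cov = 0.0
    for (a1, a2), (b1, b2), y in zip(A, B, yA):
        y_mix = model(a1, b2) if i == 0 else model(b1, a2)
        cov += (y - mean) * (y_mix - mean)
    return cov / N / var

print(first_order(0), first_order(1))  # close to 0.2 and 0.8
```

Higher-order and total indices follow the same pattern with different index freezes, which is how the global analysis in the study separates individual from interaction effects.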
NASA Technical Reports Server (NTRS)
Hou, Gene
1998-01-01
Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among many methods available for sensitivity analysis, automatic differentiation has been proven through many applications in fluid dynamics and structural mechanics to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require large amounts of memory. This project will apply an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order thermal derivatives. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirement, computational efficiency, and accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cacuci, Dan G.; Favorite, Jeffrey A.
2018-04-06
This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1994-01-01
The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained: all complicated source code for the derivative calculations is constructed quickly and accurately. The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two, which do solve additional systems, the equations and solution procedures are analogous to those for the first-order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
2D Decision-Making for Multi-Criteria Design Optimization
2006-05-01
participating in the same subproblem, information on the tradeoffs between different subproblems is obtained from a sensitivity analysis and used for ... accomplished by some other mechanism. For the coordination between subproblems, we use the lexicographical ordering approach for multicriteria ... Sensitivity analysis: Our approach uses sensitivity results from nonlinear programming (Fiacco, 1983; Luenberger, 2003), for which we first
First- and second-order sensitivity analysis of linear and nonlinear structures
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Mroz, Z.
1986-01-01
This paper employs the principle of virtual work to derive sensitivity derivatives of structural response with respect to stiffness parameters using both direct and adjoint approaches. The computations required are based on additional load conditions characterized by imposed initial strains, body forces, or surface tractions. As such, they are equally applicable to numerical or analytical solution techniques. The relative efficiency of various approaches for calculating first and second derivatives is assessed. It is shown that for the evaluation of second derivatives the most efficient approach is one that makes use of both the first-order sensitivities and adjoint vectors. Two example problems are used for demonstrating the various approaches.
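The efficiency argument above rests on a counting trick: the direct approach needs one extra solve per design parameter, while the adjoint approach needs one extra "load condition" per response. A toy numeric sketch on a hypothetical 2-DOF stiffness model (not the paper's virtual-work formulation):

```python
def solve2(K, b):
    # 2x2 linear solve via Cramer's rule, standing in for a FE solver
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    return [( K[1][1] * b[0] - K[0][1] * b[1]) / det,
            (-K[1][0] * b[0] + K[0][0] * b[1]) / det]

def K_of(p):
    # hypothetical stiffness matrix depending on one design parameter p
    return [[2.0 + p, -1.0], [-1.0, 2.0]]

dK_dp = [[1.0, 0.0], [0.0, 0.0]]   # dK/dp for the model above
f = [1.0, 0.0]                     # applied load
c = [0.0, 1.0]                     # response of interest: g = u[1]

p = 0.5
u = solve2(K_of(p), f)
# adjoint approach: one extra solve with the "load" c (K is symmetric here),
# then dg/dp = -lambda^T (dK/dp) u, independent of how many parameters exist
lam = solve2(K_of(p), c)
dKu = [sum(dK_dp[i][j] * u[j] for j in range(2)) for i in range(2)]
dg_dp = -sum(lam[i] * dKu[i] for i in range(2))
# finite-difference check on the response itself
h = 1e-7
fd = (solve2(K_of(p + h), f)[1] - solve2(K_of(p - h), f)[1]) / (2 * h)
print(abs(dg_dp - fd) < 1e-7)  # True
```

With many stiffness parameters but few responses, the single adjoint solve replaces one perturbed reanalysis per parameter, which is the efficiency the paper quantifies.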
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Hou, Gene J. W.
1994-01-01
A method for eigenvalue and eigenvector approximate analysis for the case of repeated eigenvalues with distinct first derivatives is presented. The approximate analysis method developed involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations to changes in the eigenvalues and the eigenvectors associated with the repeated eigenvalue problem. This work also presents a numerical technique that facilitates the definition of an eigenvector derivative for the case of repeated eigenvalues with repeated eigenvalue derivatives (of all orders). Examples are given which demonstrate the application of such equations for sensitivity and approximate analysis. Emphasis is placed on the application of sensitivity analysis to large-scale structural and controls-structures optimization problems.
Ellemberg, D; Lewis, T L; Maurer, D; Lee, B; Ledgeway, T; Guilemot, J P; Lepore, F
2010-01-01
We compared the development of sensitivity to first- versus second-order global motion in 5-year-olds (n=24) and adults (n=24) tested at three displacements (0.1, 0.5 and 1.0 degrees). Sensitivity was measured with Random-Gabor Kinematograms (RGKs) formed with luminance-modulated (first-order) or contrast-modulated (second-order) concentric Gabor patterns. Five-year-olds were less sensitive than adults to the direction of both first- and second-order global motion at every displacement tested. In addition, the immaturity was smallest at the smallest displacement, which required the least spatial integration, and smaller for first-order than for second-order global motion at the middle displacement. The findings suggest that the development of sensitivity to global motion is limited by the development of spatial integration and by different rates of development of sensitivity to first- versus second-order signals.
Kinematic sensitivity of robot manipulators
NASA Technical Reports Server (NTRS)
Vuskovic, Marko I.
1989-01-01
Kinematic sensitivity vectors and matrices for open-loop, n degrees-of-freedom manipulators are derived. First-order sensitivity vectors are defined as partial derivatives of the manipulator's position and orientation with respect to its geometrical parameters. The four-parameter kinematic model is considered, as well as the five-parameter model in case of nominally parallel joint axes. Sensitivity vectors are expressed in terms of coordinate axes of manipulator frames. Second-order sensitivity vectors, the partial derivatives of first-order sensitivity vectors, are also considered. It is shown that second-order sensitivity vectors can be expressed as vector products of the first-order sensitivity vectors.
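The cross-product form of the first-order sensitivity vectors can be illustrated on a hypothetical planar two-link arm: for a revolute joint, the sensitivity of the end-effector position to the joint angle is the joint axis crossed with the vector from the joint origin to the end effector. This is a reduced special case, not the paper's full four/five-parameter model:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def fk(t1, t2, l1=1.0, l2=0.7):
    # planar 2-link forward kinematics, embedded in 3-D (z = 0)
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return (x, y, 0.0)

t1, t2 = 0.4, 0.9
p = fk(t1, t2)
z = (0.0, 0.0, 1.0)    # both joint axes point out of the plane
o1 = (0.0, 0.0, 0.0)   # joint-1 origin
# first-order sensitivity of position to joint angle 1: z x (p - o1)
s1 = cross(z, tuple(pi - oi for pi, oi in zip(p, o1)))
# central finite-difference check
h = 1e-6
pf, pb = fk(t1 + h, t2), fk(t1 - h, t2)
fd = tuple((a - b) / (2 * h) for a, b in zip(pf, pb))
print(all(abs(a - b) < 1e-6 for a, b in zip(s1, fd)))  # True
```

Differentiating s1 again reproduces the paper's observation: the second-order sensitivities are themselves cross products of first-order sensitivity vectors.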
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2001-01-01
An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric-shape, angle-of-attack, and freestream Mach number.
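As a rough stand-in for the quantity this procedure computes (the complete Hessian of a response with respect to its design variables), the sketch below forms a central-difference Hessian of a hypothetical smooth two-variable "lift" function; the AD-based method in the paper obtains the same matrix noniteratively and without differencing error:

```python
def lift(a, m):
    # hypothetical smooth response in angle of attack a and Mach number m
    return a * a * m + 3.0 * a * m * m

def hessian(f, x, y, h=1e-4):
    # central-difference second derivatives; symmetric by construction
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)
    return fxx, fxy, fyy

H = hessian(lift, 0.3, 0.8)
# exact values for this polynomial: f_aa = 2m = 1.6, f_am = 2a + 6m = 5.4, f_mm = 6a = 1.8
print(all(abs(a - b) < 1e-5 for a, b in zip(H, (1.6, 5.4, 1.8))))  # True
```

For a flow code, each of these difference stencils would cost several full flow solutions and inherit step-size sensitivity, which is why the forward-plus-reverse AD combination described above is attractive.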
Eigenvalue and eigenvector sensitivity and approximate analysis for repeated eigenvalue problems
NASA Technical Reports Server (NTRS)
Hou, Gene J. W.; Kenny, Sean P.
1991-01-01
A set of computationally efficient equations for eigenvalue and eigenvector sensitivity analysis are derived, and a method for eigenvalue and eigenvector approximate analysis in the presence of repeated eigenvalues is presented. The method developed for approximate analysis involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations of changes in both the eigenvalues and eigenvectors associated with the repeated eigenvalue problem. Examples are given to demonstrate the application of such equations for sensitivity and approximate analysis.
He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong
2016-02-01
Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values will greatly improve simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating net primary productivity (NPP) of Larix olgensis forest in Wangqing, Jilin Province. First, with the contrastive analysis between field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of L. olgensis forest. Then, Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, the first-order and the second-order sensitivity indices. The results showed that the BIOME-BGC model could well simulate the NPP of L. olgensis forest in the sample plot. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of the simulation result of a single parameter as well as the interaction between the parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were new stem carbon to new leaf carbon allocation and leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than that of the other parameters' interactions.
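The Morris screening step can be sketched generically. Below is a minimal elementary-effects estimator on a hypothetical three-parameter function (not BIOME-BGC): each parameter is perturbed one at a time from random base points, and the mean absolute effect ranks influence, so an inert parameter is screened out cheaply:

```python
import random
random.seed(1)

def g(x):
    # hypothetical stand-in for a process model; x[2] is deliberately inert
    return 2.0 * x[0] + x[1] ** 2 + 0.0 * x[2]

delta = 0.25  # Morris step size on the unit cube

def morris_mu_star(r=50):
    # r one-at-a-time perturbations from random base points;
    # mu* = mean absolute elementary effect per parameter
    mu_star = [0.0, 0.0, 0.0]
    for _ in range(r):
        x = [random.uniform(0.0, 1.0 - delta) for _ in range(3)]
        base = g(x)
        for i in range(3):
            xp = list(x)
            xp[i] += delta
            mu_star[i] += abs(g(xp) - base) / delta
    return [m / r for m in mu_star]

mu = morris_mu_star()
print(mu[2] == 0.0, mu[0] > mu[2], mu[1] > mu[2])  # True True True
```

Only the parameters surviving this screen need the far more expensive variance-based (EFAST-style) analysis, which is the two-stage workflow the abstract describes.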
Teague, Heather; Ross, Ron; Harris, Mitchel; Mitchell, Drake C.; Shaikh, Saame Raza
2012-01-01
Docosahexaenoic acid (DHA) disrupts the size and order of plasma membrane lipid microdomains in vitro and in vivo. However, it is unknown how the highly disordered structure of DHA mechanistically adapts to increase the order of tightly packed lipid microdomains. Therefore, we studied a novel DHA-Bodipy fluorescent probe to address this issue. We first determined if the DHA-Bodipy probe localized to the plasma membrane of primary B and immortal EL4 cells. Image analysis revealed that DHA-Bodipy localized into the plasma membrane of primary B cells more efficiently than EL4 cells. We then determined if the probe detected changes in plasma membrane order. Quantitative analysis of time-lapse movies established that DHA-Bodipy was sensitive to membrane molecular order. This allowed us to investigate how DHA-Bodipy physically adapted to ordered lipid microdomains. To accomplish this, we employed steady-state and time-resolved fluorescence anisotropy measurements in lipid vesicles of varying composition. Similar to cell culture studies, the probe was highly sensitive to membrane order in lipid vesicles. Moreover, these experiments revealed, relative to controls, that upon incorporation into highly ordered microdomains, DHA-Bodipy underwent an increase in its fluorescence lifetime and molecular order. In addition, the probe displayed a significant reduction in its rotational diffusion compared to controls. Altogether, DHA-Bodipy was highly sensitive to membrane order and revealed for the first time that DHA, despite its flexibility, could become ordered with less rotational motion inside ordered lipid microdomains. Mechanistically, this explains how DHA acyl chains can increase order upon formation of lipid microdomains in vivo. PMID:22841541
UNCERTAINTY ANALYSIS IN WATER QUALITY MODELING USING QUAL2E
A strategy for incorporating uncertainty analysis techniques (sensitivity analysis, first-order error analysis, and Monte Carlo simulation) into the mathematical water quality model QUAL2E is described. The model, named QUAL2E-UNCAS, automatically selects the input variables or p...
Examining the accuracy of the infinite order sudden approximation using sensitivity analysis
NASA Astrophysics Data System (ADS)
Eno, Larry; Rabitz, Herschel
1981-08-01
A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h₀ into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h₀ in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of h₀ on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and rigid rotor. Results are generated within the He+H₂ system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.
The application of sensitivity analysis to models of large scale physiological systems
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1974-01-01
A survey of the literature of sensitivity analysis as it applies to biological systems is reported, as well as a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2003-01-01
An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation
NASA Astrophysics Data System (ADS)
Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter
2015-04-01
Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimation of potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative-based global sensitivity measures (Sobol' & Kucherenko '09) can be used in practice to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and their cost usually increases linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative-based sensitivities, as is done in various other domains such as meteorology or aerodynamics, without a significant increase in the computational cost over the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
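The key property of algorithmic differentiation — exact derivatives propagated through ordinary program code at a cost comparable to the original evaluation — can be shown with a minimal forward-mode sketch using dual numbers. Production AD tools such as those used in the study are far more general; this toy supports only addition and multiplication:

```python
class Dual:
    """Minimal forward-mode AD value: carries (value, derivative) together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        # product rule applied automatically at every multiplication
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def f(x):
    # any code path built from + and * is differentiated exactly, no step size
    return 3 * x * x + 2 * x + 1

d = f(Dual(2.0, 1.0))        # seed dx/dx = 1
print(d.val, d.dot)          # 17.0 14.0  (f(2) = 17, f'(x) = 6x + 2 = 14)
```

Because the derivative rides along with the value, the cost per input direction stays a small constant multiple of the plain function evaluation, which is the property the abstract exploits for complex PSHA models.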
Examining the accuracy of the infinite order sudden approximation using sensitivity analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eno, L.; Rabitz, H.
1981-08-15
A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h₀ into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h₀ in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of h₀ on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and rigid rotor. Results are generated within the He+H₂ system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.
First and Higher Order Effects on Zero Order Radiative Transfer Model
NASA Astrophysics Data System (ADS)
Neelam, M.; Mohanty, B.
2014-12-01
Microwave radiative transfer models are valuable tools in understanding complex land surface interactions. Past literature has largely focused on local sensitivity analysis for factor prioritization, ignoring the interactions between the variables and the uncertainties around them. Since land surface interactions are largely nonlinear, uncertainties, heterogeneities and interactions always exist, and it is important to quantify them to draw accurate conclusions. In this effort, we used global sensitivity analysis to address the issues of variable uncertainty, higher-order interactions, factor prioritization and factor fixing for the zero-order radiative transfer (ZRT) model. With the to-be-launched Soil Moisture Active Passive (SMAP) mission of NASA, it is very important to have a complete understanding of ZRT for soil moisture retrieval to direct future research and cal/val field campaigns. This is a first attempt to use the GSA technique to quantify first-order and higher-order effects on brightness temperature from the ZRT model. Our analyses reflect conditions observed during the growing agricultural season for corn and soybeans in two different regions: Iowa, U.S.A., and Winnipeg, Canada. We found that for corn fields in Iowa, there exist significant second-order interactions between soil moisture, surface roughness parameters (RMS height and correlation length) and vegetation parameters (vegetation water content, structure and scattering albedo), whereas in Winnipeg, second-order interactions are mainly due to soil moisture and vegetation parameters. For soybean fields in both Iowa and Winnipeg, we found significant interactions only between soil moisture and surface roughness parameters.
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
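"Overall finite difference" here means rerunning the (reduced) analysis at perturbed designs. A sketch on a hypothetical closed-form single-DOF step response, checked against the analytic sensitivity; the semi-analytical alternative in the study differentiates the equations of motion instead of the whole analysis:

```python
import math

def response(k, m=1.0, f=1.0, t=0.8):
    # step response of an undamped single-DOF oscillator (closed form),
    # standing in for a full transient finite element analysis
    w = math.sqrt(k / m)
    return (f / k) * (1.0 - math.cos(w * t))

k = 4.0
# overall finite difference: rerun the complete analysis at perturbed stiffness
h = 1e-6 * k
fd = (response(k + h) - response(k - h)) / (2 * h)

# analytic sensitivity du/dk, differentiating the closed form directly
m, f, t = 1.0, 1.0, 0.8
w = math.sqrt(k / m)
exact = (-f / k**2) * (1.0 - math.cos(w * t)) \
        + (f / k) * math.sin(w * t) * t / (2.0 * math.sqrt(k * m))

print(abs(fd - exact) < 1e-8)  # True
```

In the reduced-basis setting of the study, the subtlety is that the perturbed reanalysis reuses the original design's mode shapes, which is exactly the "fixed mode" approximation whose poor stress-sensitivity accuracy motivated the semi-analytical techniques.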
NASA Technical Reports Server (NTRS)
Greene, William H.
1989-01-01
A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
Application of a sensitivity analysis technique to high-order digital flight control systems
NASA Technical Reports Server (NTRS)
Paduano, James D.; Downing, David R.
1987-01-01
A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show that linear models of real systems can be analyzed by this sensitivity technique, provided it is applied with care. A computer program called SVA was written to implement the singular-value sensitivity analysis techniques; computational methods and considerations therefore form an integral part of many of the discussions. A user's guide to the program is included. SVA is a fully public domain program, running on the NASA/Dryden Elxsi computer.
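A minimal sketch of the core computation: the smallest singular value of the return difference matrix I + L(jw), and its gradient with respect to a loop gain via central finite differences. The 2x2 loop transfer matrix below is an invented toy plant, not the X-29 model:

```python
import math

def singular_values_2x2(A):
    """Singular values of a 2x2 (possibly complex) matrix via the
    eigenvalues of the Hermitian Gram matrix G = A^H A."""
    a, b = A[0]
    c, d = A[1]
    g11 = abs(a)**2 + abs(c)**2
    g22 = abs(b)**2 + abs(d)**2
    g12 = a.conjugate()*b + c.conjugate()*d
    tr = g11 + g22
    det = g11*g22 - abs(g12)**2
    disc = math.sqrt(max(tr*tr - 4.0*det, 0.0))
    return (math.sqrt((tr + disc)/2.0), math.sqrt(max((tr - disc)/2.0, 0.0)))

def min_sv_return_difference(gain, w=1.0):
    """Smallest singular value of I + L(jw) for a toy 2x2 loop L = gain*P."""
    s = 1j * w
    P = [[1.0/(s + 1.0), 0.1/(s + 2.0)],
         [0.0,           1.0/(s + 0.5)]]
    L = [[gain * P[i][j] for j in range(2)] for i in range(2)]
    I_plus_L = [[L[i][j] + (1.0 if i == j else 0.0) for j in range(2)]
                for i in range(2)]
    return singular_values_2x2(I_plus_L)[1]

# Gradient of the stability measure with respect to the gain parameter
g, h = 2.0, 1e-6
grad = (min_sv_return_difference(g + h) - min_sv_return_difference(g - h)) / (2*h)
```

The paper derives these gradients analytically (including for parameters that do not appear as matrix elements); the finite difference here only illustrates what quantity is being differentiated.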
Adjoint-based sensitivity analysis of low-order thermoacoustic networks using a wave-based approach
NASA Astrophysics Data System (ADS)
Aguilar, José G.; Magri, Luca; Juniper, Matthew P.
2017-07-01
Strict pollutant emission regulations are pushing gas turbine manufacturers to develop devices that operate in lean conditions, with the downside that combustion instabilities are more likely to occur. Methods to predict and control unstable modes inside combustion chambers have been developed in the last decades but, in some cases, they are computationally expensive. Sensitivity analysis aided by adjoint methods provides valuable sensitivity information at a low computational cost. This paper introduces adjoint methods and their application in wave-based low order network models, which are used as industrial tools, to predict and control thermoacoustic oscillations. Two thermoacoustic models of interest are analyzed. First, in the zero Mach number limit, a nonlinear eigenvalue problem is derived, and continuous and discrete adjoint methods are used to obtain the sensitivities of the system to small modifications. Sensitivities to base-state modification and feedback devices are presented. Second, a more general case with non-zero Mach number, a moving flame front and choked outlet, is presented. The influence of the entropy waves on the computed sensitivities is shown.
On the sensitivity analysis of porous material models
NASA Astrophysics Data System (ADS)
Ouisse, Morvan; Ichchou, Mohamed; Chedly, Slaheddine; Collet, Manuel
2012-11-01
Porous materials are used in many vibroacoustic applications. Different available models describe their behavior according to the materials' intrinsic characteristics. For instance, in the case of a porous material with a rigid frame, the Champoux-Allard model employs five parameters. In this paper, an investigation of this model's sensitivity to its parameters as a function of frequency is conducted. Sobol and FAST algorithms are used for the sensitivity analysis. A strong frequency-dependent parametric hierarchy is shown. The sensitivity investigations confirm that resistivity is the most influential parameter when the acoustic absorption and surface impedance of porous materials with rigid frames are considered. The analysis is first performed on a wide category of porous materials, and then restricted to a polyurethane foam in order to illustrate the impact of the reduction of the design space. In a second part, a sensitivity analysis is performed using the Biot-Allard model with nine parameters, including the mechanical effects of the frame, and conclusions are drawn through numerical simulations.
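A first-order Sobol index of the kind computed above can be estimated with a simple pick-freeze Monte Carlo scheme. The model below is an invented linear toy (so the exact index is known), not the Champoux-Allard model:

```python
import random

def sobol_first_order(model, n_inputs, index, n=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index S_i:
    S_i = Cov(f(X), f(X')) / Var(f(X)), where X' resamples all inputs
    except input `index`, which is frozen to its value in X."""
    rng = random.Random(seed)
    ya, yb = [], []
    for _ in range(n):
        x = [rng.random() for _ in range(n_inputs)]
        xp = [rng.random() for _ in range(n_inputs)]
        xp[index] = x[index]          # freeze input `index`
        ya.append(model(x))
        yb.append(model(xp))
    ma = sum(ya) / n
    mb = sum(yb) / n
    var = sum((y - ma)**2 for y in ya) / n
    cov = sum((a - ma)*(b - mb) for a, b in zip(ya, yb)) / n
    return cov / var

# Toy model with independent uniform inputs; analytically S1 = 16/17 ~ 0.94
model = lambda x: 4.0*x[0] + 1.0*x[1]
s1 = sobol_first_order(model, 2, 0)
```

For a frequency-dependent model such as absorption, the estimate would be repeated at each frequency, which is what produces the parametric hierarchy reported in the abstract.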
Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry
2018-06-19
Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
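The AIC-based selection step can be sketched with the least-squares form of the criterion; the residuals and parameter counts below are invented for illustration, not the oilseed rape data:

```python
import math

def aic_least_squares(residuals, n_params):
    """AIC for a Gaussian least-squares fit: n*ln(RSS/n) + 2k."""
    n = len(residuals)
    rss = sum(r*r for r in residuals)
    return n * math.log(rss / n) + 2 * n_params

# Two hypothetical calibrations of the same data set:
res_full   = [0.10] * 20      # many re-estimated parameters, slightly better fit
res_sparse = [0.12] * 20      # few re-estimated parameters
aic_full   = aic_least_squares(res_full, 11)
aic_sparse = aic_least_squares(res_sparse, 3)
```

With these numbers the sparse calibration wins despite its slightly worse fit, which is the trade-off the methodology exploits: the 2k penalty rewards re-estimating only the influential parameters.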
Practical considerations for a second-order directional hearing aid microphone system
NASA Astrophysics Data System (ADS)
Thompson, Stephen C.
2003-04-01
First-order directional microphone systems for hearing aids have been available for several years. Such a system uses two microphones and has a theoretical maximum free-field directivity index (DI) of 6.0 dB. A second-order microphone system using three microphones could provide a theoretical increase in free-field DI to 9.5 dB. These theoretical maximum DI values assume that the microphones have exactly matched sensitivities at all frequencies of interest. In practice, the individual microphones in the hearing aid always have slightly different sensitivities. For the small microphone separation necessary to fit in a hearing aid, these sensitivity matching errors degrade the directivity from the theoretical values, especially at low frequencies. This paper shows that, for first-order systems, the directivity degradation due to sensitivity errors is relatively small. However, for second-order systems with practical microphone sensitivity matching specifications, the directivity degradation below 1 kHz is not tolerable. A hybrid-order directional system is proposed that uses first-order processing at low frequencies and second-order directional processing at higher frequencies. This hybrid system is suggested as an alternative that could provide an improved directivity index in the frequency regions that are important to speech intelligibility.
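The free-field DI quoted above can be reproduced numerically for an axisymmetric pattern. The sketch below integrates a first-order hypercardioid, whose DI is the 6.0 dB theoretical maximum for two microphones:

```python
import math

def directivity_index_db(pattern, n=20000):
    """Free-field directivity index (dB) of an axisymmetric pattern p(theta),
    from midpoint-rule integration of |p|^2 over the sphere."""
    acc = 0.0
    for i in range(n):
        theta = math.pi * (i + 0.5) / n
        acc += pattern(theta)**2 * math.sin(theta) * (math.pi / n)
    mean_sq = acc / 2.0                      # average of |p|^2 over solid angle
    return 10.0 * math.log10(pattern(0.0)**2 / mean_sq)

# First-order hypercardioid: p(theta) = a + (1-a)*cos(theta) with a = 0.25
hyper = lambda th: 0.25 + 0.75 * math.cos(th)
di1 = directivity_index_db(hyper)
```

Analytically the mean square is a^2 + (1-a)^2/3 = 0.25, giving DI = 10*log10(4) ~ 6.02 dB; sensitivity mismatch between the two microphones perturbs the effective pattern and erodes this figure, most severely at low frequencies where the inter-microphone phase difference is small.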
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
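The first-order moment matching described above amounts to propagating input variances through the sensitivity derivatives: Var(f) ~ sum_i (df/dx_i)^2 * sigma_i^2. A minimal sketch on a toy function (assumed for illustration, not the Euler CFD code), checked against Monte Carlo exactly as the paper does:

```python
import math
import random

def f(x1, x2):
    """Stand-in for the CFD output as a function of two input variables."""
    return x1**2 + 3.0 * math.sin(x2)

mu = (1.0, 0.5)
sigma = (0.05, 0.02)

# First-order sensitivity derivatives evaluated at the input means (analytic here)
d1 = 2.0 * mu[0]                 # df/dx1
d2 = 3.0 * math.cos(mu[1])       # df/dx2
var_approx = (d1 * sigma[0])**2 + (d2 * sigma[1])**2

# Monte Carlo check with independent normal inputs
rng = random.Random(0)
samples = [f(rng.gauss(mu[0], sigma[0]), rng.gauss(mu[1], sigma[1]))
           for _ in range(50000)]
m = sum(samples) / len(samples)
var_mc = sum((s - m)**2 for s in samples) / len(samples)
```

For small input standard deviations the agreement is close, which is the regime ("robustness about input parameter mean values") where the abstract reports the approximation to be valid.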
First measurement of the vector analyzing power in muon capture by polarized muonic {sup 3}He
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cummings, W.J.; Behr, J.; Bogorad, P.
1995-09-01
This paper describes the first measurement of spin observables in nuclear muon capture by {sup 3}He. The sensitivity of spin observables to the pseudoscalar coupling is described. The triton asymmetry presented has to be corrected for small systematic effects in order to extract the vector analyzing power. The analysis of these effects is currently underway.
Constrained reduced-order models based on proper orthogonal decomposition
Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...
2017-04-09
A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
System parameter identification from projection of inverse analysis
NASA Astrophysics Data System (ADS)
Liu, K.; Law, S. S.; Zhu, X. Q.
2017-05-01
The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is revisited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy than with the conventional response sensitivity-based method.
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
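The second-order moment matching mentioned above also corrects the output mean using second-order sensitivity derivatives: mean(f) ~ f(mu) + (1/2) * sum_i (d2f/dx_i^2) * sigma_i^2. A minimal sketch on a toy function where the correction happens to be exact (assumed for illustration, not the quasi 1-D Euler code):

```python
import random

def f(x):
    return x * x

mu, sigma = 2.0, 0.1
d2f = 2.0                                   # second-order sensitivity derivative
mean_second_order = f(mu) + 0.5 * d2f * sigma**2   # = mu^2 + sigma^2, exact here

# Monte Carlo check with a normal input
rng = random.Random(42)
n = 100000
mean_mc = sum(f(rng.gauss(mu, sigma)) for _ in range(n)) / n
```

The first-order estimate would simply be f(mu) = 4.0 and would miss the sigma^2 shift; this is the extra accuracy the second-order derivatives buy in the robust optimization's objective and constraints.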
[Analysis and experimental verification of sensitivity and SNR of laser warning receiver].
Zhang, Ji-Long; Wang, Ming; Tian, Er-Ming; Li, Xiao; Wang, Zhi-Bin; Zhang, Yue
2009-01-01
To counter the increasingly serious threat posed by hostile lasers in modern warfare, research on laser warning technology and systems is urgent; sensitivity and signal-to-noise ratio (SNR) are two important performance parameters of a laser warning system. In the present paper, based on signal statistical detection theory, a method for calculating the sensitivity and SNR of a coherent-detection laser warning receiver (LWR) is proposed. First, the probability distributions of the laser signal and receiver noise were analyzed. Second, based on threshold detection theory and the Neyman-Pearson criterion, the signal current equation was established by introducing a detection probability factor and a false alarm rate factor, and the mathematical expressions for sensitivity and SNR were deduced. Finally, using this method, the sensitivity and SNR of the sinusoidal-grating laser warning receiver developed by our group were analyzed; the theoretical calculations and experimental results indicate that the SNR analysis method is feasible and can be used in performance analysis of LWRs.
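The Neyman-Pearson step can be illustrated for the simplest Gaussian case: fix a false alarm rate, derive the threshold, then compute the detection probability for a given signal level (numbers are assumed, not the receiver's measured values):

```python
from statistics import NormalDist

def detection_threshold(sigma_noise, pfa):
    """Threshold giving false-alarm probability pfa for zero-mean Gaussian noise."""
    return sigma_noise * NormalDist().inv_cdf(1.0 - pfa)

def detection_probability(signal, sigma_noise, threshold):
    """Probability that signal + Gaussian noise exceeds the threshold."""
    return 1.0 - NormalDist(mu=signal, sigma=sigma_noise).cdf(threshold)

sigma = 1.0
T = detection_threshold(sigma, pfa=1e-6)     # ~ 4.75 sigma
pd = detection_probability(6.0, sigma, T)    # detection probability at this SNR
```

Inverting this chain (choose pfa and a required pd, solve for the minimum detectable signal) is what yields a receiver sensitivity figure.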
Effect of train carbody's parameters on vertical bending stiffness performance
NASA Astrophysics Data System (ADS)
Yang, Guangwu; Wang, Changke; Xiang, Futeng; Xiao, Shoune
2016-10-01
Finite element analysis (FEA) and modal testing are at present the main methods for obtaining the first-order vertical bending vibration frequency of a train carbody, but they are inefficient and time-consuming. Based on Timoshenko beam theory, the bending deformation, moment of inertia and shear deformation are considered. The carbody is divided into segments of equal length whose stiffnesses are combined by the series principle; each segment's cross-sectional area, moment of inertia and shear shape coefficient are equivalenced over the segment length, and the final corrected analytical formula for the first-order vertical bending vibration frequency is derived. The formula is tested on six simplified carbodies and one real carbody; all analytical frequencies are very close to their FEA counterparts, and for the real carbody the error between the analytical and experimental frequency is 0.75%. Based on the analytical formula, a sensitivity analysis of the real carbody's design parameters is performed and the main parameters are identified. Introducing the series principle of carbody stiffness into Timoshenko beam theory thus yields a formula that can quickly estimate the first-order vertical bending vibration frequency of a carbody without a traditional FEA model and provides a reference for design engineers.
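As a hedged illustration of the kind of closed-form estimate pursued here, the sketch below gives the classical Euler-Bernoulli first free-free bending frequency, which is the uncorrected starting point (the paper's Timoshenko shear and rotary-inertia corrections lower it); all numerical values are assumed, not the paper's carbody data:

```python
import math

def first_bending_freq_hz(E, I, rho, A, L):
    """First free-free bending frequency (Hz) of a uniform Euler-Bernoulli beam:
    f1 = (beta*L)^2 / (2*pi*L^2) * sqrt(E*I / (rho*A))."""
    beta_l = 4.730040745   # first root of cos(bl)*cosh(bl) = 1 (free-free)
    return (beta_l**2 / (2.0 * math.pi * L**2)) * math.sqrt(E * I / (rho * A))

# Illustrative aluminium-carbody-like numbers (assumed):
f1 = first_bending_freq_hz(E=7.0e10, I=0.05, rho=2700.0, A=0.05, L=24.0)
```

The paper's refinement replaces the uniform EI with segment stiffnesses combined in series, so that local cross-section changes along the carbody are reflected in the estimate.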
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because minimum information required in regression on chemical data for the estimation of model parameters by regression is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters when the initial sets of parameter values substantially deviated from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front near a time chosen by adding the inverse of an hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
Sensitivity of STIS First-Order Medium Resolution Modes
NASA Astrophysics Data System (ADS)
Proffitt, Charles R.
2006-07-01
The sensitivities for STIS first-order medium resolution modes were redetermined using on-orbit observations of the standard DA white dwarfs G 191-B2B, GD 71, and GD 153. We review the procedures and assumptions used to derive the adopted throughputs, and discuss the remaining errors and uncertainties.
Stochastic sensitivity measure for mistuned high-performance turbines
NASA Technical Reports Server (NTRS)
Murthy, Durbha V.; Pierre, Christophe
1992-01-01
A stochastic measure of sensitivity is developed in order to predict the effects of small random blade mistuning on the dynamic aeroelastic response of turbomachinery blade assemblies. This sensitivity measure is based solely on the nominal system design (i.e., on tuned system information), which makes it extremely easy and inexpensive to calculate. The measure has the potential to become a valuable design tool that will enable designers to evaluate mistuning effects at a preliminary design stage and thus assess the need for a full mistuned rotor analysis. The predictive capability of the sensitivity measure is illustrated by examining the effects of mistuning on the aeroelastic modes of the first stage of the oxidizer turbopump in the Space Shuttle Main Engine. Results from a full analysis of mistuned systems confirm that the simple stochastic sensitivity measure consistently predicts the drastic changes due to mistuning and the localization of aeroelastic vibration to a few blades.
NASA Astrophysics Data System (ADS)
Li, Yi; Xu, Yan Long
2018-05-01
When the dependence of the function on uncertain variables is non-monotonic over an interval, the interval of the function obtained by the classic interval extension based on the first-order Taylor series can exhibit significant errors. To reduce these errors, an improved form of the first-order Taylor interval extension is developed here that takes the monotonicity of the function into account. Two typical mathematical examples illustrate the methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of the method in practical applications; the only input data required are the function value at the central point of the interval, the sensitivity, and the deviation of the function. The results of the above examples show that the interval of the function obtained with the method developed in this paper is more accurate than that obtained by the classic method.
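A simplified sketch of the two ingredients: the classic first-order Taylor interval extension around the interval's center, versus tighter bounds obtained when the function is known to be monotonic on the interval (here from endpoint evaluation; the paper's variant uses only center value, sensitivity and deviation). The test function is an invented toy:

```python
def taylor_interval(f, df, c, r):
    """Classic first-order interval extension around center c with radius r:
    [f(c) - |f'(c)|*r, f(c) + |f'(c)|*r]."""
    half = abs(df(c)) * r
    return (f(c) - half, f(c) + half)

def monotone_interval(f, df, lo, hi):
    """Improved bounds when f is monotonic on [lo, hi]: exact endpoint values."""
    assert df(lo) * df(hi) >= 0, "f is not monotonic on the interval"
    a, b = f(lo), f(hi)
    return (min(a, b), max(a, b))

f = lambda x: x * x
df = lambda x: 2.0 * x
c, r = 2.0, 0.5                                   # interval [1.5, 2.5]
approx = taylor_interval(f, df, c, r)             # linearized bounds
exact = monotone_interval(f, df, c - r, c + r)    # exact here, f monotonic on [1.5, 2.5]
```

On this interval the linearized bounds (2.0, 6.0) both undershoot the true range (2.25, 6.25); exploiting monotonicity removes that error.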
Cascaded Amplitude Modulations in Sound Texture Perception
McWalter, Richard; Dau, Torsten
2017-01-01
Sound textures, such as crackling fire or chirping crickets, represent a broad class of sounds defined by their homogeneous temporal structure. It has been suggested that the perception of texture is mediated by time-averaged summary statistics measured from early auditory representations. In this study, we investigated the perception of sound textures that contain rhythmic structure, specifically second-order amplitude modulations that arise from the interaction of different modulation rates, previously described as “beating” in the envelope-frequency domain. We developed an auditory texture model that utilizes a cascade of modulation filterbanks that capture the structure of simple rhythmic patterns. The model was examined in a series of psychophysical listening experiments using synthetic sound textures—stimuli generated using time-averaged statistics measured from real-world textures. In a texture identification task, our results indicated that second-order amplitude modulation sensitivity enhanced recognition. Next, we examined the contribution of the second-order modulation analysis in a preference task, where the proposed auditory texture model was preferred over a range of model deviants that lacked second-order modulation rate sensitivity. Lastly, the discriminability of textures that included second-order amplitude modulations appeared to be perceived using a time-averaging process. Overall, our results demonstrate that the inclusion of second-order modulation analysis generates improvements in the perceived quality of synthetic textures compared to the first-order modulation analysis considered in previous approaches. PMID:28955191
Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian
2017-01-31
Lattice kinetic Monte Carlo simulations have become a vital tool for predictive-quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic-level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by the exuberant computational effort required to accurately sample numerical derivatives of a property obtained from a stochastic simulation method. In this study we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice-based models. This allows efficient evaluation even in critical regions near a second-order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.
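The degree of rate control being sampled here is X_i = (k_i / TOF) * dTOF/dk_i. On a deterministic toy two-step catalytic cycle (an assumption for illustration; the paper's point is precisely that this is hard for stochastic lattice models) it can be evaluated directly:

```python
def tof(k1, k2):
    """Steady-state turnover frequency of a toy two-step catalytic cycle
    (step 1 at rate k1, step 2 at rate k2): TOF = k1*k2 / (k1 + k2)."""
    return k1 * k2 / (k1 + k2)

def degree_of_rate_control(i, k1, k2, h=1e-7):
    """X_i = (k_i / TOF) * dTOF/dk_i, via central finite differences
    with a relative perturbation h of rate constant i."""
    k = [k1, k2]
    kp, km = k[:], k[:]
    kp[i] *= (1.0 + h)
    km[i] *= (1.0 - h)
    dtof_dki = (tof(*kp) - tof(*km)) / (2.0 * h * k[i])
    return k[i] * dtof_dki / tof(k1, k2)

k1, k2 = 10.0, 2.0
x1 = degree_of_rate_control(0, k1, k2)   # analytically k2/(k1+k2) = 1/6
x2 = degree_of_rate_control(1, k1, k2)   # analytically k1/(k1+k2) = 5/6
```

The slow step dominates (x2 is large), and the indices sum to one for this single-cycle model; the paper's three-stage approach recovers such derivatives efficiently when `tof` is a noisy kinetic Monte Carlo estimate rather than a formula.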
NASA Astrophysics Data System (ADS)
Yu, Bo; Ning, Chao-lie; Li, Bing
2017-03-01
A probabilistic framework for durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties under the environmental, material, structural and executional conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in various governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. Then the Nataf transformation was adopted to transform the non-normal random variables from the original physical space into the independent standard Normal space. After that the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparing with the second-order reliability method and the Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in the decision-making algorithm for the maintenance and repair of deteriorating concrete structures in marine environments.
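The first-order reliability method (FORM) used above can be illustrated in the simplest setting: a linear limit state g = R - S with independent normal resistance R and load S, where the Hasofer-Lind index is exact (the numbers below are assumed, not the paper's chloride model):

```python
from statistics import NormalDist

def form_linear(mu_r, sigma_r, mu_s, sigma_s):
    """Reliability index and failure probability for the linear limit state
    g = R - S with independent normal R (resistance) and S (load):
    beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2), Pf = Phi(-beta).
    FORM is exact in this special case."""
    beta = (mu_r - mu_s) / (sigma_r**2 + sigma_s**2) ** 0.5
    return beta, NormalDist().cdf(-beta)

# Illustrative durability numbers (assumed): chloride threshold vs. chloride content
beta, pf = form_linear(mu_r=2.0, sigma_r=0.3, mu_s=1.0, sigma_s=0.4)
```

For the nonlinear, time-dependent limit state of the paper, the same idea applies after the Nataf transformation to standard normal space, with beta found by searching for the most probable failure point; the gradient of the limit state supplies the parametric sensitivities.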
Analysis of multimode BDK doped POF gratings for temperature sensing
NASA Astrophysics Data System (ADS)
Luo, Yanhua; Wu, Wenxuan; Wang, Tongxin; Cheng, Xusheng; Zhang, Qijin; Peng, Gang-Ding; Zhu, Bing
2012-10-01
We report a temperature sensor based on a Bragg grating written in a benzil dimethyl ketal (BDK) doped multimode (MM) polymer optical fiber (POF) for the first time to our knowledge. The thermal response was further analyzed from the viewpoints of theory and experiment. In theory, as the order of the reflected mode increases from the 1st to the 60th, the temperature sensitivity of a MM silica fiber Bragg grating (FBG) will increase linearly from 16.2 pm/°C to 17.5 pm/°C, while that of a MM polymer FBG (absolute value) will increase linearly from -79.5 pm/°C to -104.4 pm/°C. In addition, the temperature sensitivity of the MM polymer FBG exhibits a mode-order dependence almost one order of magnitude larger than that of the MM silica FBG. In experiment, the Bragg wavelength shift declines linearly as the temperature rises, contrary to that of the MM silica FBG. The temperature sensitivity of the MM polymer FBG ranges from -0.097 nm/°C to -0.111 nm/°C, more than 8 times that of the MM silica FBG, showing great potential for use as a temperature sensor.
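The linear thermal response described above follows, to first order, dλ_B/dT = λ_B(α + ξ), with thermal-expansion coefficient α and thermo-optic coefficient ξ. A back-of-envelope sketch; the coefficient values are typical textbook assumptions, chosen only to reproduce the signs and rough magnitudes reported, not the authors' fitted values:

```python
# Hedged sketch: first-order FBG temperature sensitivity
#   d(lambda_B)/dT = lambda_B * (alpha + xi).
# Coefficients below are illustrative textbook-style values (assumptions):
# for polymer fiber the thermo-optic coefficient xi is negative and large,
# which is why the polymer FBG wavelength shifts down with temperature.

def fbg_temp_sensitivity(lambda_b_nm, alpha, xi):
    """Return sensitivity in pm/degC for a Bragg wavelength given in nm."""
    return lambda_b_nm * 1e3 * (alpha + xi)   # nm -> pm

silica = fbg_temp_sensitivity(1550.0, 0.55e-6, 8.6e-6)    # positive
polymer = fbg_temp_sensitivity(1550.0, 7.0e-5, -1.2e-4)   # negative (xi < 0)
print(round(silica, 1), round(polymer, 1))   # → 14.2 -77.5
```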
The high-order decoupled direct method in three dimensions for particulate matter (HDDM-3D/PM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to enable advanced sensitivity analysis. The major effort of this work is to develop high-order DDM sensitivity...
Comparison between two methodologies for urban drainage decision aid.
Moura, P M; Baptista, M B; Barraud, S
2006-01-01
The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis; the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies yield equivalent results and exhibit low sensitivity and high robustness. These results show that the Brazilian methodology is consistent and can be used safely in order to select a good solution, or a small set of good solutions, that could be compared with more detailed methods afterwards.
Age-related changes in perception of movement in driving scenes.
Lacherez, Philippe; Turner, Laura; Lester, Robert; Burns, Zoe; Wood, Joanne M
2014-07-01
Age-related changes in motion sensitivity have been found to relate to reductions in various indices of driving performance and safety. The aim of this study was to investigate the basis of this relationship in terms of determining which aspects of motion perception are most relevant to driving. Participants included 61 regular drivers (age range 22-87 years). Visual performance was measured binocularly. Measures included visual acuity, contrast sensitivity and motion sensitivity assessed using four different approaches: (1) threshold minimum drift rate for a drifting Gabor patch, (2) Dmin from a random dot display, (3) threshold coherence from a random dot display, and (4) threshold drift rate for a second-order (contrast modulated) sinusoidal grating. Participants then completed the Hazard Perception Test (HPT) in which they were required to identify moving hazards in videos of real driving scenes, and also a Direction of Heading task (DOH) in which they identified deviations from normal lane keeping in brief videos of driving filmed from the interior of a vehicle. In bivariate correlation analyses, all motion sensitivity measures significantly declined with age. Motion coherence thresholds, and minimum drift rate threshold for the first-order stimulus (Gabor patch) both significantly predicted HPT performance even after controlling for age, visual acuity and contrast sensitivity. Bootstrap mediation analysis showed that individual differences in DOH accuracy partly explained these relationships, where those individuals with poorer motion sensitivity on the coherence and Gabor tests showed decreased ability to perceive deviations in motion in the driving videos, which related in turn to their ability to detect the moving hazards. 
The ability to detect subtle movements in the driving environment (as determined by the DOH task) may be an important contributor to effective hazard perception, and is associated with age and an individual's performance on tests of motion sensitivity. The locus of the processing deficits appears to lie in first-order, rather than second-order, motion pathways. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox
NASA Astrophysics Data System (ADS)
Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.
2016-12-01
VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL also offers two novel features: the first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties; the second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.
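The derivative-based (Morris) family that VARS-TOOL reports can be sketched with a minimal elementary-effects estimator. The toy model and all settings below are illustrative and do not reproduce the toolbox's actual sampling scheme:

```python
# Hedged sketch of the Morris elementary-effects idea: mu* is the mean
# absolute elementary effect |f(x + delta*e_i) - f(x)| / delta over random
# base points. Toy model only; not VARS-TOOL's implementation.
import random

def model(x):                  # toy: x[1] matters most, x[2] is inert
    return x[0] + 5.0 * x[1] ** 2

def morris_mu_star(f, dim, r=200, delta=0.1, seed=7):
    rng = random.Random(seed)
    mu = [0.0] * dim
    for _ in range(r):
        x = [rng.random() * (1 - delta) for _ in range(dim)]
        base = f(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta
            mu[i] += abs(f(xp) - base) / delta
    return [m / r for m in mu]

mu = morris_mu_star(model, 3)
print([round(m, 2) for m in mu])   # mu*[1] dominates, mu*[2] is zero
```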
Numerical analysis of the beam position monitor pickup for the Iranian light source facility
NASA Astrophysics Data System (ADS)
Shafiee, M.; Feghhi, S. A. H.; Rahighi, J.
2017-03-01
In this paper, we describe the design of a button-type Beam Position Monitor (BPM) for the low-emittance storage ring of the Iranian Light Source Facility (ILSF). First, we calculate sensitivities, induced power and intrinsic resolution by solving the Laplace equation numerically with the finite element method (FEM), in order to find the potential at each point of the BPM's electrode surface. After the optimization of the designed BPM, trapped high-order modes (HOM), wakefield and thermal loss effects are calculated. Finally, after fabrication of the BPM, it is experimentally tested using a test stand. The results show that the designed BPM has a linear response in an area of 2×4 mm2 inside the beam pipe and sensitivities of 0.080 and 0.087 mm-1 in the horizontal and vertical directions. Experimental results are in good agreement with the numerical analysis.
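For a button BPM of this kind, beam position is conventionally recovered from a difference-over-sum estimate linearized by the sensitivities. A sketch using the 0.080 and 0.087 mm⁻¹ values quoted above in the role of Sx and Sy; the button signals and their geometric assignment are invented for illustration:

```python
# Hedged sketch: standard difference-over-sum position estimate for a
# four-button BPM. Sensitivities Sx, Sy are taken from the abstract; the
# button ordering and signal values are illustrative assumptions.

def bpm_position(a, b, c, d, sx=0.080, sy=0.087):
    """Buttons a (top-right), b (top-left), c (bottom-left), d (bottom-right);
    returns the estimated (x, y) beam offset in mm."""
    s = a + b + c + d
    x = ((a + d) - (b + c)) / (sx * s)
    y = ((a + b) - (c + d)) / (sy * s)
    return x, y

x, y = bpm_position(1.02, 0.98, 0.98, 1.02)   # slight horizontal offset
print(round(x, 3), round(y, 3))
```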
2005-06-06
sapwood area is usually consistent (Pataki et al., 2000; Smith et al., 1995). This relation suggests that larger trees may be susceptible to... area, since leaf-area to sapwood-area ratios are usually consistent. Larger trees along higher order channels, therefore, may prove to be more sensitive to... measurements of bulk soil electrical conductivity to measure soil moisture and possible anthropogenic effects over large areas as long as
Theory of Mind and Sensitivity to Teacher and Peer Criticism among Japanese Children
ERIC Educational Resources Information Center
Mizokawa, Ai
2015-01-01
This study investigated sensitivity to teacher and peer criticism among 89 Japanese 6-year-olds and examined the connection between sensitivity to criticism and first-order and second-order theory of mind separately. Participants completed a common test battery that included tasks assessing sensitivity to criticism (teacher or peer condition), the…
Global Sensitivity Analysis with Small Sample Sizes: Ordinary Least Squares Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Michael J.; Liu, Wei; Sivaramakrishnan, Raghu
2016-12-21
A new version of global sensitivity analysis is developed in this paper. This new version, coupled with tools from statistics, machine learning, and optimization, can devise small sample sizes that allow for the accurate ordering of sensitivity coefficients for the first 10-30 most sensitive chemical reactions in complex chemical-kinetic mechanisms, and is particularly useful for studying the chemistry in realistic devices. A key part of the paper is calibration of these small samples. Because these small sample sizes are developed for use in realistic combustion devices, the calibration is done over the ranges of conditions in such devices, with a test case being the operating conditions of a compression ignition engine studied earlier. Compression ignition engines operate under low-temperature combustion conditions with quite complicated chemistry, making this calibration difficult and leading to the possibility of false positives and false negatives in the ordering of the reactions. So an important aspect of the paper is showing how to handle the trade-off between false positives and false negatives using ideas from the multiobjective optimization literature. The combination of the new global sensitivity method and the calibration yields sample sizes approximately a factor of 10 smaller than were available with our previous algorithm.
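The core idea, fitting an ordinary-least-squares surrogate to a modest number of model runs and ranking inputs by coefficient magnitude, can be sketched as follows. The toy linear model stands in for the chemical-kinetic mechanism, and nothing here reproduces the paper's calibration machinery:

```python
# Hedged sketch: rank input sensitivities by |OLS coefficient| from a small
# sample of model runs. Toy noiseless linear model; no combustion chemistry.
import random

def ols_rank(xs, ys):
    """Normal-equations OLS (no intercept) plus a sensitivity ordering."""
    dim = len(xs[0])
    xtx = [[sum(r[i] * r[j] for r in xs) for j in range(dim)]
           for i in range(dim)]
    xty = [sum(r[i] * y for r, y in zip(xs, ys)) for i in range(dim)]
    # Gaussian elimination (well-conditioned toy Gram matrix, no pivoting)
    for i in range(dim):
        for j in range(i + 1, dim):
            f = xtx[j][i] / xtx[i][i]
            xtx[j] = [a - f * b for a, b in zip(xtx[j], xtx[i])]
            xty[j] -= f * xty[i]
    beta = [0.0] * dim
    for i in reversed(range(dim)):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, dim))) / xtx[i][i]
    order = sorted(range(dim), key=lambda i: -abs(beta[i]))
    return beta, order

rng = random.Random(0)
true = [0.2, -3.0, 1.0]                       # known coefficients
xs = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(40)]
ys = [sum(t * v for t, v in zip(true, x)) for x in xs]
beta, order = ols_rank(xs, ys)
print(order)   # → [1, 2, 0]: input 1 is the most sensitive
```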
Negative axial strain sensitivity in gold-coated eccentric fiber Bragg gratings
Chah, Karima; Kinet, Damien; Caucheteur, Christophe
2016-01-01
A new dual temperature and strain sensor has been designed using eccentric second-order fiber Bragg gratings produced in standard single-mode optical fiber by a point-by-point direct writing technique with tight focusing of 800 nm femtosecond laser pulses. With a thin gold coating at the grating location, we experimentally show that such gratings exhibit a transmitted amplitude spectrum composed of the Bragg and cladding-mode resonances, extending over a wide spectral range exceeding one octave. An overlap of the first-order and second-order spectra is then observed. High-order cladding modes belonging to the first-order Bragg resonance coupling lie close to the second-order Bragg resonance; they show a negative axial strain sensitivity (−0.55 pm/με) compared to the Bragg resonance (1.20 pm/με) and the same temperature sensitivity (10.6 pm/°C). With this well-conditioned system, temperature and strain can be determined independently with high sensitivity, in a wavelength range limited to a few nanometers. PMID:27901059
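With the quoted sensitivities treated as exact, dual temperature-strain recovery is a 2×2 matrix inversion. The wavelength-shift measurements below are invented for illustration:

```python
# Hedged sketch: dual-parameter recovery from two resonances with the
# sensitivities quoted in the abstract (assumed exact). The measured shift
# values are illustrative, not experimental data.

K = [[1.20, 10.6],      # Bragg resonance: [pm/microstrain, pm/degC]
     [-0.55, 10.6]]     # high-order cladding mode: [pm/microstrain, pm/degC]

def solve2(K, dl):
    """Invert dl = K @ [strain, temp] by Cramer's rule."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    strain = (dl[0] * K[1][1] - dl[1] * K[0][1]) / det
    temp = (K[0][0] * dl[1] - K[1][0] * dl[0]) / det
    return strain, temp

# Illustrative shifts consistent with 100 microstrain and 5 degC:
strain, temp = solve2(K, [173.0, -2.0])
print(round(strain, 2), round(temp, 2))   # → 100.0 5.0
```

Note that the opposite signs of the two strain sensitivities make the matrix well conditioned, which is exactly the point of the sensor design.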
NASA Astrophysics Data System (ADS)
Lian, Enyang; Ren, Yingyu; Han, Yunfeng; Liu, Weixin; Jin, Ningde; Zhao, Junying
2016-11-01
The multi-scale analysis is an important method for detecting nonlinear systems. In this study, we carry out experiments and measure the fluctuation signals from a rotating electric field conductance sensor with eight electrodes. We first use a recurrence plot to recognise flow patterns in vertical upward gas-liquid two-phase pipe flow from measured signals. Then we apply a multi-scale morphological analysis based on the first-order difference scatter plot to investigate the signals captured from the vertical upward gas-liquid two-phase flow loop test. We find that the invariant scaling exponent extracted from the multi-scale first-order difference scatter plot with the bisector of the second-fourth quadrant as the reference line is sensitive to the inhomogeneous distribution characteristics of the flow structure, and the variation trend of the exponent is helpful to understand the process of breakup and coalescence of the gas phase. In addition, we explore the dynamic mechanism influencing the inhomogeneous distribution of the gas phase in terms of adaptive optimal kernel time-frequency representation. The research indicates that the system energy is a factor influencing the distribution of the gas phase and the multi-scale morphological analysis based on the first-order difference scatter plot is an effective method for indicating the inhomogeneous distribution of the gas phase in gas-liquid two-phase flow.
Sensitivity of high-elevation streams in the Southern Blue Ridge Province to acidic deposition
Winger, P.V.; Lasier, P.J.; Hudy, M.; Fowler, D.; Van Den Avyle, M.J.
1987-01-01
The Southern Blue Ridge Province, which encompasses parts of northern Georgia, eastern Tennessee, and western North Carolina, has been predicted to be sensitive to impacts from acidic deposition, owing to the chemical composition of the bedrock geology and soils. This study confirms the predicted potential sensitivity, quantifies the level of total alkalinity and describes the chemical characteristics of 30 headwater streams of this area. Water chemistry was measured five times between April 1983 and June 1984 at first and third order reaches of each stream during baseflow conditions. Sensitivity based on total alkalinity and the Calcite Saturation Index indicates that the headwater streams of the Province are vulnerable to acidification. Total alkalinity and pH were generally higher in third order reaches (means, 72 µeq/L and 6.7) than in first order reaches (64 µeq/L and 6.4). Ionic concentrations were low, averaging 310 and 340 µeq/L in first and third order reaches, respectively. A single sampling appears adequate for evaluating sensitivity based on total alkalinity, but large temporal variability requires multiple sampling for the detection of changes in pH and alkalinity over time. Monitoring of stream water should continue in order to detect any subtle effects of acidic deposition on these unique resource systems.
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2001-01-01
This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
An investigation of using an RQP based method to calculate parameter sensitivity derivatives
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many of our modern day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second order information about the Lagrangian at the current point, and (2) the estimates assume no change in the active set of constraints. The first of these two problems is addressed here and a new algorithm is proposed that does not require explicit calculation of second order information.
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analyses of the demonstrative example are compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
Cournane, S; Sheehy, N; Cooke, J
2014-06-01
Benford's law is an empirical observation which predicts the expected frequency of digits in naturally occurring datasets spanning multiple orders of magnitude, with the law having been most successfully applied as an audit tool in accountancy. This study investigated the sensitivity of the technique in identifying system output changes using simulated changes in interventional radiology Dose-Area-Product (DAP) data, with any deviations from Benford's distribution identified using z-statistics. The radiation output for interventional radiology X-ray equipment is monitored annually during quality control testing; however, for a considerable portion of the year an increased output of the system, potentially caused by engineering adjustments or spontaneous system faults, may go unnoticed, leading to a potential increase in the radiation dose to patients. In normal operation, recorded examination radiation outputs vary over multiple orders of magnitude, rendering the application of normal statistics ineffective for detecting systematic changes in the output. In this work, the annual DAP datasets complied with Benford's first-order law for the first, second, and combined first and second digits. Further, a continuous 'rolling' second-order technique was devised for trending simulated changes over shorter timescales. This distribution analysis, the first employment of the method for radiation output trending, detected significant changes simulated on the original data, proving the technique useful in this case. The potential is demonstrated for implementation of this novel analysis for monitoring and identifying change in suitable datasets for the purpose of system process control. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
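A first-order Benford check with per-digit z-statistics can be sketched as follows. Synthetic log-uniform data spanning four decades stands in for the DAP records, and the z-statistic form is the usual normal approximation, not necessarily the exact statistic used in the study:

```python
# Hedged sketch: first-digit Benford frequencies and per-digit z-statistics.
# Synthetic multi-decade data; illustrative only, not the study's DAP data.
import math
import random

BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit_counts(data):
    counts = {d: 0 for d in range(1, 10)}
    for v in data:
        counts[int(str(abs(v)).lstrip('0.')[0])] += 1
    return counts

def benford_z(counts):
    """Normal-approximation z-statistic per digit against Benford's law."""
    n = sum(counts.values())
    return {d: (counts[d] / n - p) / math.sqrt(p * (1 - p) / n)
            for d, p in BENFORD.items()}

rng = random.Random(42)
data = [10 ** rng.uniform(0, 4) for _ in range(5000)]   # spans 4 decades
counts = first_digit_counts(data)
z = benford_z(counts)
print(max(counts, key=counts.get))   # → 1 (digit 1 dominates, ~30.1%)
```

A systematic output change would shift the digit frequencies and drive individual |z| values beyond the chosen control limit.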
Pavlovian second-order conditioned analgesia.
Ross, R T
1986-01-01
Three experiments with rat subjects assessed conditioned analgesia in a Pavlovian second-order conditioning procedure by using inhibition of responding to thermal stimulation as an index of pain sensitivity. In Experiment 1, rats receiving second-order conditioning showed longer response latencies during a test of pain sensitivity in the presence of the second-order conditioned stimulus (CS) than rats receiving appropriate control procedures. Experiment 2 found that extinction of the first-order CS had no effect on established second-order conditioned analgesia. Experiment 3 evaluated the effects of post second-order conditioning pairings of morphine and the shock unconditioned stimulus (US). Rats receiving paired morphine-shock presentations showed significantly shorter response latencies during a hot-plate test of pain sensitivity in the presence of the second-order CS than did groups of rats receiving various control procedures; second-order analgesia was attenuated. These data extend the associative account of conditioned analgesia to second-order conditioning situations and are discussed in terms of the mediation of both first- and second-order analgesia by an association between the CS and a representation or expectancy of the US, which may directly activate endogenous pain inhibition systems.
Xu, Li; Jiang, Yong; Qiu, Rong
2018-01-01
In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R2-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at the 95% confidence interval; the F-test, lack-of-fit test and normal probability plots of the residues implied the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were proposed using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
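The Sobol' first-order index used above can be sketched with a pick-freeze estimator on a toy additive model. This is a generic estimator, not the response-surface decomposition used in the paper:

```python
# Hedged sketch: Sobol' first-order indices by a pick-freeze estimator,
#   V_i ≈ mean[f(A) * (f(AB_i) - f(B))],  S_i = V_i / Var(f),
# where AB_i takes column i from sample A and the rest from B.
# Toy additive model; illustrative only.
import random

def sobol_first(f, dim, n=20000, seed=3):
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    fB = [f(b) for b in B]
    mu = sum(fA) / n
    var = sum((y - mu) ** 2 for y in fA) / n
    S = []
    for i in range(dim):
        ABi = [b[:i] + [a[i]] + b[i + 1:] for a, b in zip(A, B)]
        cov = sum(fa * (f(ab) - fb)
                  for fa, ab, fb in zip(fA, ABi, fB)) / n
        S.append(cov / var)
    return S

f = lambda x: x[0] + 4.0 * x[1]   # true S = (1/17, 16/17) ≈ (0.06, 0.94)
S = sobol_first(f, 2)
print([round(s, 2) for s in S])
```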
Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat
2016-01-01
The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between levels 2 and 5 and the rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.
Dynamic analysis of process reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadle, L.J.; Lawson, L.O.; Noel, S.D.
1995-06-01
The approach and methodology of conducting a dynamic analysis is presented in this poster session in order to describe how this type of analysis can be used to evaluate the operation and control of process reactors. Dynamic analysis of the PyGas{trademark} gasification process is used to illustrate the utility of this approach. PyGas{trademark} is the gasifier being developed for the Gasification Product Improvement Facility (GPIF) by Jacobs-Siffine Engineering and Riley Stoker. In the first step of the analysis, process models are used to calculate the steady-state conditions and associated sensitivities for the process. For the PyGas{trademark} gasifier, the process models are non-linear mechanistic models of the jetting fluidized-bed pyrolyzer and the fixed-bed gasifier. These process sensitivities are key input, in the form of gain parameters or transfer functions, to the dynamic engineering models.
Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models
2015-03-16
sensitivity value was the maximum uncertainty in that value estimated by the Sobol method. 2.4. Global Sensitivity Analysis of the Reduced Order Coagulation... sensitivity analysis, using the variance-based method of Sobol, to estimate which parameters controlled the performance of the reduced order model [69]. We... Environment. Comput. Sci. Eng. 2007, 9, 90-95. 69. Sobol, I. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates
Hestekin, Christa N.; Lin, Jennifer S.; Senderowicz, Lionel; Jakupciak, John P.; O’Connell, Catherine; Rademaker, Alfred; Barron, Annelise E.
2012-01-01
Knowledge of the genetic changes that lead to disease has grown and continues to grow at a rapid pace. However, there is a need for clinical devices that can be used routinely to translate this knowledge into the treatment of patients. Use in a clinical setting requires high sensitivity and specificity (>97%) in order to prevent misdiagnoses. Single strand conformational polymorphism (SSCP) and heteroduplex analysis (HA) are two DNA-based, complementary methods for mutation detection that are inexpensive and relatively easy to implement. However, both methods are most commonly detected by slab gel electrophoresis, which can be labor-intensive, time-consuming, and often the methods are unable to produce high sensitivity and specificity without the use of multiple analysis conditions. Here we demonstrate the first blinded study using microchip electrophoresis-SSCP/HA. We demonstrate the ability of microchip electrophoresis-SSCP/HA to detect with 98% sensitivity and specificity >100 samples from the p53 gene exons 5–9 in a blinded study in an analysis time of less than 10 minutes. PMID:22002021
Integrating aerodynamics and structures in the minimum weight design of a supersonic transport wing
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois M.; Wrenn, Gregory A.; Dovi, Augustine R.; Coen, Peter G.; Hall, Laura E.
1992-01-01
An approach is presented for determining the minimum weight design of aircraft wing models which takes into consideration aerodynamics-structure coupling when calculating both zeroth order information needed for analysis and first order information needed for optimization. When performing sensitivity analysis, coupling is accounted for by using a generalized sensitivity formulation. The results presented show that the aeroelastic effects are calculated properly and noticeably reduce constraint approximation errors. However, for the particular example selected, the error introduced by ignoring aeroelastic effects is not sufficient to significantly affect the convergence of the optimization process. Trade studies are reported that consider different structural materials, internal spar layouts, and panel buckling lengths. For the formulation, model and materials used in this study, an advanced aluminum material produced the lightest design while satisfying the problem constraints. Also, shorter panel buckling lengths resulted in lower weights by permitting smaller panel thicknesses and generally, by unloading the wing skins and loading the spar caps. Finally, straight spars required slightly lower wing weights than angled spars.
An EEG should not be obtained routinely after first unprovoked seizure in childhood.
Gilbert, D L; Buncher, C R
2000-02-08
To quantify and analyze the value of expected information from an EEG after first unprovoked seizure in childhood. An EEG is often recommended as part of the standard diagnostic evaluation after first seizure. A MEDLINE search from 1980 to 1998 was performed. From eligible studies, data on EEG results and seizure recurrence risk in children were abstracted, and sensitivity, specificity, and positive and negative predictive values of EEG in predicting recurrence were calculated. Linear information theory was used to quantify and compare the expected information from the EEG in all studies. Standard test-treat decision analysis with a treatment threshold at 80% recurrence risk was used to determine the range of pretest recurrence probabilities over which testing affects treatment decisions. Four studies involving 831 children were eligible for analysis. At best, the EEG had a sensitivity of 61%, a specificity of 71%, and an expected information of 0.16 out of a possible 0.50. The pretest probability of recurrence was less than the lower limit of the range for rational testing in all studies. In this analysis, the quantity of expected information from the EEG was too low to affect treatment recommendations in most patients. EEG should be ordered selectively, not routinely, after first unprovoked seizure in childhood.
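The 2×2-table metrics reported above can be sketched directly. The counts below are invented, chosen only so that sensitivity and specificity reproduce the 61%/71% best-case figures quoted; the linear expected-information measure is study-specific and not reproduced here:

```python
# Hedged sketch: standard diagnostic-test metrics from a 2x2 table.
# Counts are illustrative, picked to match the abstract's 61%/71%.

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and predictive values from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

m = diagnostic_metrics(tp=61, fp=29, fn=39, tn=71)
print({k: round(v, 2) for k, v in m.items()})
```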
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinthavali, Madhu Sudhan; Wang, Zhiqiang
This paper presents a detailed parametric sensitivity analysis for a wireless power transfer (WPT) system in an electric vehicle application. Specifically, several key parameters for sensitivity analysis of a series-parallel (SP) WPT system are derived first based on an analytical modeling approach, which includes the equivalent input impedance, active/reactive power, and DC voltage gain. Based on the derivation, the impact of the primary-side compensation capacitance, coupling coefficient, transformer leakage inductance, and different load conditions on the DC voltage gain curve and power curve are studied and analyzed. It is shown that the desired power can be achieved by changing just frequency or voltage, depending on the design value of the coupling coefficient. However, in some cases both have to be modified in order to achieve the required power transfer.
NASA Astrophysics Data System (ADS)
Siadaty, Moein; Kazazi, Mohsen
2018-04-01
Convective heat transfer, entropy generation and pressure drop of two water-based nanofluids (Cu-water and Al2O3-water) in horizontal annular tubes are scrutinized by means of computational fluid dynamics (CFD), response surface methodology (RSM) and sensitivity analysis. First, central composite design is used to set up a series of numerical experiments spanning diameter ratio, length-to-diameter ratio, Reynolds number and solid volume fraction. Then, CFD is used to calculate the Nusselt number, Euler number and entropy generation. After that, RSM is applied to fit second-order polynomials to the responses. Finally, sensitivity analysis is conducted for the above-mentioned parameters inside the tube. In total, 62 different cases are examined. CFD results show that Cu-water and Al2O3-water have the highest and lowest heat transfer rates, respectively. In addition, analysis of variance indicates that an increase in solid volume fraction increases the dimensionless pressure drop for Al2O3-water. Moreover, it has a significant negative effect on the Cu-water Nusselt number and an insignificant effect on the Euler number. Analysis of the Bejan number indicates that frictional and thermal entropy generation are the dominant irreversibilities in Al2O3-water and Cu-water flows, respectively. Sensitivity analysis indicates that the dimensionless pressure drop sensitivity to tube length for Cu-water is independent of its diameter ratio at different Reynolds numbers.
Rosenbaum, Paul R
2016-03-01
A common practice with ordered doses of treatment and ordered responses, perhaps recorded in a contingency table with ordered rows and columns, is to cut or remove a cross from the table, leaving the outer corners--that is, the high-versus-low dose, high-versus-low response corners--and from these corners to compute a risk or odds ratio. This little remarked but common practice seems to be motivated by the oldest and most familiar method of sensitivity analysis in observational studies, proposed by Cornfield et al. (1959), which says that to explain a population risk ratio purely as bias from an unobserved binary covariate, the prevalence ratio of the covariate must exceed the risk ratio. Quite often, the largest risk ratio, hence the one least sensitive to bias by this standard, is derived from the corners of the ordered table with the central cross removed. Obviously, the corners use only a portion of the data, so a focus on the corners has consequences for the standard error as well as for bias, but sampling variability was not a consideration in this early and familiar form of sensitivity analysis, where point estimates replaced population parameters. Here, this cross-cut analysis is examined with the aid of design sensitivity and the power of a sensitivity analysis. © 2015, The International Biometric Society.
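The cross-cut computation described above can be sketched directly: cut the central cross from an ordered dose-by-response table and form a risk ratio from the four outer corners. The table below is invented for illustration; under Cornfield et al.'s criterion, an unobserved binary covariate could explain the resulting risk ratio only if its prevalence ratio exceeded it.

```python
import numpy as np

# Hedged sketch of the "cross-cut" corner analysis. The ordered 3x3
# dose x response table is illustrative, not from any study.

def corner_risk_ratio(table):
    """table[i, j] = count at ordered dose row i, ordered response col j.
    Removes the central cross, keeping only the four corners:
    high-vs-low dose, high-vs-low response."""
    t = np.asarray(table, dtype=float)
    low_bad, low_good = t[0, -1], t[0, 0]   # low dose: high / low response
    hi_bad, hi_good = t[-1, -1], t[-1, 0]   # high dose: high / low response
    risk_low = low_bad / (low_bad + low_good)
    risk_hi = hi_bad / (hi_bad + hi_good)
    return risk_hi / risk_low

table = [[80, 15, 5],    # low dose
         [60, 25, 15],   # middle dose (discarded by the cross-cut)
         [40, 30, 30]]   # high dose
rr = corner_risk_ratio(table)
```

As the abstract notes, the corners use only part of the data, so the price of the larger point estimate is a larger standard error, which this early form of sensitivity analysis ignored.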
Wu, Chiu-Hsien; Jiang, Guo-Jhen; Chang, Kai-Wei; Deng, Zu-Yin; Li, Yu-Ning; Chen, Kuen-Lin; Jeng, Chien-Chung
2018-01-09
In this study, the sensing properties of an amorphous indium gallium zinc oxide (a-IGZO) thin film at ozone concentrations from 500 to 5 ppm were investigated. The a-IGZO thin film showed very good reproducibility and stability over three test cycles. The ozone concentration of 60-70 ppb also showed a good response. The resistance change (Δ R ) and sensitivity ( S ) were linearly dependent on the ozone concentration. The response time ( T 90-res ), recovery time ( T 90-rec ), and time constant (τ) showed first-order exponential decay with increasing ozone concentration. The resistance-time curve shows that the maximum resistance change rate (dRg/dt) is proportional to the ozone concentration during the adsorption. The results also show that it is better to sense rapidly and stably at a low ozone concentration using a high light intensity. The ozone concentration can be derived from the resistance change, sensitivity, response time, time constant (τ), and first derivative function of resistance. However, the time of the first derivative function of resistance is shorter than other parameters. The results show that a-IGZO thin films and the first-order differentiation method are promising candidates for use as ozone sensors for practical applications.
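The reported first-order exponential dependence of the timing parameters on concentration can be recovered by log-linear regression. A sketch on synthetic values; the model form y = A·exp(-k·c), the units, and the numbers are assumptions for illustration, not the paper's calibration:

```python
import numpy as np

# Hedged sketch: fit a first-order exponential decay y = A * exp(-k * c)
# (as reported for response/recovery times vs. ozone concentration) by
# linear least squares on log(y). Data below are noiseless and synthetic.

def fit_exp_decay(c, y):
    """Return (A, k) for y ~ A * exp(-k * c) via log-linear regression."""
    slope, intercept = np.polyfit(c, np.log(y), 1)
    return np.exp(intercept), -slope

c = np.array([5.0, 10.0, 20.0, 40.0, 80.0])  # concentration (illustrative)
y = 30.0 * np.exp(-0.05 * c)                 # synthetic T90 response times
A, k = fit_exp_decay(c, y)
```

With noisy measurements the same fit yields the decay constant and amplitude that, per the abstract, let concentration be inferred from timing parameters as well as from the resistance change.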
2010-01-01
Background Susceptibility to atopy originates from effects of the environment on genes. Birth order has been identified as a risk factor for atopy and evidence for some candidate genes has been accumulated; however no study has yet assessed a birth order-gene interaction. Objective To investigate the interaction of IL13 polymorphisms with birth order on allergic sensitization at ages 4, 10 and 18 years. Methods Mother-infant dyads were recruited antenatally and followed prospectively to age 18 years. Questionnaire data (at birth, age 4, 10, 18); skin prick test (SPT) at ages 4, 10, 18; total serum IgE and specific inhalant screen at age 10; and genotyping for IL13 were collected. Three SNPs were selected from IL13: rs20541 (exon 4, nonsynonymous SNP), rs1800925 (promoter region) and rs2066960 (intron 1). Analysis included multivariable log-linear regression analyses using repeated measurements to estimate prevalence ratios (PRs). Results Of the 1456 participants, birth order information was available for 83.2% (1212/1456); SPT was performed on 67.4% at age 4, 71.2% at age 10 and 58.0% at age 18. The prevalence of atopy (sensitization to one or more food or aeroallergens) increased from 19.7% at age 4, to 26.7% at age 10 and 41.1% at age 18. Repeated measurement analysis indicated an interaction between rs20541 and birth order on SPT. The stratified analyses demonstrated that the effect of IL13 on SPT was restricted to first-born children only (p = 0.007; adjusted PR = 1.35; 95%CI = 1.09, 1.69). Similar findings were noted for firstborns regarding elevated total serum IgE at age 10 (p = 0.007; PR = 1.73; 1.16, 2.57) and specific inhalant screen (p = 0.034; PR = 1.48; 1.03, 2.13). Conclusions This is the first study to show an interaction between birth order and IL13 polymorphisms on allergic sensitization. Future functional genetic research needs to determine whether or not birth order is related to altered expression and methylation of the IL13 gene. PMID:20403202
NASA Technical Reports Server (NTRS)
Johnston, John D.; Parrish, Keith; Howard, Joseph M.; Mosier, Gary E.; McGinnis, Mark; Bluth, Marcel; Kim, Kevin; Ha, Hong Q.
2004-01-01
This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical, often referred to as "STOP", analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. The paper begins with an overview of multi-disciplinary engineering analysis, or integrated modeling, which is a critical element of the JWST mission. The STOP analysis process is then described. This process consists of the following steps: thermal analysis, structural analysis, and optical analysis. Temperatures predicted using geometric and thermal math models are mapped to the structural finite element model in order to predict thermally-induced deformations. Motions and deformations at optical surfaces are input to optical models, and optical performance is predicted using either an optical ray trace or WFE estimation techniques based on prior ray traces or first-order optics. Following the discussion of the analysis process, results are presented based on models representing the design at the time of the System Requirements Review. In addition to baseline performance predictions, sensitivity studies are performed to assess modeling uncertainties. Of particular interest is the sensitivity of optical performance to uncertainties in temperature predictions and variations in metal properties. The paper concludes with a discussion of modeling uncertainty as it pertains to STOP analysis.
Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities
NASA Astrophysics Data System (ADS)
Esposito, Gaetano
Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty, due to the limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species and 784 reactions is demonstrated. The resulting reduced skeletal model of 37-38 species predicts the global ignition/propagation/extinction phenomena of ethylene-air mixtures within 2% of the full detailed model.
The problems of both understanding non-linear interactions between kinetic parameters and identifying sources of uncertainty affecting relevant reaction pathways are usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling -- High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected by mostly only first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the reaction rates including the vinyl radical, C2H3, can drastically reduce the uncertainty of predicting targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate between its own uncertainty bounds. Therefore this study demonstrates that the uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
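The Morris screening step named above ranks parameters by the mean absolute elementary effect along one-at-a-time trajectories. A minimal sketch on a cheap toy function standing in for a kinetic-model response; the function, grid settings, and trajectory count are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of Morris elementary-effects screening. The toy
# 3-parameter function is a stand-in for an expensive model response.

def morris_mu_star(f, dim, trajectories=50, levels=4, seed=0):
    """Return mu* (mean |elementary effect|) per input factor."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))  # standard Morris step size
    ee = [[] for _ in range(dim)]
    for _ in range(trajectories):
        # random start on the lower half of the grid so x + delta <= 1
        x = rng.integers(0, levels // 2, size=dim) / (levels - 1)
        for i in rng.permutation(dim):     # move one factor at a time
            x_new = x.copy()
            x_new[i] += delta
            ee[i].append((f(x_new) - f(x)) / delta)
            x = x_new
    return np.array([np.mean(np.abs(e)) for e in ee])

f = lambda x: 10.0 * x[0] + x[1] ** 2 + 0.1 * x[2]  # toy response
mu_star = morris_mu_star(f, dim=3)
# Factor 0 dominates and factor 2 is nearly inert, mirroring how the
# most sensitive reactions are identified before the RS-HDMR stage.
```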
Computational methods of robust controller design for aerodynamic flutter suppression
NASA Technical Reports Server (NTRS)
Anderson, L. R.
1981-01-01
The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth order random examples. A literature review of robust controller design methods follows, which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
Jin, Ling; Tonse, Shaheen; Cohan, Daniel S; Mao, Xiaoling; Harley, Robert A; Brown, Nancy J
2008-05-15
We developed a first- and second-order sensitivity analysis approach with the decoupled direct method to examine spatial and temporal variations of ozone-limiting reagents and the importance of local vs upwind emission sources in the San Joaquin Valley of central California for a 5 day ozone episode (Jul 29th to Aug 3rd, 2000). Despite considerable spatial variations, nitrogen oxides (NO(x)) emission reductions are overall more effective than volatile organic compound (VOC) control for attaining the 8 h ozone standard in this region for this episode, in contrast to the VOC control that works better for attaining the prior 1 h ozone standard. Interbasin source contributions of NO(x) emissions are limited to the northern part of the SJV, while anthropogenic VOC (AVOC) emissions, especially those emitted at night, influence ozone formation in the SJV further downwind. Among model input parameters studied here, uncertainties in emissions of NO(x) and AVOC, and the rate coefficient of the OH + NO2 termination reaction, have the greatest effect on first-order ozone responses to changes in NO(x) emissions. Uncertainties in biogenic VOC emissions only have a modest effect because they are generally not collocated with anthropogenic sources in this region.
NASA Astrophysics Data System (ADS)
Luo, Jiannan; Lu, Wenxi
2014-06-01
Sobol′ sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration and injection rates at four wells to remediation efficiency. First, the surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol′ method was used to calculate the sensitivity indices of the design variables which affect the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. Sobol′ sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, high-order sensitivity indices were all smaller than 0.01, which indicates that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol′ sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
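First-order Sobol′ indices can be estimated with a pick-and-freeze (Saltelli-type) sampling scheme, which is cheap once a surrogate replaces the expensive simulator. A sketch on a toy additive function standing in for the surrogate; the function, sample size, and seed are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: first-order Sobol' indices via the pick-and-freeze
# estimator. The additive toy model stands in for the Kriging/RBFANN
# surrogate; its analytic indices are S = [0.2, 0.8, 0.0].

def sobol_first_order(f, dim, n=200_000, seed=1):
    rng = np.random.default_rng(seed)
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = np.var(fA)
    s1 = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # "freeze" factor i from the B sample
        # E[f(B) * (f(AB_i) - f(A))] estimates the partial variance V_i
        s1[i] = np.mean(fB * (f(ABi) - fA)) / var
    return s1

f = lambda X: X[:, 0] + 2.0 * X[:, 1] + 0.0 * X[:, 2]
S = sobol_first_order(f, dim=3)
```

For this additive toy, first-order indices nearly sum to one, the analogue of the abstract's finding that interaction (high-order) indices were below 0.01.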
Ebel, B.A.; Mirus, B.B.; Heppner, C.S.; VanderKwaak, J.E.; Loague, K.
2009-01-01
Distributed hydrologic models capable of simulating fully-coupled surface water and groundwater flow are increasingly used to examine problems in the hydrologic sciences. Several techniques are currently available to couple the surface and subsurface; the two most frequently employed approaches are first-order exchange coefficients (a.k.a., the surface conductance method) and enforced continuity of pressure and flux at the surface-subsurface boundary condition. The effort reported here examines the parameter sensitivity of simulated hydrologic response for the first-order exchange coefficients at a well-characterized field site using the fully coupled Integrated Hydrology Model (InHM). This investigation demonstrates that the first-order exchange coefficients can be selected such that the simulated hydrologic response is insensitive to the parameter choice, while simulation time is considerably reduced. Alternatively, the ability to choose a first-order exchange coefficient that intentionally decouples the surface and subsurface facilitates concept-development simulations to examine real-world situations where the surface-subsurface exchange is impaired. While the parameters comprising the first-order exchange coefficient cannot be directly estimated or measured, the insensitivity of the simulated flow system to these parameters (when chosen appropriately) combined with the ability to mimic actual physical processes suggests that the first-order exchange coefficient approach can be consistent with a physics-based framework. Copyright © 2009 John Wiley & Sons, Ltd.
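The first-order exchange (surface conductance) coupling discussed above amounts to a flux proportional to the head difference across the surface-subsurface interface. A schematic sketch; the symbols and values are illustrative and not InHM's internal formulation:

```python
# Hedged sketch of first-order exchange coupling. Heads are in meters,
# the exchange coefficient in 1/s; all values are invented.

def exchange_flux(h_surface, h_subsurface, k_exchange):
    """First-order exchange flux q = k * (h_surface - h_subsurface).
    A large k enforces near-continuity of head across the interface
    (tight coupling); k -> 0 intentionally decouples the two domains."""
    return k_exchange * (h_surface - h_subsurface)

q_tight = exchange_flux(0.10, 0.02, k_exchange=1e3)       # strongly coupled
q_decoupled = exchange_flux(0.10, 0.02, k_exchange=1e-6)  # nearly decoupled
```

The two calls mirror the abstract's two regimes: a coefficient large enough that the response is insensitive to its exact value, and a deliberately small one used for concept-development runs with impaired exchange.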
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favorite, Jeffrey A.
The Second-Level Adjoint Sensitivity System (2nd-LASS) that yields the second-order sensitivities of a response of uncollided particles with respect to isotope densities, cross sections, and source emission rates is derived in Refs. 1 and 2. In Ref. 2, we solved problems for the uncollided leakage from a homogeneous sphere and a multiregion cylinder using the PARTISN multigroup discrete-ordinates code. In this memo, we derive solutions of the 2nd-LASS for the particular case when the response is a flux or partial current density computed at a single point on the boundary, and the inner products are computed using ray-tracing. Both the PARTISN approach and the ray-tracing approach are implemented in a computer code, SENSPG. The next section of this report presents the equations of the 1st- and 2nd-LASS for uncollided particles and the first- and second-order sensitivities that use the solutions of the 1st- and 2nd-LASS. Section III presents solutions of the 1st- and 2nd-LASS equations for the case of ray-tracing from a detector point. Section IV presents specific solutions of the 2nd-LASS and derives the ray-trace form of the inner products needed for second-order sensitivities. Numerical results for the total leakage from a homogeneous sphere are presented in Sec. V and for the leakage from one side of a two-region slab in Sec. VI. Section VII is a summary and conclusions.
Gao, Yang; Li, Hongsheng; Huang, Libin; Sun, Hui
2017-04-30
This paper presents the design and application of a lever coupling mechanism to improve the shock resistance of a dual-mass silicon micro-gyroscope with drive mode coupled along the driving direction without sacrificing the mechanical sensitivity. Firstly, the mechanical sensitivity and the shock response of the micro-gyroscope are theoretically analyzed. In the mechanical design, a novel lever coupling mechanism is proposed to change the modal order and to improve the frequency separation. The micro-gyroscope with the lever coupling mechanism optimizes the drive mode order, increasing the in-phase mode frequency to be much larger than the anti-phase one. Shock analysis results show that the micro-gyroscope structure with the designed lever coupling mechanism can notably reduce the magnitudes of the shock response and cut down the stress produced in the shock process compared with the traditional elastic coupled one. Simulations reveal that the shock resistance along the drive direction is greatly increased. Consequently, the lever coupling mechanism can change the gyroscope's modal order and improve the frequency separation by structurally offering a higher stiffness difference ratio. The shock resistance along the driving direction is tremendously enhanced without loss of the mechanical sensitivity.
Olivieri, Alejandro C
2005-08-01
Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.
Spiegel, Daniel P; Reynaud, Alexandre; Ruiz, Tatiana; Laguë-Beauvais, Maude; Hess, Robert; Farivar, Reza
2016-05-01
Vision is disrupted by traumatic brain injury (TBI), with vision-related complaints being amongst the most common in this population. Based on the neural responses of early visual cortical areas, injury to the visual cortex would be predicted to affect both 1st-order and 2nd-order contrast sensitivity functions (CSFs): the height and/or the cut-off of the CSF are expected to be affected by TBI. Previous studies have reported disruptions only in 2nd-order contrast sensitivity, but using a narrow range of parameters and divergent methodologies; no study has characterized the effect of TBI on the full CSF for both 1st- and 2nd-order stimuli. Such information is needed to properly understand the effect of TBI on contrast perception, which underlies all visual processing. Using a unified framework based on the quick contrast sensitivity function, we measured full CSFs for static and dynamic 1st- and 2nd-order stimuli. Our results provide a unique dataset showing alterations in sensitivity for both 1st- and 2nd-order visual stimuli. In particular, we show that TBI patients have increased sensitivity for 1st-order motion stimuli and decreased sensitivity to orientation-defined and contrast-defined 2nd-order stimuli. In addition, our data suggest that TBI patients' sensitivity for both 1st-order stimuli and 2nd-order contrast-defined stimuli is shifted towards higher spatial frequencies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2003-01-01
This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
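The global technique's single second-order response surface can be sketched as an ordinary least-squares fit of a full quadratic model in the design variables. The response function and sample points below are synthetic stand-ins for buckling-load evaluations, not the report's data:

```python
import numpy as np

# Hedged sketch: fit a second-order response surface in two design
# variables by least squares. The "true" quadratic generating the data
# is invented, so the fit should recover its coefficients exactly.

def fit_quadratic_rs(X, y):
    """Fit y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2.
    Returns coefficients in that order."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
X = rng.random((50, 2))                      # 50 synthetic design points
y = (1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1]
     + 0.5 * X[:, 0] ** 2 + 0.25 * X[:, 1] ** 2 - 1.0 * X[:, 0] * X[:, 1])
coeffs = fit_quadratic_rs(X, y)
```

The sequential local technique replaces this single global quadratic with multiple first-order fits of the same least-squares form, each restricted to a small subregion of the feasible design space.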
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
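In the first phase above, AHP criterion weights are conventionally taken as the normalized principal eigenvector of a reciprocal pairwise-comparison matrix. A sketch with an invented 3x3 matrix; the criteria and judgment values are assumptions, not the study's:

```python
import numpy as np

# Hedged sketch of the AHP weighting step via power iteration.
# The pairwise matrix is illustrative and perfectly consistent.

def ahp_weights(pairwise, iters=100):
    """Normalized principal eigenvector of a pairwise-comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()     # keep weights summing to 1
    return w

# Hypothetical judgments: criterion 1 twice as important as criterion 2,
# four times as important as criterion 3, etc. (reciprocal matrix).
P = [[1.00, 2.0, 4.0],
     [0.50, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w = ahp_weights(P)
```

Because this toy matrix is perfectly consistent, the weights come out exactly in the 4:2:1 ratio of the judgments; in practice the same eigenvector computation is paired with a consistency check on the judgments.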
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2018-01-01
Analysis and use of temperature-dependent wind tunnel strain-gage balance calibration data are discussed in the paper. First, three different methods are presented and compared that may be used to process temperature-dependent strain-gage balance data. The first method uses an extended set of independent variables in order to process the data and predict balance loads. The second method applies an extended load iteration equation during the analysis of balance calibration data. The third method uses temperature-dependent sensitivities for the data analysis. Physical interpretations of the most important temperature-dependent regression model terms are provided that relate temperature compensation imperfections and the temperature-dependent nature of the gage factor to sets of regression model terms. Finally, balance calibration recommendations are listed so that temperature-dependent calibration data can be obtained and successfully processed using the reviewed analysis methods.
Collagen morphology and texture analysis: from statistics to classification
Mostaço-Guidolin, Leila B.; Ko, Alex C.-T.; Wang, Fei; Xiang, Bo; Hewko, Mark; Tian, Ganghong; Major, Arkady; Shiomi, Masashi; Sowa, Michael G.
2013-01-01
In this study we present an image analysis methodology capable of quantifying morphological changes in tissue collagen fibril organization caused by pathological conditions. Texture analysis based on first-order statistics (FOS) and second-order statistics such as gray level co-occurrence matrix (GLCM) was explored to extract second-harmonic generation (SHG) image features that are associated with the structural and biochemical changes of tissue collagen networks. Based on these extracted quantitative parameters, multi-group classification of SHG images was performed. With combined FOS and GLCM texture values, we achieved reliable classification of SHG collagen images acquired from atherosclerosis arteries with >90% accuracy, sensitivity and specificity. The proposed methodology can be applied to a wide range of conditions involving collagen re-modeling, such as in skin disorders, different types of fibrosis and muscular-skeletal diseases affecting ligaments and cartilage. PMID:23846580
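The two texture-feature families used above can be sketched on a toy image: first-order statistics from the gray-level histogram, and energy/contrast from a symmetric gray-level co-occurrence matrix (GLCM). The 4-level image below is illustrative, not an SHG micrograph:

```python
import numpy as np

# Hedged sketch of FOS and GLCM texture features on a toy 4-level image.

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalized co-occurrence matrix for offset (dx, dy)."""
    M = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            a, b = img[y, x], img[y + dy, x + dx]
            M[a, b] += 1
            M[b, a] += 1          # symmetric counting
    return M / M.sum()

def texture_features(img, levels):
    fos_mean, fos_var = img.mean(), img.var()   # first-order statistics
    P = glcm(img, levels)
    i, j = np.indices(P.shape)
    energy = (P ** 2).sum()                     # GLCM energy (uniformity)
    contrast = (P * (i - j) ** 2).sum()         # GLCM contrast
    return fos_mean, fos_var, energy, contrast

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
feats = texture_features(img, levels=4)
```

Feature vectors of this kind, computed per SHG image, are what feed the multi-group classifier that the abstract reports reaching >90% accuracy.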
Sensitivity Analysis in Engineering
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)
1987-01-01
These symposium proceedings focus primarily on sensitivity analysis of structural response. The first session, entitled General and Multidisciplinary Sensitivity, however, covered areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.
Hestekin, Christa N; Lin, Jennifer S; Senderowicz, Lionel; Jakupciak, John P; O'Connell, Catherine; Rademaker, Alfred; Barron, Annelise E
2011-11-01
Knowledge of the genetic changes that lead to disease has grown and continues to grow at a rapid pace. However, there is a need for clinical devices that can be used routinely to translate this knowledge into the treatment of patients. Use in a clinical setting requires high sensitivity and specificity (>97%) in order to prevent misdiagnoses. Single-strand conformational polymorphism (SSCP) and heteroduplex analysis (HA) are two DNA-based, complementary methods for mutation detection that are inexpensive and relatively easy to implement. However, both methods are most commonly read out by slab gel electrophoresis, which can be labor-intensive and time-consuming, and they often cannot achieve high sensitivity and specificity without the use of multiple analysis conditions. Here, we demonstrate the first blinded study using microchip electrophoresis (ME)-SSCP/HA. ME-SSCP/HA detected mutations in more than 100 samples from exons 5-9 of the p53 gene with 98% sensitivity and specificity, in an analysis time of less than 10 min. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
In situ synchrotron XRD analysis of the kinetics of spodumene phase transitions.
Moore, Radhika L; Mann, Jason P; Montoya, Alejandro; Haynes, Brian S
2018-04-25
The phase transition by thermal activation of natural α-spodumene was followed by in situ synchrotron XRD in the temperature range 896 to 940 °C. We observed both β- and γ-spodumene as primary products in approximately equal proportions. The rate of the α-spodumene inversion is first order and highly sensitive to temperature (apparent activation energy ∼800 kJ mol-1). The γ-spodumene product is itself metastable, forming β-spodumene, with the total product mass fraction ratio fγ/fβ decreasing as the conversion of α-spodumene continues. We found the relationship between the product yields and the degree of conversion of α-spodumene to be the same at all temperatures in the range studied. A model incorporating first-order kinetics of the α- and γ-phase inversions with an invariant rate-constant ratio describes the results accurately. Theoretical phonon analysis of the three phases indicates that the γ phase contains crystallographic instabilities, whilst the α and β phases do not.
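The kinetic scheme described (parallel first-order inversions α→β and α→γ with a metastable γ→β step and an invariant rate-constant ratio) can be sketched numerically. The Arrhenius prefactor and the branching/ratio constants below are hypothetical placeholders chosen only to give rates of plausible magnitude near 900 °C; the paper reports only the apparent activation energy.

```python
import numpy as np

# Hypothetical Arrhenius prefactor (s^-1) and branching ratios; the paper
# reports only the apparent activation energy of ~800 kJ/mol.
R, Ea, A = 8.314, 800.0e3, 1.0e33

def simulate(T, t_end, n=20000):
    # First-order parallel inversions alpha->beta and alpha->gamma in
    # roughly equal branches, plus the metastable gamma->beta step, with
    # an invariant rate-constant ratio; forward-Euler integration.
    k = A * np.exp(-Ea / (R * T))
    k_ab, k_ag, k_gb = 0.5 * k, 0.5 * k, 0.3 * k
    fa, fg = 1.0, 0.0
    dt = t_end / n
    for _ in range(n):
        fa, fg = (fa - (k_ab + k_ag) * fa * dt,
                  fg + (k_ag * fa - k_gb * fg) * dt)
    return fa, 1.0 - fa - fg, fg       # f_alpha, f_beta, f_gamma

# Near 930 C (~1200 K) the gamma/beta ratio falls as conversion proceeds.
fa1, fb1, fg1 = simulate(1200.0, 50.0)
fa2, fb2, fg2 = simulate(1200.0, 300.0)
```

The declining fγ/fβ with conversion, at fixed rate-constant ratios, reproduces the qualitative behavior the abstract describes.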
NASA Astrophysics Data System (ADS)
Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef
2016-12-01
Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large, complex spatial models, the application of computationally intensive forward modeling codes, and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in better understanding the sensitivities, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications, and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters while minimally affecting uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.
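The core RSA idea, splitting Monte Carlo runs into behavioral and non-behavioral sets and comparing the marginal parameter distributions, can be sketched as follows; the two-parameter toy model and the median-based behavioral threshold are assumptions for illustration, not the paper's reservoir model.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x):
    # Hypothetical stand-in for an expensive forward model: the output is
    # strongly controlled by the first parameter, weakly by the second.
    return x[:, 0] ** 2 + 0.1 * x[:, 1]

X = rng.uniform(0, 1, size=(5000, 2))      # Monte Carlo parameter sample
y = forward(X)

# RSA core step: split runs into "behavioral" (here: low output) and
# "non-behavioral", then measure the separation of each parameter's
# marginal CDFs with a two-sample Kolmogorov-Smirnov distance.
behavioral = y < np.median(y)

def ks_distance(a, b):
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(Fa - Fb).max()

sens = [ks_distance(X[behavioral, i], X[~behavioral, i]) for i in range(2)]
# A large KS distance flags a sensitive parameter: here x1 dominates x2.
```

Conditional effects, in this framing, amount to repeating the CDF comparison after slicing the sample on a level of another parameter.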
Analysis of Composite Panels Subjected to Thermo-Mechanical Loads
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Peters, Jeanne M.
1999-01-01
The results of a detailed study of the effect of a cutout on the nonlinear response of curved unstiffened panels are presented. The panels are subjected to a through-the-thickness temperature gradient combined with pressure loading and edge shortening or edge shear. The analysis is based on a first-order, shear deformation, Sanders-Budiansky-type shell theory with the effects of large displacements, moderate rotations, transverse shear deformation, and laminated anisotropic material behavior included. A mixed formulation is used with the fundamental unknowns consisting of the generalized displacements and the stress resultants of the panel. The nonlinear displacements, strain energy, principal strains, transverse shear stresses, transverse shear strain energy density, and their hierarchical sensitivity coefficients are evaluated. The hierarchical sensitivity coefficients measure the sensitivity of the nonlinear response to variations in the panel parameters, as well as in the material properties of the individual layers. Numerical results are presented for cylindrical panels and show the effects of variations in the loading and the size of the cutout on the global and local response quantities as well as their sensitivity to changes in the various panel, layer, and micromechanical parameters.
Zhou, Jinjun; Huang, Haiping; Xuan, Jie; Zhang, Jianrong; Zhu, Jun-Jie
2010-10-15
A sensitive electrochemical aptasensor was successfully fabricated for the detection of adenosine triphosphate (ATP) by combining a three-dimensionally ordered macroporous (3DOM) gold film and quantum dots (QDs). The 3DOM gold film was electrochemically fabricated with an inverted opal template, making the active surface area of the electrode up to 9.52 times larger than that of a classical bare flat one. The 5′-thiolated ATP-binding aptamer (ABA) was first assembled onto the 3DOM gold film via sulfur–gold affinity. Then, the 5′-biotinylated complementary strand (BCS) was immobilized via a hybridization reaction to form the DNA/DNA duplex. Since the tertiary structure of the aptamer is stabilized in the presence of target ATP, the duplex can be denatured to liberate BCS. The reaction was monitored by electrochemical stripping analysis of dissolved QDs bound to the residual BCS through the biotin-streptavidin system. The decrease in peak current was proportional to the amount of ATP. The unique interconnected structure of the 3DOM gold film, along with the "built-in" preconcentration, remarkably improved the sensitivity. ATP detection with high selectivity, a wide linear dynamic range of 4 orders of magnitude, and a detection limit down to 0.01 nM was achieved. The results demonstrated that the novel strategy is feasible for sensitive ATP assay and provides a promising model for the detection of small molecules.
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-01-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable, and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA), implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility are analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights. PMID:25843987
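The Monte Carlo propagation of criterion-weight uncertainty through a weighted linear combination, one ingredient of the approach above, might look like this in outline; the weights, cell scores, and perturbation magnitude are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical AHP-style weights for three criteria (e.g. slope,
# lithology, rainfall) and criterion scores for two map cells.
w0 = np.array([0.5, 0.3, 0.2])
cells = np.array([[0.9, 0.2, 0.4],     # criterion scores of cell A
                  [0.5, 0.6, 0.5]])    # criterion scores of cell B

# Monte Carlo simulation: perturb the weights, renormalize so they sum
# to 1, and propagate to the weighted-linear-combination susceptibility.
W = np.abs(w0 + rng.normal(0.0, 0.05, size=(10000, 3)))
W /= W.sum(axis=1, keepdims=True)
scores = W @ cells.T                   # (10000, 2) susceptibility samples

mean, std = scores.mean(axis=0), scores.std(axis=0)
# std quantifies the per-cell uncertainty induced by the criterion weights.
```

Per-cell standard deviations of this kind are what a spatially-explicit uncertainty map would display.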
Optimal frequency-response sensitivity of compressible flow over roughness elements
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter J.
2017-04-01
Compressible flow over a flat plate with two localised and well-separated roughness elements is analysed by global frequency-response analysis. This analysis reveals a sustained feedback loop consisting of a convectively unstable shear-layer instability, triggered at the upstream roughness, and an upstream-propagating acoustic wave, originating at the downstream roughness and regenerating the shear-layer instability at the upstream protrusion. A typical multi-peaked frequency response is recovered from the numerical simulations. In addition, the optimal forcing and response clearly extract the components of this feedback loop and isolate flow regions of pronounced sensitivity and amplification. An efficient parametric-sensitivity framework is introduced and applied to the reference case, showing that first-order increases in Reynolds number and roughness height have a destabilising effect on the flow, while changes in Mach number or roughness separation cause corresponding shifts in the peak frequencies. This information is gained with negligible effort beyond the reference case and can easily be applied to more complex flows.
Probabilistic structural analysis using a general purpose finite element program
NASA Astrophysics Data System (ADS)
Riha, D. S.; Millwater, H. R.; Thacker, B. H.
1992-07-01
This paper presents an accurate and efficient method to predict the probabilistic response for structural response quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis and fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation with excellent accuracy.
Skiöld, Sara; Azimzadeh, Omid; Merl-Pham, Juliane; Naslund, Ingemar; Wersall, Peter; Lidbrink, Elisabet; Tapio, Soile; Harms-Ringdahl, Mats; Haghdoost, Siamak
2015-06-01
Radiation therapy is a cornerstone of modern cancer treatment. Understanding the mechanisms behind normal tissue sensitivity is essential in order to minimize adverse side effects while still preventing local cancer recurrence. The aim of this study was to identify biomarkers of radiation sensitivity to enable personalized cancer treatment. To investigate the mechanisms behind radiation sensitivity, a pilot study was conducted in which eight radiation-sensitive and nine normo-sensitive patients were selected from a cohort of 2914 breast cancer patients, based on acute tissue reactions after radiation therapy. Whole blood was sampled and irradiated in vitro with 0, 1, or 150 mGy followed by 3 h incubation at 37°C. The leukocytes of the two groups were isolated, pooled, and protein expression profiles were investigated using the isotope-coded protein labeling (ICPL) method. First, leukocytes from the in vitro irradiated whole blood of normo-sensitive and extremely sensitive patients were compared to the non-irradiated controls. To validate this first study, a second ICPL analysis comparing only the non-irradiated samples was conducted. Both approaches showed unique proteomic signatures separating the two groups at the basal level and after doses of 1 and 150 mGy. Pathway analyses of both proteomic approaches suggest that oxidative stress response, coagulation properties, and acute phase response are hallmarks of radiation sensitivity, supporting our previous study on oxidative stress response. This investigation provides unique characteristics of radiation sensitivity essential for individualized radiation therapy. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Wang, R.; Demerdash, N. A.
1990-01-01
The effects of finite element grid geometries and associated ill-conditioning were studied in single-medium and multi-media (air-iron) three-dimensional magnetostatic field computation problems. The sensitivities of these 3D field computations to finite element grid geometries were investigated. It was found that in single-medium applications the unconstrained magnetic vector potential curl-curl formulation in conjunction with first-order finite elements produces global results that are almost totally insensitive to grid geometries. However, in multi-media (air-iron) applications, first-order finite element results are sensitive to grid geometries and the consequent elemental shape ill-conditioning. These sensitivities were almost totally eliminated by the use of second-order finite elements in the field computation algorithms. Practical examples are given to demonstrate these aspects.
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only to estimate the sensitivities of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems.
In particular, the computational acceleration is quantified by the ratio of the total number of parameters to the number of sensitive parameters. PMID:26161544
Probabilistic analysis of bladed turbine disks and the effect of mistuning
NASA Technical Reports Server (NTRS)
Shah, A. R.; Nagpal, V. K.; Chamis, Christos C.
1990-01-01
Probabilistic assessment of the maximum blade response on a mistuned rotor disk is performed using the computer code NESSUS. The uncertainties in natural frequency, excitation frequency, amplitude of excitation, and damping are included to obtain the cumulative distribution function (CDF) of blade responses. Advanced mean value first-order analysis is used to compute the CDF. The sensitivities of the different random variables are identified. The effect of the number of blades on rotor mistuning is evaluated. It is shown that the uncertainties associated with the forcing function parameters have a significant effect on the response distribution of the bladed rotor.
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian
2018-01-01
In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
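A minimal sketch of uncertainty propagation under a bounds-only (maximum-entropy, here uniform) error model: sample barrier errors, push them through a toy rate-limiting-step turnover frequency, and record the spread. The barriers, temperature, and ±0.2 eV bound are illustrative assumptions, and plain Monte Carlo stands in for the adaptive sparse-grid quadrature.

```python
import numpy as np

rng = np.random.default_rng(3)

kB_T = 8.617e-5 * 600                  # eV, at an assumed 600 K
E0 = np.array([0.7, 0.9, 1.1])         # hypothetical nominal barriers (eV)

def tof(E):
    # Toy turnover frequency: the slowest elementary step (largest
    # barrier) limits the rate; prefactors are dropped for simplicity.
    return np.exp(-E / kB_T).min(axis=-1)

# Maximum-entropy error model given only bounds: uniform DFT errors of
# +/- 0.2 eV on each barrier, sampled by plain Monte Carlo.
dE = rng.uniform(-0.2, 0.2, size=(20000, 3))
samples = tof(E0 + dE)
spread = np.log10(samples.max() / samples.min())
# Modest per-barrier errors already yield several orders of magnitude
# of uncertainty in the TOF.
```

Even this toy version reproduces the abstract's headline observation: barrier errors of a few tenths of an eV translate into orders-of-magnitude TOF uncertainty.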
Gao, Yang; Li, Hongsheng; Huang, Libin; Sun, Hui
2017-01-01
This paper presents the design and application of a lever coupling mechanism to improve the shock resistance of a dual-mass silicon micro-gyroscope with drive mode coupled along the driving direction without sacrificing the mechanical sensitivity. Firstly, the mechanical sensitivity and the shock response of the micro-gyroscope are theoretically analyzed. In the mechanical design, a novel lever coupling mechanism is proposed to change the modal order and to improve the frequency separation. The micro-gyroscope with the lever coupling mechanism optimizes the drive mode order, increasing the in-phase mode frequency to be much larger than the anti-phase one. Shock analysis results show that the micro-gyroscope structure with the designed lever coupling mechanism can notably reduce the magnitudes of the shock response and cut down the stress produced in the shock process compared with the traditional elastic coupled one. Simulations reveal that the shock resistance along the drive direction is greatly increased. Consequently, the lever coupling mechanism can change the gyroscope’s modal order and improve the frequency separation by structurally offering a higher stiffness difference ratio. The shock resistance along the driving direction is tremendously enhanced without loss of the mechanical sensitivity. PMID:28468288
MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method
Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.
2003-01-01
A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
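The FOSM propagation step reduces to a single matrix sandwich, Cov(h) ≈ J Σ J^T; the Jacobian and input covariance below are hypothetical stand-ins for MODFLOW 2000 sensitivity output and the conditional geostatistical uncertainty.

```python
import numpy as np

# Hypothetical 3-head, 2-parameter Jacobian (head sensitivities, as the
# MODFLOW 2000 sensitivity process would supply) and input
# variance-covariance from the conditional geostatistical calculation.
J = np.array([[0.8, 0.1],
              [0.5, 0.4],
              [0.2, 0.7]])
Sigma = np.array([[0.25, 0.05],
                  [0.05, 0.10]])

# First-order Taylor propagation: Cov(h) ~= J Sigma J^T.
cov_h = J @ Sigma @ J.T
std_h = np.sqrt(np.diag(cov_h))        # standard deviation of each head
```

The diagonal of `cov_h` is exactly the quantity the abstract uses for confidence intervals and reliability evaluation.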
NASA Astrophysics Data System (ADS)
Ahmad, J. A.; Forman, B. A.
2017-12-01
High Mountain Asia (HMA) serves as a water supply source for over 1.3 billion people, primarily in south-east Asia. Most of this water originates as snow (or ice) that melts during the summer months and contributes to the run-off downstream. In spite of its critical role, there is still considerable uncertainty regarding the total amount of snow in HMA and its spatial and temporal variation. In this study, the NASA Land Information System (LIS) is used to model the hydrologic cycle over the Indus basin. In addition, the ability of support vector machines (SVMs), a machine learning technique, to predict passive microwave brightness temperatures at a specific frequency and polarization as a function of LIS-derived land surface model output is explored in a sensitivity analysis. Multi-frequency, multi-polarization passive microwave brightness temperatures as measured by the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) over the Indus basin are used as training targets during the SVM training process. Normalized sensitivity coefficients (NSCs) are then computed to assess the sensitivity of a well-trained SVM to each LIS-derived state variable. Preliminary results conform to the known first-order physics. For example, input states directly linked to physical temperature, such as snow temperature, air temperature, and vegetation temperature, have positive NSCs, whereas input states that increase volume scattering, such as snow water equivalent or snow density, yield negative NSCs. Air temperature exhibits the largest sensitivity coefficients due to its inherent, high-frequency variability. Adherence of this machine learning algorithm to the first-order physics bodes well for its potential use in LIS as the observation operator within a radiance data assimilation system aimed at improving regional- and continental-scale snow estimates.
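A normalized sensitivity coefficient of the form NSC_i = (∂Tb/∂x_i)(x_i/Tb) can be estimated by central finite differences around a nominal state; the closed-form stand-in for the trained SVM and the nominal state values are assumptions for illustration.

```python
import numpy as np

def tb_model(x):
    # Hypothetical stand-in for the trained SVM observation operator:
    # brightness temperature rises with air temperature and falls with
    # snow water equivalent (more volume scattering).
    t_air, swe = x
    return 200.0 + 0.6 * t_air - 15.0 * np.log1p(swe)

def nsc(f, x0, i, eps=1e-6):
    # Central finite difference scaled to a dimensionless coefficient:
    # NSC_i = (dF/dx_i) * (x_i / F).
    xp, xm = x0.copy(), x0.copy()
    xp[i] += eps
    xm[i] -= eps
    return (f(xp) - f(xm)) / (2 * eps) * x0[i] / f(x0)

x0 = np.array([270.0, 0.3])            # air temperature (K), SWE (m)
coeffs = [nsc(tb_model, x0, i) for i in range(2)]
# Positive NSC for air temperature, negative for SWE, as the physics demands.
```

Signs of the coefficients, rather than their magnitudes, are what the first-order physics check in the abstract relies on.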
Phase modulation for reduced vibration sensitivity in laser-cooled clocks in space
NASA Technical Reports Server (NTRS)
Klipstein, W.; Dick, G.; Jefferts, S.; Walls, F.
2001-01-01
The standard interrogation technique in atomic beam clocks is square-wave frequency modulation (SWFM), which suffers from a first-order sensitivity to vibrations, as changes in the transit time of the atoms translate into perceived frequency errors. Square-wave phase modulation (SWPM) interrogation eliminates sensitivity to this noise.
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test, or method of Morris; Regional Sensitivity Analysis; and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters), and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
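The bootstrap-based convergence checks described above can be sketched with a cheap sensitivity index; the correlation-based index, the toy model, and the 0.1 screening threshold are illustrative choices, not the study's estimators.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(X):
    # Toy model: one dominant, one moderate and one inactive input factor.
    return 5 * X[:, 0] + X[:, 1]

X = rng.uniform(0, 1, size=(2000, 3))
y = model(X)

def indices(Xs, ys):
    # Cheap correlation-based sensitivity index (a stand-in for the
    # variance-based estimators); 0 in theory for inactive inputs.
    return np.abs([np.corrcoef(Xs[:, i], ys)[0, 1] for i in range(Xs.shape[1])])

# Bootstrap: resample model runs with replacement and ask whether the
# parameter *ranking* and the *screening* decision (index below a
# threshold) have converged, even if the index values still fluctuate.
boot = np.array([indices(X[idx], y[idx])
                 for idx in rng.integers(0, len(y), size=(200, len(y)))])
rank_stable = bool((boot.argsort(axis=1) == boot[0].argsort()).all())
screened = (boot < 0.1).mean(axis=0)   # fraction of resamples screening each input
```

As in the study, ranking and screening here stabilize long before the index values themselves stop fluctuating across resamples.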
Efficiency of unconstrained minimization techniques in nonlinear analysis
NASA Technical Reports Server (NTRS)
Kamat, M. P.; Knight, N. F., Jr.
1978-01-01
Unconstrained minimization algorithms have been critically evaluated for their effectiveness in solving structural problems involving geometric and material nonlinearities. The algorithms have been categorized as being zeroth, first, or second order depending upon the highest derivative of the function required by the algorithm. The sensitivity of these algorithms to the accuracy of derivatives clearly suggests using analytically derived gradients instead of finite difference approximations. The use of analytic gradients results in better control of the number of minimizations required for convergence to the exact solution.
Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.
2005-01-01
The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTM), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate data to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to transport parameters and effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. Similar to conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limb of the concentration breakthrough curve. © 2005 Elsevier Ltd. All rights reserved.
Optimal control analysis of malaria-schistosomiasis co-infection dynamics.
Okosun, Kazeem Oare; Smith, Robert
2017-04-01
This paper presents a mathematical model for malaria-schistosomiasis co-infection in order to investigate their synergistic relationship in the presence of treatment. We first analyse the single-infection steady states, then investigate the existence and stability of equilibria, and calculate the basic reproduction numbers. Both the single-infection models and the co-infection model exhibit backward bifurcations. We carry out a sensitivity analysis of the co-infection model and show that schistosomiasis infection may not be associated with an increased risk of malaria. Conversely, malaria infection may be associated with an increased risk of schistosomiasis. Furthermore, we found that effective treatment and prevention of schistosomiasis infection would also assist in the effective control and eradication of malaria. Finally, we apply Pontryagin's Maximum Principle to the model in order to determine optimal strategies for control of both diseases.
Visible and Extended Near-Infrared Multispectral Imaging for Skin Cancer Diagnosis
Rey-Barroso, Laura; Burgos-Fernández, Francisco J.; Delpueyo, Xana; Ares, Miguel; Malvehy, Josep; Puig, Susana
2018-01-01
With the goal of diagnosing skin cancer in an early and noninvasive way, an extended near-infrared multispectral imaging system based on an InGaAs sensor with sensitivity from 995 nm to 1613 nm was built to evaluate deeper skin layers, thanks to the higher penetration of photons at these wavelengths. The outcomes of this device were combined with those of a previously developed multispectral system that works in the visible and near-infrared range (414 nm–995 nm). Both provide spectral and spatial information from skin lesions. A classification method to discriminate between melanomas and nevi was developed based on the analysis of first-order statistical descriptors, principal component analysis, and support vector machine tools. The system provided a sensitivity of 78.6% and a specificity of 84.6%, the latter improved with respect to that offered by silicon sensors. PMID:29734747
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Liu, Youhua
2000-01-01
At the preliminary design stage of a wing structure, an efficient simulation, one needing little computation but yielding adequately accurate results for various response quantities, is essential in the search for an optimal design in a vast design space. In the present paper, methods using sensitivities up to second order, as well as the direct application of neural networks, are explored. The example problem is to determine the natural frequencies of a wing given the shape variables of the structure. It is shown that when sensitivities cannot be obtained analytically, the finite difference approach is usually more reliable than a semi-analytical approach, provided an appropriate step size is used. The use of second-order sensitivities is shown to yield much better results than the use of first-order sensitivities alone. When neural networks are trained to relate the wing natural frequencies to the shape variables, a negligible computational effort is needed to accurately determine the natural frequencies of a new design.
Pressure sensitivity of low permeability sandstones
Kilmer, N.H.; Morrow, N.R.; Pitman, Janet K.
1987-01-01
Detailed core analysis has been carried out on 32 tight sandstones with permeabilities ranging over four orders of magnitude (0.0002 to 4.8 mD at 5000 psi confining pressure). Relationships between gas permeability and net confining pressure were measured for cycles of loading and unloading. For some samples, permeabilities were measured both along and across bedding planes. Large variations in stress sensitivity of permeability were observed from one sample to another. The ratio of permeability at a nominal confining pressure of 500 psi to that at 5000 psi was used to define a stress sensitivity ratio. For a given sample, confining pressure vs permeability followed a linear log-log relationship, the slope of which provided an index of pressure sensitivity. This index, as obtained for first unloading data, was used in testing relationships between stress sensitivity and other measured rock properties. Pressure sensitivity tended to increase with increasing carbonate content and depth, and with decreasing porosity, permeability and sodium feldspar content. However, scatter in these relationships increased as permeability decreased. Tests for correlations between pressure sensitivity and various linear combinations of variables are reported. Details of pore structure related to diagenetic changes appear to be of much greater significance to pressure sensitivity than mineral composition.
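The two quantities this abstract defines, a stress-sensitivity ratio k(500 psi)/k(5000 psi) and a pressure-sensitivity index taken as the slope of the log-log permeability vs. confining-pressure fit, are straightforward to compute. A minimal sketch in Python, with made-up sample data:

```python
import numpy as np

def pressure_sensitivity_index(pressures_psi, perms_md):
    """Index of pressure sensitivity: slope of the log-log fit of
    permeability vs. net confining pressure (illustrative sketch)."""
    slope, _ = np.polyfit(np.log10(pressures_psi), np.log10(perms_md), 1)
    return slope

def stress_sensitivity_ratio(perm_at_500, perm_at_5000):
    """Ratio defined in the abstract: k(500 psi) / k(5000 psi)."""
    return perm_at_500 / perm_at_5000

# Hypothetical unloading data following a power law k ~ P^-0.3
pressures = np.array([500.0, 1000.0, 2000.0, 5000.0])
perms = 3.0 * (pressures / 5000.0) ** -0.3

index = pressure_sensitivity_index(pressures, perms)     # ~ -0.3
ratio = stress_sensitivity_ratio(perms[0], perms[-1])    # ~ 10**0.3
```

A steeper (more negative) index and a larger ratio both indicate a more stress-sensitive sample.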
Digital Correlation Microwave Polarimetry: Analysis and Demonstration
NASA Technical Reports Server (NTRS)
Piepmeier, J. R.; Gasiewski, A. J.; Krebs, Carolyn A. (Technical Monitor)
2000-01-01
The design, analysis, and demonstration of a digital-correlation microwave polarimeter for use in earth remote sensing are presented. We begin with an analysis of three-level digital correlation and develop the correlator transfer function and radiometric sensitivity. A fifth-order polynomial regression is derived for inverting the digital correlation coefficient into the analog statistic. In addition, the effects of quantizer threshold asymmetry and hysteresis are discussed. A two-look unpolarized calibration scheme is developed for identifying correlation offsets. The developed theory and calibration method are verified using a 10.7 GHz and a 37.0 GHz polarimeter. The polarimeters are based upon 1-GS/s three-level digital correlators and measure the first three Stokes parameters. Through experiment, the radiometric sensitivity is shown to approach the theoretical value derived earlier in the paper, and the two-look unpolarized calibration method compares successfully with results from a polarimetric scheme. Finally, sample data from an aircraft experiment demonstrate that the polarimeter is highly useful for ocean wind-vector measurement.
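The three-level correlation described here can be illustrated numerically. The sketch below (hypothetical threshold and correlation values, not the paper's hardware parameters) quantizes two correlated Gaussian signals to {-1, 0, +1} and shows that the normalized digital correlation underestimates the analog coefficient, which is why a transfer-function inversion such as the paper's polynomial regression is needed:

```python
import numpy as np

rng = np.random.default_rng(0)

def three_level(x, v):
    """Three-level (1.5-bit) quantizer with thresholds at +/- v."""
    return np.where(x > v, 1.0, np.where(x < -v, -1.0, 0.0))

# Two Gaussian signals with a known analog correlation rho (assumed value)
rho = 0.3
n = 200_000
a = rng.standard_normal(n)
b = rho * a + np.sqrt(1 - rho**2) * rng.standard_normal(n)

v = 1.0  # quantizer threshold in units of the signal standard deviation
qa, qb = three_level(a, v), three_level(b, v)

# Normalized digital correlation coefficient
rho_digital = np.mean(qa * qb) / np.sqrt(np.mean(qa**2) * np.mean(qb**2))
# rho_digital comes out below rho: quantization compresses the statistic,
# so the analog value must be recovered by inverting the transfer function.
```

With these assumed values, `rho_digital` lands noticeably below 0.3, illustrating the bias the inversion corrects.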
NASA Astrophysics Data System (ADS)
Beaudoin, Yanick; Desbiens, André; Gagnon, Eric; Landry, René
2018-01-01
The navigation system of a satellite launcher is of paramount importance. In order to correct the trajectory of the launcher, the position, velocity and attitude must be known with the best possible precision. In this paper, the observability of four navigation solutions is investigated. The first one is the INS/GPS couple. Then, attitude reference sensors, such as magnetometers, are added to the INS/GPS solution. The authors have already demonstrated that the reference trajectory could be used to improve the navigation performance. This approach is added to the two previously mentioned navigation systems. For each navigation solution, the observability is analyzed with different sensor error models. First, sensor biases are neglected. Then, sensor biases are modelled as random walks and as first-order Markov processes. The observability is tested with the rank and condition number of the observability matrix, the time evolution of the covariance matrix, and sensitivity to measurement outlier tests. The covariance matrix is exploited to evaluate the correlation between states in order to detect structural unobservability problems. Finally, when an unobservable subspace is detected, the result is verified with theoretical analysis of the navigation equations. The results show that evaluating only the observability of a model does not guarantee the ability of the aiding sensors to correct the INS estimates within the mission time. The analysis of the covariance matrix time evolution can be a powerful tool to detect this situation; however, in some cases, the problem is only revealed by a sensitivity to measurement outlier test. None of the tested solutions provide GPS position bias observability. For the considered mission, modelling the sensor biases as random walks or Markov processes gives equivalent results. Relying on the reference trajectory can improve the precision of the roll estimates. However, in the context of a satellite launcher, the roll estimation error and gyroscope bias are only observable if attitude reference sensors are present.
Sensitivity analysis of complex coupled systems extended to second and higher order derivatives
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
In the design of engineering systems, 'what if' questions often arise, such as: what will be the change in aircraft payload if the wing aspect ratio is incremented by 10 percent? Answers to such questions are commonly sought by incrementing the pertinent variable and reevaluating the major disciplinary analyses involved. These analyses are contributed by engineering disciplines that are usually coupled, as are aerodynamics, structures, and performance in the context of the question above. The 'what if' questions can be answered precisely by computation of the derivatives. A method for calculation of the first derivatives has been developed previously. An algorithm is presented for calculation of the second and higher order derivatives.
[A peak recognition algorithm designed for chromatographic peaks of transformer oil].
Ou, Linjun; Cao, Jian
2014-09-01
In the field of chromatographic peak identification for transformer oil, the traditional first-order derivative method requires a slope threshold to achieve peak identification. To address its shortcomings of low automation and susceptibility to distortion, the first-order derivative method was improved by applying a moving-average iterative method and normalized analysis techniques to identify the peaks. Accurate identification of the chromatographic peaks was realized by using multiple iterations of the moving average of signal curves and square-wave curves to determine the optimal value of the normalized peak identification parameters, combined with the absolute peak retention times and the peak window. The experimental results show that this algorithm can accurately identify the peaks and is not sensitive to noise or to changes in chromatographic peak width or peak shape. It has strong adaptability to meet the on-site requirements of online monitoring devices for dissolved gases in transformer oil.
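A minimal sketch of the general approach, iterated moving-average smoothing followed by first-derivative sign-change peak detection with a normalized height threshold. The window size, iteration count, and threshold below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def moving_average(y, window=5, iterations=3):
    """Iterated moving average used to suppress baseline noise (sketch)."""
    kernel = np.ones(window) / window
    for _ in range(iterations):
        y = np.convolve(y, kernel, mode="same")
    return y

def find_peaks_first_derivative(y, min_rel_height=0.1):
    """A peak is where the first derivative changes sign from + to -,
    kept only if the normalized height exceeds the threshold."""
    dy = np.diff(y)
    candidates = np.where((dy[:-1] > 0) & (dy[1:] <= 0))[0] + 1
    norm = (y - y.min()) / (y.max() - y.min())   # normalized analysis step
    return [i for i in candidates if norm[i] >= min_rel_height]
```

Usage: smooth the raw chromatogram with `moving_average`, then pass the result to `find_peaks_first_derivative`; the returned indices can be matched against expected retention times and a peak window as the abstract describes.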
Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald
2018-10-01
In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the sensitivity of the influence of the probability distribution function shape (empirical distribution functions and fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter, as well as the role of interacting parameters, are studied. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is possibly not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the latter here is described using uncertain moments (mean, variance) while the distribution itself represents the aleatory uncertainty of the parameter. From the results obtained, the solution space of second-order Monte Carlo simulation is much larger than that of first-order Monte Carlo simulation. Therefore, the influence of epistemic uncertainty of a radioecological parameter on the output result is much larger than that caused by its aleatory uncertainty. Parameter interactions are only of significant influence in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
Reduced basis technique for evaluating the sensitivity coefficients of the nonlinear tire response
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Tanner, John A.; Peters, Jeanne M.
1992-01-01
An efficient reduced-basis technique is proposed for calculating the sensitivity of nonlinear tire response to variations in the design variables. The tire is modeled using a 2-D, moderate rotation, laminated anisotropic shell theory, including the effects of variation in material and geometric parameters. The vector of structural response and its first-order and second-order sensitivity coefficients are each expressed as a linear combination of a small number of basis vectors. The effectiveness of the basis vectors used in approximating the sensitivity coefficients is demonstrated by a numerical example involving the Space Shuttle nose-gear tire, which is subjected to uniform inflation pressure.
Cortical geometry as a determinant of brain activity eigenmodes: Neural field analysis
NASA Astrophysics Data System (ADS)
Gabay, Natasha C.; Robinson, P. A.
2017-09-01
Perturbation analysis of neural field theory is used to derive eigenmodes of neural activity on a cortical hemisphere, which have previously been calculated numerically and found to be close analogs of spherical harmonics, despite heavy cortical folding. The present perturbation method treats cortical folding as a first-order perturbation from a spherical geometry. The first nine spatial eigenmodes on a population-averaged cortical hemisphere are derived and compared with previous numerical solutions. These eigenmodes contribute most to brain activity patterns such as those seen in electroencephalography and functional magnetic resonance imaging. The eigenvalues of these eigenmodes are found to agree with the previous numerical solutions to within their uncertainties. Also in agreement with the previous numerics, all eigenmodes are found to closely resemble spherical harmonics. The first seven eigenmodes exhibit a one-to-one correspondence with their numerical counterparts, with overlaps that are close to unity. The next two eigenmodes overlap the corresponding pair of numerical eigenmodes, having been rotated within the subspace spanned by that pair, likely due to second-order effects. The spatial orientations of the eigenmodes are found to be fixed by gross cortical shape rather than finer-scale cortical properties, which is consistent with the observed intersubject consistency of functional connectivity patterns. However, the eigenvalues depend more sensitively on finer-scale cortical structure, implying that the eigenfrequencies and consequent dynamical properties of functional connectivity depend more strongly on details of individual cortical folding. Overall, these results imply that well-established tools from perturbation theory and spherical harmonic analysis can be used to calculate the main properties and dynamics of low-order brain eigenmodes.
Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming
2016-01-01
Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a fruitful method for studying the relative importance of predictor variables, and (3) the relationships among variables involved in the development of burnout and its consequences are non-linear to different degrees. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.
Post-Optimality Analysis In Aerospace Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.
1993-01-01
This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit, launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.
Hydrogen analysis depth calibration by CORTEO Monte-Carlo simulation
NASA Astrophysics Data System (ADS)
Moser, M.; Reichart, P.; Bergmaier, A.; Greubel, C.; Schiettekatte, F.; Dollinger, G.
2016-03-01
Hydrogen imaging with sub-μm lateral resolution and sub-ppm sensitivity has become possible with coincident proton-proton (pp) scattering analysis (Reichart et al., 2004). Depth information is evaluated from the energy sum signal with respect to the energy loss of both protons on their path through the sample. To first order, there is no angular dependence due to elastic scattering. To second order, a path-length effect due to different energy loss on the paths of the protons causes an angular dependence of the energy sum. Therefore, the energy sum signal has to be de-convoluted depending on the matrix composition, i.e. mainly the atomic number Z, in order to get a depth-calibrated hydrogen profile. Although the path effect can be calculated analytically to first order, multiple scattering effects lead to significant deviations in the depth profile. Hence, in our new approach, we use the CORTEO Monte-Carlo code (Schiettekatte, 2008) in order to calculate the depth of a coincidence event depending on the scattering angle. The code takes individual detector geometry into account. In this paper we show that the code correctly reproduces measured pp-scattering energy spectra with roughness effects considered. With more than 100 μm thick Mylar-sandwich targets (Si, Fe, Ge) we demonstrate the deconvolution of the energy spectra on our current multistrip detector at the microprobe SNAKE at the Munich tandem accelerator lab. As a result, hydrogen profiles can be evaluated with an accuracy in depth of about 1% of the sample thickness.
DiMarco, Brian N.; Troian-Gautier, Ludovic; Sampaio, Renato N.; ...
2018-01-01
Two sensitizers, [Ru(bpy)2(dcb)]2+ (RuC) and [Ru(bpy)2(dpb)]2+ (RuP), were anchored to mesoporous TiO2 thin films and utilized to sensitize the reaction of TiO2 electrons with oxidized triphenylamines to visible light in CH3CN electrolytes.
Deception and false belief in paranoia: modelling theory of mind stories.
Shryane, Nick M; Corcoran, Rhiannon; Rowse, Georgina; Moore, Rosanne; Cummins, Sinead; Blackwood, Nigel; Howard, Robert; Bentall, Richard P
2008-01-01
This study used Item Response Theory (IRT) to model the psychometric properties of a Theory of Mind (ToM) stories task. The study also aimed to determine whether the ability to understand states of false belief in others and the ability to understand another's intention to deceive are separable skills, and to establish which is more sensitive to the presence of paranoia. A large and diverse clinical and nonclinical sample differing in levels of depression and paranoid ideation performed a ToM stories task measuring false belief and deception at first and second order. A three-factor IRT model was found to best fit the data, consisting of first- and second-order deception factors and a single false-belief factor. The first-order deception and false-belief factors had good measurement properties at low trait levels, appropriate for samples with reduced ToM ability. First-order deception and false beliefs were both sensitive to paranoid ideation with IQ predicting performance on false belief items. Separable abilities were found to underlie performance on verbal ToM tasks. However, paranoia was associated with impaired performance on both false belief and deception understanding with clear impairment at the simplest level of mental state attribution.
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
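The probabilistic univariate idea, sampling an uncertain parameter and measuring how often the base-case optimal policy survives, can be sketched on a toy two-state MDP solved by value iteration. Everything here (states, rewards, the Beta uncertainty model) is a made-up illustration, not the authors' case study:

```python
import numpy as np

rng = np.random.default_rng(42)

def optimal_action_sick(p, gamma=0.95):
    """Toy 2-state MDP: 'sick' and 'healthy' (healthy is absorbing,
    earning reward 1 per step). In 'sick': 'treat' costs 10 and cures
    with probability p; 'wait' is free but stays sick. Value iteration
    returns the optimal action in the sick state."""
    v_healthy = 1.0 / (1.0 - gamma)   # closed form for the absorbing state
    v_sick = 0.0
    for _ in range(500):
        q_treat = -10.0 + gamma * (p * v_healthy + (1 - p) * v_sick)
        q_wait = 0.0 + gamma * v_sick
        v_sick = max(q_treat, q_wait)
    return "treat" if q_treat >= q_wait else "wait"

# Probabilistic sensitivity analysis: sample the uncertain cure
# probability and estimate confidence in the base-case optimal policy.
base_policy = optimal_action_sick(0.6)
samples = rng.beta(12, 8, size=2000)   # assumed uncertainty, mean 0.6
confidence = np.mean([optimal_action_sick(p) == base_policy for p in samples])
```

Sweeping a willingness-to-accept threshold against `confidence` is, in spirit, how the paper's policy acceptability curve is constructed.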
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin
2015-04-01
Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based in different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol, Morris, etc.) are actually limiting cases of our approach under specific conditions. Multiple case studies are used to demonstrate the value of the new framework. The results show that the new framework provides a fundamental understanding of the underlying sensitivities for any given problem, while requiring orders of magnitude fewer model runs.
NASA Astrophysics Data System (ADS)
Rasouli, Zolaikha; Ghavami, Raouf
2018-02-01
A simple, sensitive and efficient colorimetric assay platform for the determination of Cu2+ was proposed, with the aim of developing sensitive detection based on the aggregation of AuNPs in the presence of a histamine H2-receptor antagonist (famotidine, FAM) as the recognition site. This study is the first to demonstrate that the molar extinction coefficients of the complexes formed by FAM and Cu2+ are very low (by applying chemometrics methods to the first-order data arising from the different metal-to-ligand ratio method), leading to the undesirable sensitivity of FAM-based assays. To resolve the problem of low sensitivity, a colorimetric method based on the Cu2+-induced aggregation of AuNPs functionalized with FAM was introduced. This procedure is accompanied by a color change from bright red to blue which can be observed with the naked eye. The detection sensitivity obtained by the developed method increased about 100-fold compared with the spectrophotometric method. This sensor exhibited a good linear relation between the absorbance ratio at 670 to 520 nm (A670/520) and the concentration in the range 2-110 nM with LOD = 0.76 nM. The satisfactory analytical performance of the proposed sensor facilitates the development of simple and affordable UV-Vis chemosensors for environmental applications.
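The quantitative core of such a sensor is a linear calibration of the absorbance ratio against concentration, with the detection limit commonly taken as 3 standard deviations of the blank divided by the slope. A sketch with fabricated calibration points chosen only to mimic the reported working range (2-110 nM, LOD near 0.76 nM):

```python
import numpy as np

# Hypothetical calibration data: A670/A520 ratio vs. [Cu2+] in nM
conc_nm = np.array([2.0, 10.0, 30.0, 50.0, 70.0, 90.0, 110.0])
ratio = 0.20 + 0.004 * conc_nm          # assumed ideal linear response

# Linear calibration: ratio = slope * conc + intercept
slope, intercept = np.polyfit(conc_nm, ratio, 1)

# Limit of detection by the common 3-sigma convention
sd_blank = 0.001                        # assumed blank standard deviation
lod_nm = 3 * sd_blank / slope

def concentration_from_ratio(a670_over_520):
    """Invert the calibration to read out an unknown sample."""
    return (a670_over_520 - intercept) / slope
```

With these assumed numbers the LOD comes out at 0.75 nM, close to the paper's reported 0.76 nM, but the data here are illustrative only.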
Accuracy analysis and design of A3 parallel spindle head
NASA Astrophysics Data System (ADS)
Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan
2016-03-01
As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy at the design and manufacturing stage, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a bigger effect on the accuracy of the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the optimization objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
Allred, J. M.; Taddei, K. M.; Bugaris, D. E.; ...
2014-09-19
We present neutron diffraction analysis of BaFe2(As1-xPx)2 over a wide temperature (10 to 300 K) and compositional (0.11 < x < 0.79) range, including the normal state, the magnetically ordered state, and the superconducting state. The paramagnetic to spin-density wave and orthorhombic to tetragonal transitions are first order and coincident within the sensitivity of our measurements (~0.5 K). Extrapolation of the orthorhombic order parameter down to zero suggests that structural quantum criticality cannot exist at compositions higher than x = 0.28, which is much lower than values determined using other methods, but in good agreement with our observations of the actual phase stability range. Lastly, the onset of spin-density wave order shows a stronger structural anomaly than the charge-doped system, in the form of an enhancement of the c/a ratio below the transition.
Virus Sensitivity Index of UV disinfection.
Tang, Walter Z; Sillanpää, Mika
2015-01-01
A new concept, the Virus Sensitivity Index (VSI), is defined as the ratio between the first-order inactivation rate constant of a virus, ki, and that of MS2-phage, kr, during UV disinfection. MS2-phage is chosen as the reference virus because it is recommended as a virus indicator during UV reactor design and validation by the US Environmental Protection Agency. The VSI has wide applications in research, design, and validation of UV disinfection systems. For example, it can be used to rank the UV disinfection sensitivity of viruses in reference to MS2-phage. There are four major steps in deriving the equation between Hi/Hr and 1/VSI. First, the first-order inactivation rate constants are determined by regression analysis between Log I and fluence required. Second, the inactivation rate constants of MS2-phage are statistically analysed at 3, 4, 5, and 6 Log I levels. Third, different VSI values are obtained by dividing the ki of different viruses by the kr of MS2-phage. Fourth, the correlation between Hi/Hr and 1/VSI is analysed by using linear, quadratic, and cubic models. As expected from the theoretical analysis, a linear relationship without an intercept adequately correlates Hi/Hr and 1/VSI. The VSI is used to quantitatively predict the UV fluence required for any virus at any log inactivation (Log I). Four equations were developed at 3, 4, 5, and 6 Log I. These equations have been validated using external data which were not used in their development. Below 3 Log I, the equations tend to under-predict the required fluence (e.g., at 1 and 2 Log I); above 3 Log I, they tend to over-predict it. The reasons for these deviations are likely the shoulder at the beginning and the tailing at the end of the collimated beam test experiments. At 3 Log I, the error percentage is less than 6%. The VSI is also used to predict inactivation rate constants under two different UV disinfection scenarios, such as under sunlight and with different virus aggregates. The correlation analysis shows that viruses will be about 40% more sensitive to sunlight than to UV254. On the other hand, a virus size of 500 nm will reduce the VSI by 10%. This is the first attempt to use the VSI to predict the required fluence at any given Log I. The equation can be used to quantitatively evaluate other parameters influencing UV disinfection, including environmental species, antibiotic-resistant bacteria or genes, photo and dark repair, water quality such as suspended solids, and UV transmittance.
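The VSI workflow described above, fitting a first-order rate constant, taking its ratio to MS2's, and predicting the required fluence from the no-intercept linear relation between Hi/Hr and 1/VSI, can be sketched as follows (the rate data are synthetic, not the paper's measurements):

```python
import numpy as np

def inactivation_rate(fluence, log_inactivation):
    """First-order UV inactivation rate constant k from a linear fit of
    log10 inactivation vs. fluence, forced through the origin (sketch)."""
    f = np.asarray(fluence, dtype=float)
    li = np.asarray(log_inactivation, dtype=float)
    return float(f @ li / (f @ f))   # zero-intercept least squares

def vsi(k_virus, k_ms2):
    """Virus Sensitivity Index: ratio of the virus' rate constant to MS2's."""
    return k_virus / k_ms2

def required_fluence(h_ms2, vsi_value):
    """No-intercept linear model: H_virus / H_MS2 = 1 / VSI,
    so H_virus = H_MS2 / VSI."""
    return h_ms2 / vsi_value

# Synthetic collimated-beam data (fluence in mJ/cm^2, Log I observed)
k_ms2 = inactivation_rate([20, 40, 60], [1, 2, 3])     # k = 0.05
k_virus = inactivation_rate([10, 20, 30], [1, 2, 3])   # k = 0.10, VSI = 2
h_virus = required_fluence(80.0, vsi(k_virus, k_ms2))  # fluence for 4 Log I
```

A virus with VSI above 1 is more UV-sensitive than MS2 and needs proportionally less fluence for the same log inactivation.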
NASA Astrophysics Data System (ADS)
Singh, Trailokyanath; Mishra, Pandit Jagatananda; Pattanayak, Hadibandhu
2017-12-01
In this paper, an economic order quantity (EOQ) inventory model for a deteriorating item is developed with the following characteristics: (i) the demand rate is deterministic and two-staged, i.e., it is constant in the first part of the cycle and a linear function of time in the second part; (ii) the deterioration rate is time-proportional; (iii) shortages are not allowed to occur. The optimal cycle time and the optimal order quantity have been derived by minimizing the total average cost. A simple solution procedure is provided to illustrate the proposed model. The article concludes with a numerical example and a sensitivity analysis of various parameters as illustrations of the theoretical results.
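The optimization step can be illustrated with a simplified constant-demand variant (a Ghare-Schrader-type exponential-deterioration model rather than the paper's two-stage demand); the cycle time minimizing the total average cost is found numerically. All parameter values are arbitrary illustrations:

```python
import numpy as np

def total_average_cost(T, A=100.0, D=500.0, theta=0.05, h=0.5, p=2.0):
    """Average cost per unit time for a cycle of length T:
    A = ordering cost, D = demand rate, theta = deterioration rate,
    h = holding cost per unit per time, p = cost per deteriorated unit.
    Constant-demand deteriorating-item model (illustrative sketch)."""
    holding = h * (D / theta**2) * (np.exp(theta * T) - theta * T - 1)
    deteriorated = p * ((D / theta) * (np.exp(theta * T) - 1) - D * T)
    return (A + holding + deteriorated) / T

# Grid search for the cost-minimizing cycle time and order quantity
T_grid = np.linspace(0.01, 5.0, 50_000)
T_star = T_grid[np.argmin(total_average_cost(T_grid))]
Q_star = (500.0 / 0.05) * (np.exp(0.05 * T_star) - 1)   # Q = (D/theta)(e^{theta T} - 1)
```

Re-running with a larger ordering cost A pushes `T_star` upward; sweeping each parameter this way is exactly the kind of sensitivity analysis the abstract's numerical example performs.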
A fractal analysis of protein to DNA binding kinetics using biosensors.
Sadana, Ajit
2003-08-01
A fractal analysis of a confirmative nature only is presented for the binding of estrogen receptor (ER) in solution to its corresponding DNA (estrogen response element, ERE) immobilized on a sensor chip surface [J. Biol. Chem. 272 (1997) 11384], and for the cooperative binding of human 1,25-dihydroxyvitamin D(3) receptor (VDR) to DNA with the 9-cis-retinoic acid receptor (RXR) [Biochemistry 35 (1996) 3309]. Ligands were also used to modulate the first reaction. Data taken from the literature may be modeled by using a single- or a dual-fractal analysis. Relationships are presented for the binding rate coefficient as a function of either the analyte concentration in solution or the fractal dimension that exists on the biosensor surface. The binding rate expressions developed exhibit a wide range of dependence on the degree of heterogeneity that exists on the surface, ranging from sensitive (order of dependence equal to 1.202) to very sensitive (order of dependence equal to 12.239). In general, the binding rate coefficient increases as the degree of heterogeneity or the fractal dimension of the surface increases. The predictive relationships presented provide further physical insights into the reactions occurring on the biosensor surface. Even though these reactions are occurring on the biosensor surface, the relationships presented should assist in understanding and in possibly manipulating the reactions occurring on cellular surfaces.
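In single-fractal analysis of this kind, the amount bound typically grows as t^((3-Df)/2), so the fractal dimension Df can be recovered from a log-log slope. A sketch with synthetic power-law data (the exponent and prefactor are assumptions for illustration, not values from the paper):

```python
import numpy as np

def fractal_dimension_from_binding(t, bound):
    """Single-fractal analysis: bound analyte ~ t^((3 - Df)/2), so the
    log-log slope p gives Df = 3 - 2p (sketch of the convention used
    in fractal biosensor kinetics)."""
    p, _ = np.polyfit(np.log(t), np.log(bound), 1)
    return p, 3.0 - 2.0 * p

# Made-up binding curve with exponent p = 0.35, i.e. Df = 2.3
t = np.linspace(1.0, 100.0, 50)
bound = 0.8 * t ** 0.35
p_hat, df_hat = fractal_dimension_from_binding(t, bound)
```

A dual-fractal analysis of the kind the abstract mentions would apply the same fit separately on either side of a crossover time.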
Reduced size first-order subsonic and supersonic aeroelastic modeling
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1990-01-01
Various aeroelastic, aeroservoelastic, dynamic-response, and sensitivity analyses are based on a time-domain first-order (state-space) formulation of the equations of motion. The formulation of this paper is based on the minimum-state (MS) aerodynamic approximation method, which yields a low number of aerodynamic augmenting states. Modifications of the MS and the physical weighting procedures make the modeling method even more attractive. The flexibility of constraint selection is increased without increasing the approximation problem size; the accuracy of dynamic residualization of high-frequency modes is improved; and the resulting model is less sensitive to parametric changes in subsequent analyses. Applications to subsonic and supersonic cases demonstrate the generality, flexibility, accuracy, and efficiency of the method.
Application of dermoscopy image analysis technique in diagnosing urethral condylomata acuminata.
Zhang, Yunjie; Jiang, Shuang; Lin, Hui; Guo, Xiaojuan; Zou, Xianbiao
2018-01-01
In this study, cases with suspected urethral condylomata acuminata were examined by dermoscopy, in order to explore an effective method for clinical diagnosis and to study the application of dermoscopy image analysis technique in diagnosing urethral condylomata acuminata. A total of 220 suspected urethral condylomata acuminata were clinically diagnosed first with the naked eye, and then by using dermoscopy image analysis technique. Afterwards, a comparative analysis was made of the two diagnostic methods. Among the 220 suspected urethral condylomata acuminata, there was a higher positive rate by dermoscopy examination than by visual observation. Dermoscopy examination is still restricted by its inapplicability in the deep urethral orifice and in skin wrinkles, and concordance between different clinicians may also vary. Dermoscopy image analysis technique features high sensitivity and quick, accurate, non-invasive diagnosis, and we recommend its use.
A Very Much Faster and More Sensitive In Situ Stable Isotope Analysis Instrument
NASA Astrophysics Data System (ADS)
Coleman, M.; Christensen, L. E.; Kriesel, J. M.; Kelly, J. F.; Moran, J. J.; Vance, S.
2016-10-01
We are developing Capillary Absorption Spectrometry (CAS) for H and O stable isotope analyses, giving >4 orders of magnitude improved sensitivity and allowing analysis of 5 nanomoles of water; the instrument is coupled to laser sampling to free water from hydrated minerals and ice.
Sensitivity Analysis for Multidisciplinary Systems (SAMS)
2016-12-01
support both mode-based structural representations and time-dependent, nonlinear finite element structural dynamics. This interim report by Richard D. Snyder describes ... the Adaptation & Sensitivity Toolkit: elasticity, heat transfer, and compressible flow; an adjoint solver for sensitivity analysis; and high-order finite elements.
Integrated on-chip derivatization and electrophoresis for the rapid analysis of biogenic amines.
Beard, Nigel P; Edel, Joshua B; deMello, Andrew J
2004-07-01
We demonstrate the monolithic integration of a chemical reactor with a capillary electrophoresis device for the rapid and sensitive analysis of biogenic amines. Fluorescein isothiocyanate (FITC) is widely employed for the analysis of amino-group-containing analytes; however, its slow reaction kinetics hinder its use for on-chip labeling applications. Alternatives such as o-phthaldehyde (OPA) are available, but their inferior photophysical properties and UV lambda(max) present difficulties with common excitation sources, leading to a disparity in sensitivity. Consequently, we present for the first time the use of dichlorotriazine fluorescein (DTAF) as a superior in situ derivatizing agent for biogenic amines in microfluidic devices. The developed microdevice employs both hydrodynamic and electroosmotic flow, facilitating the creation of a polymeric microchip to perform both precolumn derivatization and electrophoretic analysis. The favorable photophysical properties of DTAF and its fast reaction kinetics provide detection limits down to 1 nM and total analysis times (including on-chip mixing and reaction) of <60 s. The detection limits are two orders of magnitude lower than current limits obtained with both FITC and OPA. The optimized microdevice is also employed to probe biogenic amines in real samples.
Dynamic Stability of Uncertain Laminated Beams Under Subtangential Loads
NASA Technical Reports Server (NTRS)
Goyal, Vijay K.; Kapania, Rakesh K.; Adelman, Howard (Technical Monitor); Horta, Lucas (Technical Monitor)
2002-01-01
Because of the inherent complexity of fiber-reinforced laminated composites, it can be challenging to manufacture composite structures according to their exact design specifications, resulting in unwanted material and geometric uncertainties. In this research, we focus on the deterministic and probabilistic stability analysis of laminated structures subject to subtangential loading, a combination of conservative and nonconservative tangential loads, using the dynamic criterion. Thus a shear-deformable laminated beam element, including warping effects, is derived to study the deterministic and probabilistic response of laminated beams. This twenty-one-degree-of-freedom element can be used for solving both static and dynamic problems. In the first-order shear deformable model used here we have employed a more accurate method to obtain the transverse shear correction factor. The dynamic version of the principle of virtual work for laminated composites is expressed in its nondimensional form, and the element tangent stiffness and mass matrices are obtained using analytical integration. The stability is studied by giving the structure a small disturbance about an equilibrium configuration and observing whether the resulting response remains small. In order to study the dynamic behavior with uncertainties included in the problem, three models were developed: Exact Monte Carlo Simulation, Sensitivity Based Monte Carlo Simulation, and Probabilistic FEA. These methods were integrated into the developed finite element analysis. Also, perturbation and sensitivity analysis have been used to study nonconservative problems, as well as to perform the stability analysis using the dynamic criterion.
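The Sensitivity Based Monte Carlo idea, in which each expensive analysis call is replaced by a first-order Taylor expansion built from sensitivity derivatives, can be sketched compactly. The Euler buckling response, parameter statistics, and finite-difference sensitivities below are hypothetical stand-ins for the paper's finite element model:

```python
import math
import random

def response(E, L):
    """Stand-in for an expensive FE analysis: Euler buckling load of a column,
    P = pi^2 * E * I / L^2 with unit moment of inertia (illustrative only)."""
    return math.pi ** 2 * E / L ** 2

def sensitivity_mc(mean_E, mean_L, sd_E, sd_L, n=20000, seed=1):
    """Sensitivity-based Monte Carlo: sample the first-order Taylor expansion
    about the mean design point instead of calling the full model per sample."""
    random.seed(seed)
    f0 = response(mean_E, mean_L)
    h = 1e-6
    dfdE = (response(mean_E + h, mean_L) - f0) / h  # finite-difference sensitivity
    dfdL = (response(mean_E, mean_L + h) - f0) / h
    samples = [f0 + dfdE * random.gauss(0, sd_E) + dfdL * random.gauss(0, sd_L)
               for _ in range(n)]
    return sum(samples) / n

approx_mean = sensitivity_mc(200.0, 3.0, 2.0, 0.03)
exact_mean = response(200.0, 3.0)  # mean of the linearized response
```

Exact Monte Carlo would call `response` once per sample; the Taylor surrogate needs only three model calls total, which is the cost saving the paper exploits.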
Thou Shalt Be Reproducible! A Technology Perspective
Mair, Patrick
2016-01-01
This article elaborates on reproducibility in psychology from a technological viewpoint. Modern open source computational environments that foster reproducibility throughout the whole research life cycle, and to which emerging psychology researchers should be sensitized, are shown and explained. First, data archiving platforms that make datasets publicly available are presented. Second, R is advocated as the data-analytic lingua franca in psychology for achieving reproducible statistical analysis. Third, dynamic report generation environments for writing reproducible manuscripts that integrate text, data analysis, and statistical outputs such as figures and tables in a single document are described. Supplementary materials are provided in order to get the reader started with these technologies. PMID:27471486
Processing Deficits of Motion of Contrast-Modulated Gratings in Anisometropic Amblyopia
Liu, Zhongjian; Hu, Xiaopeng; Yu, Yong-Qiang; Zhou, Yifeng
2014-01-01
Several studies have indicated substantial processing deficits for static second-order stimuli in amblyopia. However, less is known about the perception of second-order moving gratings. To investigate this issue, we measured the contrast sensitivity for second-order (contrast-modulated) moving gratings in seven anisometropic amblyopes and ten normal controls. The measurements were performed with non-equated carriers and a series of equated carriers. For comparison, the sensitivity for first-order motion and static second-order stimuli was also measured. Most of the amblyopic eyes (AEs) showed reduced sensitivity for second-order moving gratings relative to their non-amblyopic eyes (NAEs) and the dominant eyes (CEs) of normal control subjects, even when the detectability of the noise carriers was carefully controlled, suggesting substantial processing deficits of motion of contrast-modulated gratings in anisometropic amblyopia. In contrast, the non-amblyopic eyes of the anisometropic amblyopes were relatively spared. As a group, NAEs showed statistically comparable performance to CEs. We also found that contrast sensitivity for static second-order stimuli was strongly impaired in AEs and part of the NAEs of anisometropic amblyopes, consistent with previous studies. In addition, some amblyopes showed impaired performance in perception of static second-order stimuli but not in that of second-order moving gratings. These results may suggest a dissociation between the processing of static and moving second-order gratings in anisometropic amblyopia. PMID:25409477
Yari, Shamsi; Hadizadeh Tasbiti, Alireza; Ghanei, Mostafa; Shokrgozar, Mohammad Ali; Fateh, Abolfazl; Mahdian, Reza; Yari, Fatemeh; Bahrmand, Ahmadreza
2017-01-01
Multidrug-resistant tuberculosis (MDR-TB) is a form of TB caused by Mycobacterium tuberculosis (M. tuberculosis) strains that do not respond to at least isoniazid and rifampicin, the two most powerful first-line (or standard) anti-TB drugs. Novel intervention strategies for eliminating this disease are based on finding proteins that can be used for designing new drugs or new and reliable diagnostic kits. The aim of this study was to compare the protein profiles of MDR-TB isolates with those of sensitive isolates. Proteomic analysis of M. tuberculosis MDR-TB and sensitive isolates was performed with ion exchange chromatography coupled with MALDI-TOF-TOF (matrix-assisted laser desorption/ionization) in order to identify individual proteins that are differentially expressed in MDR-TB, to be used as drug targets or diagnostic markers for designing valuable TB vaccines or TB rapid tests. We identified eight proteins in MDR-TB isolates, and analyses showed that these proteins are absent in M. tuberculosis-sensitive isolates (Rv2140c, Rv0009, Rv1932, Rv0251c, Rv2558, Rv1284, Rv3699 and MMP major membrane proteins). These data will provide valuable clues for further investigation of suitable TB rapid tests or drug targets against drug-resistant and sensitive M. tuberculosis isolates.
NASA Astrophysics Data System (ADS)
Khoshgoftar, M. J.; Mirzaali, M. J.; Rahimi, G. H.
2015-11-01
Recently, the application of functionally graded materials (FGMs) has attracted a great deal of interest. These materials are composed of various constituents with different microstructures that can vary spatially within the FGM. Such composites with varying thickness under non-uniform pressure can be used in aerospace engineering; therefore, their analysis is of high importance in engineering problems. Thermoelastic analysis of a functionally graded cylinder with variable thickness under non-uniform pressure is considered. First-order shear deformation theory and the total potential energy approach are applied to obtain the governing equations of the non-homogeneous cylinder. Considering the inner and outer solutions, perturbation series are applied to solve the governing equations: an outer solution away from the boundaries and a more sensitive inner solution at the boundaries. Combining the inner and outer solutions for points near and far from the boundaries yields a highly accurate displacement field distribution. The main aim of this paper is to show the capability of the matched asymptotic solution for different non-homogeneous cylinders with different shapes and non-uniform pressures. The results can be used to design the optimum thickness of the cylinder and also properties such as high-temperature resistance by applying non-homogeneous material.
Superposition-Based Analysis of First-Order Probabilistic Timed Automata
NASA Astrophysics Data System (ADS)
Fietzke, Arnaud; Hermanns, Holger; Weidenbach, Christoph
This paper discusses the analysis of first-order probabilistic timed automata (FPTA) by a combination of hierarchic first-order superposition-based theorem proving and probabilistic model checking. We develop the overall semantics of FPTAs and prove soundness and completeness of our method for reachability properties. Basically, we decompose FPTAs into their time plus first-order logic aspects on the one hand, and their probabilistic aspects on the other hand. Then we exploit the time plus first-order behavior by hierarchic superposition over linear arithmetic. The result of this analysis is the basis for the construction of a reachability equivalent (to the original FPTA) probabilistic timed automaton to which probabilistic model checking is finally applied. The hierarchic superposition calculus required for the analysis is sound and complete on the first-order formulas generated from FPTAs. It even works well in practice. We illustrate the potential behind it with a real-life DHCP protocol example, which we analyze by means of tool chain support.
A (201)Hg+ Comagnetometer for (199)Hg+ Trapped Ion Space Atomic Clocks
NASA Technical Reports Server (NTRS)
Burt, Eric A.; Taghavi, Shervin; Tjoelker, Robert L.
2011-01-01
A method has been developed for unambiguously measuring the exact magnetic field experienced by trapped mercury ions contained within an atomic clock intended for space applications. In general, atomic clocks are insensitive to external perturbations that would change the frequency at which the clocks operate. On a space platform, these perturbative effects can be much larger than they would be on the ground, especially in dealing with the magnetic field environment. The solution is to use a different isotope of mercury held within the same trap as the clock isotope. The magnetic field can be very accurately measured with a magnetic-field-sensitive atomic transition in the added isotope. Further, this measurement can be made simultaneously with normal clock operation, thereby not degrading clock performance. Instead of using a conventional magnetometer to measure ambient fields, which would necessarily be placed some distance away from the clock atoms, first-order field-sensitive atomic transition frequency changes in the atoms themselves determine the variations in the magnetic field. As a result, all ambiguity over the exact field value experienced by the atoms is removed. Atoms used in atomic clocks always have an atomic transition (often referred to as the clock transition) that is sensitive to magnetic fields only in second order, and usually have one or more transitions that are first-order field sensitive. For operating parameters used in the (199)Hg(+) clock, the latter can be five orders of magnitude or more sensitive to field fluctuations than the clock transition, thereby providing an unambiguous probe of the magnetic field strength.
Measurements of the CMB Polarization with POLARBEAR and the Optical Performance of the Simons Array
NASA Astrophysics Data System (ADS)
Takayuki Matsuda, Frederick; POLARBEAR Collaboration
2017-06-01
POLARBEAR is a ground-based polarization sensitive Cosmic Microwave Background (CMB) experiment installed on the 2.5 m aperture Gregorian-Dragone type Huan Tran Telescope located in the Atacama desert in Chile. POLARBEAR is designed to conduct broad surveys at 150 GHz to measure the CMB B-mode polarization signal from inflationary gravitational waves at large angular scales and from gravitational lensing at small angular scales. POLARBEAR started observations in 2012. First season results on gravitational lensing B-mode measurements were published in 2014, and the data analysis of further seasons is in progress. In order to further increase measurement sensitivity, in 2018 the experiment will be upgraded to the Simons Array, comprising three telescopes, each with improved receiver optics using alumina lenses. In order to further expand the observational range, the second and third receiver optics designs were modified for improved optical performance across the frequencies of 95, 150, 220, and 280 GHz. The diffraction limited field of view was increased especially for the higher frequencies to span a full 4.5 degrees diameter field of view of the telescope. The Simons Array will have a total of 22,764 detectors within this field of view. The Simons Array is projected to put strong constraints on both the measurements of the tensor-to-scalar ratio for inflationary cosmology and the sum of the neutrino masses. I will report on the status of current observations and analysis of the first two observation seasons of POLARBEAR as well as the optics design development of the Simons Array receivers.
JWST ISIM Distortion Analysis Challenge
NASA Technical Reports Server (NTRS)
Cifie, Emmanuel; Matzinger, Liz; Kuhn, Jonathan; Fan, Terry
2004-01-01
Very tight distortion requirements are imposed on the JWST's ISIM structure due to the sensitivity of the telescope's mirror segment and science instrument positioning. The ISIM structure is a three dimensional truss with asymmetric gusseting and metal fittings. One of the primary challenges for ISIM's analysis team is predicting the thermal distortion of the structure both from the bulk cooldown from ambient to cryo, and from the smaller temperature changes within the cryogenic operating environment. As a first cut to estimate thermal distortions, a finite element model of bar elements was created. Elements representing joint areas and metal fittings use effective properties that match the behavior of the stack-up of the composite tube, gusset and adhesive under mechanical and thermal loads. These properties were derived by matching tip deflections of a simplified solid-model T-joint. Because of the structure's asymmetric gusseting, this effective property model is a first attempt at predicting rotations that cannot be captured with a smeared CTE approach. In addition to the finite element analysis, several first order calculations have been performed to gauge the feasibility of the material design. Because of the stringent thermal distortion requirements at cryogenic temperatures, a composite tube material with near zero or negative CTE is required. A preliminary hand analysis of the contribution of the various components along the distortion path between FGS and the other instruments, neglecting second order effects, was performed. A plot of bounding tube longitudinal and transverse CTEs for thermal stability requirements was generated to help determine the feasibility of meeting these requirements. This analysis is a work in progress en route to a large-degree-of-freedom, high-fidelity FEA model for distortion analysis. Methods of model reduction, such as superelements, are currently being investigated.
Robust Sensitivity Analysis of Courses of Action Using an Additive Value Model
2008-03-01
According to Clemen, sensitivity analysis answers, “What makes a difference in this decision?” (2001:175). Sensitivity analysis can also indicate... alternative to change. These models look for the new weighting that causes a specific alternative to rank above all others. Barron and Schmidt first... Schmidt, 1988:123). A smaller objective function value indicates greater sensitivity. Wolters and Mareschal propose a similar approach using goal
A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System
Wu, Xiangjun; Li, Yang; Kurths, Jürgen
2015-01-01
The chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm by using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of the encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain-image is first divided into four sub-images, and then the position of the pixels in the whole image is shuffled. In order to generate initial conditions and parameters of two chaotic systems, a 280-bit long external secret key is employed. The key space analysis, various statistical analysis, information entropy analysis, differential analysis and key sensitivity analysis are introduced to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations such as noise adding, cropping, JPEG compression, rotation, brightening and darkening, has been performed on the proposed image encryption technique. Corresponding results reveal that the proposed image encryption method has good robustness against some image processing operations and geometric attacks. PMID:25826602
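The permutation-diffusion structure itself is easy to sketch. The toy below deliberately substitutes a single logistic map for the paper's CML and fractional-order system, so it illustrates only the two-stage architecture and key sensitivity, not the proposed cipher:

```python
def logistic_stream(x0, r, n, burn=100):
    """Chaotic keystream from a logistic map (a stand-in for the paper's
    coupled-map lattice and fractional-order system)."""
    x = x0
    for _ in range(burn):          # discard transient iterations
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(x)
    return out

def encrypt(pixels, key=(0.3456789, 3.99)):
    n = len(pixels)
    ks = logistic_stream(key[0], key[1], 2 * n)
    # permutation stage: shuffle pixel positions by sorting keystream values
    perm = sorted(range(n), key=lambda i: ks[i])
    shuffled = [pixels[p] for p in perm]
    # diffusion stage: XOR with bytes derived from the second keystream half
    mask = [int(v * 256) % 256 for v in ks[n:]]
    return [s ^ m for s, m in zip(shuffled, mask)]

def decrypt(cipher, key=(0.3456789, 3.99)):
    n = len(cipher)
    ks = logistic_stream(key[0], key[1], 2 * n)
    perm = sorted(range(n), key=lambda i: ks[i])
    mask = [int(v * 256) % 256 for v in ks[n:]]
    shuffled = [c ^ m for c, m in zip(cipher, mask)]
    out = [0] * n
    for dst, src in enumerate(perm):
        out[src] = shuffled[dst]
    return out
```

Key sensitivity follows from the chaotic dynamics: perturbing the initial condition in the seventh decimal place yields a completely different keystream, and hence a different ciphertext.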
A Scalable Nonuniform Pointer Analysis for Embedded Program
NASA Technical Reports Server (NTRS)
Venet, Arnaud
2004-01-01
In this paper we present a scalable pointer analysis for embedded applications that is able to distinguish between instances of recursively defined data structures and elements of arrays. The main contribution consists of an efficient yet precise algorithm that can handle multithreaded programs. We first perform an inexpensive flow-sensitive analysis of each function in the program that generates semantic equations describing the effect of the function on the memory graph. These equations bear numerical constraints that describe nonuniform points-to relationships. We then iteratively solve these equations in order to obtain an abstract storage graph that describes the shape of data structures at every point of the program for all possible thread interleavings. We bring experimental evidence that this approach is tractable and precise for real-size embedded applications.
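The paper's analysis is flow-sensitive and nonuniform; as a much simpler point of comparison, a flow-insensitive Andersen-style inclusion solver shows in miniature what "generating and iteratively solving points-to constraints" means (the statement encoding below is invented for illustration):

```python
def andersen(statements):
    """Tiny flow-insensitive inclusion-based points-to solver.
    Statements: ('addr', p, x)   for p = &x
                ('copy', p, q)   for p = q
                ('load', p, q)   for p = *q
                ('store', p, q)  for *p = q
    Returns dict: variable -> set of abstract locations."""
    pts = {}
    def s(v):
        return pts.setdefault(v, set())
    changed = True
    while changed:                      # iterate to a fixpoint
        changed = False
        for op, a, b in statements:
            if op == 'addr':
                new = {b}
            elif op == 'copy':
                new = s(b)
            elif op == 'load':          # p gets everything *q may point to
                new = set().union(*(s(v) for v in s(b))) if s(b) else set()
            else:                       # store: each target of p absorbs pts(q)
                for v in list(s(a)):
                    if not s(b) <= s(v):
                        pts[v] = s(v) | s(b)
                        changed = True
                continue
            if not new <= s(a):
                pts[a] = s(a) | new
                changed = True
    return pts
```

The paper's approach refines this picture by attaching numerical constraints to the points-to edges, so that distinct array elements and data-structure instances are not merged into one abstract location.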
Non-robust numerical simulations of analogue extension experiments
NASA Astrophysics Data System (ADS)
Naliboff, John; Buiter, Susanne
2016-04-01
Numerical and analogue models of lithospheric deformation provide significant insight into the tectonic processes that lead to specific structural and geophysical observations. As these two types of models contain distinct assumptions and tradeoffs, investigations drawing conclusions from both can reveal robust links between first-order processes and observations. Recent studies have focused on detailed comparisons between numerical and analogue experiments in both compressional and extensional tectonics, sometimes involving multiple lithospheric deformation codes and analogue setups. While such comparisons often show good agreement on first-order deformation styles, results frequently diverge on second-order structures, such as shear zone dip angles or spacing, and in certain cases even on first-order structures. Here, we present finite-element experiments that are designed to directly reproduce analogue "sandbox" extension experiments at the cm-scale. We use material properties and boundary conditions that are directly taken from analogue experiments and use a Drucker-Prager failure model to simulate shear zone formation in sand. We find that our numerical experiments are highly sensitive to numerous numerical parameters. For example, changes to the numerical resolution, velocity convergence parameters and elemental viscosity averaging commonly produce significant changes in first- and second-order structures accommodating deformation. The sensitivity of the numerical simulations to small parameter changes likely reflects a number of factors, including, but not limited to, high angles of internal friction assigned to sand, complex, unknown interactions between the brittle sand (used as an upper crust equivalent) and viscous silicone (lower crust), highly non-linear strain weakening processes and poor constraints on the cohesion of sand. 
Our numerical-analogue comparison is hampered by (a) an incomplete knowledge of the fine details of sand failure and sand properties, and (b) likely limitations to the use of a continuum Drucker-Prager model for representing shear zone formation in sand. In some cases our numerical experiments provide reasonable fits to first-order structures observed in the analogue experiments, but the numerical sensitivity to small parameter variations leads us to conclude that the numerical experiments are not robust.
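The Drucker-Prager failure model at the center of this comparison reduces to evaluating a scalar yield function. A minimal sketch (the friction angle, cohesion, and Mohr-Coulomb matching convention are illustrative choices, not the study's calibration for sand):

```python
import math

def drucker_prager_yield(stress, phi_deg=36.0, cohesion=0.0):
    """Evaluate F = sqrt(J2) + alpha*I1 - k for a principal stress state
    (compression negative). F >= 0 means the material yields.
    alpha and k follow one common Mohr-Coulomb matching convention."""
    phi = math.radians(phi_deg)
    alpha = 2.0 * math.sin(phi) / (math.sqrt(3.0) * (3.0 - math.sin(phi)))
    k = 6.0 * cohesion * math.cos(phi) / (math.sqrt(3.0) * (3.0 - math.sin(phi)))
    s1, s2, s3 = stress
    I1 = s1 + s2 + s3
    mean = I1 / 3.0
    J2 = ((s1 - mean) ** 2 + (s2 - mean) ** 2 + (s3 - mean) ** 2) / 2.0
    return math.sqrt(J2) + alpha * I1 - k
```

Hydrostatic compression lowers F (no shear failure), while deviatoric stress raises it; the sensitivity of numerical shear-zone geometry to phi and cohesion, as discussed above, enters entirely through these two constants.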
Quantification of atmospheric methane oxidation in glacier forefields: Initial survey results
NASA Astrophysics Data System (ADS)
Nauer, Philipp A.; Schroth, Martin H.; Pinto, Eric A.; Zeyer, Josef
2010-05-01
The oxidation of CH4 by methanotrophic bacteria is the only known terrestrial sink for atmospheric CH4. Aerobic methanotrophs are active in soils and sediments under various environmental conditions. However, little is known about the activity and abundance of methanotrophs in pioneering ecosystems and their role in succession. In alpine environments, receding glaciers pose a unique opportunity to investigate soil development and ecosystem succession. In an initial survey during summer and autumn 2009 we probed several locations in the forefields of four glaciers in the Swiss Alps to quantify the turnover of atmospheric methane in recently exposed soils. Three glacier forefields (the Stein, Steinlimi and Tiefen) are situated on siliceous bedrock, while one (the Griessen) is situated on calcareous bedrock. We sampled soil air from different depths to generate CH4 concentration profiles for qualitative analysis. At selected locations we applied surface Gas Push-Pull Tests (GPPT) to estimate first-order rate coefficients of CH4 oxidation. The test consists of a controlled injection of the reactants CH4 and O2 and the tracer Ar into and out of the soil at the same location. A top-closed steel cylinder previously emplaced in the soil encloses the injected gas mixture to ensure sufficient reaction times. Rate coefficients can be derived from differences of reactant and tracer breakthrough curves. In one GPPT we employed 13C-CH4 and measured the evolution of δ13C of extracted CO2. To confirm rate coefficients obtained by GPPTs we estimated effective soil diffusivity from soil core samples and fitted a diffusion-consumption model to our profile data. A qualitative analysis of the concentration profiles showed little activity in the forefields on siliceous bedrock, with only one out of fifteen locations exhibiting substantially lower CH4 concentrations in the soil compared to the atmosphere. 
The surface GPPTs with conventional CH4 at the active location were not sensitive enough to derive meaningful first-order rate coefficients of CH4 oxidation. The more sensitive GPPT with 13C-CH4 resulted in a coefficient of 0.025 h-1, close to the value of 0.011 h-1 estimated from the corresponding concentration profile. Activities in the forefield on calcareous bedrock were substantially higher, with decreased CH4 concentrations in the soil at three out of five locations. Estimated first-order rate coefficients from GPPT and profile at one selected location were 0.6 h-1 and 1.3 h-1, respectively, one to two orders of magnitude higher than values from the siliceous forefield. Additional analysis by quantitative PCR revealed substantially lower numbers of pmoA gene copies per g soil at the active location in the siliceous forefield compared to the selected location in the calcareous forefield. Reasons for these differences in activity and abundance are still unknown and will be subject of further investigations in an upcoming field campaign. The GPPT in combination with δ13C analysis of extracted CO2 appeared to be a functioning approach to sensitively quantify low CH4 turnover.
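The breakthrough-curve analysis behind these rate coefficients can be sketched: normalizing the reactant by the conservative tracer cancels dilution, leaving a log-linear decay whose slope is the first-order coefficient. Synthetic data below use a hypothetical dilution function and k = 0.025 1/h, matching the order of magnitude reported above:

```python
import math

def first_order_k(times, c_reactant, c_tracer):
    """Least-squares slope of ln(C_reactant/C_tracer) vs time; normalizing by
    the conservative tracer cancels dilution, leaving the first-order decay."""
    ys = [math.log(cr / ct) for cr, ct in zip(c_reactant, c_tracer)]
    n = len(times)
    mt, my = sum(times) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(times, ys))
             / sum((t - mt) ** 2 for t in times))
    return -slope

# synthetic extraction-phase data: shared dilution, CH4 oxidized at k = 0.025 1/h
k_true = 0.025
times = [0.5 * i for i in range(1, 20)]          # hours
dilution = [math.exp(-0.4 * t) for t in times]   # hypothetical mixing with soil air
c_tracer = list(dilution)
c_ch4 = [d * math.exp(-k_true * t) for d, t in zip(dilution, times)]
k_est = first_order_k(times, c_ch4, c_tracer)
```

With conventional CH4, the decay signal is buried in measurement noise at such low rates, which is why the more sensitive 13C-CH4 variant was needed at the siliceous sites.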
A-posteriori error estimation for second order mechanical systems
NASA Astrophysics Data System (ADS)
Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter
2012-06-01
One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first-order systems is extended to error estimation of mechanical second-order systems. Due to the special second-order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be used for moment-matching-based, Gramian-matrices-based or modal-based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
Disgust sensitivity and eating disorder symptoms in a non-clinical population.
Mayer, Birgit; Muris, Peter; Bos, Arjan E R; Suijkerbuijk, Chantal
2008-12-01
In order to further explore the relationship between disgust sensitivity and eating disorder symptoms, 2 studies were carried out. In the first study, 352 higher education students (166 women, 186 men) completed a set of questionnaires measuring various aspects of disgust sensitivity and eating disorder symptoms. A correlational analysis revealed that there were few significant correlations between disgust scales and eating pathology scores. One exception was the relation between disgust sensitivity and external eating behavior, although this link only emerged in women. To investigate this relationship in more detail, Study 2 confronted women high (n=29) and low (n=30) on external eating behavior with a series of disgusting and neutral pictures. It was hypothesized that women who scored high on external eating would display shorter viewing times of disgusting pictures (i.e., show more avoidance behavior) than women scoring low on external eating. However, this hypothesis was not confirmed by the data. Altogether, the results of these studies suggest that there seems to be no convincing relationship between disgust sensitivity and eating disorder symptomatology, thereby casting doubts on the role of this individual difference factor in the development of eating pathology.
Decay Kinetics of UV-Sensitive Materials: An Introductory Chemistry Experiment
ERIC Educational Resources Information Center
Via, Garrhett; Williams, Chelsey; Dudek, Raymond; Dudek, John
2015-01-01
First-order kinetic decay rates can be obtained by measuring the time-dependent reflection spectra of ultraviolet-sensitive objects as they return from their excited, colored state back to the ground, colorless state. In this paper, a procedure is described which provides an innovative and unique twist on standard, undergraduate, kinetics…
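The ln-linear fit implied by first-order kinetics is compact enough to show directly; the rate constant and reflectance-derived intensities below are invented for illustration, not taken from the experiment:

```python
import math

def decay_rate(times, intensities):
    """First-order rate constant from the least-squares slope of
    ln(intensity) vs time, since A(t) = A0 * exp(-k*t)."""
    ys = [math.log(v) for v in intensities]
    n = len(times)
    mt, my = sum(times) / n, sum(ys) / n
    return -(sum((t - mt) * (y - my) for t, y in zip(times, ys))
             / sum((t - mt) ** 2 for t in times))

# synthetic color-intensity decay with hypothetical k = 0.12 1/s
k_true, A0 = 0.12, 0.85
times = [2.0 * i for i in range(1, 30)]          # seconds
intensity = [A0 * math.exp(-k_true * t) for t in times]
k_est = decay_rate(times, intensity)
t_half = math.log(2) / k_est                      # half-life follows directly
```

Students can verify first-order behavior by checking that the ln-plot is a straight line; curvature would indicate a different rate law.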
NASA Astrophysics Data System (ADS)
Yu, Yali; Wang, Mengxia; Lima, Dimas
2018-04-01
In order to develop a novel alcoholism detection method, we propose a magnetic resonance imaging (MRI)-based computer vision approach. We first use contrast equalization to increase the contrast of brain slices. Then, we perform a Haar wavelet transform and principal component analysis. Finally, we use a back-propagation neural network (BPNN) as the classification tool. Our method yields a sensitivity of 81.71±4.51%, a specificity of 81.43±4.52%, and an accuracy of 81.57±2.18%. The Haar wavelet gives better performance than the db4 and sym3 wavelets.
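The first feature-extraction step, a one-level 2D Haar wavelet transform, is simple to sketch; this is a generic Haar implementation for illustration (not the authors' code), and the PCA and BPNN stages are omitted:

```python
def haar2d_level1(img):
    """One-level 2D Haar transform: average/difference each row, then each
    column, giving approximation (LL) and detail (LH, HL, HH) sub-bands.
    img: 2^n x 2^n list of lists of floats."""
    def step(vec):
        avg = [(vec[2 * i] + vec[2 * i + 1]) / 2.0 for i in range(len(vec) // 2)]
        dif = [(vec[2 * i] - vec[2 * i + 1]) / 2.0 for i in range(len(vec) // 2)]
        return avg + dif
    rows = [step(r) for r in img]                     # transform rows
    out_cols = [step(list(c)) for c in zip(*rows)]    # then columns
    return [list(r) for r in zip(*out_cols)]

def ihaar2d_level1(coef):
    """Inverse of haar2d_level1: undo columns first, then rows."""
    def istep(vec):
        h = len(vec) // 2
        out = []
        for a, d in zip(vec[:h], vec[h:]):
            out += [a + d, a - d]
        return out
    cols = [istep(list(c)) for c in zip(*coef)]
    return [istep(list(r)) for r in zip(*cols)]
```

For classification, only the low-frequency LL sub-band (a quarter of the coefficients per level) is typically kept before PCA, which is where the dimensionality reduction comes from.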
NASA Astrophysics Data System (ADS)
Ferhat, Ipar
With increasing advancement in material science and the computational power of current computers, which allows us to analyze high-dimensional systems, very light and large structures are being designed and built for aerospace applications. One example is the reflector of a space telescope made of membrane structures. These reflectors are light and foldable, which makes shipment easier and cheaper, unlike traditional reflectors made of glass or other heavy materials. However, one disadvantage of membranes is that they are very sensitive to external changes, such as thermal loads or maneuvering of the space telescope. These effects create vibrations that dramatically affect the performance of the reflector. To overcome vibrations in membranes, in this work, piezoelectric actuators are used to develop distributed controllers for membranes. These actuators generate bending effects to suppress the vibration. The actuators attached to a membrane are relatively thick, which makes the system heterogeneous; thus, an analytical solution cannot be obtained for the partial differential equation of the system. Therefore, the Finite Element Model is applied to obtain an approximate solution for the membrane-actuator system. Another difficulty that arises with very flexible large structures is the dimension of the discretized system. To obtain an accurate result, the system needs to be discretized using smaller segments, which makes the dimension of the system very high. This issue will persist as long as improving technology allows increasingly complex and large systems to be designed and built. To deal with this difficulty, the analysis of the system and the controller development to suppress the vibration are carried out using the vector second order form as an alternative to the vector first order form. In vector second order form, the number of equations that must be solved is half the number in vector first order form.
Analyzing the system for control characteristics such as stability, controllability, and observability is a key step that must be carried out before developing a controller. This analysis determines what kind of system is being modeled and the appropriate approach for controller development. Therefore, the accuracy of the system analysis is crucial. The results of the system analysis using the vector second order form and the vector first order form show the computational advantages of using the vector second order form. Using similar concepts, LQR and LQG controllers, developed to suppress the vibration, are derived using the vector second order form. To develop a controller using the vector second order form, two different approaches are used. One is reducing the size of the Algebraic Riccati Equation by half by partitioning the solution matrix. The other is applying the Hamiltonian method directly in vector second order form. Controllers are developed using both approaches and compared to each other. Some simple solutions for special cases are derived for the vector second order form using the reduced Algebraic Riccati Equation. The advantages and drawbacks of both approaches are explained through examples. System analysis and controller applications are carried out for a square membrane system with four actuators. Two systems with different actuator locations are analyzed: one has the actuators at the corners of the membrane, the other has the actuators away from the corners. The structural and control effects of actuator locations are demonstrated with mode shapes and simulations. The results of the controller applications and the comparison of the vector first order form with the vector second order form demonstrate the efficacy of the controllers.
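For context, the doubling of state dimension that motivates the vector second order form can be seen in the standard conversion to first order form; the matrices below are arbitrary illustrative values, not the membrane model:

```python
import numpy as np

def to_first_order(M, D, K):
    """Rewrite M q'' + D q' + K q = f as x' = A x + B f with x = [q; q'].
    The state dimension doubles (2n equations instead of n), which is
    exactly the cost the vector second order formulation avoids."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ D]])
    B = np.vstack([np.zeros((n, n)), Minv])
    return A, B

# Illustrative 2-DOF mass/damping/stiffness matrices
M = np.diag([2.0, 1.0])
D = 0.1 * np.eye(2)
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
A, B = to_first_order(M, D, K)
print(A.shape, B.shape)  # -> (4, 4) (4, 2)
```

Working directly with (M, D, K) keeps the problem at size n and preserves the matrices' symmetry and sparsity.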
Conesa, Celia; FitzGerald, Richard J
2013-10-23
The kinetics and thermodynamics of the thermal inactivation of Corolase PP in two different whey protein concentrate (WPC) hydrolysates, with degree of hydrolysis (DH) values of ~10 and 21% and at different total solids (TS) levels (from 5 to 30% w/v), were studied. Inactivation studies were performed in the temperature range from 60 to 75 °C, and residual enzyme activity was quantified using the azocasein assay. The inactivation kinetics followed a first-order model. Analysis of the activation energy, thermodynamic parameters, and D and z values demonstrated that the inactivation of Corolase PP was dependent on solution TS. The intestinal enzyme preparation was more heat sensitive at low TS. Moreover, the enzyme was also more heat sensitive in solutions at higher DH.
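The D and z values mentioned above follow directly from first-order rate constants; a minimal sketch with hypothetical rate constants (not the paper's data):

```python
import numpy as np

def d_value(k):
    """Decimal reduction time D (time for a 90% activity loss) for
    first-order inactivation with rate constant k."""
    return np.log(10) / k

def z_value(temps, Ds):
    """z-value: the temperature rise that reduces D tenfold, taken from
    the slope of log10(D) versus temperature."""
    slope, _ = np.polyfit(temps, np.log10(Ds), 1)
    return -1.0 / slope

# Hypothetical rate constants (min^-1) at four temperatures (deg C)
temps = np.array([60.0, 65.0, 70.0, 75.0])
ks = np.array([0.01, 0.03, 0.09, 0.27])
print(round(z_value(temps, d_value(ks)), 2))  # -> 10.48
```

Activation energies would come from the analogous Arrhenius fit of ln(k) against 1/T.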
Hill, Mary C.
2010-01-01
Doherty and Hunt (2009) present important ideas for first-order second-moment sensitivity analysis, but five issues are discussed in this comment. First, considering the composite-scaled sensitivity (CSS) jointly with parameter correlation coefficients (PCC) in a CSS/PCC analysis addresses the difficulties with CSS mentioned in the introduction. Second, their new parameter identifiability statistic is actually likely to do a poor job of evaluating parameter identifiability in common situations. The statistic instead performs the very useful role of showing how model parameters are included in the estimated singular value decomposition (SVD) parameters; its close relation to CSS is shown. Third, the idea from p. 125 that a suitable truncation point for SVD parameters can be identified using the prediction variance is challenged using results from Moore and Doherty (2005). Fourth, the relative error reduction statistic of Doherty and Hunt is shown to belong to an emerging set of statistics here named perturbed calculated variance statistics. Finally, the perturbed calculated variance statistics OPR and PPR mentioned on p. 121 are shown to explicitly include the parameter null-space component of uncertainty. Indeed, OPR and PPR results that account for null-space uncertainty have appeared in the literature since 2000.
Salinas, María; Flores, Emilio; López-Garrigós, Maite; Díaz, Elena; Esteban, Patricia; Leiva-Salinas, Carlos
2017-01-01
To apply a continual improvement model to develop an algorithm for ordering laboratory tests to diagnose acute pancreatitis in a hospital emergency department. Quasi-experimental study using the continual improvement model (plan, do, check, adjust cycles) in two consecutive phases in emergency patients: amylase and lipase results were used to diagnose acute pancreatitis in the first phase; in the second, only lipase level was first determined; amylase testing was then ordered only if the lipase level fell within a certain range. We collected demographic data, the number of amylase and lipase tests ordered and their findings, final diagnosis, and the results of a questionnaire to evaluate satisfaction with emergency care. The first phase included 517 patients, of whom 20 had acute pancreatitis. For amylase testing, sensitivity was 0.70; specificity, 0.85; positive predictive value (PPV), 17; and negative predictive value (NPV), 0.31. For lipase testing these values were sensitivity, 0.85; specificity, 0.96; PPV, 21; and NPV, 0.16. When both tests were done, sensitivity was 0.85; specificity, 0.99; PPV, 85; and NPV, 0.15. The second phase included data for 4815 patients, 118 of whom had acute pancreatitis. The measures of diagnostic yield for the new algorithm were sensitivity, 0.92; specificity, 0.98; PPV, 46; and NPV, 0.08. This study demonstrates a process for developing a protocol to guide laboratory testing in acute pancreatitis in the hospital emergency department. The proposed sequence of testing for pancreatic enzyme levels can be effective for diagnosing acute pancreatitis in patients with abdominal pain.
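The diagnostic measures reported above are computed from a 2x2 confusion table in the standard way; the counts below are hypothetical, not the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for an emergency-department cohort
m = diagnostic_metrics(tp=17, fp=3, tn=480, fn=3)
print({k: round(v, 2) for k, v in m.items()})
# -> {'sensitivity': 0.85, 'specificity': 0.99, 'ppv': 0.85, 'npv': 0.99}
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the cohort.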
Performance of the STIS CCD Dark Rate Temperature Correction
NASA Astrophysics Data System (ADS)
Branton, Doug; STScI STIS Team
2018-06-01
Since July 2001, the Space Telescope Imaging Spectrograph (STIS) onboard Hubble has operated on its Side-2 electronics due to a failure in the primary Side-1 electronics. While nearly identical, Side-2 lacks a functioning temperature sensor for the CCD, introducing a variability in the CCD operating temperature. Previous analysis utilized the CCD housing temperature telemetry to characterize the relationship between the housing temperature and the dark rate. It was found that a first-order 7%/°C uniform dark correction demonstrated a considerable improvement in the quality of dark subtraction on Side-2 era CCD data, and that value has been used on all Side-2 CCD darks since. In this report, we show how this temperature correction has performed historically. We compare the current 7%/°C value against the ideal first-order correction at a given time (which can vary between ~6%/°C and ~10%/°C) as well as against a more complex second-order correction that applies a unique slope to each pixel as a function of dark rate and time. At worst, the current correction has performed ~1% worse than the second-order correction. Additionally, we present initial evidence suggesting that the variability in pixel temperature-sensitivity is significant enough to warrant a temperature correction that considers pixels individually rather than correcting them uniformly.
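A uniform first-order correction of this kind can be sketched as below; the linear form and the reference temperature are illustrative assumptions, not the STIS pipeline's actual implementation:

```python
def correct_dark_rate(dark, t_housing, t_ref=18.0, slope=0.07):
    """Scale a measured dark rate to a reference housing temperature with
    a uniform first-order (7%/degC) correction applied to every pixel.
    The linear form and t_ref are illustrative assumptions."""
    return dark * (1.0 + slope * (t_ref - t_housing))

# A frame taken 2 degC above the reference is scaled down by 14%
print(round(correct_dark_rate(100.0, t_housing=20.0), 2))  # -> 86.0
```

A per-pixel (second-order) correction would replace the single `slope` with a slope map that depends on each pixel's dark rate.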
True covariance simulation of the EUVE update filter
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, R. R.
1989-01-01
A covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultraviolet Explorer (EUVE) spacecraft is presented. The linearized dynamics and measurement equations of the error states are derived; these constitute the truth model describing the real behavior of the systems involved. The design model used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced-order model is not the correct covariance of the EKF estimation error, so a true covariance analysis has to be carried out in order to evaluate the correct accuracy of the OBC-generated estimates. The results of such an analysis are presented, indicating both the performance and the sensitivity of the OBC EKF.
DiMarco, Brian N.; Troian-Gautier, Ludovic; Sampaio, Renato N.
2017-01-01
Two sensitizers, [Ru(bpy)2(dcb)]2+ (RuC) and [Ru(bpy)2(dpb)]2+ (RuP), where bpy is 2,2′-bipyridine, dcb is 4,4′-dicarboxylic acid-2,2′-bipyridine and dpb is 4,4′-diphosphonic acid-2,2′-bipyridine, were anchored to mesoporous TiO2 thin films and utilized to sensitize the reaction of TiO2 electrons with oxidized triphenylamines, TiO2(e–) + TPA+ → TiO2 + TPA, to visible light in CH3CN electrolytes. A family of four symmetrically substituted triphenylamines (TPAs) with formal Eo(TPA+/0) reduction potentials that spanned a 0.5 eV range was investigated. Surprisingly, the reaction followed first-order kinetics for two TPAs that provided the largest thermodynamic driving force. Such first-order reactivity indicates a strong Coulombic interaction between TPA+ and TiO2 that enables the injected electron to tunnel back in one concerted step. The kinetics for the other TPA derivatives were non-exponential and were modelled with the Kohlrausch–William–Watts (KWW) function. A Perrin-like reaction sphere model is proposed to rationalize the kinetic data. The activation energies were the same for all of the TPAs, within experimental error. The average rate constants were found to increase with the thermodynamic driving force, consistent with electron transfer in the Marcus normal region. PMID:29629161
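The KWW (stretched-exponential) model used above for the non-exponential kinetics can be fit as sketched below on synthetic data; the parameter values are illustrative, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

def kww(t, a0, k, beta):
    """Kohlrausch-Williams-Watts stretched exponential decay,
    A(t) = A0 * exp(-(k*t)**beta); beta < 1 gives non-exponential kinetics."""
    return a0 * np.exp(-(k * t) ** beta)

# Recover parameters from a synthetic non-exponential (beta < 1) decay
t = np.linspace(0.01, 10.0, 200)
y = kww(t, 1.0, 0.8, 0.6)
popt, _ = curve_fit(kww, t, y, p0=[1.0, 1.0, 1.0])
print(np.round(popt, 2))  # roughly [1.0, 0.8, 0.6]
```

First-order kinetics is the limiting case beta = 1, so comparing the fitted beta against 1 distinguishes the two regimes reported for the different TPAs.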
Polypeptide Functional Surface for the Aptamer Immobilization: Electrochemical Cocaine Biosensing.
Bozokalfa, Guliz; Akbulut, Huseyin; Demir, Bilal; Guler, Emine; Gumus, Z Pınar; Odaci Demirkol, Dilek; Aldemir, Ebru; Yamada, Shuhei; Endo, Takeshi; Coskunol, Hakan; Timur, Suna; Yagci, Yusuf
2016-04-05
Electroanalytical technologies, as a beneficial branch of modern analytical chemistry, can play an important role in abused-drug analysis, which is crucial in both legal and social respects. This article reports a novel aptamer-based biosensing procedure for cocaine analysis that combines the advantages of aptamers as selective recognition elements with the well-known advantages of biosensor systems, such as the possibility of miniaturization and automation, easy fabrication and modification, low cost, and sensitivity. In order to construct the aptasensor platform, first, polythiophene bearing polyalanine homopeptide side chains (PT-Pala) was electrochemically coated onto the surface of an electrode, and then the cocaine aptamer was attached to the polymer via covalent conjugation chemistry. The stepwise modification of the surface was confirmed by electrochemical characterization. The designed biosensing system was applied for the detection of cocaine and its metabolite, benzoylecgonine (BE), and exhibited a linear correlation in the range from 2.5 up to 10 nM and 0.5 up to 50 μM for cocaine and BE, respectively. In order to expand its practical application, the proposed method was successfully tested for the analysis of synthetic biological fluids.
Algorithm Optimally Orders Forward-Chaining Inference Rules
NASA Technical Reports Server (NTRS)
James, Mark
2008-01-01
People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in very inefficient execution of those rules, since they are so often order-sensitive. This is relevant to tasks like those of the Deep Space Network in that it allows the knowledge base to be incrementally developed and then automatically ordered for efficiency. Although data flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems, including knowledge-based systems. However, this approach for exhaustively computing data-flow information cannot directly be applied to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequence clause in a knowledge base to optimally order the rules to minimize inference cycles. An algorithm was developed that optimally orders a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, resulting in significantly faster execution. This algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification, and it resulted in a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a completely unordered knowledge base, the improvement is much greater.
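A minimal sketch of producer/consumer rule ordering, assuming each rule lists the facts it consumes and produces (this is not the SHINE algorithm itself, which additionally minimizes inference cycles):

```python
from graphlib import TopologicalSorter

def order_rules(rules):
    """Order forward-chaining rules so producers precede consumers.

    `rules` maps a rule name to (consumed_facts, produced_facts).
    A rule that consumes a fact depends on every rule producing it."""
    producers = {}
    for name, (_, produced) in rules.items():
        for fact in produced:
            producers.setdefault(fact, set()).add(name)
    deps = {name: {p for fact in consumed for p in producers.get(fact, ())}
            for name, (consumed, _) in rules.items()}
    return list(TopologicalSorter(deps).static_order())

# Hypothetical rules: r3 produces "a", r2 turns "a" into "b", r1 consumes "b"
rules = {
    "r1": ({"b"}, {"c"}),
    "r2": ({"a"}, {"b"}),
    "r3": (set(), {"a"}),
}
print(order_rules(rules))  # -> ['r3', 'r2', 'r1']
```

With this ordering, a single forward-chaining pass fires each rule after its inputs exist, instead of looping until quiescence.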
Basic research for the geodynamics program
NASA Technical Reports Server (NTRS)
1991-01-01
The mathematical models of space very long base interferometry (VLBI) observables suitable for least squares covariance analysis were derived, and estimability problems inherent in the space VLBI system were explored, including a detailed rank defect analysis and sensitivity analysis. An important aim is to carry out a comparative analysis of the mathematical models of the ground-based VLBI and space VLBI observables in order to describe the background in detail. Computer programs were developed in order to check the relations, assess errors, and analyze sensitivity. In order to investigate the estimability of different geodetic and geodynamic parameters from the space VLBI observables, the mathematical models for time delay and time delay rate observables of space VLBI were analytically derived along with the partial derivatives with respect to the parameters. Rank defect analysis was carried out both by analytical and numerical testing of linear dependencies between the columns of the normal matrix thus formed. Definite conclusions were formed about the rank defects in the system.
Budget analysis of Escherichia coli at a southern Lake Michigan Beach
Thupaki, P.; Phanikumar, M.S.; Beletsky, D.; Schwab, D.J.; Nevers, M.B.; Whitman, R.L.
2010-01-01
Escherichia coli (EC) concentrations at two beaches impacted by river plume dynamics in southern Lake Michigan were analyzed using three-dimensional hydrodynamic and transport models. The relative importance of various physical and biological processes influencing the fate and transport of EC were examined via budget analysis and a first-order sensitivity analysis of model parameters. The along-shore advective flux of EC (CFU/m²·s) was found to be higher compared to its cross-shore counterpart; however, the sum of diffusive and advective components was of a comparable magnitude in both directions, showing the importance of cross-shore exchange in EC transport. Examination of individual terms in the EC mass balance equation showed that vertical turbulent mixing in the water column dominated the overall EC transport for the summer conditions simulated. Dilution due to advection and diffusion accounted for a large portion of the total EC budget in the nearshore, and the net EC loss rate within the water column (CFU/m³·s) was an order of magnitude smaller compared to the horizontal and vertical transport rates. This result has important implications for modeling EC at recreational beaches; however, the assessment of the magnitude of EC loss rate is complicated due to the strong coupling between vertical exchange and depth-dependent EC loss processes such as sunlight inactivation and settling. Sensitivity analysis indicated that solar inactivation has the greatest impact on EC loss rates. Although these results are site-specific, they clearly bring out the relative importance of various processes involved.
Gilchrist, Elizabeth S; Nesterenko, Pavel N; Smith, Norman W; Barron, Leon P
2015-03-20
There has recently been increased interest in coupling ion chromatography (IC) to high resolution mass spectrometry (HRMS) to enable highly sensitive and selective analysis. Herein, the first comprehensive study focusing on the direct coupling of suppressed IC to HRMS without the need for post-suppressor organic solvent modification is presented. Chromatographic selectivity and added HRMS sensitivity offered by organic solvent-modified IC eluents on a modern hyper-crosslinked polymeric anion-exchange resin (IonPac AS18) are shown using isocratic eluents containing 5-50 mM hydroxide with 0-80% methanol or acetonitrile for a range of low molecular weight anions (<165 Da). Comprehensive experiments on IC thermodynamics over a temperature range between 20-45 °C with the eluent containing up to 60% of acetonitrile or methanol revealed markedly different retention behaviour and selectivity for the selected analytes on the same polymer based ion-exchange resin. Optimised sensitivity with HRMS was achieved with as low as 30-40% organic eluent content. Analytical performance characteristics are presented and compared with other IC-MS based works. This study also presents the first application of IC-HRMS to forensic detection of trace low-order anionic explosive residues in latent human fingermarks. Copyright © 2015 Elsevier B.V. All rights reserved.
High order statistical signatures from source-driven measurements of subcritical fissile systems
NASA Astrophysics Data System (ADS)
Mattingly, John Kelly
1998-11-01
This research focuses on the development and application of high order statistical analyses applied to measurements performed with subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from counting statistics of the introduced source and radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability combined with the enhanced sensitivity of higher order signatures indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation signature identification of weapons components for nuclear disarmament and safeguards applications and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements.
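As a sketch of why higher-order counting statistics carry extra information: for an uncorrelated source the factorial moments of the counts factorize, so departures from that baseline signal correlated fission-chain multiplets. The gating structure and detector model of the actual measurements are omitted here:

```python
import numpy as np

def factorial_moments(counts, orders=(1, 2, 3)):
    """Sample factorial moments E[C(C-1)...(C-r+1)] of gated detector
    counts; successively higher orders are progressively more sensitive
    to correlated (fission-chain) multiplets."""
    counts = np.asarray(counts, dtype=float)
    out = {}
    for r in orders:
        term = np.ones_like(counts)
        for j in range(r):
            term = term * (counts - j)
        out[r] = term.mean()
    return out

# For an uncorrelated (Poisson) source, the r-th factorial moment is lambda**r
rng = np.random.default_rng(0)
m = factorial_moments(rng.poisson(3.0, 200000))
print({r: round(v, 1) for r, v in m.items()})  # near {1: 3.0, 2: 9.0, 3: 27.0}
```

For a multiplying (subcritical) assembly the higher moments exceed these Poisson values, and the excess grows with reactivity.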
Research on fiber-optic cantilever-enhanced photoacoustic spectroscopy for trace gas detection
NASA Astrophysics Data System (ADS)
Chen, Ke; Zhou, Xinlei; Gong, Zhenfeng; Yu, Shaochen; Qu, Chao; Guo, Min; Yu, Qingxu
2018-01-01
We demonstrate a new scheme of cantilever-enhanced photoacoustic spectroscopy, combining a sensitivity-improved fiber-optic cantilever acoustic sensor with a tunable high-power fiber laser, for trace gas detection. The Fabry-Perot interferometer based cantilever acoustic sensor has advantages such as high sensitivity, small size, ease of installation, and immunity to electromagnetic interference. A tunable erbium-doped fiber ring laser with an erbium-doped fiber amplifier is used as the light source for acoustic excitation. In order to improve the sensitivity for photoacoustic signal detection, a first-order longitudinal resonant photoacoustic cell with a resonant frequency of 1624 Hz and a large cantilever with a first resonant frequency of 1687 Hz are designed. The size of the cantilever is 2.1 mm×1 mm, and the thickness is 10 μm. With the wavelength modulation spectroscopy and second-harmonic detection methods, trace ammonia (NH3) has been measured. The gas detection limit (signal-to-noise ratio = 1) near the wavelength of 1522.5 nm is 3 ppb.
Sweetapple, Christine; Fu, Guangtao; Butler, David
2013-09-01
This study investigates sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment, through the use of local and global sensitivity analysis tools, and contributes to an in-depth understanding of wastewater treatment modelling by revealing critical parameters and parameter interactions. One-factor-at-a-time sensitivity analysis is used to screen model parameters and identify those with significant individual effects on three performance indicators: total greenhouse gas emissions, effluent quality and operational cost. Sobol's method enables identification of parameters with significant higher order effects and of particular parameter pairs to which model outputs are sensitive. Use of a variance-based global sensitivity analysis tool to investigate parameter interactions enables identification of important parameters not revealed in one-factor-at-a-time sensitivity analysis. These interaction effects have not been considered in previous studies and thus provide a better understanding of wastewater treatment plant model characterisation. It was found that uncertainty in modelled nitrous oxide emissions is the primary contributor to uncertainty in total greenhouse gas emissions, due largely to the interaction effects of three nitrogen conversion modelling parameters. The higher order effects of these parameters are also shown to be a key source of uncertainty in effluent quality. Copyright © 2013 Elsevier Ltd. All rights reserved.
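A one-factor-at-a-time screen can be sketched as below; the toy surrogate model and parameter names are hypothetical, and the interaction term illustrates why a variance-based method such as Sobol's is also needed:

```python
def oat_screening(model, nominal, deltas):
    """One-factor-at-a-time screening: perturb each parameter about its
    nominal value and record the finite-difference effect on the output.
    Only individual effects are visible; interactions are not."""
    base = model(nominal)
    effects = {}
    for name, delta in deltas.items():
        p = dict(nominal)
        p[name] += delta
        effects[name] = (model(p) - base) / delta
    return effects

# Toy surrogate with an interaction term (hypothetical parameter names)
def model(p):
    return 2.0 * p["k_n2o"] + 0.5 * p["k_cost"] + 3.0 * p["k_n2o"] * p["k_nit"]

nominal = {"k_n2o": 1.0, "k_cost": 1.0, "k_nit": 0.0}
effects = oat_screening(model, nominal, {k: 0.1 for k in nominal})
print({k: round(v, 2) for k, v in effects.items()})
# -> {'k_n2o': 2.0, 'k_cost': 0.5, 'k_nit': 3.0}
```

Each OAT effect mixes the parameter's main effect with interactions evaluated at the other parameters' nominal values, which is why a variance decomposition is needed to separate them.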
Ageing of Insensitive DNAN Based Melt-Cast Explosives
2014-08-01
Samples were aged under a diurnal cycle representative of the MEAO climate; the first test condition was chosen to provide a worst-case scenario. Analysis of the ingredient composition, theoretical maximum density, sensitiveness, and mechanical and thermal properties was performed for the ARX-4027 and ARX-4028 formulations.
A discourse on sensitivity analysis for discretely-modeled structures
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Haftka, Raphael T.
1991-01-01
A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally, but not exclusively, aimed at finite element modeled structures. Topics included are: selection of finite difference step sizes; special considerations for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
Spatial interactions reveal inhibitory cortical networks in human amblyopia.
Wong, Erwin H; Levi, Dennis M; McGraw, Paul V
2005-10-01
Humans with amblyopia have a well-documented loss of sensitivity for first-order, or luminance defined, visual information. Recent studies show that they also display a specific loss of sensitivity for second-order, or contrast defined, visual information; a type of image structure encoded by neurons found predominantly in visual area A18/V2. In the present study, we investigate whether amblyopia disrupts the normal architecture of spatial interactions in V2 by determining the contrast detection threshold of a second-order target in the presence of second-order flanking stimuli. Adjacent flanks facilitated second-order detectability in normal observers. However, in marked contrast, they suppressed detection in each eye of the majority of amblyopic observers. Furthermore, strabismic observers with no loss of visual acuity show a similar pattern of detection suppression. We speculate that amblyopia results in predominantly inhibitory cortical interactions between second-order neurons.
Higher-order mode photonic crystal based nanofluidic sensor
NASA Astrophysics Data System (ADS)
Peng, Wang; Chen, Youping; Ai, Wu
2017-01-01
A higher-order photonic crystal (PC) based nanofluidic sensor, operating at 532 nm, was designed and demonstrated. A systematic, detailed method for designing a PC sensor for a given peak wavelength value (PWV) and specified materials is described. This is the first use of a higher-order mode to design a PC based nanofluidic sensor, and the refractive index (RI) sensitivity of the sensor was verified with the FDTD simulation software from Lumerical. The enhanced electric field of the higher-order mode is mostly confined in the channel area, where it interacts fully with the analytes in the channels. A comparison of RI sensitivity between the fundamental and higher-order modes shows that the higher-order mode achieves 124.5 nm/RIU, much larger than the fundamental mode. The proposed PC based nanofluidic structure opens a new approach for future optofluidic design.
NASA Astrophysics Data System (ADS)
YangDai, Tianyi; Zhang, Li
2016-02-01
Energy dispersive X-ray diffraction (EDXRD) combined with hybrid discriminant analysis (HDA) has been utilized for the first time to classify liquid materials. The XRD spectra of 37 kinds of liquid contraband and daily supplies were obtained using an EDXRD test bed facility. The unique spectra of different samples reveal XRD's capability to distinguish liquid contraband from daily supplies. In order to create a system to detect liquid contraband, the diffraction spectra were subjected to HDA, which is the combination of principal component analysis (PCA) and linear discriminant analysis (LDA). Experiments based on the leave-one-out method demonstrate that HDA is a practical method with higher classification accuracy and lower noise sensitivity than the other methods in this application. The study shows the great capability and potential of the combination of XRD and HDA for liquid contraband classification.
Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...
2014-01-01
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400-600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum number of samples needed is 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient, but less accurate and robust, than quantitative ones.
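The Sobol' first-order indices referred to above can be estimated with a pick-and-freeze Monte Carlo scheme; the sketch below uses a simple additive test model, not SAC-SMA:

```python
import numpy as np

def sobol_first_order(model, n, d, rng):
    """Monte Carlo (pick-and-freeze) estimate of Sobol' first-order
    indices S_i for a model with d independent U(0,1) inputs."""
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze input i at B's values
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Additive model Y = X0 + 2*X1: analytic indices are S0 = 0.2, S1 = 0.8
rng = np.random.default_rng(1)
S = sobol_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1], 20000, 2, rng)
print(np.round(S, 1))  # near [0.2, 0.8]
```

Each index requires n model runs per parameter plus the two base samples, which is why sample counts in the thousands are typical.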
Wagner, Angela; Simmons, Alan N; Oberndorfer, Tyson A; Frank, Guido K W; McCurdy-McKinnon, Danyale; Fudge, Julie L; Yang, Tony T; Paulus, Martin P; Kaye, Walter H
2015-12-30
Recent studies show that higher-order appetitive neural circuitry may contribute to restricted eating in anorexia nervosa (AN) and overeating in bulimia nervosa (BN). The purpose of this study was to determine whether sensitization effects might underlie pathologic eating behavior when a taste stimulus is administered repeatedly. Recovered AN (RAN, n=14) and BN (RBN, n=15) subjects were studied in order to avoid the confounding effects of altered nutritional state. Functional magnetic resonance imaging (fMRI) measured higher-order brain response to repeated tastes of sucrose (caloric) and sucralose (non-caloric). To test sensitization, the neuronal response to the first and second administration was compared. RAN patients demonstrated decreased sensitization to sucrose, in contrast to RBN patients, who displayed the opposite pattern: increased sensitization to sucrose. However, the latter was not as pronounced as in healthy control women (n=13). While both eating disorder subgroups showed increased sensitization to sucralose, the healthy controls revealed decreased sensitization. These findings could reflect, at a neuronal level, the high caloric intake of RBN during binges and the low energy intake of RAN. RAN seem to distinguish between high-energy and low-energy sweet stimuli, while RBN do not. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Uncertainty Quantification of Water Quality in Tamsui River in Taiwan
NASA Astrophysics Data System (ADS)
Kao, D.; Tsai, C.
2017-12-01
In Taiwan, modeling of non-point source pollution is unavoidably associated with uncertainty. The main purpose of this research is to better understand water contamination in the metropolitan Taipei area and to provide a new analysis method for government agencies or companies to establish related control and design measures. In this research, three methods are utilized to carry out the uncertainty analysis step by step with MIKE 21, which is widely used for hydrodynamics and water quality modeling; the study area is the Tamsui River watershed. First, a sensitivity analysis is conducted, which can be used to rank the influence of parameters and variables such as dissolved oxygen, nitrate, ammonia and phosphorus. Then we use first-order error analysis (FOEA) to determine the number of parameters that significantly affect the variability of simulation results. Finally, a state-of-the-art method for uncertainty analysis called the perturbance moment method (PMM) is applied in this research, which is more efficient than Monte Carlo simulation (MCS). For MCS, the calculations may become cumbersome when multiple uncertain parameters and variables are involved. For PMM, three representative points are used for each random variable, and the statistical moments (e.g., mean value, standard deviation) of the output can be represented by the representative points and perturbance moments based on the parallel axis theorem. Under the assumption of independent parameters and variables, calculation time is significantly reduced for PMM as opposed to MCS for comparable modeling accuracy.
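A three-representative-point scheme of the kind PMM uses can be sketched as follows; the specific points and weights here (a Gauss-Hermite-style rule, exact through fifth moments of a normal input) are an assumption, since the abstract does not list them:

```python
import numpy as np

def three_point_moments(model, means, stds):
    """Approximate the output mean and standard deviation by evaluating
    the model at three representative points per input (mu, mu +/-
    sqrt(3)*sigma with weights 2/3, 1/6, 1/6), varying one input at a
    time about the means.  Inputs are assumed independent."""
    base = model(np.array(means, dtype=float))
    mean_out, var_out = base, 0.0
    for i, (m, s) in enumerate(zip(means, stds)):
        vals, wts = [], []
        for point, w in [(m, 2 / 3), (m + np.sqrt(3) * s, 1 / 6),
                         (m - np.sqrt(3) * s, 1 / 6)]:
            x = np.array(means, dtype=float)
            x[i] = point
            vals.append(model(x))
            wts.append(w)
        mi = np.dot(wts, vals)
        vi = np.dot(wts, (np.array(vals) - mi) ** 2)
        mean_out += mi - base          # shift of the mean from input i
        var_out += vi                  # variance contribution of input i
    return mean_out, np.sqrt(var_out)

# Linear toy model Y = 2*X0 + X1: exact std is sqrt(4*0.5**2 + 0.3**2)
mean, std = three_point_moments(lambda x: 2 * x[0] + x[1], [1.0, 2.0], [0.5, 0.3])
print(round(mean, 2), round(std, 3))  # -> 4.0 1.044
```

With p uncertain inputs this costs only 2p+1 model runs, versus the thousands typically needed for a Monte Carlo simulation of comparable accuracy.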
Preoperative identification of a suspicious adnexal mass: a systematic review and meta-analysis.
Dodge, Jason E; Covens, Allan L; Lacchetti, Christina; Elit, Laurie M; Le, Tien; Devries-Aboud, Michaela; Fung-Kee-Fung, Michael
2012-07-01
To systematically review the existing literature in order to determine the optimal strategy for preoperative identification of the adnexal mass suspicious for ovarian cancer. A review of all systematic reviews and guidelines published between 1999 and 2009 was conducted as a first step. After the identification of a 2004 AHRQ systematic review on the topic, searches of MEDLINE for studies published since 2004 were also conducted to update and supplement the evidentiary base. A bivariate, random-effects meta-regression model was used to produce summary estimates of sensitivity and specificity and to plot summary ROC curves with 95% confidence regions. Four meta-analyses and 53 primary studies were included in this review. The diagnostic performance of each technology was compared and contrasted based on the summary data on sensitivity and specificity obtained from the meta-analysis. Results suggest that 3D ultrasonography has both a higher sensitivity and specificity when compared to 2D ultrasound. Established morphological scoring systems also performed with respectable sensitivity and specificity, each with equivalent diagnostic competence. Explicit scoring systems did not perform as well as other diagnostic testing methods. Assessment of an adnexal mass by colour Doppler technology was neither as sensitive nor as specific as simple ultrasonography. Of the three imaging modalities considered, MRI appeared to perform the best, although results were not statistically different from CT. PET did not perform as well as either MRI or CT. The measurement of the CA-125 tumour marker appears to be less reliable than other available assessment methods. The best available evidence was collected and included in this rigorous systematic review and meta-analysis. The abundant evidentiary base provided the context and direction for the diagnosis of early-stage ovarian cancer. Copyright © 2012 Elsevier Inc. All rights reserved.
A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na
2013-01-01
We propose a new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system. In the process of generating the key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance its security. The algorithm is evaluated through a series of security analyses, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of a large key space and high security for practical image encryption.
On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods
NASA Technical Reports Server (NTRS)
Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.
2003-01-01
Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
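The abstract does not reproduce the paper's scheme; as a rough sketch of the underlying idea, a first-order Taylor expansion with analytically known mean can serve as a control variate in plain Monte Carlo. The Gaussian input, the test function, and all names below are illustrative assumptions:

```python
import numpy as np

def cv_mean_estimate(f, dfdx, mu, sigma, n, seed=0):
    """Monte Carlo estimate of E[f(X)], X ~ N(mu, sigma^2), using the
    first-order Taylor expansion about mu as a control variate."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, n)
    fx = f(x)
    lin = f(mu) + dfdx(mu) * (x - mu)   # E[lin] = f(mu), known exactly
    corrected = fx - lin + f(mu)        # same mean as fx, smaller spread
    return fx.mean(), corrected.mean(), fx.std(), corrected.std()
```

For smooth f and modest input spread, the corrected samples vary only through the neglected second-order term, so a single cheap derivative evaluation can buy an order-of-magnitude accuracy gain, consistent with the improvement the abstract reports.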
Applying causal mediation analysis to personality disorder research.
Walters, Glenn D
2018-01-01
This article is designed to address fundamental issues in the application of causal mediation analysis to research on personality disorders. Causal mediation analysis is used to identify mechanisms of effect by testing variables as putative links between the independent and dependent variables. As such, it would appear to have relevance to personality disorder research. It is argued that proper implementation of causal mediation analysis requires that investigators take several factors into account. These factors are discussed under 5 headings: variable selection, model specification, significance evaluation, effect size estimation, and sensitivity testing. First, care must be taken when selecting the independent, dependent, mediator, and control variables for a mediation analysis. Some variables make better mediators than others and all variables should be based on reasonably reliable indicators. Second, the mediation model needs to be properly specified. This requires that the data for the analysis be prospectively or historically ordered and possess proper causal direction. Third, it is imperative that the significance of the identified pathways be established, preferably with a nonparametric bootstrap resampling approach. Fourth, effect size estimates should be computed or competing pathways compared. Finally, investigators employing the mediation method are advised to perform a sensitivity analysis. Additional topics covered in this article include parallel and serial multiple mediation designs, moderation, and the relationship between mediation and moderation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
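As an illustration of the nonparametric bootstrap approach the article recommends for significance evaluation, a minimal percentile-bootstrap test of the indirect effect a·b might look like the sketch below; the simple linear model and all variable names are assumptions of this sketch:

```python
import numpy as np

def indirect_effect(x, m, y):
    # a path: regress mediator m on x; b path: regress y on m controlling for x
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect a*b."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.array([indirect_effect(x[i], m[i], y[i])
                    for i in (rng.integers(0, n, n) for _ in range(n_boot))])
    return np.percentile(est, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

A confidence interval that excludes zero is the usual bootstrap criterion for a significant mediated pathway.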
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Kim, Yong H.
1995-01-01
A study is made of the effect of mesh distortion on the accuracy of transverse shear stresses and their first-order and second-order sensitivity coefficients in multilayered composite panels subjected to mechanical and thermal loads. The panels are discretized by using a two-field degenerate solid element, with the fundamental unknowns consisting of both displacement and strain components, and the displacement components having a linear variation throughout the thickness of the laminate. A two-step computational procedure is used for evaluating the transverse shear stresses. In the first step, the in-plane stresses in the different layers are calculated at the numerical quadrature points for each element. In the second step, the transverse shear stresses are evaluated by using piecewise integration, in the thickness direction, of the three-dimensional equilibrium equations. The same procedure is used for evaluating the sensitivity coefficients of transverse shear stresses. Numerical results are presented showing no noticeable degradation in the accuracy of the in-plane stresses and their sensitivity coefficients with mesh distortion. However, such degradation is observed for the transverse shear stresses and their sensitivity coefficients. The standard of comparison is taken to be the exact solution of the three-dimensional thermoelasticity equations of the panel.
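The second step of the procedure above, piecewise integration of the 3-D equilibrium equation dτxz/dz = -(dσx/dx + dτxy/dy) through the thickness, can be sketched numerically as follows; the trapezoidal rule, the sign convention, and the names are assumptions of this sketch, not the element formulation of the paper:

```python
import numpy as np

def transverse_shear_stress(z, dsigx_dx, dtauxy_dy):
    """Integrate the 3-D equilibrium equation through the thickness:
    tau_xz(z) = -integral from z_bottom to z of (dsigma_x/dx + dtau_xy/dy) dz',
    so tau_xz vanishes on the traction-free bottom surface."""
    integrand = np.asarray(dsigx_dx) + np.asarray(dtauxy_dy)
    segments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)  # trapezoids
    return -np.concatenate([[0.0], np.cumsum(segments)])
```

With layerwise in-plane stress derivatives sampled at through-thickness stations, the cumulative sum reproduces the piecewise integration described in the abstract.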
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques are used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)
2001-01-01
A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system, based on a neural network model of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.
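In the paper the sensitivities come from a neural-network fit; as a simpler stand-in illustrating the analytic check mentioned above, the instantaneous sensitivities (the Jacobian of the Lorenz-63 tendencies) can be compared against central finite differences. The parameter values and function names below are standard but are assumptions of this sketch:

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Tendencies of the classic low-order Lorenz circulation model
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def sensitivities_fd(f, s, eps=1e-6):
    """Instantaneous sensitivities J[i, j] = d f_i / d s_j by central differences."""
    n = len(s)
    jac = np.zeros((n, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = eps
        jac[:, j] = (f(s + step) - f(s - step)) / (2.0 * eps)
    return jac
```

At a given state the finite-difference matrix matches the analytic Jacobian, which is the kind of ground truth the Lorenz model provides for validating statistically estimated sensitivities.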
Pourahmad, Saeedeh; Hafizi-Rastani, Iman; Khalili, Hosseinali; Paydar, Shahram
2016-10-17
Generally, traumatic brain injury (TBI) patients do not have a stable condition, particularly after the first week of TBI. Hence, identifying the attributes that drive prognosis through a prediction model is of utmost importance, since it helps caregivers with treatment-decision options or prepares the relatives for the most likely outcome. This study attempted to determine and order the attributes for prognostic prediction in TBI patients, based on early clinical findings. A hybrid method was employed, which combines a decision tree (DT) and an artificial neural network (ANN) in order to improve the modeling process. The DT approach was applied as the initial analysis of the network architecture to increase accuracy in prediction. Afterwards, the ANN structure was mapped from the initial DT based on a part of the data. Subsequently, the designed network was trained and validated on the remaining data. A 5-fold cross-validation method was applied to train the network. The area under the receiver operating characteristic (ROC) curve, sensitivity, specificity, and accuracy rate were utilized as performance measures. The important attributes were then determined from the trained network using two methods: change of mean squared error (MSE) and sensitivity analysis (SA). The hybrid method offered better results compared to the DT method: an accuracy rate of 86.3% vs. 82.2%, a sensitivity of 55.1% vs. 47.6%, a specificity of 93.6% vs. 91.1%, and an area under the ROC curve of 0.705 vs. 0.695 were achieved for the hybrid method and DT, respectively. However, the attributes' ordering by the DT method was more consistent with the clinical literature. The combination of different modeling methods can enhance their performance, although it may create some complexities in computation and interpretation. The outcome of the present study could deliver some useful hints for prognostic prediction on the basis of early clinical findings for TBI patients.
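The performance measures quoted above all derive from a binary confusion matrix; a minimal sketch of their computation (the function and variable names are assumptions of this sketch):

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy from binary labels (1 = positive)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),        # true-positive rate
        "specificity": tn / (tn + fp),        # true-negative rate
        "accuracy": (tp + tn) / len(y_true),  # overall agreement
    }
```

The gap the study reports between high specificity (93.6%) and modest sensitivity (55.1%) is exactly the trade-off these two ratios expose.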
Two-dimensional liquid chromatography system for online top-down mass spectrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Zhixin; Zhao, Rui; Tolic, Nikola
2010-10-01
An online metal-free weak cation exchange-hydrophilic interaction liquid chromatography/reversed phase liquid chromatography (WCX-HILIC/RPLC) system has been developed for sensitive high-throughput top-down mass spectrometry. The system was tested by analyzing posttranslational modifications (PTMs) of core histones, with a focus on histone H4. Using ~24 μg of core histones (H4, H2B, H2A and H3) purified from human fibroblasts, 41 H4 isoforms were identified, with the type and locations of PTMs unambiguously mapped for 20 of these variants. Compared to corresponding offline studies reported previously, the online WCX-HILIC/RPLC platform offers a significant improvement in sensitivity, with several orders of magnitude reduction in sample requirements and a reduction in the overall analysis time. To the best of our knowledge, this study represents the first online two-dimensional (2D) LC-MS/MS characterization of a core histone mixture at the intact protein level.
Analysis and Implementation of Methodologies for the Monitoring of Changes in Eye Fundus Images
NASA Astrophysics Data System (ADS)
Gelroth, A.; Rodríguez, D.; Salvatelli, A.; Drozdowicz, B.; Bizai, G.
2011-12-01
We present a support system for change detection in fundus images of the same patient taken at different time intervals. This process is useful for monitoring pathologies lasting for long periods of time, as ophthalmologic pathologies usually are. We propose a flow of preprocessing, processing and postprocessing applied to a set of images selected from a public database, presenting pathological advances. A test interface was developed to select the images to be compared, apply the different methods developed, and display the results. We measure the system performance in terms of sensitivity, specificity and computation time. We have obtained good results, higher than 84% for the first two parameters and processing times lower than 3 seconds for 512x512 pixel images. For the specific case of detection of changes associated with bleeding, the system responds with sensitivity and specificity over 98%.
Understanding Organics in Meteorites and the Pre-Biotic Environment
NASA Technical Reports Server (NTRS)
Zare, Richard N.
2003-01-01
(1) Refinement of the analytic capabilities of our experiment via characterization of molecule-specific response and the effects upon analysis of the type of sample under investigation; (2) Measurement of polycyclic aromatic hydrocarbons (PAHs) with high sensitivity and spatial resolution within extraterrestrial samples; (3) Investigation of the interstellar reactions of PAHs via the analysis of species formed in systems modeling dust grains and ices; (4) Investigations into the potential role of PAHs in prebiotic and early biotic chemistry via photoreactions of PAHs under simulated prebiotic Earth conditions. To meet these objectives, we use microprobe laser-desorption, laser-ionization mass spectrometry (μL²MS), which is a sensitive, selective, and spatially resolved technique for detection of aromatic compounds. Appendix A presents a description of the μL²MS technique. The initial grant proposal was for a three-year funding period, while the award was given for a one-year interim period. Because of this change in time period, emphasis was shifted from the first research goal, which was more development-oriented, in order to focus more on the other analysis-oriented goals. The progress made on each of the four research areas is given below.
Free energy and phase transition of the matrix model on a plane wave
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadizadeh, Shirin; Ramadanovic, Bojan; Semenoff, Gordon W.
2005-03-15
It has recently been observed that the weakly coupled plane-wave matrix model has a density of states which grows exponentially at high energy. This implies that the model has a phase transition. The transition appears to be of first order. However, its exact nature is sensitive to interactions. In this paper, we analyze the effect of interactions by computing the relevant parts of the effective potential for the Polyakov loop operator in the finite temperature plane-wave matrix model to three-loop order. We show that the phase transition is indeed of first order. We also compute the correction to the Hagedorn temperature to two-loop order.
Ethical Sensitivity in Nursing Ethical Leadership: A Content Analysis of Iranian Nurses Experiences
Esmaelzadeh, Fatemeh; Abbaszadeh, Abbas; Borhani, Fariba; Peyrovi, Hamid
2017-01-01
Background: Considering that many nursing actions affect other people’s health and life, sensitivity to ethics in nursing practice is highly important for ethical leaders as role models. Objective: The study aims to explore ethical sensitivity in ethical nursing leaders in Iran. Method: This was a qualitative study based on conventional content analysis, conducted in 2015. Data were collected using deep, semi-structured interviews with 20 Iranian nurses. The participants were chosen using purposive sampling. Data were analyzed using conventional content analysis. In order to increase the accuracy and integrity of the data, Lincoln and Guba's criteria were considered. Results: Fourteen sub-categories and five main categories emerged. The main categories consisted of sensitivity to care, sensitivity to errors, sensitivity to communication, sensitivity in decision making and sensitivity to ethical practice. Conclusion: Ethical sensitivity appears to be a valuable attribute for ethical nurse leaders, having an important effect on various aspects of professional practice and helping the development of ethics in nursing practice. PMID:28584564
Sadeghi Ravesh, Mohammad Hassan; Ahmadi, Hassan; Zehtabian, Gholamreza
2011-08-01
Desertification, land degradation in arid, semi-arid, and dry sub-humid regions, is a global environmental problem. With respect to the increasing importance of desertification and its complexity, attention to the optimal de-desertification alternatives is essential. Therefore, this work presents an analytic hierarchy process (AHP) method to objectively select the optimal de-desertification alternatives based on the results of interviews with experts in the Khezr Abad region, central Iran, as the case study. This model was used in the Yazd Khezr Abad region to evaluate its efficiency in identifying better alternatives related to personal and environmental situations. The obtained results indicate that the criterion "proportion and adaptation to the environment", with a weighted average of 33.6%, is the most important criterion from the experts' viewpoint, while prevention of unsuitable land use and conversion, with a 22.88% mean weight, and vegetation cover development and reclamation, with a 21.9% mean weight, are recognized as the most important de-desertification alternatives in the region. Finally, a sensitivity analysis is performed in detail by varying the objective factor decision weight, the priority weight of subjective factors, and the gain factors. After the fulfillment of the sensitivity analysis and determination of the most sensitive criteria and alternatives, the former classification and ranking of alternatives did not change substantially: the unsuitable land use prevention alternative, with a preference degree of 22.7%, was still first in the order of priority. The final priority of the livestock grazing control alternative was exchanged with the alternative of modification of ground water harvesting.
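AHP derives criterion weights such as those quoted above as the principal eigenvector of a pairwise comparison matrix. A minimal sketch (the example matrix in the test is hypothetical, not the study's survey data):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalized principal eigenvector of a
    positive, reciprocal pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    principal = np.abs(principal)   # eigenvector sign is arbitrary
    return principal / principal.sum()
```

For a perfectly consistent matrix (entries w_i / w_j) the procedure recovers the underlying weights exactly; in practice the deviation of the principal eigenvalue from n gives the usual AHP consistency check.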
Anderson localization of shear waves observed by magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Papazoglou, S.; Klatt, D.; Braun, J.; Sack, I.
2010-07-01
In this letter we present for the first time an experimental investigation of shear wave localization using motion-sensitive magnetic resonance imaging (MRI). Shear wave localization was studied in gel phantoms containing arrays of randomly positioned parallel glass rods. The phantoms were exposed to continuous harmonic vibrations in a frequency range from 25 to 175 Hz, yielding wavelengths on the order of the elastic mean free path, i.e. the Ioffe-Regel criterion of Anderson localization was satisfied. The experimental setup was further chosen such that purely shear horizontal waves were induced to avoid effects due to mode conversion and pressure waves. Analysis of the distribution of shear wave intensity in experiments and simulations revealed a significant deviation from Rayleigh statistics indicating that shear wave energy is localized. This observation is further supported by experiments on weakly scattering samples exhibiting Rayleigh statistics and an analysis of the multifractality of wave functions. Our results suggest that motion-sensitive MRI is a promising tool for studying Anderson localization of time-harmonic shear waves, which are increasingly used in dynamic elastography.
An analysis of the transit times of TrES-1b
NASA Astrophysics Data System (ADS)
Steffen, Jason H.; Agol, Eric
2005-11-01
The presence of a second planet in a known, transiting-planet system will cause the time between transits to vary. These variations can be used to constrain the orbital elements and mass of the perturbing planet. We analyse the set of transit times of the TrES-1 system given in Charbonneau et al. We find no convincing evidence for a second planet in the TrES-1 system from those data. By further analysis, we constrain the mass that a perturbing planet could have as a function of the semi-major axis ratio of the two planets and the eccentricity of the perturbing planet. Near low-order, mean-motion resonances (within ~1 per cent fractional deviation), we find that a secondary planet must generally have a mass comparable to or less than the mass of the Earth - showing that these data are the first to have sensitivity to sub-Earth-mass planets. We compare the sensitivity of this technique to the mass of the perturbing planet with future, high-precision radial velocity measurements.
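The observable in such an analysis is the deviation of each measured transit time from a best-fit linear ephemeris; a minimal sketch of extracting those residuals (names and the synthetic numbers in the test are assumptions of this sketch):

```python
import numpy as np

def ttv_residuals(epochs, times):
    """Fit the linear ephemeris T(E) = T0 + P * E and return the timing
    residuals, i.e. the transit timing variations a perturber would induce."""
    epochs = np.asarray(epochs, dtype=float)
    times = np.asarray(times, dtype=float)
    period, t0 = np.polyfit(epochs, times, 1)   # slope = period, intercept = T0
    return times - (t0 + period * epochs)
```

A strictly periodic planet leaves residuals consistent with zero; a perturbing companion, especially near a low-order mean-motion resonance, imprints a coherent pattern on them.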
Updated Chemical Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
2005-01-01
An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
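The sensitivity coefficients LSENS computes satisfy the forward sensitivity equations obtained by differentiating the ODE system with respect to a rate parameter. A minimal sketch for a single reaction dy/dt = -k·y, where s = dy/dk obeys ds/dt = -k·s - y; the RK4 integrator and the names are assumptions of this sketch, not LSENS internals:

```python
import numpy as np

def decay_with_sensitivity(k, y0, t_end, n_steps=200):
    """Integrate y' = -k*y together with its forward sensitivity
    s = dy/dk, which satisfies s' = -k*s - y, using classic RK4."""
    def rhs(state):
        y, s = state
        return np.array([-k * y, -k * s - y])

    h = t_end / n_steps
    state = np.array([y0, 0.0])   # s(0) = 0: the initial condition is k-independent
    for _ in range(n_steps):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * h * k1)
        k3 = rhs(state + 0.5 * h * k2)
        k4 = rhs(state + h * k3)
        state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state   # [y(t_end), dy/dk at t_end]
```

The analytic solution y = y0·e^(-kt), s = -t·y0·e^(-kt) provides an exact check on the augmented integration.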
(U) Analytic First and Second Derivatives of the Uncollided Leakage for a Homogeneous Sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favorite, Jeffrey A.
2017-04-26
The second-order adjoint sensitivity analysis methodology (2nd-ASAM), developed by Cacuci, has been applied to derive second derivatives of a response with respect to input parameters for uncollided particles in an inhomogeneous transport problem. In this memo, we present an analytic benchmark for verifying the derivatives of the 2nd-ASAM. The problem is a homogeneous sphere, and the response is the uncollided total leakage. This memo does not repeat the formulas given in Ref. 2. We are preparing a journal article that will include the derivation of Ref. 2 and the benchmark of this memo.
Partial pressure analysis in space testing
NASA Technical Reports Server (NTRS)
Tilford, Charles R.
1994-01-01
For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas analyzers or partial pressure analyzers (PPAs) are commonly used for this purpose. These are mass spectrometer-type instruments, most commonly employing quadrupole filters. These instruments can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error can include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. However, for most instruments, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics for different instruments and gases, operating parameters are recommended to minimize some errors, and calibration procedures are described that can detect and/or correct other errors.
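The relative-sensitivity correction discussed above amounts to dividing the measured ion current by the product of the instrument's base (e.g., N2) sensitivity and a gas-specific relative sensitivity factor. In this sketch the factors are illustrative placeholders, not calibration data or handbook values:

```python
# Illustrative relative-sensitivity factors (placeholders, not handbook values)
REL_SENS = {"N2": 1.0, "Ar": 1.2, "He": 0.18}

def partial_pressure(ion_current_amp, gas, n2_sensitivity_amp_per_pa):
    """Convert a PPA ion current (A) to partial pressure (Pa) for a given gas."""
    return ion_current_amp / (n2_sensitivity_amp_per_pa * REL_SENS[gas])
```

If the relative factor used differs from the instrument's true value by 10x, the inferred partial pressure is wrong by the same factor, which is the order-of-magnitude error mode the abstract warns about.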
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronics with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors, achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications.
Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate based UQ approach is developed, used and compared to the performance of the KL approach and the brute force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA feasible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. In this dissertation a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely the Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models).
Ultimately, the proposed algorithms were applied to perform UQ and DA for assembly level (CASL progression problem number 6) and core wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled using VERA-CS, which consists of several multi-physics coupled models. The analysis and algorithms developed in this dissertation were implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
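The subspace construction at the heart of this approach can be illustrated with a snapshot-SVD (KL/POD) sketch: collect model outputs as columns and keep the leading left singular vectors that capture a prescribed energy fraction. The energy criterion and all names here are generic assumptions, not the dissertation's algorithm:

```python
import numpy as np

def kl_basis(snapshots, energy=0.999):
    """Return the leading left singular vectors of the snapshot matrix
    (n_dof x n_samples) capturing `energy` of the total variance."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    rank = int(np.searchsorted(frac, energy)) + 1
    return u[:, :rank]

def project(basis, field):
    # Reduced-order representation of a field and its reconstruction
    coeffs = basis.T @ field
    return coeffs, basis @ coeffs
```

Once the influential degrees of freedom are captured by such a basis, forward UQ sampling and inverse analyses operate on the low-dimensional coefficients instead of the full state.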
New infrastructure for studies of transmutation and fast systems concepts
NASA Astrophysics Data System (ADS)
Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria
2017-09-01
In this work we report initial studies on a low power Accelerator-Driven System as a possible experimental facility for the measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.
A low power ADS for transmutation studies in fast systems
NASA Astrophysics Data System (ADS)
Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria
2017-12-01
In this work, we report studies on a fast low power accelerator driven system model as a possible experimental facility, focusing on its capabilities in terms of measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.
1989-03-03
address global parameter space mapping issues for first order differential equations. The rigorous criteria for the existence of exact lumping by linear projective transformations were also established.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertz, P.R.
Fluorescence spectroscopy is a highly sensitive and selective tool for the analysis of complex systems. In order to investigate the efficacy of several steady state and dynamic techniques for the analysis of complex systems, this work focuses on two types of complex, multicomponent samples: petrolatums and coal liquids. It is shown in these studies that dynamic, fluorescence lifetime-based measurements provide enhanced discrimination between complex petrolatum samples. Additionally, improved quantitative analysis of multicomponent systems is demonstrated via incorporation of organized media in coal liquid samples. This research provides the first systematic studies of (1) multifrequency phase-resolved fluorescence spectroscopy for dynamic fluorescence spectral fingerprinting of complex samples, and (2) the incorporation of bile salt micellar media to improve accuracy and sensitivity for characterization of complex systems. In the petroleum studies, phase-resolved fluorescence spectroscopy is used to combine spectral and lifetime information through the measurement of phase-resolved fluorescence intensity. The intensity is collected as a function of excitation and emission wavelengths, angular modulation frequency, and detector phase angle. This multidimensional information enhances the ability to distinguish between complex samples with similar spectral characteristics. Examination of the eigenvalues and eigenvectors from factor analysis of phase-resolved and steady state excitation-emission matrices, using chemometric methods of data analysis, confirms that phase-resolved fluorescence techniques offer improved discrimination between complex samples as compared with conventional steady state methods.
First results of GERDA Phase II and consistency with background models
NASA Astrophysics Data System (ADS)
Agostini, M.; Allardt, M.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Baudis, L.; Bauer, C.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode1, T.; Borowicz, D.; Brudanin, V.; Brugnera, R.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; D'Andrea, V.; Demidova, E. V.; Di Marco, N.; Domula, A.; Doroshkevich, E.; Egorov, V.; Falkenstein, R.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Gooch, C.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Hakenmüller, J.; Hegai, A.; Heisel, M.; Hemmer, S.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Janicskó Csáthy, J.; Jochum, J.; Junker, M.; Kazalov, V.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Kish, A.; Klimenko, A.; Kneißl, R.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Liao, H. Y.; Lindner, M.; Lippi, I.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Majorovits, B.; Maneschg, W.; Medinaceli, E.; Miloradovic, M.; Mingazheva, R.; Misiaszek, M.; Moseev, P.; Nemchenok, I.; Palioselitis, D.; Panas, K.; Pandola, L.; Pelczar, K.; Pullia, A.; Riboldi, S.; Rumyantseva, N.; Sada, C.; Salamida, F.; Salathe, M.; Schmitt, C.; Schneider, B.; Schönert, S.; Schreiner, J.; Schulz, O.; Schütz, A.-K.; Schwingenheuer, B.; Selivanenko, O.; Shevzik, E.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Vanhoefer, L.; Vasenko, A. A.; Veresnikova, A.; von Sturm, K.; Wagner, V.; Wegmann, A.; Wester, T.; Wiesinger, C.; Wojcik, M.; Yanovich, E.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zuber, K.; Zuzel, G.
2017-01-01
GERDA (GERmanium Detector Array) is an experiment searching for neutrinoless double beta decay (0νββ) in 76Ge, located at the Laboratori Nazionali del Gran Sasso of INFN (Italy). GERDA operates bare high-purity germanium detectors submerged in liquid argon (LAr). Phase II of data taking started in December 2015 and is currently ongoing. In Phase II, 35 kg of germanium detectors enriched in 76Ge, including thirty newly produced Broad Energy Germanium (BEGe) detectors, are operating to reach an exposure of 100 kg·yr within about three years of data taking. The design goal of Phase II is to reduce the background by one order of magnitude in order to reach a sensitivity of T_1/2^0ν = O(10^26) yr. To achieve the necessary background reduction, the setup was complemented with a LAr veto. Analysis of the Phase II background spectrum demonstrates consistency with the background models. Furthermore, the 226Ra and 232Th contamination levels are consistent with screening results. In the first Phase II data release we found no hint of a 0νββ decay signal and place a limit on this process of T_1/2^0ν > 5.3 × 10^25 yr (90% C.L., sensitivity 4.0 × 10^25 yr). First results of GERDA Phase II are presented.
Image encryption based on a delayed fractional-order chaotic logistic system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na
2012-05-01
A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.
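The key-stream idea described above can be illustrated with a much-simplified sketch: an integer-order delayed logistic map stands in for the paper's delayed fractional-order system, and the parameters, delay, and byte quantization are all hypothetical choices, not the authors' scheme.

```python
# Hedged sketch: an integer-order delayed logistic map standing in for
# the paper's delayed fractional-order chaotic system. mu, tau, x0 and
# the byte quantization are illustrative choices only.

def keystream(n, mu=3.99, tau=3, x0=0.3):
    """Generate n key bytes from a delayed logistic iteration."""
    xs = [x0] * (tau + 1)
    out = []
    for _ in range(n):
        # logistic update driven by the state delayed by tau steps;
        # the mod keeps the iterate in [0, 1)
        x_new = (mu * xs[-1] * (1 - xs[-1 - tau])) % 1.0
        xs.append(x_new)
        out.append(int(x_new * 256) % 256)  # quantize to one byte
    return out

ks = keystream(64)
cipher = [p ^ k for p, k in zip(b"secret image bytes", ks)]
plain = bytes(p ^ k for p, k in zip(cipher, ks))  # decrypt: XOR again
```

Because the map is chaotic, a tiny change in the initial condition x0 yields a completely different key stream, which is the property probed by a key sensitivity analysis.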
P.B., Mohite; R.B., Pandhare; S.G., Khanage
2012-01-01
Purpose: Lamivudine is a cytosine analogue and zidovudine a cytidine analogue; both are used as antiretroviral agents. Both drugs are available in tablet dosage forms with doses of 150 mg for LAM and 300 mg for ZID, respectively. Method: The method employed is based on first-order derivative spectroscopy. Wavelengths of 279 nm and 300 nm were selected for the estimation of lamivudine and zidovudine, respectively, from the first-order derivative spectra. The concentrations of both drugs were determined by the proposed method. The results of the analysis were validated statistically and by recovery studies as per ICH guidelines. Result: Both drugs obey Beer's law in the concentration range 10-50 μg mL-1 for LAM and ZID, with regression coefficients of 0.9998 and 0.9999, intercepts of -0.0677 and -0.0043, and slopes of 0.0457 and 0.0391 for LAM and ZID, respectively. The accuracy and reproducibility results are close to 100% with 2% RSD. Conclusion: A simple, accurate, precise, sensitive and economical procedure for the simultaneous estimation of lamivudine and zidovudine in tablet dosage form has been developed. PMID:24312779
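The quantitation step above is an inversion of Beer's law calibration lines; a minimal sketch follows, using the slopes and intercepts reported in the abstract (the derivative readings themselves are hypothetical illustrations):

```python
# Hedged sketch: recovering concentration from a first-order derivative
# absorbance reading via the reported calibration lines (Beer's law).
# The slopes/intercepts come from the abstract; the readings are invented.

def concentration(reading, slope, intercept):
    """Invert the linear calibration: reading = slope * conc + intercept."""
    return (reading - intercept) / slope

# Calibration constants reported for LAM (279 nm) and ZID (300 nm)
LAM = dict(slope=0.0457, intercept=-0.0677)
ZID = dict(slope=0.0391, intercept=-0.0043)

# Hypothetical derivative readings within the 10-50 ug/mL Beer's law range
c_lam = concentration(1.30, **LAM)  # ug/mL
c_zid = concentration(0.74, **ZID)  # ug/mL
```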
The gravitational waves from the first-order phase transition with a dimension-six operator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Rong-Gen; Wang, Shao-Jiang; Sasaki, Misao, E-mail: cairg@itp.ac.cn, E-mail: misao@yukawa.kyoto-u.ac.jp, E-mail: schwang@itp.ac.cn
We investigate in detail the gravitational waves (GWs) from the first-order phase transition (PT) in the standard model of particle physics extended with a dimension-six operator, which is capable of exhibiting the recently discovered slow first-order PT in addition to the usually studied fast first-order PT. To simplify the discussion, it is sufficient to work with a toy model with the sextic term, and we propose a unified description for both slow and fast first-order PTs. We next study the full one-loop effective potential of the model with fixed/running renormalization-group (RG) scales. Compared to the prediction of the GW energy density spectrum at the fixed RG scale, we find that the presence of a running RG scale can amplify the peak amplitude by about one order of magnitude while shifting the peak frequency to the lower frequency regime, and the promising regime of detection within the sensitivity ranges of various space-based GW detectors shrinks down to a lower cut-off value of the sextic term than previously expected.
Phase sensitive diffraction sensor for high sensitivity refractive index measurement
NASA Astrophysics Data System (ADS)
Kumawat, Nityanand; Varma, Manoj; Kumar, Sunil
2018-02-01
In this study, a diffraction-based sensor has been developed for biomolecular sensing applications and for performing assays in real time. A diffraction grating fabricated on a glass substrate produced diffraction patterns both in transmission and reflection when illuminated by a laser diode. We used the zeroth order I(0,0) as the reference and the first order I(0,1) as the signal channel and conducted ratiometric measurements that reduced noise by more than 50 times. The ratiometric approach resulted in very simple instrumentation with very high sensitivity. In the past, we have shown refractive index measurements both for bulk and surface adsorption using the diffractive self-referencing approach. In the current work we extend the same concept to higher diffraction orders. We considered orders I(0,1) and I(1,1) and performed ratiometric measurements I(0,1)/I(1,1) to eliminate the common-mode fluctuations. Since orders I(0,1) and I(1,1) behaved opposite to each other, the resulting ratio signal amplitude increased more than twofold compared to our previous results. As a proof of concept we used different salt concentrations in DI water. The increased signal amplitude and an improved fluid injection system resulted in more than a 4-fold improvement in the detection limit, giving a limit of detection of 1.3×10^-7 refractive index units (RIU) compared to our previous results. The improved refractive index sensitivity will benefit high-sensitivity, label-free biosensing applications in a very cost-effective and simple experimental setup.
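The common-mode rejection behind the ratiometric measurement can be sketched numerically; this toy model, with illustrative order intensities and a shared multiplicative laser-power drift, is not the authors' instrument code:

```python
# Hedged sketch: why the ratio of two diffraction orders rejects
# common-mode fluctuations. All numbers are illustrative.
import random

random.seed(0)
signal_01, signal_11 = 1.0, 0.5  # nominal order intensities (a.u.)

ratios = []
for _ in range(1000):
    drift = 1.0 + 0.05 * random.uniform(-1, 1)  # shared laser-power drift
    i01 = signal_01 * drift
    i11 = signal_11 * drift
    ratios.append(i01 / i11)

# The multiplicative drift cancels exactly in the ratio
assert all(abs(r - 2.0) < 1e-12 for r in ratios)
```

In the actual sensor the two orders respond oppositely to the analyte, so the analyte signal survives the division while the drift does not.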
NASA Astrophysics Data System (ADS)
Newman, James Charles, III
1997-10-01
The first two steps in the development of an integrated multidisciplinary design optimization procedure capable of analyzing the nonlinear fluid flow about geometrically complex aeroelastic configurations have been accomplished in the present work. For the first step, a three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed. The advantage of unstructured grids, when compared with a structured-grid approach, is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the time-dependent, nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional cases and by a Gauss-Seidel algorithm for the three-dimensional cases; at steady state, similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Various surface parameterization techniques have been employed in the current study to control the shape of the design surface. Once this surface has been deformed, the interior volume of the unstructured grid is adapted by considering the mesh as a system of interconnected tension springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR, an advanced automatic-differentiation software tool.
To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, sensitivity analysis and shape optimization have been performed for several two- and three-dimensional cases. In two dimensions, an initially symmetric NACA-0012 airfoil and a high-lift multielement airfoil were examined. For the three-dimensional configurations, an initially rectangular wing with uniform NACA-0012 cross-sections was optimized; in addition, a complete Boeing 747-200 aircraft was studied. Furthermore, the current study also examines the effect of inconsistency in the order of spatial accuracy between the nonlinear fluid and linear shape sensitivity equations. The second step was to develop a computationally efficient, high-fidelity, integrated static aeroelastic analysis procedure. To accomplish this, a structural analysis code was coupled with the aforementioned unstructured-grid aerodynamic analysis solver. The use of an unstructured grid scheme for the aerodynamic analysis enhances the interaction compatibility with the wing structure. The structural analysis utilizes finite elements to model the wing so that accurate structural deflections may be obtained. In the current work, parameters have been introduced to control the interaction of the computational fluid dynamics and structural analyses; these control parameters permit extremely efficient static aeroelastic computations. To demonstrate and evaluate this procedure, static aeroelastic analysis results for a flexible wing in low subsonic, high subsonic (subcritical), transonic (supercritical), and supersonic flow conditions are presented.
Zhou, Jian; Huang, Lijun; Fu, Zhongyuan; Sun, Fujun; Tian, Huiping
2016-07-07
We simulated an efficient method for a sensor array of high-sensitivity single-slot photonic crystal nanobeam cavities (PCNCs) on a silicon platform. By combining a well-designed photonic crystal waveguide (PhCW) filter with an elaborate single-slot PCNC, a specific high-order resonant mode was filtered for sensing. A carefully designed 1 × 3 beam splitter was implemented to split channels and integrate three sensors to realize microarrays. By applying the three-dimensional finite-difference time-domain (3D-FDTD) method, the calculated sensitivities were S₁ = 492 nm/RIU, S₂ = 244 nm/RIU, and S₃ = 552 nm/RIU, respectively. To the best of our knowledge, this is the first multiplexing design in which each sensor site features such a high sensitivity simultaneously.
Performance analysis of higher mode spoof surface plasmon polariton for terahertz sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Haizi; Tu, Wanli; Zhong, Shuncong, E-mail: zhongshuncong@hotmail.com
2015-04-07
We investigated spoof surface plasmon polaritons (SSPPs) on a 1D grooved metal surface for terahertz sensing of the refractive index of the filling analyte through a prism-coupling attenuated total reflection setup. From the dispersion relation analysis and a finite element method-based simulation, we revealed that the dispersion curve of the SSPP is suppressed as the filling refractive index increases, which causes the coupling resonance frequency to redshift in the reflection spectrum. The simulated results for various refractive indices demonstrated that the incident angle of the terahertz radiation has a great effect on the sensing performance. A smaller incident angle results in more sensitive sensing with a narrower detection range. Likewise, higher-order-mode SSPP-based sensing has a higher sensitivity with a narrower detection range. The maximum sensitivity is 2.57 THz/RIU for second-order-mode sensing at a 45° internal incident angle. The proposed SSPP-based method has great potential for highly sensitive terahertz sensing.
Digitized Spiral Drawing: A Possible Biomarker for Early Parkinson's Disease.
San Luciano, Marta; Wang, Cuiling; Ortega, Roberto A; Yu, Qiping; Boschung, Sarah; Soto-Valencia, Jeannie; Bressman, Susan B; Lipton, Richard B; Pullman, Seth; Saunders-Pullman, Rachel
2016-01-01
Pre-clinical markers of Parkinson's Disease (PD) are needed, and to be relevant in pre-clinical disease, they should be quantifiably abnormal in early disease as well. Handwriting is impaired early in PD and can be evaluated using computerized analysis of drawn spirals, capturing kinematic, dynamic, and spatial abnormalities and calculating indices that quantify motor performance and disability. Digitized spiral drawing correlates with motor scores and may be more sensitive in detecting early changes than subjective ratings. However, whether changes in spiral drawing are abnormal compared with controls and whether changes are detected in early PD are unknown. 138 PD subjects (50 with early PD) and 150 controls drew spirals on a digitizing tablet, generating x, y, z (pressure) data-coordinates and time. Derived indices corresponded to overall spiral execution (severity), shape and kinematic irregularity (second order smoothness, first order zero-crossing), tightness, mean speed and variability of spiral width. Linear mixed effect adjusted models comparing these indices and cross-validation were performed. Receiver operating characteristic analysis was applied to examine discriminative validity of combined indices. All indices were significantly different between PD cases and controls, except for zero-crossing. A model using all indices had high discriminative validity (sensitivity = 0.86, specificity = 0.81). Discriminative validity was maintained in patients with early PD. Spiral analysis accurately discriminates subjects with PD and early PD from controls supporting a role as a promising quantitative biomarker. Further assessment is needed to determine whether spiral changes are PD specific compared with other disorders and if present in pre-clinical PD.
Socio-climatic Exposure of an Afghan Poppy Farmer
NASA Astrophysics Data System (ADS)
Mankin, J. S.; Diffenbaugh, N. S.
2011-12-01
Many posit that climate impacts from anthropogenic greenhouse gas emissions will have consequences for the natural and agricultural systems on which humans rely for food, energy, and livelihoods, and therefore for stability and human security. However, many of the potential mechanisms of action in climate impacts and human-systems response, as well as the differential vulnerabilities of such systems, remain underexplored and unquantified. Here I present two initial steps necessary to characterize and quantify the consequences of climate change for farmer livelihood in Afghanistan, given both climate impacts and farmer vulnerabilities. The first is a conceptual model mapping the potential relationships between Afghanistan's climate, the winter agricultural season, and the country's political economy of violence and instability. The second is a utility-based decision model for assessing farmer response sensitivity to various climate impacts based on crop sensitivities. A farmer's winter planting decision can be modeled roughly as a tradeoff between cultivating the two crops that dominate the winter growing season: opium poppy (a climate-tolerant cash crop) and wheat (a climatically vulnerable crop grown for household consumption). Early sensitivity analysis results suggest that wheat yield dominates the variability in farmer decision making; however, these initial results may depend on the relative parameter ranges of wheat and poppy yields. Importantly, the variance in Afghanistan's winter harvest yields of poppy and wheat is tightly linked to household livelihood and thus is indirectly connected to the wider instability and insecurity within the country. This initial analysis motivates my focused research on the sensitivity of these crops to climate variability in order to project farmer well-being and decision sensitivity in a warmer world.
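A toy version of such a utility-based planting decision might look as follows; the yields, prices, and food-security weight are entirely hypothetical and not taken from the study:

```python
# Hedged sketch: a toy expected-utility planting decision between a cash
# crop (poppy) and a consumption crop (wheat). All parameters are invented.

def plant_choice(wheat_yield, poppy_yield, wheat_price=1.0, poppy_price=4.0,
                 subsistence_bonus=0.5):
    """Return the crop with the higher single-season utility."""
    u_wheat = wheat_yield * wheat_price + subsistence_bonus  # food security
    u_poppy = poppy_yield * poppy_price                      # cash income
    return "wheat" if u_wheat >= u_poppy else "poppy"

# A poor wheat outlook (e.g. a dry winter hitting the climate-vulnerable
# crop) tips the decision toward the climate-tolerant cash crop
choice_wet = plant_choice(wheat_yield=2.0, poppy_yield=0.4)
choice_dry = plant_choice(wheat_yield=0.5, poppy_yield=0.4)
```

Sweeping `wheat_yield` over a climate-driven range while holding the other parameters fixed is the kind of one-at-a-time sensitivity scan the abstract alludes to.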
Introducing sensitive issues and self-care strategies to first year midwifery students.
Cummins, Allison M; Wight, Raechel; Watts, Nicole; Catling, Christine
2018-06-01
First-year midwifery students learn early in the semester about situations in midwifery where a high level of emotion is expressed, such as taking a sexual history, being faced with the body image changes of pregnancy, and working with women in the extreme pain of labour. Commencing students usually have not had exposure to the realities of studying and working in midwifery, and often have an idealised view of midwifery that may lead to attrition from the course. We aimed to equip students with personal and professional tools to discuss sensitive issues in midwifery and to promote self-care through the development of two workshops. The first workshop focussed on sensitive issues in midwifery and the second on self-care strategies. Quantitative and qualitative data were collected pre and post workshops using a survey. The workshops were developed at one university in New South Wales, Australia, with beginning first-year midwifery students as participants. Changes in feeling more comfortable, confident and knowledgeable were measured using a paired t-test on the responses to pre- and post-workshop surveys. Content analysis was performed on the qualitative survey responses. There were significant increases in the students feeling more comfortable discussing sensitive issues in midwifery following the first workshop. They found meeting new people, respecting opinions, and normalising confronting topics to be valuable and useful. The second workshop produced significant differences in students being more confident and knowledgeable to access and try new self-care strategies in both their personal and professional lives. Students discussed learning to be more mindful in order to prepare for stressful situations. They became aware of their feelings and thoughts when under stress and said they would practise techniques including meditation.
The workshops assisted the students to develop peer support, self-care strategies and coping mechanisms when faced with the intimate and sometimes confronting nature of midwifery practice. By embedding these workshops early in the first year of the degree we hope to address attrition rates and help the students become the compassionate, caring, woman-centred midwives that they envisioned. The workshops have the potential for replication in other universities to support and nurture beginning midwifery students. Copyright © 2018 Elsevier Ltd. All rights reserved.
Among overweight middle-aged men, first-borns have lower insulin sensitivity than second-borns
Albert, Benjamin B.; de Bock, Martin; Derraik, José G. B.; Brennan, Christine M.; Biggs, Janene B.; Hofman, Paul L.; Cutfield, Wayne S.
2014-01-01
We aimed to assess whether birth order affects metabolism and body composition in overweight middle-aged men. We studied 50 men aged 45.6 ± 5.5 years, who were overweight (BMI 27.5 ± 1.7 kg/m2) but otherwise healthy in Auckland, New Zealand. These included 26 first-borns and 24 second-borns. Insulin sensitivity was assessed by the Matsuda method from an oral glucose tolerance test. Other assessments included DXA-derived body composition, lipid profiles, 24-hour ambulatory blood pressure, and carotid intima-media thickness. First-born men were 6.9 kg heavier (p = 0.013) and had greater BMI (29.1 vs 27.5 kg/m2; p = 0.004) than second-borns. Insulin sensitivity in first-born men was 33% lower than in second-borns (4.38 vs 6.51; p = 0.014), despite adjustment for fat mass. There were no significant differences in ambulatory blood pressure, lipid profile or carotid intima-media thickness between first- and second-borns. Thus, first-born adults may be at a greater risk of metabolic and cardiovascular diseases. PMID:24503677
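The Matsuda index referenced above is commonly computed from the fasting and mean OGTT glucose/insulin values; a sketch with hypothetical inputs follows (the study's individual-level data are of course not reproduced here):

```python
import math

# Hedged sketch of the Matsuda whole-body insulin sensitivity index as it
# is commonly stated: 10000 / sqrt(G0 * I0 * Gmean * Imean). The input
# values below are hypothetical, not from the study.

def matsuda_index(g0, i0, g_mean, i_mean):
    """g0, g_mean in mg/dL; i0, i_mean in microU/mL (fasting and OGTT means)."""
    return 10000.0 / math.sqrt(g0 * i0 * g_mean * i_mean)

# Hypothetical OGTT values for one overweight participant
isi = matsuda_index(g0=95, i0=10, g_mean=130, i_mean=55)
```

Lower index values indicate lower insulin sensitivity, which is the direction of the first-born vs second-born difference reported above (4.38 vs 6.51).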
Andrew G. Bunn; Esther Jansma; Mikko Korpela; Robert D. Westfall; James Baldwin
2013-01-01
Mean sensitivity (ζ) continues to be used in dendrochronology despite a literature that shows it to be of questionable value in describing the properties of a time series. We simulate first-order autoregressive models with known parameters and show that ζ is a function of variance and autocorrelation of a time series. We then use 500 random tree-ring...
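The simulation experiment described can be sketched as follows: generate AR(1) series with known parameters and compute the dendrochronological mean sensitivity ζ. The specific parameter values here are illustrative, not the authors':

```python
import random

random.seed(1)

def mean_sensitivity(x):
    """Dendrochronology mean sensitivity (zeta) of a ring-width series:
    the mean of |2(x[t+1] - x[t]) / (x[t+1] + x[t])|."""
    return sum(abs(2 * (b - a) / (b + a)) for a, b in zip(x, x[1:])) / (len(x) - 1)

def ar1(n, phi, mu=1.0, sigma=0.05):
    """Simulate an AR(1) series fluctuating around mean mu."""
    x, out = mu, []
    for _ in range(n):
        x = mu + phi * (x - mu) + random.gauss(0, sigma)
        out.append(max(x, 0.05))  # keep ring widths positive
    return out

# Higher autocorrelation -> smoother series -> lower mean sensitivity,
# even though the innovation variance is identical
low_ac = mean_sensitivity(ar1(5000, phi=0.1))
high_ac = mean_sensitivity(ar1(5000, phi=0.9))
```

This reproduces the paper's point in miniature: ζ is a function of the variance and autocorrelation of the series rather than an independent property.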
2015-03-16
The shaded region around each total sensitivity value was the maximum uncertainty in that value estimated by the Sobol method. We conducted a global sensitivity analysis, using the variance-based method of Sobol, to estimate which parameters controlled the performance.
NASA Technical Reports Server (NTRS)
Hou, Gene
2004-01-01
The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by a p-version, time-discontinuous finite element approximation. The resulting matrix equation of the state equation is simply of the form A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
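A scalar analogue of the scheme described, assuming a hypothetical coefficient a(x) = k(1 + x), illustrates both the Newton-Raphson solve of A(x)x = c and the direct-differentiation sensitivity; it is a sketch of the idea, not the report's code:

```python
# Hedged sketch: scalar analogue of solving the nonlinear system
# A(x) x = c by Newton-Raphson, then obtaining the sensitivity dx/dc by
# direct differentiation of the converged state. a(x) = k(1 + x) is a
# hypothetical state-dependent coefficient.

def solve_and_sensitivity(k, c, x0=1.0, tol=1e-12):
    x = x0
    for _ in range(100):
        r = k * (1 + x) * x - c   # residual of a(x) * x = c
        dr = k * (1 + 2 * x)      # tangent (Jacobian) dr/dx
        x -= r / dr               # Newton-Raphson update
        if abs(r) < tol:
            break
    # Direct differentiation: d/dc [a(x) x] = 1  =>  (dr/dx) * dx/dc = 1
    dx_dc = 1.0 / (k * (1 + 2 * x))
    return x, dx_dc

x, s = solve_and_sensitivity(k=2.0, c=6.0)
```

As in the report, the sensitivity reuses the already-factored tangent operator from the converged analysis, which is why the added coding effort is minimal.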
Computer program for analysis of imperfection sensitivity of ring stiffened shells of revolution
NASA Technical Reports Server (NTRS)
Cohen, G. A.
1971-01-01
A FORTRAN 4 digital computer program is presented for the initial postbuckling and imperfection sensitivity analysis of bifurcation buckling modes for ring-stiffened orthotropic multilayered shells of revolution. The boundary value problem for the second-order contribution to the buckled state was solved by the forward integration technique using the Runge-Kutta method. The effects of nonlinear prebuckling states and live pressure loadings are included.
Relationship between personality traits and vocational choice.
Garcia-Sedeño, Manuel; Navarro, Jose I; Menacho, Inmaculada
2009-10-01
The relationship between occupational preferences and personality traits was examined. A randomly chosen sample of 735 students (age range = 17 to 23 years; 50.5% male) in their last year of high school participated in this study. Participants completed Cattell's Sixteen Personality Factor-5 Questionnaire (16PF-5) and the Kuder-C Professional Tendencies Questionnaire. Initial hierarchical cluster analysis categorized the participants into two groups by Kuder-C vocational factors: one showed a predilection for scientific or technological careers and the other a bias toward the humanities and social sciences. Based on these groupings, differences in 16PF-5 personality traits were analyzed, and differences associated with three first-order personality traits (warmth, dominance, and sensitivity), three second-order factors (extraversion, control, and independence), and some areas of professional interest (mechanical, arithmetical, artistic, persuasive, and welfare) were identified. The data indicated that there was congruency between personality profiles and vocational interests.
Aide, Nicolas; Talbot, Marjolaine; Fruchart, Christophe; Damaj, Gandhi; Lasnon, Charline
2018-05-01
Our purpose was to evaluate the diagnostic and prognostic value of skeletal textural features (TFs) on baseline FDG PET in diffuse large B cell lymphoma (DLBCL) patients. Eighty-two patients with DLBCL who underwent a bone marrow biopsy (BMB) and a PET scan between December 2008 and December 2015 were included. Two readers blinded to the BMB results visually assessed PET images for bone marrow involvement (BMI) in consensus, and a third observer drew a volume of interest (VOI) encompassing the axial skeleton and the pelvis, which was used to assess skeletal TFs. ROC analysis was used to determine the best TF able to diagnose BMI among four first-order, six second-order and 11 third-order metrics, which was then compared for diagnosis and prognosis in disease-free patients (BMB-/PET-) versus patients considered to have BMI (BMB+/PET-, BMB-/PET+, and BMB+/PET+). Twenty-two out of 82 patients (26.8%) had BMI: 13 BMB-/PET+, eight BMB+/PET+ and one BMB+/PET-. Among the nine BMB+ patients, one had discordant BMI identified by both visual and TF PET assessment. ROC analysis showed that SkewnessH, a first-order metric, was the best parameter for identifying BMI with sensitivity and specificity of 81.8% and 81.7%, respectively. SkewnessH demonstrated better discriminative power over BMB and PET visual analysis for patient stratification: hazard ratios (HR), 3.78 (P = 0.02) versus 2.81 (P = 0.06) for overall survival (OS) and HR, 3.17 (P = 0.03) versus 1.26 (P = 0.70) for progression-free survival (PFS). In multivariate analysis accounting for IPI score, bulky status, haemoglobin and SkewnessH, the only independent predictor of OS was the IPI score, while the only independent predictor of PFS was SkewnessH. The better discriminative power of skeletal heterogeneity for risk stratification compared to BMB and PET visual analysis in the overall population, and more specifically in BMB-/PET- patients, suggests that it can be useful to identify diagnostically overlooked BMI.
NASA Astrophysics Data System (ADS)
Béranger, Sandra C.; Sleep, Brent E.; Lollar, Barbara Sherwood; Monteagudo, Fernando Perez
2005-01-01
An analytical, one-dimensional, multi-species, reactive transport model for simulating the concentrations and isotopic signatures of tetrachloroethylene (PCE) and its daughter products was developed. The simulation model was coupled to a genetic algorithm (GA) combined with a gradient-based (GB) method to estimate the first order decay coefficients and enrichment factors. In testing with synthetic data, the hybrid GA-GB method reduced the computational requirements for parameter estimation by a factor as great as 300. The isotopic signature profiles were observed to be more sensitive than the concentration profiles to estimates of both the first order decay constants and enrichment factors. Including isotopic data for parameter estimation significantly increased the GA convergence rate and slightly improved the accuracy of estimation of first order decay constants.
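The two forward models being fitted here, first-order decay and a Rayleigh-type isotope enrichment, can be sketched directly; the rate constant, enrichment factor, and initial values below are hypothetical, not the paper's estimates:

```python
import math

# Hedged sketch (not the authors' model): first-order decay of a parent
# compound (e.g. PCE) and the Rayleigh-type evolution of its isotopic
# signature, the two observables the GA-gradient hybrid fits. The values
# of k (1/day) and eps (per mil) are illustrative only.

def concentration(c0, k, t):
    """First-order decay: C(t) = C0 * exp(-k t)."""
    return c0 * math.exp(-k * t)

def delta13C(d0, eps, c, c0):
    """Rayleigh model: delta(t) = delta0 + eps * ln(C / C0)."""
    return d0 + eps * math.log(c / c0)

c0, k, d0, eps = 10.0, 0.05, -27.0, -5.0
c30 = concentration(c0, k, 30.0)    # remaining parent after 30 days
d30 = delta13C(d0, eps, c30, c0)    # enriched residual signature
```

Because delta depends logarithmically on the remaining fraction, the isotopic profile responds strongly to both k and eps, consistent with the sensitivity the authors report.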
NASA Technical Reports Server (NTRS)
Jegley, Dawn C.
1987-01-01
Buckling loads of thick-walled orthotropic and anisotropic simply supported circular cylinders are predicted using a higher-order transverse-shear deformation theory. A comparison of buckling loads predicted by the conventional first-order transverse-shear deformation theory and the higher-order theory show that the additional allowance for transverse shear deformation has a negligible effect on the predicted buckling loads of medium-thick metallic isotropic cylinders. However, the higher-order theory predicts buckling loads which are significantly lower than those predicted by the first-order transverse-shear deformation theory for certain short, thick-walled cylinders which have low through-the-thickness shear moduli. A parametric study of the effects of ply orientation on the buckling load of axially compressed cylinders indicates that laminates containing 45 degree plies are most sensitive to transverse-shear deformation effects. Interaction curves for buckling loads of cylinders subjected to axial compressive and external pressure loadings indicate that buckling loads due to external pressure loadings are as sensitive to transverse-shear deformation effects as buckling loads due to axial compressive loadings. The effects of anisotropy are important over a much wider range of cylinder geometries than the effects of transverse shear deformation.
NASA Astrophysics Data System (ADS)
Calderón Bustillo, Juan; Salemi, Francesco; Dal Canton, Tito; Jani, Karan P.
2018-01-01
The sensitivity of gravitational wave searches for binary black holes is estimated via the injection and posterior recovery of simulated gravitational wave signals in the detector data streams. When a search reports no detections, the estimated sensitivity is then used to place upper limits on the coalescence rate of the target source. In order to obtain correct sensitivity and rate estimates, the injected waveforms must be faithful representations of the real signals. To date, however, injected waveforms have neglected radiation modes of order higher than the quadrupole, potentially biasing sensitivity and coalescence rate estimates. In particular, higher-order modes are known to have a large impact on the gravitational waves emitted by intermediate-mass black hole binaries. In this work, we evaluate the impact of this approximation in the context of two search algorithms run by the LIGO Scientific Collaboration in their search for intermediate-mass black hole binaries in the O1 LIGO Science Run data: a matched-filter-based pipeline and a coherent unmodeled one. To this end, we estimate the sensitivity of both searches to simulated signals for nonspinning binaries including and omitting higher-order modes. We find that omission of higher-order modes leads to biases in the sensitivity estimates which depend on the masses of the binary, the search algorithm, and the required level of significance for detection. In addition, we compare the sensitivity of the two search algorithms across the studied parameter space. We conclude that the most recent LIGO-Virgo upper limits on the rate of coalescence of intermediate-mass black hole binaries are conservative for the case of highly asymmetric binaries. However, the tightest upper limits, placed for nearly equal-mass sources, remain unchanged due to the small contribution of higher modes to the corresponding sources.
Standing Vs Supine; Does it Matter in Cough Stress Testing?
Patnam, Radhika; Edenfield, Autumn L; Swift, Steven E
The aim of this study was to compare the sensitivity of the cough stress test in the standing versus supine position in the evaluation of incontinent females. We performed a prospective observational study of women with the chief complaint of urinary incontinence (UI) undergoing a provocative cough stress test (CST). Subjects underwent both a standing and a supine CST. Testing order was randomized via block randomization. The cough stress test was performed in a standard method via backfill of 200 mL or until the subject described a strong urge. The subjects were asked to cough, and the physician documented urine leakage by direct observation. The gold standard for stress UI diagnosis was a positive CST in either position. Sixty subjects were enrolled; 38 (63%) tested positive on any CST, with 38 (63%) positive on standing compared with 29 (48%) positive on supine testing. Nine women (15%) had positive standing and negative supine testing. No subjects had negative standing with positive supine testing. There were no significant differences in positive tests between the 2 randomized groups (standing first and supine second vs. supine first and standing second). When compared with the gold standard of any positive provocative stress test, the supine CST has a sensitivity of 76%, whereas the standing CST has a sensitivity of 100%. The standing CST is more sensitive than the supine CST and should be performed in any patient with a complaint of UI and a negative supine CST. The order of testing, either supine or standing first, does not affect the results.
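The reported sensitivities follow directly from the counts, taking any positive CST as the gold standard. A quick arithmetic check (numbers from the abstract):

```python
# Counts from the abstract: 60 women enrolled; 38 positive on any CST
# (the gold standard), all 38 positive standing, 29 positive supine,
# 9 standing-only positives, 0 supine-only positives.
positive_any = 38
positive_standing = 38
positive_supine = 29

# Sensitivity relative to the gold standard of a positive test in either position.
sens_standing = positive_standing / positive_any   # 38/38 = 100%
sens_supine = positive_supine / positive_any       # 29/38 ≈ 76%
```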
The structure of paranoia in the general population.
Bebbington, Paul E; McBride, Orla; Steel, Craig; Kuipers, Elizabeth; Radovanovic, Mirjana; Brugha, Traolach; Jenkins, Rachel; Meltzer, Howard I; Freeman, Daniel
2013-06-01
Psychotic phenomena appear to form a continuum with normal experience and beliefs, and may build on common emotional interpersonal concerns. We tested predictions that paranoid ideation is exponentially distributed and hierarchically arranged in the general population, and that persecutory ideas build on more common cognitions of mistrust, interpersonal sensitivity and ideas of reference. Items were chosen from the Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II) questionnaire and the Psychosis Screening Questionnaire in the second British National Survey of Psychiatric Morbidity (n = 8580), to test a putative hierarchy of paranoid development using confirmatory factor analysis, latent class analysis and factor mixture modelling analysis. Different types of paranoid ideation ranged in frequency from less than 2% to nearly 30%. Total scores on these items followed an almost perfect exponential distribution (r = 0.99). Our four a priori first-order factors were corroborated (interpersonal sensitivity; mistrust; ideas of reference; ideas of persecution). These mapped onto four classes of individual respondents: a rare, severe, persecutory class with high endorsement of all item factors, including persecutory ideation; a quasi-normal class with infrequent endorsement of interpersonal sensitivity, mistrust and ideas of reference, and no ideas of persecution; and two intermediate classes, characterised respectively by relatively high endorsement of items relating to mistrust and to ideas of reference. The paranoia continuum has implications for the aetiology, mechanisms and treatment of psychotic disorders, while confirming the lack of a clear distinction from normal experiences and processes.
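The exponential-distribution check reported above amounts to testing whether log frequency falls linearly with total score. A minimal sketch of that check, using hypothetical frequencies rather than the survey data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical score frequencies decaying exponentially: f(s) ~ exp(-0.5*s).
# For an exact exponential, log(frequency) is perfectly linear in score,
# so |r| = 1; real survey counts would give |r| slightly below 1 (e.g. 0.99).
scores = list(range(10))
freqs = [1000.0 * math.exp(-0.5 * s) for s in scores]
r = pearson_r(scores, [math.log(f) for f in freqs])
```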
Design and development of second order MEMS sound pressure gradient sensor
NASA Astrophysics Data System (ADS)
Albahri, Shehab
The design and development of a second order MEMS sound pressure gradient sensor is presented in this dissertation. Inspired by the directional hearing ability of the parasitoid fly, Ormia ochracea, a novel first order directional microphone that mimics the mechanical structure of the fly's ears and detects the sound pressure gradient has been developed. While first order directional microphones can be very beneficial in a large number of applications, there is great potential for remarkable improvements in performance through the use of second order systems. The second order directional microphone is able to provide a theoretical improvement in signal-to-noise ratio (SNR) of 9.5 dB, compared to the first-order system, which has a maximum SNR of 6 dB. Although the second order directional microphone is more sensitive to the angle of sound incidence, the nature of the design and fabrication process imposes different factors that could lead to deterioration in its performance. The first Ormia ochracea second order directional microphone was designed in 2004 and fabricated in 2006 at Binghamton University. The results of the tested parts indicate that the Ormia ochracea second order directional microphone performs mostly as an omnidirectional microphone. In this work, the previous design is reexamined and analyzed to explain the unexpected results. A more sophisticated tool implementing the finite element package ANSYS is used to examine the previous design's response. This new tool is used to study factors that were ignored in the previous design, mainly response mismatch and fabrication uncertainty. A continuous model using Hamilton's principle is introduced to verify the results of the new method. Both models agree well and suggest a new way of optimizing the second order directional microphone using geometrical manipulation. In this work we also introduce a new fabrication process flow to increase the fabrication yield.
The newly suggested method uses the layered-shell analysis capability in ANSYS. The developed models simulate the fabricated chips at different stages, with the stress in each layer introduced using thermal loading. The results motivate a new fabrication process flow that increases the rigidity of the composite layers and counters the deformation caused by the high stress in the thermal oxide layer.
Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III
2004-01-01
A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first- and second-order sensitivity derivatives. For each robust optimization, the effects of increasing both the input standard deviations and the target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
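The first-order moment method described above propagates input means and variances through the code via first derivatives: the output mean is evaluated at the input means, and the output variance is the sum of squared gradient-weighted input deviations. A minimal sketch, with a hypothetical smooth response standing in for the CFD output:

```python
def moments_first_order(f, mu, sigma, h=1e-6):
    """First-order statistical moment method: for independent inputs x_i with
    means mu_i and standard deviations sigma_i, approximate
        E[f] ≈ f(mu),   Var[f] ≈ sum_i (df/dx_i |_mu * sigma_i)^2.
    Derivatives are taken here by central finite differences; a CFD code
    would supply analytic sensitivity derivatives instead."""
    mean = f(mu)
    var = 0.0
    for i in range(len(mu)):
        up = list(mu); up[i] += h
        dn = list(mu); dn[i] -= h
        dfdx = (f(up) - f(dn)) / (2 * h)
        var += (dfdx * sigma[i]) ** 2
    return mean, var

# Hypothetical response (NOT the airfoil solver): f(x) = x0^2 + 3*x1,
# with input means (2, 1) and standard deviations (0.1, 0.2).
f = lambda x: x[0] ** 2 + 3.0 * x[1]
mean, var = moments_first_order(f, [2.0, 1.0], [0.1, 0.2])
# mean = 7.0; var = (4*0.1)^2 + (3*0.2)^2 = 0.52
```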
Electromagnetic imaging with an arbitrarily oriented magnetic dipole
NASA Astrophysics Data System (ADS)
Guillemoteau, Julien; Sailhac, Pascal; Behaegel, Mickael
2013-04-01
We present the theoretical background for geophysical EM analysis with arbitrarily oriented magnetic dipoles. A first application of this development is that data can now be corrected when they are not acquired in accordance with the assumptions of current interpretation methods. To illustrate this case, we study airborne TEM measurements over inclined ground, a situation encountered when measurements are made in mountainous areas. We show in particular that transient central-loop helicopter-borne magnetic data should be corrected by a factor proportional to the angle of the slope under the system. In addition, we study the sensitivity function of a grounded multi-angle frequency-domain system. Our development leads to a general Jacobian kernel that can be used for all induction numbers and any position/orientation of both transmitter and receiver in the air layer. Indeed, if one could design a system controlling the angles of Tx and Rx, the present development would allow such a data set to be interpreted and the ground analysis to be enhanced, especially in order to constrain the 3D anisotropic inverse problem.
Bartosz, Krzysztof; Denkowski, Zdzisław; Kalita, Piotr
In this paper the sensitivity of optimal solutions to control problems described by second order evolution subdifferential inclusions under perturbations of state relations and of cost functionals is investigated. First we establish a new existence result for a class of such inclusions. Then, based on the theory of sequential Γ-convergence, we recall the abstract scheme concerning convergence of minimal values and minimizers. The abstract scheme works provided we can establish two properties: the Kuratowski convergence of solution sets for the state relations and some complementary Γ-convergence of the cost functionals. Then these two properties are implemented in the considered case.
The effect of presentation rate on implicit sequence learning in aging.
Foster, Chris M; Giovanello, Kelly S
2017-02-01
Implicit sequence learning is thought to be preserved in aging when the to-be-learned associations are first-order; however, when associations are second-order, older adults (OAs) tend to experience deficits as compared to young adults (YAs). Two experiments were conducted using a first-order (Experiment 1) and a second-order (Experiment 2) serial reaction time task. Stimuli were presented at a constant rate of either 800 milliseconds (fast) or 1200 milliseconds (slow). Results indicate that both age groups learned first-order dependencies equally in both conditions. OAs and YAs also learned second-order dependencies, but the learning of lag-2 information was significantly affected by the rate of presentation for both groups: OAs showed significant lag-2 learning in the slow condition, while YAs showed significant lag-2 learning in the fast condition. The sensitivity of implicit sequence learning to the rate of presentation supports the idea that OAs' and YAs' different processing speeds affect the ability to build complex associations across time and intervening events.
Dynamics of neurons controlling movements of a locust hind leg. III. Extensor tibiae motor neurons.
Newland, P L; Kondoh, Y
1997-06-01
Imposed movements of the apodeme of the femoral chordotonal organ (FeCO) of the locust hind leg elicit resistance reflexes in extensor and flexor tibiae motor neurons. The synaptic responses of the fast and slow extensor tibiae motor neurons (FETi and SETi, respectively) and the spike responses of SETi were analyzed with the use of the Wiener kernel white noise method to determine their response properties. The first-order Wiener kernels computed from soma recordings were essentially monophasic, or low-pass, indicating that the motor neurons were primarily sensitive to the position of the tibia about the femorotibial joint. The responses of both extensor motor neurons had large nonlinear components. The second-order kernels of the synaptic responses of FETi and SETi had large on-diagonal peaks with two small off-diagonal valleys. That of SETi had an additional elongated valley on the diagonal, which was accompanied by two off-diagonal depolarizing peaks at a cutoff frequency of 58 Hz. These second-order components represent a half-wave rectification of the position-sensitive depolarizing response in FETi and SETi, and a delayed inhibitory input to SETi, indicating that both motor neurons were directionally sensitive. Model predictions of the responses of the motor neurons showed that the first-order (linear) characterization poorly predicted the actual responses of FETi and SETi to FeCO stimulation, whereas the addition of the second-order (nonlinear) term markedly improved the performance of the model. Simultaneous recordings from the soma and a neuropilar process of FETi showed that its synaptic responses to FeCO stimulation were phase delayed by about -30 degrees at 20 Hz, and reduced in amplitude by 30-40% when recorded in the soma. Similar configurations of the first- and second-order kernels indicated that the primary process of FETi acted as a low-pass filter.
Cross-correlation between a white noise stimulus and a unitized spike discharge of SETi again produced well-defined first- and second-order kernels that showed that the SETi spike response was also dependent on positional inputs. An elongated negative valley on the diagonal, characteristic of the second-order kernel of the synaptic response in SETi, was absent in the kernel from the spike component, suggesting that information is lost in the spike production process. The functional significance of these results is discussed in relation to the behavior of the locust.
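For a white-noise stimulus, the first-order Wiener kernel used in this analysis reduces to the stimulus-response cross-correlation scaled by the stimulus power, k1(tau) = E[y(t) x(t - tau)] / P. A sketch on a toy linear system (not the locust data):

```python
import random

def first_order_kernel(stimulus, response, lags, power):
    """First-order Wiener kernel estimate by cross-correlating a white-noise
    stimulus x with the response y: k1(tau) = E[y(t) x(t - tau)] / P,
    where P is the stimulus power (variance)."""
    n = len(stimulus)
    k1 = []
    for tau in lags:
        acc = sum(response[t] * stimulus[t - tau] for t in range(tau, n))
        k1.append(acc / ((n - tau) * power))
    return k1

# Toy linear system with a known kernel: y(t) = 0.8*x(t) + 0.3*x(t-1).
# The estimate should recover (0.8, 0.3, 0) at lags (0, 1, 2).
random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(20000)]       # unit-power noise
y = [0.8 * x[t] + (0.3 * x[t - 1] if t > 0 else 0.0) for t in range(len(x))]
k1 = first_order_kernel(x, y, lags=[0, 1, 2], power=1.0)
```

For a nonlinear system such as these motor neurons, the same cross-correlation gives only the linear part; the second-order kernel captures the rectification the abstract describes.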
Higher order aberrations and relative risk of symptoms after LASIK.
Sharma, Munish; Wachler, Brian S Boxer; Chan, Colin C K
2007-03-01
To understand what level of higher order aberrations increases the relative risk of visual symptoms in patients after myopic LASIK. This study was a retrospective comparative analysis of 103 eyes of 62 patients divided into two groups, matched for age, gender, pupil size, and spherical equivalent refraction. The symptomatic group comprised 36 eyes of 24 patients after conventional LASIK with different laser systems evaluated in our referral clinic, and the asymptomatic control group consisted of 67 eyes of 38 patients following LADARVision CustomCornea wavefront LASIK. Comparative analysis was performed for uncorrected visual acuity (UCVA), best spectacle-corrected visual acuity (BSCVA), contrast sensitivity, refractive cylinder, and higher order aberrations. Wavefront analysis was performed with the LADARWave aberrometer at a 6.5-mm analysis zone for all eyes. Blurring of vision was the most common symptom (41.6%), followed by double image (19.4%), halo (16.7%), and fluctuation in vision (13.9%) in symptomatic patients. A statistically significant difference was noted in UCVA (P = .001), BSCVA (P = .001), contrast sensitivity (P < .001), and manifest cylinder (P = .001) between the two groups. The mean root-mean-square (RMS) values of the symptomatic group ranged from 157% to 206% of those of the control group, i.e., 1.57 to 2.06 times greater. Patients with visual symptoms after LASIK have significantly lower visual acuity and contrast sensitivity and higher mean RMS values for higher order aberrations than patients without symptoms. Root-mean-square values of greater than two times the normal after-LASIK population for any given laser platform may increase the relative risk of symptoms.
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
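Parameter rankings of this kind are commonly computed from the normalized forward sensitivity index, Υ_p = (∂R0/∂p)(p/R0), so that Υ_p = +1 means a 10% rise in p produces roughly a 10% rise in R0. The sketch below uses a generic, hypothetical R0 expression, not the paper's Lassa model:

```python
def sensitivity_index(R0, params, name, h=1e-6):
    """Normalized forward sensitivity index of R0 with respect to parameter
    `name`: Upsilon = (dR0/dp) * (p / R0), via central finite differences."""
    p = dict(params)
    base = R0(p)
    up = dict(p); up[name] *= (1 + h)
    dn = dict(p); dn[name] *= (1 - h)
    dR0dp = (R0(up) - R0(dn)) / (2 * h * p[name])
    return dR0dp * p[name] / base

# Hypothetical R0, NOT the paper's model: R0 = beta*Lambda / (mu*(mu+gamma)).
R0 = lambda p: p["beta"] * p["Lambda"] / (p["mu"] * (p["mu"] + p["gamma"]))
pars = {"beta": 0.2, "Lambda": 0.5, "mu": 0.02, "gamma": 0.1}
s_beta = sensitivity_index(R0, pars, "beta")    # exactly +1 for this form
s_gamma = sensitivity_index(R0, pars, "gamma")  # -gamma/(mu+gamma) ≈ -0.833
```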
Improved first-order uncertainty method for water-quality modeling
Melching, C.S.; Anmangandla, S.
1992-01-01
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output because of their simplicity. Each method has its drawbacks: for Monte Carlo simulation, mainly computational time; for first-order analysis, mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, in which the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distributions of critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation, using two orders of magnitude less computer time, regardless of the probability distributions assumed for the uncertain model parameters.
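The Streeter-Phelps relations underlying the test case are standard: the deficit curve D(t) and the critical time at which it peaks. A sketch with illustrative rate constants rather than values from the paper:

```python
import math

def sp_deficit(t, kd, ka, L0, D0):
    """Streeter-Phelps dissolved-oxygen deficit downstream of a BOD load:
    D(t) = kd*L0/(ka - kd) * (exp(-kd*t) - exp(-ka*t)) + D0*exp(-ka*t),
    with kd the deoxygenation rate, ka the reaeration rate (ka != kd),
    L0 the initial BOD, and D0 the initial deficit."""
    return (kd * L0 / (ka - kd) * (math.exp(-kd * t) - math.exp(-ka * t))
            + D0 * math.exp(-ka * t))

def sp_critical_time(kd, ka, L0, D0):
    """Time of maximum deficit, from dD/dt = 0."""
    return (1.0 / (ka - kd)) * math.log(
        (ka / kd) * (1.0 - D0 * (ka - kd) / (kd * L0)))

# Illustrative textbook-scale values (rates in 1/day, L0 and D0 in mg/L);
# these are not the parameters used in the paper's examples.
kd, ka, L0, D0 = 0.3, 0.7, 20.0, 1.0
tc = sp_critical_time(kd, ka, L0, D0)
Dc = sp_deficit(tc, kd, ka, L0, D0)
```

In the advanced first-order method, the linearization of this nonlinear map is re-centered at each sought exceedance level of Dc instead of being fixed at the parameter means.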
NASA Astrophysics Data System (ADS)
Gottlieb, C.; Millar, S.; Günther, T.; Wilsch, G.
2017-06-01
For the damage assessment of reinforced concrete structures, the quantified ingress profiles of harmful species like chlorides, sulfates and alkalis need to be determined. In order to provide on-site analysis of concrete, a fast and reliable method is necessary. Low transition probabilities as well as high ionization energies make the detection of Cl I and S I in low concentrations in the near-infrared range a difficult task. For the on-site analysis, a mobile LIBS system (λ = 1064 nm, Epulse ≤ 3 mJ, τ = 1.5 ns) with an automated scanner has been developed at BAM. Weak chlorine and sulfur signal intensities do not allow classical univariate analysis for process data derived from the mobile system. In order to improve the analytical performance, multivariate analysis such as PLS-R will be presented in this work. A comparison to standard univariate analysis will be carried out, and results covering important parameters like detection and quantification limits (LOD, LOQ) as well as processing variances will be discussed (Allegrini and Olivieri, 2014 [1]; Ostra et al., 2008 [2]). It will be shown that, for the first time, a low-cost mobile system is capable of providing reproducible chlorine and sulfur analysis on concrete by using a low-sensitivity system in combination with multivariate evaluation.
Expression of pH-sensitive green fluorescent protein in Arabidopsis thaliana
NASA Technical Reports Server (NTRS)
Moseyko, N.; Feldman, L. J.
2001-01-01
This is the first report on using green fluorescent protein (GFP) as a pH reporter in plants. Proton fluxes and pH regulation play important roles in plant cellular activity and therefore, it would be extremely helpful to have a plant gene reporter system for rapid, non-invasive visualization of intracellular pH changes. In order to develop such a system, we constructed three vectors for transient and stable transformation of plant cells with a pH-sensitive derivative of green fluorescent protein. Using these vectors, transgenic Arabidopsis thaliana and tobacco plants were produced. Here the application of pH-sensitive GFP technology in plants is described and, for the first time, the visualization of pH gradients between different developmental compartments in intact whole-root tissues of A. thaliana is reported. The utility of pH-sensitive GFP in revealing rapid, environmentally induced changes in cytoplasmic pH in roots is also demonstrated.
Tahmasbi, Vahid; Ghoreishi, Majid; Zolfaghari, Mojtaba
2017-11-01
The bone drilling process is very prominent in orthopedic surgeries and in the repair of bone fractures. It is also very common in dentistry and bone sampling operations. Due to the complexity of bone and the sensitivity of the process, bone drilling is one of the most important and sensitive processes in biomedical engineering. Orthopedic surgeries can be improved using robotic systems and mechatronic tools. The most crucial problem during drilling is an unwanted increase in process temperature (above 47 °C), which causes thermal osteonecrosis (cell death) and local burning of the bone tissue. Moreover, imposing higher forces on the bone may lead to breaking or cracking and consequently cause serious damage. In this study, a mathematical second-order linear regression model as a function of tool rotational speed, feed rate, tool diameter, and their effective interactions is introduced to predict temperature and force during the bone drilling process. This model can determine the maximum speed of surgery that remains within an acceptable temperature range. Moreover, for the first time, using designed experiments, the bone drilling process was modeled, and the drilling speed, feed rate, and tool diameter were optimized. Then, using response surface methodology and applying a multi-objective optimization, drilling force was minimized subject to an acceptable temperature range, without damaging the bone or the surrounding tissue. In addition, for the first time, Sobol statistical sensitivity analysis is used to ascertain the effect of the process input parameters on process temperature and force. The results show that tool rotational speed, feed rate, and tool diameter have the greatest influence on process temperature and force. The behavior of each output parameter under variation of each input parameter is further investigated.
Finally, a multi-objective optimization has been performed considering all the aforementioned parameters. This optimization yielded a set of data that can considerably improve orthopedic osteosynthesis outcomes.
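Sobol first-order indices of the kind used above can be estimated by the pick-freeze Monte Carlo scheme. A sketch on an additive toy model with known exact indices; this is an illustration of the technique, not the paper's drilling model or implementation:

```python
import random

def sobol_first_order(f, dim, n=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of Sobol first-order indices:
    S_i = Cov(f(A), f(B with column i taken from A)) / Var(f(A)),
    for independent U(0,1) inputs, since the two evaluations share only x_i."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    meanA = sum(fA) / n
    var = sum((y - meanA) ** 2 for y in fA) / n
    S = []
    for i in range(dim):
        fABi = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        meanABi = sum(fABi) / n
        cov = sum((ya - meanA) * (yb - meanABi)
                  for ya, yb in zip(fA, fABi)) / n
        S.append(cov / var)
    return S

# Additive toy model: exact first-order indices are (9, 4, 1) / 14.
f = lambda x: 3.0 * x[0] + 2.0 * x[1] + 1.0 * x[2]
S = sobol_first_order(f, dim=3)
```

For an additive model the first-order indices sum to one; interactions in a real response (such as speed-feed coupling) would show up as a shortfall from one.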
Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.
Kiparissides, A; Hatzimanikatis, V
2017-01-01
The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight into the regulatory functions and how to manipulate them. Constraint-based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based flux analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint-based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks the metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements.
Solà-Vázquez, Auristela; Lara-Gonzalo, Azucena; Costa-Fernández, José M; Pereiro, Rosario; Sanz-Medel, Alfredo
2010-05-01
A tuneable microsecond pulsed direct current glow discharge time-of-flight mass spectrometer (GD-MS(TOF)) developed in our laboratory was coupled to a gas chromatograph (GC) to obtain sequential collection of mass spectra, at the different temporal regimes occurring in the GD pulses, during elution of the analytes. The capabilities of this set-up were explored using a mixture of volatile organic compounds of environmental concern: BrClCH, Cl(3)CH, Cl(4)C, BrCl(2)CH, Br(2)ClCH and Br(3)CH. The experimental parameters of the GC-pulsed GD-MS(TOF) prototype were optimized in order to appropriately separate and analyze the six selected organic compounds, and two GC carrier gases, helium and nitrogen, were evaluated. Mass spectra for all analytes were obtained in the prepeak, plateau and afterpeak temporal regimes of the pulsed GD. Results showed that helium offered the best elemental sensitivity, while nitrogen provided higher signal intensities for fragment and molecular peaks. The analytical performance characteristics were also worked out for each analyte. Absolute detection limits obtained were on the order of nanograms. In a second step, headspace solid-phase microextraction (HS-SPME), as a sample preparation and preconcentration technique, was evaluated for the quantification of the compounds under study, in order to achieve the analytical sensitivity required by European Union (EU) environmental legislation for trihalomethanes. The analytical figures of merit obtained using the proposed methodology showed rather good detection limits (between 2 and 13 microg L(-1), depending on the analyte). In fact, the developed methodology met the EU legislation requirements (the maximum level permitted in tap water for "total trihalomethanes" is set at 100 microg L(-1)). Analyses of real drinking water and river water samples were successfully carried out. To our knowledge, this is the first application of GC-pulsed GD-MS(TOF) to the analysis of real samples. Its ability to provide elemental, fragment and molecular information on the organic compounds is demonstrated.
Sensitivity of charge transport measurements to local inhomogeneities
NASA Astrophysics Data System (ADS)
Koon, Daniel; Wang, Fei; Hjorth Petersen, Dirch; Hansen, Ole
2012-02-01
We derive analytic expressions for the sensitivity of resistive and Hall measurements to local variations in a specimen's material properties in the combined linear limit of both small magnetic fields and small perturbations, presenting exact, algebraic expressions both for four-point probe measurements on an infinite plane and for symmetric, circular van der Pauw discs. We then generalize the results to obtain corrections to the sensitivities both for finite magnetic fields and for finite perturbations. Calculated functions match published results and computer simulations, provide an intuitive, visual explanation for the experimental misassignment of carrier type in n-type ZnO, and agree with published experimental results for holes in a uniform material. These results simplify calculation and plotting of the sensitivities on an NxN grid from a problem of order N^5 to one of order N^3 in the arbitrary case, and of order N^2 in the handful of cases that can be solved exactly, putting a powerful tool for inhomogeneity analysis in the hands of the researcher: calculation of the sensitivities requires little more than the solution of Laplace's equation on the specimen geometry.
Study of constraints in using household NaCl salt for retrospective dosimetry
NASA Astrophysics Data System (ADS)
Elashmawy, M.
2018-05-01
Thermoluminescence (TL) characteristics of 5 different household NaCl salts and one analytical salt were determined to investigate the possible factors that affect the reliability of using household salt for retrospective dosimetry. The salts' TL sensitivities were found to be particle-size dependent and approached saturation at the largest size, whereas for salts of the same particle size, the TL sensitivity depended on their origin. This dependence of TL on particle size explains the significant variations in TL response reported in the literature for the same salt batch. The first TL readout indicated that all salts have similar glow curves with one distinctive peak. A typical second TL readout at two different doses showed a dramatic decrease in TL sensitivity associated with a significant change in the glow curve structure, which possessed two prominent peaks. Glow curve deconvolution (GCD) of the first TL readout for all salts yielded 6 individual glow peaks of first-order kinetics, whereas GCD of the second TL readouts yielded 5 individual glow peaks of second-order kinetics. Similarities in the glow curve structures of the first and second TL readouts suggest that additives such as KIO3 and MgCO3 have no effect on the TL process. The fading effect was evaluated for the salt of highest TL sensitivity; the integral TL intensity decreased gradually, lost 40% of its initial value over 2 weeks, and thereafter remained constant. The results indicate that a household salt cannot be used for retrospective dosimetry without considering certain constraints such as the salt's origin and particle size. Furthermore, preparedness for radiological accidents and accurate dose reconstruction require that most commonly distributed household salt brands be calibrated in advance and the calibrations stored in a repository to be recalled in case of accident.
Assessing School Work Culture: A Higher-Order Analysis and Strategy.
ERIC Educational Resources Information Center
Johnson, William L.; Johnson, Annabel M.; Zimmerman, Kurt J.
This paper reviews a work culture productivity model and reports the development of a work culture instrument based on the culture productivity model. Higher order principal components analysis was used to assess work culture, and a third-order factor analysis shows how the first-order factors group into higher-order factors. The school work…
Chernetsova, Elena S; Revelsky, Alexander I; Morlock, Gertrud E
2011-08-30
The present study is a first step towards the unexplored capabilities of Direct Analysis in Real Time (DART) mass spectrometry (MS) arising from the possibility of desorption at an angle: scanning analysis of surfaces, including the coupling of thin-layer chromatography (TLC) with DART-MS, and a more sensitive analysis due to the preliminary concentration of analytes dissolved in large volumes of liquids on glass surfaces. In order to select the most favorable conditions for DART-MS analysis, proper positioning of samples is important. Therefore, a simple and cheap technique for the visualization of the impact region of the DART gas stream onto a substrate was developed. A filter paper or TLC plate, previously loaded with the analyte, was immersed in a derivatization solution. On this substrate, owing to the impact of the hot DART gas, reaction of the analyte to a colored product occurred. An improved capability of detection of DART-MS for the analysis of liquids was demonstrated by applying large volumes of model solutions of coumaphos into small glass vessels and drying these solutions prior to DART-MS analysis under ambient conditions. This allowed quantities of analyte greater by up to more than two orders of magnitude to be introduced, compared with the conventional DART-MS analysis of liquids. Through this improved detectability, the capabilities of DART-MS in trace analysis could be strengthened. Copyright © 2011 John Wiley & Sons, Ltd.
Examination of directed flow as a signal for a phase transition in relativistic nuclear collisions
NASA Astrophysics Data System (ADS)
Steinheimer, J.; Auvinen, J.; Petersen, H.; Bleicher, M.; Stöcker, H.
2014-05-01
The sign change of the slope of the directed flow of baryons has been predicted as a signal for a first order phase transition within fluid dynamical calculations. Recently, the directed flow of identified particles was measured by the STAR Collaboration in the beam energy scan program. In this article, we examine the collision energy dependence of directed flow v1 in fluid dynamical model descriptions of heavy ion collisions for √s_NN = 3-20 GeV. The first step is to reproduce the existing predictions within pure fluid dynamical calculations. As a second step we investigate the influence of the order of the phase transition on the anisotropic flow within a state-of-the-art hybrid approach that describes other global observables reasonably well. We find that, in the hybrid approach, there seems to be no sensitivity of the directed flow to the equation of state and in particular to the existence of a first order phase transition. In addition, we explore more subtle sensitivities such as the Cooper-Frye transition criterion and discuss how momentum conservation and the definition of the event plane affect the results. At this point, none of our calculations qualitatively matches the behavior of the STAR data; the values of the slopes are always larger than in the data.
Support for Online Calibration in the ALICE HLT Framework
NASA Astrophysics Data System (ADS)
Krzewicki, Mikolaj; Rohr, David; Zampolli, Chiara; Wiechula, Jens; Gorbunov, Sergey; Chauvin, Alex; Vorobyev, Ivan; Weber, Steffen; Schweda, Kai; Shahoyan, Ruben; Lindenstruth, Volker;
2017-10-01
The ALICE detector employs subdetectors sensitive to environmental conditions such as pressure and temperature, e.g. the time projection chamber (TPC). A precise reconstruction of particle trajectories requires precise calibration of these detectors. Performing the calibration in real time in the HLT improves the online reconstruction and potentially renders certain offline calibration steps obsolete, speeding up offline physics analysis. For LHC Run 3, starting in 2020 when data reduction will rely on reconstructed data, online calibration becomes a necessity. In order to run the calibration online, the HLT now supports the processing of tasks that typically run offline. These tasks run massively in parallel on all HLT compute nodes and their output is gathered and merged periodically. The calibration results are both stored offline for later use and fed back into the HLT chain via a feedback loop in order to apply calibration information to the online track reconstruction. Online calibration and the feedback loop are subject to certain time constraints in order to provide up-to-date calibration information, and they must not interfere with ALICE data taking. Our approach of running these tasks in asynchronous processes separates them from normal data taking and makes the scheme failure resilient. We performed a first test of online TPC drift time calibration under real conditions during the heavy-ion run in December 2015. We present an analysis and conclusions of this first test, new improvements and developments based on it, as well as our current scheme to commission this for production use.
Validation and Sensitivity Analysis of a New Atmosphere-Soil-Vegetation Model.
NASA Astrophysics Data System (ADS)
Nagai, Haruyasu
2002-02-01
This paper describes details, validation, and sensitivity analysis of a new atmosphere-soil-vegetation model. The model consists of one-dimensional multilayer submodels for atmosphere, soil, and vegetation and radiation schemes for the transmission of solar and longwave radiation in the canopy. The atmosphere submodel solves prognostic equations for horizontal wind components, potential temperature, specific humidity, fog water, and turbulence statistics by using a second-order closure model. The soil submodel calculates the transport of heat, liquid water, and water vapor. The vegetation submodel evaluates the heat and water budget on the leaf surface and the downward liquid water flux. The model performance was tested by using measured data of the Cooperative Atmosphere-Surface Exchange Study (CASES). Calculated ground surface fluxes were mainly compared with observations at a winter wheat field, with respect to the diurnal variation and the changes over the 32 days of the first CASES field program in 1997 (CASES-97). The measured surface fluxes did not satisfy the energy balance, so sensible and latent heat fluxes obtained by the eddy correlation method were corrected. By using options of the solar radiation scheme, which addresses the effect of the direct solar radiation component, calculated albedo agreed well with the observations. Some sensitivity analyses were also done for model settings. Model calculations of surface fluxes and surface temperature were in good agreement with measurements as a whole.
Synchronization analysis of voltage-sensitive dye imaging during focal seizures in the rat neocortex
NASA Astrophysics Data System (ADS)
Takeshita, Daisuke; Bahar, Sonya
2011-12-01
Seizures are often assumed to result from an excess of synchronized neural activity. However, various recent studies have suggested that this is not necessarily the case. We investigate synchronization during focal neocortical seizures induced by injection of 4-aminopyridine (4AP) in the rat neocortex in vivo. Neocortical activity is monitored by field potential recording and by the fluorescence of the voltage-sensitive dye RH-1691. After removal of artifacts, the voltage-sensitive dye (VSD) signal is analyzed using the nonlinear dynamics-based technique of stochastic phase synchronization in order to determine the degree of synchronization within the neocortex during the development and spread of each seizure event. Results show a large, statistically significant increase in synchronization during seizure activity. Synchrony is typically greater between closer pixel pairs during a seizure event; the entire seizure region is synchronized almost exactly in phase. This study represents, to our knowledge, the first application of synchronization analysis methods to mammalian VSD imaging in vivo. Our observations indicate a clear increase in synchronization in this model of focal neocortical seizures across a large area of the neocortex; a sharp increase in synchronization during seizure events was observed in all 37 seizures imaged. The results are consistent with a recent computational study which simulates the effect of 4AP in a neocortical neuron model.
Polarization rotation enhancement and scattering mechanisms in waveguide magnetophotonic crystals
NASA Astrophysics Data System (ADS)
Levy, Miguel; Li, Rong
2006-09-01
Intermodal coupling in photonic band gap optical channels in magnetic garnet films is found to leverage the nonreciprocal polarization rotation. Forward fundamental-mode to high-order mode backscattering yields the largest rotations. The underlying mechanism is traced to the dependence of the grating-coupling constant on the modal refractive index and profile of the propagating beam. Large changes in polarization near the band edges are observed in first and second orders. Extreme sensitivity to linear birefringence exists in second order.
Approximate techniques of structural reanalysis
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1974-01-01
A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques, for a wide range of variations in the design variables.
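A minimal sketch of the modified technique on a toy 2x2 system: the first-order Taylor estimate seeds a stationary iteration that reuses only the already-factored baseline stiffness. The matrices below are invented for illustration and are not from the paper.

```python
def mat_vec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def solve2(A, b):
    """Direct 2x2 solve (stands in for the factored baseline stiffness)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def reanalyze(K0, K1, f, d, n_iter=5):
    """Approximate the solution of (K0 + d*K1) u = f.

    Step 1: first-order Taylor estimate u ~ u0 + d * du/dd, where the
            sensitivity satisfies K0 * (du/dd) = -K1 * u0.
    Step 2: refine with the stationary iteration u <- K0^{-1} (f - d*K1*u),
            which reuses only the baseline operator K0.
    """
    u0 = solve2(K0, f)                       # baseline analysis
    du = solve2(K0, mat_vec(K1, u0))         # first-order sensitivity
    u = [u0[i] - d * du[i] for i in range(2)]
    for _ in range(n_iter):
        k1u = mat_vec(K1, u)
        u = solve2(K0, [f[i] - d * k1u[i] for i in range(2)])
    return u
```

Starting the iteration from the Taylor estimate rather than from u0 is exactly the proposed modification: it cuts the initial error, so even a single refinement cycle lands close to the exact modified-design solution.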
Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
The Higgs vacuum uplifted: revisiting the electroweak phase transition with a second Higgs doublet
NASA Astrophysics Data System (ADS)
Dorsch, G. C.; Huber, S. J.; Mimasu, K.; No, J. M.
2017-12-01
The existence of a second Higgs doublet in Nature could lead to a cosmological first order electroweak phase transition and explain the origin of the matter-antimatter asymmetry in the Universe. We explore the parameter space of such a two-Higgs-doublet model and show that a first order electroweak phase transition strongly correlates with a significant uplifting of the Higgs vacuum with respect to its Standard Model value. We then obtain the spectrum and properties of the new scalars H 0, A 0 and H ± that signal such a phase transition, showing that the decay A 0 → H 0 Z at the LHC and a sizable deviation in the Higgs self-coupling λ hhh from its SM value are sensitive indicators of a strongly first order electroweak phase transition in the 2HDM.
NASA Technical Reports Server (NTRS)
Blasche, P. R.
1980-01-01
Specific configurations of first and second order all-digital phase locked loops are analyzed for both ideal and additive white Gaussian noise inputs. In addition, a design for a hardware digital phase locked loop capable of either first or second order operation is presented, along with appropriate experimental data obtained from testing of the hardware loop. All parameters chosen for the analysis and the design of the digital phase locked loop are consistent with an application to an Omega navigation receiver, although neither the analysis nor the design is limited to this application.
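A first-order all-digital PLL can be sketched in a few lines: with loop gain K and an input frequency offset ω per sample, the phase error of a first-order loop converges to the well-known steady-state value ω/K, whereas a second-order loop adds an integrator that drives this residual error to zero. The parameter values below are arbitrary, not those of the Omega receiver design.

```python
import math

def wrap(phi):
    """Wrap a phase difference into [-pi, pi)."""
    return (phi + math.pi) % (2 * math.pi) - math.pi

def first_order_dpll(omega=0.05, K=0.2, n_steps=500):
    """Track an input phase ramp theta_in(n) = omega*n with a first-order
    loop: the NCO phase is corrected by K times the detected phase error."""
    theta_hat = 0.0
    errors = []
    for n in range(n_steps):
        e = wrap(omega * n - theta_hat)   # phase detector
        theta_hat += K * e                # loop filter + NCO update
        errors.append(e)
    return errors
```

The error recursion is e(n+1) = (1-K) e(n) + ω, so the loop is stable for 0 < K < 2 and settles at e = ω/K (here 0.25 rad), the classic first-order tracking bias.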
Material and morphology parameter sensitivity analysis in particulate composite materials
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyu; Oskay, Caglar
2017-12-01
This manuscript presents a novel parameter sensitivity analysis framework for damage and failure modeling of particulate composite materials subjected to dynamic loading. The proposed framework employs global sensitivity analysis to study the variance in the failure response as a function of model parameters. In view of the computational complexity of performing thousands of detailed microstructural simulations to characterize sensitivities, Gaussian process (GP) surrogate modeling is incorporated into the framework. In order to capture the discontinuity in response surfaces, the GP models are integrated with a support vector machine classification algorithm that identifies the discontinuities within response surfaces. The proposed framework is employed to quantify the sensitivity of the failure response of polymer-bonded particulate energetic materials under dynamic loads to the material properties and morphological parameters that define the material microstructure. Particular emphasis is placed on the identification of sensitivity to interfaces between the polymer binder and the energetic particles. The proposed framework has been demonstrated to identify the most consequential material and morphological parameters under vibrational and impact loads.
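The variance-based global sensitivity analysis underlying such a framework can be illustrated with a minimal first-order Sobol index estimator. This is a plain Monte Carlo pick-freeze sketch, not the paper's GP-accelerated version, and the linear test model in the usage below is hypothetical.

```python
import random

def sobol_first_order(model, dim, n=20000, seed=42):
    """Estimate first-order Sobol indices S_i = Var(E[y|x_i]) / Var(y)
    with the pick-freeze estimator on independent uniform(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(x) for x in A]
    f0 = sum(yA) / n
    var_y = sum((y - f0) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # C_i: coordinate i frozen from A, all others resampled from B
        yC = [model([A[j][k] if k == i else B[j][k] for k in range(dim)])
              for j in range(n)]
        cov = sum(yA[j] * yC[j] for j in range(n)) / n - f0 ** 2
        indices.append(cov / var_y)
    return indices
```

For the hypothetical model y = x1 + 2*x2 the analytic indices are 0.2 and 0.8; the surrogate-based framework replaces the expensive microstructural model inside this loop with the trained GP.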
Khan, Farman U; Qamar, Shamsul
2017-05-01
A set of analytical solutions is presented for a model describing the transport of a solute in a fixed-bed reactor of cylindrical geometry subject to first-type (Dirichlet) and third-type (Danckwerts) inlet boundary conditions. A linear sorption kinetic process and first-order decay are considered. Cylindrical geometry allows the use of large columns to investigate dispersion, adsorption/desorption and reaction kinetic mechanisms. The finite Hankel and Laplace transform techniques are adopted to solve the model equations. For further analysis, statistical temporal moments are derived from the Laplace-transformed solutions. The developed analytical solutions are compared with the numerical solutions of a high-resolution finite volume scheme. Different case studies are presented and discussed for a series of numerical values corresponding to a wide range of mass transfer and reaction kinetics. Good agreement was observed between the analytical and numerical concentration profiles and moments. The developed solutions are efficient tools for analyzing numerical algorithms, sensitivity analysis and simultaneous determination of the longitudinal and transverse dispersion coefficients from a laboratory-scale radial column experiment. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
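Statistical temporal moments of the kind derived from the Laplace-transformed solutions can also be computed directly from a breakthrough curve. A minimal sketch, using a hypothetical curve C(t) = t·e^(-t) (not one of the paper's solutions), whose zeroth moment is 1 and whose mean residence time is 2:

```python
import math

def temporal_moments(times, conc):
    """Zeroth moment m0 = int C dt and normalized first moment
    (mean residence time) int t*C dt / m0, by the trapezoid rule."""
    m0 = m1 = 0.0
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        m0 += 0.5 * (conc[i] + conc[i + 1]) * dt
        m1 += 0.5 * (times[i] * conc[i] + times[i + 1] * conc[i + 1]) * dt
    return m0, m1 / m0
```

In moment analysis the zeroth moment tracks mass recovery (and hence first-order decay), while the first moment yields retardation and velocity; higher central moments give the dispersion coefficients.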
Abstract Interpreters for Free
NASA Astrophysics Data System (ADS)
Might, Matthew
In small-step abstract interpretations, the concrete and abstract semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step re-factors the concrete state-space to eliminate recursive structure; this refactoring of the state-space simultaneously determines a store-passing-style transformation on the underlying concrete semantics. The second step uses inference rules to generate an abstract state-space and a Galois connection simultaneously. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field- and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of analyses. The second consists of novel "environment-centric" abstract interpretations, none of which appear in the literature on static analysis of higher-order programs.
Zhao, Ming; Lin, Jing; Xu, Xiaoqiang; Li, Xuejun
2014-01-01
When operating under harsh conditions (e.g., time-varying speed and load, large shocks), the vibration signals of rolling element bearings exhibit low signal-to-noise ratios and non-stationary statistical parameters, which cause difficulties for current diagnostic methods. As such, an IMF-based adaptive envelope order analysis (IMF-AEOA) is proposed for bearing fault detection under such conditions. This approach is established through combining ensemble empirical mode decomposition (EEMD), envelope order tracking and fault sensitive analysis. In this scheme, EEMD provides an effective way to adaptively decompose the raw vibration signal into IMFs with different frequency bands. Envelope order tracking is further employed to transform the envelope of each IMF to the angular domain to eliminate the spectral smearing induced by speed variation, which makes the bearing characteristic frequencies clearer and more discernible in the envelope order spectrum. Finally, a fault sensitive matrix is established to select the optimal IMF containing the richest diagnostic information for final decision making. The effectiveness of IMF-AEOA is validated by simulated signals and experimental data from locomotive bearings. The result shows that IMF-AEOA could accurately identify both single and multiple faults of bearings even under time-varying rotating speed and large extraneous shocks. PMID:25353982
Impact of Soil Moisture Initialization on Seasonal Weather Prediction
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Suarez, Max J.; Houser, Paul (Technical Monitor)
2002-01-01
The potential role of soil moisture initialization in seasonal forecasting is illustrated through ensembles of simulations with the NASA Seasonal-to-Interannual Prediction Project (NSIPP) model. For each boreal summer during 1997-2001, we generated two 16-member ensembles of 3-month simulations. The first, "AMIP-style" ensemble establishes the degree to which a perfect prediction of SSTs would contribute to the seasonal prediction of precipitation and temperature over continents. The second ensemble is identical to the first, except that the land surface is also initialized with "realistic" soil moisture contents through the continuous prior application (within GCM simulations leading up to the start of the forecast period) of a daily observational precipitation data set and the associated avoidance of model drift through the scaling of all surface prognostic variables. A comparison of the two ensembles shows that soil moisture initialization has a statistically significant impact on summertime precipitation and temperature over only a handful of continental regions. These regions agree, to first order, with regions that satisfy three conditions: (1) a tendency toward large initial soil moisture anomalies, (2) a strong sensitivity of evaporation to soil moisture, and (3) a strong sensitivity of precipitation to evaporation. The degree to which the initialization improves forecasts relative to observations is mixed, reflecting a critical need for the continued development of model parameterizations and data analysis strategies.
New Swift UVOT data reduction tools and AGN variability studies
NASA Astrophysics Data System (ADS)
Gelbord, Jonathan; Edelson, Rick
2017-08-01
The efficient slewing and flexible scheduling of the Swift observatory have made it possible to conduct monitoring campaigns that are both intensive and prolonged, with multiple visits per day sustained over weeks and months. Recent Swift monitoring campaigns of a handful of AGN provide simultaneous optical, UV and X-ray light curves that can be used to measure variability and interband correlations on timescales from hours to months, providing new constraints on the structures within AGN and the relationships between them. However, the first of these campaigns, thrice-per-day observations of NGC 5548 through four months, revealed anomalous dropouts in the UVOT light curves (Edelson, Gelbord, et al. 2015). We identified the cause as localized regions of reduced detector sensitivity that are not corrected by standard processing. Properly interpreting the light curves required identifying and screening out the affected measurements. We are now using archival Swift data to better characterize these low sensitivity regions. Our immediate goal is to produce a more complete mapping of their locations so that affected measurements can be identified and screened before further analysis. Our longer-term goal is to build a more quantitative model of the effect in order to define a correction for measured fluxes, if possible, or at least to put limits on the impact upon any observation. We will combine data from numerous background stars in well-monitored fields in order to quantify the strength of the effect as a function of filter as well as location on the detector, and to test for other dependencies such as evolution over time or sensitivity to the count rate of the target. Our UVOT sensitivity maps and any correction tools will be provided to the community of Swift users.
NASA Astrophysics Data System (ADS)
Kar, R. C.; Sujata, T.
1992-04-01
Simple and combination resonances of a rotating cantilever beam with an end mass subjected to a transverse follower parametric excitation have been studied. The method of multiple scales is used to obtain the resonance zones of the first and second order for various values of the system parameters. It is concluded that first order combination resonances of sum- and difference-type are predominant. Higher tip mass and inertia parameters may either stabilize or destabilize the system. The increase of rotational speed, hub radius, and warping rigidity makes the beam less sensitive to periodic forces.
Determination of triclosan in antiperspirant gels by first-order derivative spectrophotometry.
Du, Lina; Li, Miao; Jin, Yiguang
2011-10-01
A first-order derivative UV spectrophotometric method was developed to determine triclosan, a broad-spectrum antimicrobial agent, in health care products containing fragrances, which could interfere with the determination as impurities. Different extraction methods were compared. Triclosan was extracted with chloroform and diluted with ethanol, followed by the derivative spectrophotometric measurement. The interference of fragrances was completely eliminated. The calibration graph was found to be linear in the range of 7.5-45 μg mL(-1). The method is simple, rapid, sensitive and suitable for determining triclosan in fragrance-containing health care products.
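A minimal sketch of the first-order derivative principle: a Gaussian analyte band sits on a constant interfering background, and differentiating the spectrum removes the background exactly while the peak-to-trough derivative amplitude stays proportional to concentration. The wavelength grid and band parameters below are invented for illustration, not taken from the paper.

```python
import math

def spectrum(wavelengths, conc, offset):
    """Hypothetical Gaussian analyte band (center 281 nm, width 5 nm)
    on a constant interfering background of height `offset`."""
    return [offset + conc * math.exp(-((w - 281.0) / 5.0) ** 2)
            for w in wavelengths]

def first_derivative(ys, dw):
    """Central-difference first derivative dA/dλ."""
    return [(ys[i + 1] - ys[i - 1]) / (2 * dw) for i in range(1, len(ys) - 1)]

def d1_amplitude(conc, offset=0.0):
    """Peak-to-trough amplitude of the first-derivative spectrum,
    used as the analytical signal in derivative spectrophotometry."""
    wl = [250.0 + 0.5 * i for i in range(201)]   # 250-350 nm grid
    d1 = first_derivative(spectrum(wl, conc, offset), 0.5)
    return max(d1) - min(d1)
```

Any constant (or slowly varying) background contributes nothing to the derivative, which is why the fragrance interference drops out while the calibration against concentration remains linear.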
Application of optimal control strategies to HIV-malaria co-infection dynamics
NASA Astrophysics Data System (ADS)
Fatmawati; Windarto; Hanif, Lathifah
2018-03-01
This paper presents a mathematical model of HIV and malaria co-infection transmission dynamics. Optimal control strategies such as malaria prevention, anti-malaria treatment and antiretroviral (ARV) treatment are incorporated into the model to reduce the co-infection. First, we studied the existence and stability of equilibria of the presented model without control variables. The model has four equilibria, namely the disease-free equilibrium, the HIV endemic equilibrium, the malaria endemic equilibrium, and the co-infection equilibrium. We also obtain two basic reproduction ratios corresponding to the diseases. It was found that the disease-free equilibrium is locally asymptotically stable whenever the respective basic reproduction numbers are less than one. We also conducted a sensitivity analysis to determine the dominant factor controlling the transmission. Then, the optimal control theory for the model was derived analytically by using Pontryagin's Maximum Principle. Numerical simulations of the optimal control strategies are also performed to illustrate the results. From the numerical results, we conclude that the best strategy is to combine the malaria prevention and ARV treatments in order to reduce the malaria and HIV co-infection populations.
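Sensitivity analyses of this kind are commonly expressed as normalized forward sensitivity indices, Υ_p = (∂R0/∂p)(p/R0). A sketch on a hypothetical R0 = β/(μ+γ) (a simple SIR-type expression, not the paper's co-infection model; parameter values are illustrative):

```python
def R0(beta, mu, gamma):
    """Hypothetical basic reproduction number of an SIR-type model."""
    return beta / (mu + gamma)

def sensitivity_index(f, params, name, h=1e-6):
    """Normalized forward sensitivity index (dR0/dp) * (p / R0),
    with the derivative taken by central differences."""
    p = params[name]
    up = dict(params); up[name] = p * (1 + h)
    dn = dict(params); dn[name] = p * (1 - h)
    dR = (f(**up) - f(**dn)) / (2 * p * h)
    return dR * p / f(**params)
```

The index gives the percent change in R0 per percent change in a parameter, so the parameter with the largest magnitude index is the dominant factor controlling transmission (here β, with index +1).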
Klink, Dennis; Schmitz, Oliver Johannes
2016-01-05
Atmospheric-pressure laser ionization mass spectrometry (APLI-MS) is a powerful method for the analysis of polycyclic aromatic hydrocarbon (PAH) molecules, which are ionized in a selective and highly sensitive way via resonance-enhanced multiphoton ionization. APLI was presented in 2005 and has been hyphenated successfully to chromatographic separation techniques like high performance liquid chromatography (HPLC) and gas chromatography (GC). In order to expand the portfolio of chromatographic couplings to APLI, the aim of this work was to construct a new hyphenation setup of APLI and supercritical-fluid chromatography (SFC). Here, we demonstrate the first hyphenation of SFC and APLI in a simply designed setup, with attention to the different optimization steps needed to ensure a sensitive analysis. The new setup permits qualitative and quantitative determination of native and also more polar PAH molecules. As a result of the altered ambient characteristics within the source enclosure, the quantification of 1-hydroxypyrene (1-HP) in human urine is possible without prior derivatization. The limit of detection for 1-HP by SFC-APLI-TOF(MS) was found to be 0.5 μg L(-1), which is lower than the 1-HP concentrations found in exposed persons.
Integrated model for pricing, delivery time setting, and scheduling in make-to-order environments
NASA Astrophysics Data System (ADS)
Garmdare, Hamid Sattari; Lotfi, M. M.; Honarvar, Mahboobeh
2018-03-01
In make-to-order environments, which produce only in response to customer orders, manufacturers seeking to maximize profit should offer the best price and delivery time for an order, considering the existing capacity and the customer's sensitivity to both factors. In this paper, an integrated approach for pricing, delivery time setting and scheduling of newly arriving orders is proposed, based on the existing capacity and the orders already accepted in the system. In the problem, the acquired market demand depends on the price and delivery time of both the manufacturer and its competitors. A mixed-integer non-linear programming model is presented for the problem. After converting it to a pure non-linear model, it is validated through a case study. The efficiency of the proposed model is confirmed by comparing it to both the literature and current practice. Finally, sensitivity analysis for the key parameters is carried out.
An Adaptive and Time-Efficient ECG R-Peak Detection Algorithm.
Qin, Qin; Li, Jianqing; Yue, Yinggao; Liu, Chengyu
2017-01-01
R-peak detection is crucial in electrocardiogram (ECG) signal analysis. This study proposed an adaptive and time-efficient R-peak detection algorithm for ECG processing. First, wavelet multiresolution analysis was applied to enhance the ECG signal representation. Then, ECG was mirrored to convert large negative R-peaks to positive ones. After that, local maximums were calculated by the first-order forward differential approach and were truncated by the amplitude and time interval thresholds to locate the R-peaks. The algorithm performances, including detection accuracy and time consumption, were tested on the MIT-BIH arrhythmia database and the QT database. Experimental results showed that the proposed algorithm achieved mean sensitivity of 99.39%, positive predictivity of 99.49%, and accuracy of 98.89% on the MIT-BIH arrhythmia database and 99.83%, 99.90%, and 99.73%, respectively, on the QT database. By processing one ECG record, the mean time consumptions were 0.872 s and 0.763 s for the MIT-BIH arrhythmia database and QT database, respectively, yielding 30.6% and 32.9% of time reduction compared to the traditional Pan-Tompkins method.
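The differencing-and-thresholding stage of the algorithm can be sketched as follows. This is a simplified stand-in: it omits the wavelet enhancement and mirroring steps, and the usage below runs on a synthetic spike train rather than real ECG; the threshold fractions are illustrative.

```python
def detect_r_peaks(sig, fs, amp_frac=0.5, refractory=0.2):
    """Locate candidate peaks where the first-order forward difference
    changes sign from + to -, then keep those passing an amplitude
    threshold and a minimum inter-peak (refractory) interval."""
    diff = [sig[i + 1] - sig[i] for i in range(len(sig) - 1)]
    cand = [i for i in range(1, len(diff)) if diff[i - 1] > 0 >= diff[i]]
    thr = amp_frac * max(sig[i] for i in cand)     # amplitude threshold
    min_gap = int(refractory * fs)                 # samples between beats
    peaks = []
    for i in cand:
        if sig[i] >= thr and (not peaks or i - peaks[-1] >= min_gap):
            peaks.append(i)
    return peaks
```

On a synthetic trace with three unit-height spikes and one small noise bump, the amplitude threshold rejects the bump while the refractory interval guards against double-counting a single QRS complex.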
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weismueller, P.H.; Henze, E.; Adam, W.E.
1986-01-01
In order to test the diagnostic potential of phase analysis of radionuclide ventriculography (RNV) for localizing accessory bundles in Wolff-Parkinson-White (WPW) syndrome, 24 experimental runs were performed in three open-chest instrumented dogs. After a baseline study, WPW syndrome was simulated by stimulation at seven different sites around the base of the ventricles, and RNVs were obtained. Subsequent data processing including Fourier transformation allowed the localization of the site of the first inward motion of the ventricles by an isophasic wave display. In sinus rhythm, the septum contracted first. During ectopic premature ventricular stimulation by triggering the atrial signal, the phase scan was altered only when the stimulus was applied earlier than 20 ms before the expected QRS complex during sinus rhythm. During stimulation with fixed frequency, only the left lateral positions of the premature stimulation were detected by phase analysis, with a sensitivity of 86%. Neither the antero- or posteroseptal nor the right ventricular premature contraction pattern could be exactly localized.
Mujawar, Sumaiyya; Utture, Sagar C; Fonseca, Eddie; Matarrita, Jessie; Banerjee, Kaushik
2014-05-01
A sensitive and rugged residue analysis method was validated for the estimation of dithiocarbamate fungicides in a variety of fruit and vegetable matrices. The sample preparation method involved reaction of dithiocarbamates with tin(II) chloride in aqueous HCl. The CS2 produced was absorbed into an isooctane layer and estimated by GC-MS selected ion monitoring. The limit of quantification (LOQ) was ≤40 μg kg⁻¹ for grape, green chilli, tomato, potato, brinjal, pineapple and chayote, and the recoveries were within 75-104% (RSD < 15% at LOQ). The method could be satisfactorily applied for analysis of real-world samples. Dissipation of mancozeb, the most-used dithiocarbamate fungicide, in the field followed first + first order kinetics, with pre-harvest intervals of 2 and 4 days in brinjal, 7 and 10 days in grapes, and 0 days in chilli at single and double doses of agricultural application. Cooking practices were effective for removal of mancozeb residues from vegetables. Copyright © 2013 Elsevier Ltd. All rights reserved.
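First-order dissipation of the kind reported here is simple to sketch numerically: the residue decays as C(t) = C0·exp(-kt), and the pre-harvest interval is the time needed for the residue to fall to the maximum residue limit (MRL). All numbers below are illustrative, not taken from the study.

```python
import math

def days_to_reach(c0, mrl, k):
    """Days for a residue following first-order decay C(t) = C0*exp(-k*t)
    to fall to the maximum residue limit (MRL). Inputs are illustrative."""
    return math.log(c0 / mrl) / k

# Illustrative values: initial residue 2.0 mg/kg, MRL 0.5 mg/kg, half-life 3 days.
k = math.log(2) / 3.0            # rate constant from the half-life
phi = days_to_reach(2.0, 0.5, k)
print(round(phi, 1))  # two half-lives -> 6.0 days
```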
Quantitative methods to direct exploration based on hydrogeologic information
Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.
2006-01-01
Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
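The FOSM step described above reduces to one matrix expression: the output covariance is the input covariance propagated through the model's sensitivity (Jacobian) matrix. The numbers below are illustrative, not from the MODFLOW example.

```python
import numpy as np

def fosm_output_variance(jacobian, input_cov):
    """First-Order Second Moment: propagate input covariance through a
    model linearized by its sensitivity (Jacobian) matrix,
    Cov_out = J @ Cov_in @ J.T."""
    J = np.asarray(jacobian, dtype=float)
    return J @ np.asarray(input_cov, dtype=float) @ J.T

# Toy example: one output (piezometric head), two uncertain inputs
# (e.g. hydraulic conductivity at two zones). Values are illustrative.
J = np.array([[2.0, -1.0]])               # dh/dp sensitivities
cov_in = np.array([[0.25, 0.05],
                   [0.05, 0.16]])         # input covariance (correlated)
var_h = fosm_output_variance(J, cov_in)[0, 0]
print(round(var_h, 2))  # 0.96
```

In a QDE loop, one would recompute this output variance for each candidate sample location and pick the location whose input contributes most to it.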
Lessons Learned from the Frenchman Flat Flow and Transport Modeling External Peer Review
NASA Astrophysics Data System (ADS)
Becker, N. M.; Crowe, B. M.; Ruskauff, G.; Kwicklis, E. M.; Wilborn, B.
2011-12-01
The objective of the U.S. Department of Energy's Underground Test Area Program is to forecast, using computer modeling, the contaminant boundary within which radionuclide concentrations in groundwater at the Nevada National Security Site exceed Safe Drinking Water Act limits over 1,000 years. This objective is defined within the Federal Facilities Agreement and Consent Order between the Department of Energy, the Department of Defense, and the State of Nevada Division of Environmental Protection. At one of the Corrective Action Units, Frenchman Flat, a Phase I flow and transport model underwent peer review in 1999 to determine whether the model approach, assumptions, and results were adequate for use as a decision tool and as a basis to negotiate a compliance boundary with the Nevada Division of Environmental Protection. The external peer review concluded that the model had not been fully tested under a full suite of possible conceptual models, including boundary conditions, flow mechanisms, other transport processes, hydrological framework models, and sensitivity and uncertainty analysis. The program went back to collect more data, expanded the modeling to consider alternatives that had not been adequately tested, and conducted sensitivity and uncertainty analysis. A second external peer review was held in August 2010. It concluded that the new Frenchman Flat flow and transport modeling analyses were adequate as a decision tool and that the model was ready to advance to the next step in the Federal Facilities Agreement and Consent Order strategy. We discuss the changes to the modeling that occurred between the first and second peer reviews, and then present the second peer review's general comments. Finally, we present the lessons learned from the overall model acceptance process required for federal regulatory compliance.
Sensitivity analysis of non-cohesive sediment transport formulae
NASA Astrophysics Data System (ADS)
Pinto, Lígia; Fortunato, André B.; Freire, Paula
2006-10-01
Sand transport models are often based on semi-empirical equilibrium transport formulae that relate sediment fluxes to physical properties such as velocity, depth and characteristic sediment grain sizes. In engineering applications, errors in these physical properties affect the accuracy of the sediment fluxes. The present analysis quantifies error propagation from the input physical properties to the sediment fluxes, determines which ones control the final errors, and provides insight into the relative strengths, weaknesses and limitations of four total load formulae (Ackers and White, Engelund and Hansen, van Rijn, and Karim and Kennedy) and one bed load formulation (van Rijn). The various sources of uncertainty are first investigated individually, in order to pinpoint the key physical properties that control the errors. Since the strong non-linearity of most sand transport formulae precludes analytical approaches, a Monte Carlo method is validated and used in the analysis. Results show that the accuracy in total sediment transport evaluations is mainly determined by errors in the current velocity and in the sediment median grain size. For the bed load transport using the van Rijn formula, errors in the current velocity alone control the final accuracy. In a final set of tests, all physical properties are allowed to vary simultaneously in order to analyze the combined effect of errors. The combined effect of errors in all the physical properties is then compared to an estimate of the errors due to the intrinsic limitations of the formulae. Results show that errors in the physical properties can be dominant for typical uncertainties associated with these properties, particularly for small depths. A comparison between the various formulae reveals that the van Rijn formula is more sensitive to basic physical properties. Hence, it should only be used when physical properties are known with precision.
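A Monte Carlo propagation of input errors like the one validated in the paper can be sketched as follows. The transport relation here is a schematic power law (sediment flux growing with a high power of velocity), standing in for the actual formulae; all coefficients and error levels are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def transport(u, d50, a=0.05, n=5.0, m=1.0):
    """Schematic power-law stand-in for an equilibrium transport formula
    (not any of the paper's formulae): q_s = a * u**n / d50**m."""
    return a * u**n / d50**m

# Nominal inputs and assumed relative errors (illustrative values).
u0, d0 = 1.0, 2e-4                        # velocity [m/s], median grain [m]
N = 100_000
u = rng.normal(u0, 0.10 * u0, N)          # 10% error on velocity
d = rng.normal(d0, 0.10 * d0, N)          # 10% error on d50
q = transport(u, d)
rel_err = q.std() / q.mean()
print(round(rel_err, 2))  # roughly 5-6x the 10% input error, driven by u**5
```

The amplification by the velocity exponent is why errors in the current velocity dominate the final accuracy in such formulae.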
Passos, M H M; Lemos, M R; Almeida, S R; Balthazar, W F; da Silva, L; Huguenin, J A O
2017-01-10
In this work, we report on the analysis of speckle patterns produced by illuminating different rough surfaces with an optical vortex, a first-order (l=1) Laguerre-Gaussian beam. The generated speckle patterns were observed in the normal direction exploring four different planes: the diffraction plane, image plane, focal plane, and exact Fourier transform plane. The digital speckle patterns were analyzed using the Hurst exponent of digital images, an interesting tool used to study surface roughness. We show a proof of principle that the Hurst exponent of a digital speckle pattern is more sensitive with respect to the surface roughness when the speckle pattern is produced by an optical vortex and observed at the focal plane. We also show that Hurst exponents are not very sensitive with respect to the topological charge l. These results open new possibilities for investigation in speckle metrology, since several techniques use speckle patterns for different applications.
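The Hurst exponent itself is easy to estimate for a one-dimensional signal from the scaling of lagged differences. This simple estimator (not the paper's image-based analysis) recovers H ≈ 0.5 for an ordinary random walk; rougher signals give smaller H, smoother ones larger H.

```python
import numpy as np

def hurst_exponent(series, max_lag=50):
    """Estimate the Hurst exponent from the scaling of the standard
    deviation of lagged differences: std(x[t+lag] - x[t]) ~ lag**H.
    A simple 1-D estimator, not the paper's image-based method."""
    lags = np.arange(2, max_lag)
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    H, _ = np.polyfit(np.log(lags), np.log(tau), 1)  # slope of log-log fit
    return H

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=20_000))   # Brownian walk: H should be ~0.5
H = hurst_exponent(walk)
print(round(H, 1))  # ≈ 0.5 for a random walk
```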
Lee, Adria D; Cassiday, Pamela K; Pawloski, Lucia C; Tatti, Kathleen M; Martin, Monte D; Briere, Elizabeth C; Tondella, M Lucia; Martin, Stacey W
2018-01-01
The appropriate use of clinically accurate diagnostic tests is essential for the detection of pertussis, a poorly controlled vaccine-preventable disease. The purpose of this study was to estimate the sensitivity and specificity of different diagnostic criteria including culture, multi-target polymerase chain reaction (PCR), anti-pertussis toxin IgG (IgG-PT) serology, and the use of a clinical case definition. An additional objective was to describe the optimal timing of specimen collection for the various tests. Clinical specimens were collected from patients with cough illness at seven locations across the United States between 2007 and 2011. Nasopharyngeal and blood specimens were collected from each patient during the enrollment visit. Patients who had been coughing for ≤ 2 weeks were asked to return in 2-4 weeks for collection of a second, convalescent blood specimen. Sensitivity and specificity of each diagnostic test were estimated using three methods: pertussis culture as the "gold standard," composite reference standard analysis (CRS), and latent class analysis (LCA). Overall, 868 patients were enrolled and 13.6% were B. pertussis positive by at least one diagnostic test. In a sample of 545 participants with non-missing data on all four diagnostic criteria, culture was 64.0% sensitive, PCR was 90.6% sensitive, and both were 100% specific by LCA. CRS and LCA methods increased the sensitivity estimates for convalescent serology and the clinical case definition over the culture-based estimates. Culture and PCR were most sensitive when performed during the first two weeks of cough; serology was optimally sensitive after the second week of cough. Timing of specimen collection in relation to onset of illness should be considered when ordering diagnostic tests for pertussis. Consideration should be given to including IgG-PT serology as a confirmatory test in the Council of State and Territorial Epidemiologists (CSTE) case definition for pertussis.
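Estimating sensitivity and specificity against a gold standard, the first of the three methods above, is a direct count over paired results. The sketch below uses made-up results for ten patients, not the study's data.

```python
def sensitivity_specificity(test_results, gold_standard):
    """Sensitivity and specificity of a binary diagnostic test against a
    gold standard (e.g. culture), from paired True/False results."""
    pairs = list(zip(test_results, gold_standard))
    tp = sum(t and g for t, g in pairs)            # true positives
    fn = sum((not t) and g for t, g in pairs)      # false negatives
    tn = sum((not t) and (not g) for t, g in pairs)  # true negatives
    fp = sum(t and (not g) for t, g in pairs)      # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative data: hypothetical PCR vs culture results in 10 patients.
culture = [True, True, True, True, False, False, False, False, False, False]
pcr     = [True, True, True, False, False, False, False, False, True, False]
sens, spec = sensitivity_specificity(pcr, culture)
print(sens, round(spec, 3))  # 0.75 0.833
```

Methods such as CRS and LCA replace the single gold standard with a composite or latent reference, which is why they can raise sensitivity estimates above the culture-based ones.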
Sensitivity Analysis for some Water Pollution Problem
NASA Astrophysics Data System (ADS)
Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff
2014-05-01
Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observations appear only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a method to carry out sensitivity analysis in general. The method is demonstrated with an application to a water pollution problem. The model involves shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • Identification of unknown parameters, and • Identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.
Emergence of band-pass filtering through adaptive spiking in the owl's cochlear nucleus
MacLeod, Katrina M.; Lubejko, Susan T.; Steinberg, Louisa J.; Köppl, Christine; Peña, Jose L.
2014-01-01
In the visual, auditory, and electrosensory modalities, stimuli are defined by first- and second-order attributes. The fast time-pressure signal of a sound, a first-order attribute, is important, for instance, in sound localization and pitch perception, while its slow amplitude-modulated envelope, a second-order attribute, can be used for sound recognition. Ascending the auditory pathway from ear to midbrain, neurons increasingly show a preference for the envelope and are most sensitive to particular envelope modulation frequencies, a tuning considered important for encoding sound identity. The level at which this tuning property emerges along the pathway varies across species, and the mechanism of how this occurs is a matter of debate. In this paper, we target the transition between auditory nerve fibers and the cochlear nucleus angularis (NA). While the owl's auditory nerve fibers simultaneously encode the fast and slow attributes of a sound, one synapse further, NA neurons encode the envelope more efficiently than the auditory nerve. Using in vivo and in vitro electrophysiology and computational analysis, we show that a single-cell mechanism inducing spike threshold adaptation can explain the difference in neural filtering between the two areas. We show that spike threshold adaptation can explain the increased selectivity to modulation frequency, as input level increases in NA. These results demonstrate that a spike generation nonlinearity can modulate the tuning to second-order stimulus features, without invoking network or synaptic mechanisms. PMID:24790170
A B-spline Galerkin method for the Dirac equation
NASA Astrophysics Data System (ADS)
Froese Fischer, Charlotte; Zatsarinny, Oleg
2009-06-01
The B-spline Galerkin method is first investigated for the simple eigenvalue problem, y″ = -λ²y, that can also be written as a pair of first-order equations y′ = λz, z′ = -λy. Expanding both y(r) and z(r) in the B basis results in many spurious solutions such as those observed for the Dirac equation. However, when y(r) is expanded in the B basis and z(r) in the dB/dr basis, solutions of the well-behaved second-order differential equation are obtained. From this analysis, we propose a stable method, the (B, dB/dr) basis, for the Dirac equation and evaluate its accuracy by comparing the computed and exact R-matrix for a wide range of nuclear charges Z and angular quantum numbers κ. When splines of the same order are used, many spurious solutions are found, whereas none are found for splines of different order. Excellent agreement is obtained for the R-matrix and energies for bound states for low values of Z. For high Z, accuracy requires the use of a grid with many points near the nucleus. We demonstrate the accuracy of the bound-state wavefunctions by comparing integrals arising in hyperfine interaction matrix elements with exact analytic expressions. We also show that the Thomas-Reiche-Kuhn sum rule is not a good measure of the quality of the solutions obtained by the B-spline Galerkin method, whereas the R-matrix is very sensitive to the appearance of pseudo-states.
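The model problem y″ = -λ²y on [0, π] with y(0) = y(π) = 0 has exact eigenvalues λ = 1, 2, 3, …, which makes it a convenient benchmark. The sketch below discretizes the second-order form with plain finite differences (a stand-in for the B-spline Galerkin discretization, kept deliberately simple) and recovers the low eigenvalues cleanly, with no spurious solutions.

```python
import numpy as np

# Finite-difference stand-in for the well-behaved second-order form:
# -y'' = lam^2 * y on [0, pi], y(0) = y(pi) = 0, exact lam = 1, 2, 3, ...
n, L = 400, np.pi
h = L / (n + 1)
main = -2.0 * np.ones(n)
off = np.ones(n - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2  # d2/dx2
lam2 = -np.linalg.eigvalsh(A)          # eigenvalues of -d2/dx2 (positive)
lam = np.sqrt(np.sort(lam2))
print(np.round(lam[:3], 3))  # ≈ [1. 2. 3.]
```

The spurious-solution issue in the abstract arises only when the problem is instead discretized as the first-order pair with the same basis for y and z, which this second-order sketch avoids by construction.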
NASA Technical Reports Server (NTRS)
Nelson, C. C.; Nguyen, D. T.
1987-01-01
A new analysis procedure is presented which solves for the flow variables of an annular pressure seal in which the rotor has a large static displacement (eccentricity) from the centered position. The present paper incorporates the solutions to investigate the effect of eccentricity on the rotordynamic coefficients. The analysis begins with a set of governing equations based on a turbulent bulk-flow model and Moody's friction factor equation. Perturbation of the flow variables yields a set of zeroth- and first-order equations. After integration of the zeroth-order equations, the resulting zeroth-order flow variables are used as input in the solution of the first-order equations. Further integration of the first-order pressures yields the eccentric rotordynamic coefficients. The results from this procedure compare well with available experimental and theoretical data, with accuracy as good as or slightly better than predictions based on a finite-element model.
Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages.
Jadoul, Yannick; Ravignani, Andrea; Thompson, Bill; Filippi, Piera; de Boer, Bart
2016-01-01
Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures) temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure (regularities arising in an ordered series of syllable timings), testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively perceived temporal regularities and the absence of universally accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
NASA Technical Reports Server (NTRS)
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
NASA Technical Reports Server (NTRS)
Barth, Timothy; Charrier, Pierre; Mansour, Nagi N. (Technical Monitor)
2001-01-01
We consider the discontinuous Galerkin (DG) finite element discretization of first-order systems of conservation laws derivable as moments of the kinetic Boltzmann equation. This includes well-known conservation law systems such as the Euler equations. For the class of first-order nonlinear conservation laws equipped with an entropy extension, an energy analysis of the DG method for the Cauchy initial value problem is developed. Using this DG energy analysis, several new variants of existing numerical flux functions are derived and shown to be energy stable.
NASA Astrophysics Data System (ADS)
Lei, Zeyu; Zhou, Xin; Yang, Jie; He, Xiaolong; Wang, Yalin; Yang, Tian
2017-04-01
Integrating surface plasmon resonance (SPR) devices on single-mode fiber (SMF) end facets yields label-free biosensing systems with a dip-and-read configuration, high compatibility with fiber-optic techniques, and in vivo monitoring capability, although matching the performance of free-space counterparts remains a challenge. We report a second-order distributed feedback (DFB) SPR cavity on an SMF end facet and its application in protein interaction analysis. In our device, a periodic array of nanoslits in a gold film is used to couple fiber-guided lightwaves to surface plasmon polaritons (SPPs) with its first-order spatial Fourier component, while the second-order spatial Fourier component provides DFB to SPP propagation and produces an SPP bandgap. A phase shift section in the DFB structure introduces an SPR defect state within the SPP bandgap, whose mode profile is optimized to match that of the SMF to achieve a reasonable coupling efficiency. We report an experimental refractive index sensitivity of 628 nm RIU⁻¹, a figure of merit of 80 RIU⁻¹, and a limit of detection of 7 × 10⁻⁶ RIU. The measurement of the real-time interaction between human immunoglobulin G molecules and their antibodies is demonstrated.
A new order splitting model with stochastic lead times for deterioration items
NASA Astrophysics Data System (ADS)
Sazvar, Zeinab; Akbari Jokar, Mohammad Reza; Baboli, Armand
2014-09-01
In unreliable supply environments, the strategy of pooling lead time risks by splitting replenishment orders among multiple suppliers simultaneously is an attractive sourcing policy that has captured the attention of academic researchers and corporate managers alike. While various assumptions are considered in the models developed, researchers tend to overlook an important inventory category in order splitting models: deteriorating items. In this paper, we study an order splitting policy for a retailer that sells a deteriorating product. The inventory system is modelled as a continuous review system (s, Q) under stochastic lead time. Demand rate per unit time is assumed to be constant over an infinite planning horizon and shortages are backordered completely. We develop two inventory models. In the first model, it is assumed that all the requirements are supplied by only one source, whereas in the second, two suppliers are available. We use sensitivity analysis to determine the situations in which each sourcing policy is the most economic. We then study a real case from the European pharmaceutical industry to demonstrate the applicability and effectiveness of the proposed models. Finally, more promising directions are suggested for future research.
Christensen, Daniel; Zubrick, Stephen R; Lawrence, David; Mitrou, Francis; Taylor, Catherine L
2014-01-01
Receptive vocabulary development is a component of the human language system that emerges in the first year of life and is characterised by onward expansion throughout life. Beginning in infancy, children's receptive vocabulary knowledge builds the foundation for oral language and reading skills. The foundations for success at school are built early, hence the public health policy focus on reducing developmental inequalities before children start formal school. The underlying assumption is that children's development is stable, and therefore predictable, over time. This study investigated this assumption in relation to children's receptive vocabulary ability. We investigated the extent to which low receptive vocabulary ability at 4 years was associated with low receptive vocabulary ability at 8 years, and the predictive utility of a multivariate model that included child, maternal and family risk factors measured at 4 years. The study sample comprised 3,847 children from the first nationally representative Longitudinal Study of Australian Children (LSAC). Multivariate logistic regression was used to investigate risks for low receptive vocabulary ability from 4-8 years and sensitivity-specificity analysis was used to examine the predictive utility of the multivariate model. In the multivariate model, substantial risk factors for receptive vocabulary delay from 4-8 years, in order of descending magnitude, were low receptive vocabulary ability at 4 years, low maternal education, and low school readiness. Moderate risk factors, in order of descending magnitude, were low maternal parenting consistency, socio-economic area disadvantage, low temperamental persistence, and NESB status. The following risk factors were not significant: one or more siblings, low family income, not reading to the child, high maternal work hours, and Aboriginal or Torres Strait Islander ethnicity. The results of the sensitivity-specificity analysis showed that a well-fitted multivariate model featuring risks of substantive magnitude does not do particularly well in predicting low receptive vocabulary ability from 4-8 years.
Ground-state properties of 4He and 16O extrapolated from lattice QCD with pionless EFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Contessi, L.; Lovato, A.; Pederiva, F.
Here, we extend the prediction range of Pionless Effective Field Theory with an analysis of the ground state of 16O in leading order. To renormalize the theory, we use as input both experimental data and lattice QCD predictions of nuclear observables, which probe the sensitivity of nuclei to increased quark masses. The nuclear many-body Schrödinger equation is solved with the Auxiliary Field Diffusion Monte Carlo method. For the first time in a nuclear quantum Monte Carlo calculation, a linear optimization procedure, which allows us to devise an accurate trial wave function with a large number of variational parameters, is adopted. The method yields a binding energy of 4He which is in good agreement with experiment at physical pion mass and with lattice calculations at larger pion masses. At leading order we do not find any evidence of a 16O state which is stable against breakup into four 4He, although higher-order terms could bind 16O.
Zheng, Juan; Liang, Yeru; Liu, Shuqin; Jiang, Ruifen; Zhu, Fang; Wu, Dingcai; Ouyang, Gangfeng
2016-01-04
A combination of nitrogen-doped ordered mesoporous polymer (NOMP) and stainless steel wires led, for the first time, to highly sensitive, selective, and stable solid-phase microextraction (SPME) fibers prepared by in situ polymerization. The ordered structure of the synthesized NOMP coating was illustrated by transmission electron microscopy (TEM) and X-ray diffraction (XRD), and microscopy analysis by scanning electron microscopy (SEM) confirmed a homogeneous morphology of the NOMP-coated fiber. The NOMP-coated fiber was further applied to the extraction of organochlorine pesticides (OCPs) with the direct-immersion solid-phase microextraction (DI-SPME) method followed by gas chromatography-mass spectrometry (GC-MS) quantification. Under the optimized conditions, low detection limits (0.023-0.77 ng L⁻¹), a wide linear range (9-1500 ng L⁻¹), good repeatability (3.5-8.1%, n=6) and excellent reproducibility (1.5-8.3%, n=3) were achieved. Moreover, the practical feasibility of the proposed method was evaluated by determining OCPs in environmental water samples with satisfactory recoveries. Copyright © 2015 Elsevier B.V. All rights reserved.
Perturbations of the seismic reflectivity of a fluid-saturated depth-dependent poroelastic medium.
de Barros, Louis; Dietrich, Michel
2008-03-01
Analytical formulas are derived to compute the first-order effects produced by plane inhomogeneities on the point source seismic response of a fluid-filled stratified porous medium. The derivation is achieved by a perturbation analysis of the poroelastic wave equations in the plane-wave domain using the Born approximation. This approach yields the Frechet derivatives of the P-SV- and SH-wave responses in terms of the Green's functions of the unperturbed medium. The accuracy and stability of the derived operators are checked by comparing, in the time-distance domain, differential seismograms computed from these analytical expressions with complete solutions obtained by introducing discrete perturbations into the model properties. For vertical and horizontal point forces, it is found that the Frechet derivative approach is remarkably accurate for small and localized perturbations of the medium properties which are consistent with the Born approximation requirements. Furthermore, the first-order formulation appears to be stable at all source-receiver offsets. The porosity, consolidation parameter, solid density, and mineral shear modulus emerge as the most sensitive parameters in forward and inverse modeling problems. Finally, the amplitude-versus-angle response of a thin layer shows strong coupling effects between several model parameters.
Study on the method of maintaining bathtub water temperature
NASA Astrophysics Data System (ADS)
Wang, Xiaoyan
2017-05-01
To keep the bath water temperature constant while minimizing spillage, we established a space-time optimization model for a person in a bathtub, based on the theories of heat convection and heat conduction and a three-dimensional second-order equation, using the finite element method and grid transformation. For the first question, we derived the partial differential equations for three-dimensional heat convection and, using the initial and boundary conditions, built an optimized temperature model in time and space. For the second question, building on the first, we simulated the shape and volume of the tub and the posture of the person in it. Regarding shape and volume, we conclude that a tub with a smaller surface area retains water at a higher temperature; bathtub designs can therefore reduce surface area to lessen heat loss. For different bathing postures, we found no obvious influence on the variation of water temperature. Finally, we performed simulation calculations and analyzed the precision and sensitivity of the model.
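The heat-transfer core of such a model can be sketched in one dimension. The sketch below is a drastic simplification of the paper's 3D finite element model: an explicit finite-difference scheme for a water column that conducts heat internally and loses heat at the free surface by Newton cooling. All parameter values are assumed for illustration.

```python
alpha = 1.43e-7          # thermal diffusivity of water, m^2/s
L, N = 0.5, 11           # water depth (m) and number of grid points (assumed)
dx = L / (N - 1)
dt = 60.0                # time step, s; satisfies dt < dx**2 / (2 * alpha)
T_air, T0 = 25.0, 40.0   # room and initial bath temperature, deg C (assumed)
h = 2e-4                 # lumped surface heat-loss rate, 1/s (assumed)

T = [T0] * N             # T[0] is the free surface, T[-1] the tub bottom
for _ in range(240):     # simulate 4 hours
    Tn = T[:]
    for i in range(1, N - 1):   # interior nodes: heat conduction
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
    Tn[0] = T[0] - h * dt * (T[0] - T_air)   # surface node: Newton cooling
    Tn[-1] = Tn[-2]                           # insulated bottom boundary
    T = Tn
```

Since the loss term scales with the exposed surface, shrinking the free-surface area (smaller effective h) slows the cooling, consistent with the abstract's design conclusion.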
NASA Technical Reports Server (NTRS)
Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.
2000-01-01
First-order approximation and model management is a methodology for the systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depend on the ability of the available lower-fidelity model, or a suite of models, to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable physical fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code, FUN2D, evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
Strous, Rael D; Bar, Faina; Keret, Noa; Lapidus, Raya; Kosov, Nikolai; Chelben, Joseph; Kotler, Moshe
2006-01-01
Investigation of the clinical presentation and treatment of first-episode psychosis is important in order to exclude effects of age, chronic illness, long-term treatment and institutionalization. The aim of this descriptive study was to investigate the management practices of first-episode schizophrenia in a cohort of patients in Israel and to document use of the various "typical" or "atypical" antipsychotic agents. Fifty-one consecutive patients (26 M, 25 F) with first-episode psychosis were recruited for study participation and were administered either typical or atypical antipsychotic medications in a naturalistic manner. While an approximately equal number of subjects received typical and atypical medications at illness onset, a prominent shift to atypical antipsychotic treatment occurred over the study course; 18 subjects had medication class shifts: 17 from typical to atypical, and one from atypical to typical. Negative symptoms did not affect length of hospitalization, but were associated with aggression. Higher depression rates were noted in patients with long hospitalizations who received typical antipsychotic medications. Immigrants were admitted at an age approximately four years older than native-born Israelis. The prominent shift from "typical" to "atypical" antipsychotic medications may indicate sensitivity of first-episode psychotic patients to side-effects of "typical" medications and prominence of use of atypical medications in this patient subpopulation be it due to improved efficacy over time or successful marketing. Unique cultural and population characteristics may contribute to the manifestation of first-episode psychosis and suggest the importance of more effective outreach to the immigrant population in order to manage an apparent treatment delay.
Probing CP violation in $$h\\rightarrow\\gamma\\gamma$$ with converted photons
Bishara, Fady; Grossman, Yuval; Harnik, Roni; ...
2014-04-11
We study Higgs diphoton decays, in which both photons undergo nuclear conversion to electron-positron pairs. The kinematic distribution of the two electron-positron pairs may be used to probe the CP violating (CPV) coupling of the Higgs to photons, which may be produced by new physics. Detecting CPV in this manner requires interference between the spin-polarized helicity amplitudes for both conversions. We derive leading order, analytic forms for these amplitudes. In turn, we obtain compact, leading-order expressions for the full process rate. While performing experiments involving photon conversions may be challenging, we use the results of our analysis to construct experimental cuts on certain observables that may enhance sensitivity to CPV. We show that there exist regions of phase space on which sensitivity to CPV is of order unity. The statistical sensitivity of these cuts is verified numerically, using dedicated Monte-Carlo simulations.
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
Flexible nanopillar-based electrochemical sensors for genetic detection of foodborne pathogens
NASA Astrophysics Data System (ADS)
Park, Yoo Min; Lim, Sun Young; Jeong, Soon Woo; Song, Younseong; Bae, Nam Ho; Hong, Seok Bok; Choi, Bong Gill; Lee, Seok Jae; Lee, Kyoung G.
2018-06-01
Flexible, highly ordered nanopillar-array electrodes have attracted great interest for many electrochemical applications, especially biosensors, because of their unique mechanical and topological properties. Herein, we report an advanced method to fabricate highly ordered nanopillar electrodes by soft-/photo-lithography and metal evaporation. The highly ordered nanopillar array exhibited superior electrochemical and mechanical properties, its large surface area in contact with the electrolyte enabling sensitive analysis. As-prepared gold and silver electrodes on nanopillar arrays exhibit strong and stable electrochemical performance in detecting the amplified gene of the foodborne pathogen Escherichia coli O157:H7. Additionally, the lightweight, flexible, and USB-connectable nanopillar-based electrochemical sensor platform improves connectivity, portability, and sensitivity. Moreover, we successfully confirmed the performance of genetic analysis using real food, a specially designed intercalator, and amplified genes from foodborne pathogens, with high reproducibility (6% standard deviation) and sensitivity (10 × 1.01 CFU) within 25 s, based on the square wave voltammetry principle. This study confirmed that the excellent mechanical and chemical characteristics and considerable electrochemical activity of nanopillar electrodes make them well suited as a genetic biosensor platform for point-of-care testing (POCT).
SENSITIVITY OF BLIND PULSAR SEARCHES WITH THE FERMI LARGE AREA TELESCOPE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dormody, M.; Johnson, R. P.; Atwood, W. B.
2011-12-01
We quantitatively establish the sensitivity to the detection of young to middle-aged, isolated, gamma-ray pulsars through blind searches of Fermi Large Area Telescope (LAT) data using a Monte Carlo simulation. We detail a sensitivity study of the time-differencing blind search code used to discover gamma-ray pulsars in the first year of observations. We simulate 10,000 pulsars across a broad parameter space and distribute them across the sky. We replicate the analysis in the Fermi LAT First Source Catalog to localize the sources, and the blind search analysis to find the pulsars. We analyze the results and discuss the effect of positional error and spin frequency on gamma-ray pulsar detections. Finally, we construct a formula to determine the sensitivity of the blind search and present a sensitivity map assuming a standard set of pulsar parameters. The results of this study can be applied to population studies and are useful in characterizing unidentified LAT sources.
Ultrasensitive microfluidic solid-phase ELISA using an actuatable microwell-patterned PDMS chip.
Wang, Tanyu; Zhang, Mohan; Dreher, Dakota D; Zeng, Yong
2013-11-07
Quantitative detection of low abundance proteins is of significant interest for biological and clinical applications. Here we report an integrated microfluidic solid-phase ELISA platform for rapid and ultrasensitive detection of proteins with a wide dynamic range. Compared to the existing microfluidic devices that perform affinity capture and enzyme-based optical detection in a constant channel volume, the key novelty of our design is two-fold. First, our system integrates a microwell-patterned assay chamber that can be pneumatically actuated to significantly reduce the volume of chemifluorescent reaction, markedly improving the sensitivity and speed of ELISA. Second, monolithic integration of on-chip pumps and the actuatable assay chamber allow programmable fluid delivery and effective mixing for rapid and sensitive immunoassays. Ultrasensitive microfluidic ELISA was demonstrated for insulin-like growth factor 1 receptor (IGF-1R) across at least five orders of magnitude with an extremely low detection limit of 21.8 aM. The microwell-based solid-phase ELISA strategy provides an expandable platform for developing the next-generation microfluidic immunoassay systems that integrate and automate digital and analog measurements to further improve the sensitivity, dynamic ranges, and reproducibility of proteomic analysis.
Meta-analysis of the relative sensitivity of semi-natural vegetation species to ozone.
Hayes, F; Jones, M L M; Mills, G; Ashmore, M
2007-04-01
This study identified 83 species from existing publications suitable for inclusion in a database of sensitivity of species to ozone (OZOVEG database). An index, the relative sensitivity to ozone, was calculated for each species based on changes in biomass in order to test for species traits associated with ozone sensitivity. Meta-analysis of the ozone sensitivity data showed a wide inter-specific range in response to ozone. Some relationships in comparison to plant physiological and ecological characteristics were identified. Plants of the therophyte lifeform were particularly sensitive to ozone. Species with higher mature leaf N concentration were more sensitive to ozone than those with lower leaf N concentration. Some relationships between relative sensitivity to ozone and Ellenberg habitat requirements were also identified. In contrast, no relationships between relative sensitivity to ozone and mature leaf P concentration, Grime's CSR strategy, leaf longevity, flowering season, stomatal density and maximum altitude were found. The relative sensitivity of species and relationships with plant characteristics identified in this study could be used to predict sensitivity to ozone of untested species and communities.
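The abstract's "relative sensitivity to ozone" is an index computed from biomass changes, but the exact OZOVEG formula is not given here. The sketch below therefore assumes the simplest plausible form — the ratio of ozone-treated to control biomass — purely to illustrate how such an index supports cross-species ranking; species names and numbers are invented.

```python
def relative_sensitivity(biomass_control, biomass_ozone):
    """Relative biomass under ozone: 1.0 = unaffected, < 1.0 = reduced.
    (Hypothetical form; the abstract does not specify OZOVEG's formula.)"""
    return biomass_ozone / biomass_control

species = {                      # illustrative numbers, not OZOVEG data
    "therophyte_A": (10.0, 6.5),   # (control biomass, ozone biomass), g
    "grass_B":      (12.0, 11.4),
}
index = {s: relative_sensitivity(c, o) for s, (c, o) in species.items()}
ranked = sorted(index, key=index.get)   # most ozone-sensitive species first
```

A ranking like `ranked` is what allows untested species and communities to be placed against tested ones once trait correlates of sensitivity are known.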
Validation of a multi-criteria evaluation model for animal welfare.
Martín, P; Czycholl, I; Buxadé, C; Krieter, J
2017-04-01
The aim of this paper was to validate an alternative multi-criteria evaluation system to assess animal welfare on farms based on the Welfare Quality® (WQ) project, using an example of welfare assessment of growing pigs. This alternative methodology aimed to be more transparent for stakeholders and more flexible than the methodology proposed by WQ. The WQ assessment protocol for growing pigs was implemented to collect data on different farms in Schleswig-Holstein, Germany. In total, 44 observations were carried out. The aggregation system proposed in the WQ protocol follows a three-step aggregation process: measures are aggregated into criteria, criteria into principles and principles into an overall assessment. This study focussed on the first two steps of the aggregation. Multi-attribute utility theory (MAUT) was used to produce a value of welfare for each criterion and principle. The utility functions and the aggregation function were constructed in two separate steps. The MACBETH (Measuring Attractiveness by a Categorical-Based Evaluation Technique) method was used for utility function determination and the Choquet integral (CI) was used as an aggregation operator. The WQ decision-makers' preferences were fitted in order to construct the utility functions and to determine the CI parameters. The validation of the MAUT model was divided into two steps: first, the results of the model were compared with the results of the WQ project at criteria and principle level; second, a sensitivity analysis of the model was carried out to demonstrate the relative importance of welfare measures in the different steps of the multi-criteria aggregation process. Using the MAUT, similar results were obtained to those obtained when applying the WQ protocol aggregation methods, both at criteria and principle level. Thus, this model could be implemented to produce an overall assessment of animal welfare in the context of the WQ protocol for growing pigs.
Furthermore, this methodology could also be used as a framework to produce an overall assessment of welfare for other livestock species. Two main findings emerged from the sensitivity analysis: first, a limited number of measures had a strong influence on improving or worsening the level of welfare at criteria level; second, the MAUT model was not very sensitive to an improvement or worsening of single welfare measures at principle level. The use of weighted sums and the conversion of disease measures into ordinal scores should be reconsidered.
Wang, Heye; Dou, Peng; Lü, Chenchen; Liu, Zhen
2012-07-13
Erythropoietin (EPO) is an important glycoprotein hormone. Recombinant human EPO (rhEPO) is an important therapeutic drug and can also be used as a doping agent in sports. The analysis of EPO glycoforms in the pharmaceutical and sports areas greatly challenges analytical scientists in several respects, among which sensitive detection and effective, facile sample preparation are two essential issues. Herein, we investigated new possibilities for both. Deep UV laser-induced fluorescence detection (deep UV-LIF) was established to detect the intrinsic fluorescence of EPO, while an immuno-magnetic bead-based extraction (IMBE) was developed to specifically extract EPO glycoforms. Combined with capillary zone electrophoresis (CZE), CZE-deep UV-LIF allows high-resolution glycoform profiling with improved sensitivity. The detection sensitivity was improved by one order of magnitude as compared with UV absorbance detection. An additional advantage is that the original glycoform distribution is completely preserved because no fluorescent labeling is needed. By combining IMBE with CZE-deep UV-LIF, the overall detection sensitivity was 1.5 × 10⁻⁸ mol/L, enhanced by two orders of magnitude relative to conventional CZE with UV absorbance detection. This is applicable to the analysis of pharmaceutical preparations of EPO, but the sensitivity is insufficient for the anti-doping analysis of EPO in blood and urine. IMBE can be a straightforward and effective approach for sample preparation; however, antibodies with high specificity are key for application to urine samples because some urinary proteins can severely interfere with the immuno-extraction. Copyright © 2012 Elsevier B.V. All rights reserved.
Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu
2014-06-15
The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
Digital Inverter Amine Sensing via Synergistic Responses by n and p Organic Semiconductors.
Tremblay, Noah J; Jung, Byung Jun; Breysse, Patrick; Katz, Howard E
2011-11-22
Chemiresistors and sensitive OFETs have been substantially developed as cheap, scalable, and versatile sensing platforms. While new materials are expanding OFET sensing capabilities, the device architectures have changed little. Here we report higher order logic circuits utilizing OFETs sensitive to amine vapors. The circuits depend on the synergistic responses of paired p- and n-channel organic semiconductors, including an unprecedented analyte-induced current increase by the n-channel semiconductor. This represents the first step towards 'intelligent sensors' that utilize analog signal changes in sensitive OFETs to produce direct digital readouts suitable for further logic operations.
How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?
Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J
2004-01-01
There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. Our objective was to investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results.
Articles that reported sensitivity analyses where results crossed the cost-utility threshold above the base-case results (n = 25) were of somewhat higher quality, and were more likely to justify their sensitivity analysis parameters, than those that did not (n = 45), but the overall quality rating was only moderate. Sensitivity analyses for economic parameters are widely reported and often identify whether choosing different assumptions leads to a different conclusion regarding cost effectiveness. Changes in HR-QOL and cost parameters should be used to test alternative guideline recommendations when there is uncertainty regarding these parameters. Changes in discount rates less frequently produce results that would change the conclusion about cost effectiveness. Improving the overall quality of published studies and describing the justifications for parameter ranges would allow more meaningful conclusions to be drawn from sensitivity analyses.
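The threshold-crossing test audited in this study can be made concrete with a small numeric sketch. The cost and QALY figures below are invented; the $US50,000/QALY threshold is one of the three the article reports as most frequently mentioned.

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio, $ per QALY gained."""
    return delta_cost / delta_qaly

THRESHOLD = 50_000                 # $/QALY, a commonly cited threshold

base_case = icer(12_000, 0.30)     # hypothetical base case: $40,000/QALY
low_hrqol = icer(12_000, 0.18)     # one-way HR-QOL sensitivity analysis

# Does the sensitivity result cross the threshold above the base case,
# i.e., would a plausible HR-QOL estimate reverse the conclusion?
crosses = base_case <= THRESHOLD < low_hrqol
```

When `crosses` is true, the cost-effectiveness conclusion depends on the HR-QOL assumption, which is exactly the situation the audit counts (31% of HR-QOL sensitivity analyses).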
Lim, JaeHyoung; Oh, In Kyung; Han, Changsu; Huh, Yu Jeong; Jung, In-Kwa; Patkar, Ashwin A; Steffens, David C; Jang, Bo-Hyoung
2013-09-01
We performed a meta-analysis in order to determine which neuropsychological domains and tasks would be most sensitive for discriminating between patients with major depressive disorder (MDD) and healthy controls. Relevant articles were identified through a literature search of the PubMed and Cochrane Library databases for the period between January 1997 and May 2011. A meta-analysis was conducted using the standardized means of individual cognitive tests in each domain. The heterogeneity was assessed, and subgroup analyses according to age and medication status were performed to explore the sources of heterogeneity. A total of 22 trials involving 955 MDD patients and 7,664 healthy participants were selected for our meta-analysis. MDD patients showed significantly impaired results compared with healthy participants on the Digit Span and Continuous Performance Test in the attention domain; the Trail Making Test A (TMT-A) and the Digit Symbol Test in the processing speed domain; the Stroop Test, the Wisconsin Card Sorting Test, and Verbal Fluency in the executive function domain; and immediate verbal memory in the memory domain. The Finger Tapping Task, TMT-B, delayed verbal memory, and immediate and delayed visual memory failed to separate MDD patients from healthy controls. The results of subgroup analysis showed that performance of Verbal Fluency was significantly impaired in younger depressed patients (<60 years), and immediate visual memory was significantly reduced in depressed patients using antidepressants. Our findings have inevitable limitations arising from methodological issues inherent in meta-analysis, and we could not explain the high heterogeneity between studies. Despite these limitations, the current study is the first meta-analysis to attempt to specify the cognitive function of depressed patients compared with healthy participants. Our findings may provide clinicians with further evidence that some cognitive tests in specific cognitive domains are sensitive enough to discriminate MDD patients from healthy controls.
Spectral sensitivity characteristics simulation for silicon p-i-n photodiode
NASA Astrophysics Data System (ADS)
Urchuk, S. U.; Legotin, S. A.; Osipov, U. V.; Elnikov, D. S.; Didenko, S. I.; Astahov, V. P.; Rabinovich, O. I.; Yaromskiy, V. P.; Kuzmina, K. A.
2015-11-01
In this paper, simulation results for the spectral sensitivity characteristics of silicon p-i-n photodiodes are presented. The influence of the semiconductor material characteristics (doping level, carrier lifetime, surface recombination velocity), the device construction, and the operating modes on the characteristics of the photosensitive structures was analyzed in order to optimize them.
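The headline quantity in such a simulation, spectral responsivity, follows from the ideal photodiode relation R(λ) = η·qλ/(hc); the simulation in the paper accounts for the doping, lifetime, and recombination effects that make the real quantum efficiency η wavelength-dependent. A minimal sketch with a constant, assumed η:

```python
Q = 1.602176634e-19       # elementary charge, C
H = 6.62607015e-34        # Planck constant, J s
C_LIGHT = 2.99792458e8    # speed of light, m/s

def responsivity(wavelength_nm, quantum_efficiency):
    """Ideal photodiode spectral responsivity R = eta * q * lambda / (h * c), in A/W."""
    lam_m = wavelength_nm * 1e-9
    return quantum_efficiency * Q * lam_m / (H * C_LIGHT)
```

At 633 nm an ideal diode with η = 1 gives about 0.51 A/W; in a real silicon p-i-n photodiode the responsivity instead rolls off at long wavelengths as silicon stops absorbing, which is the behavior the paper's device-level simulation captures.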
Meta-Analysis of the Relations of Anxiety Sensitivity to the Depressive and Anxiety Disorders
ERIC Educational Resources Information Center
Naragon-Gainey, Kristin
2010-01-01
There is a substantial literature relating the personality trait "anxiety sensitivity" (AS; tendency to fear anxiety-related sensations) and its lower order dimensions to the mood and anxiety (i.e., internalizing) disorders. However, particularly given the disorders' high comorbidity rates, it remains unclear whether AS is broadly related to these…
A High-Sensitivity Current Sensor Utilizing CrNi Wire and Microfiber Coils
Xie, Xiaodong; Li, Jie; Sun, Li-Peng; Shen, Xiang; Jin, Long; Guan, Bai-ou
2014-01-01
We obtain an extremely high current sensitivity by wrapping a section of microfiber on a thin-diameter chromium-nickel wire. Our detected current sensitivity is as high as 220.65 nm/A² for a structure length of only 35 μm. Such sensitivity is two orders of magnitude higher than the counterparts reported in the literature. Analysis shows that a higher resistivity or/and a thinner diameter of the metal wire may produce higher sensitivity. The effects of varying the structure parameters on sensitivity are discussed. The presented structure has potential for low-current sensing or highly electrically-tunable filtering applications. PMID:24824372
Hsia, C C; Liou, K J; Aung, A P W; Foo, V; Huang, W; Biswas, J
2009-01-01
Pressure ulcers are common problems for bedridden patients. Caregivers need to reposition a patient's sleeping posture every two hours in order to reduce the risk of ulcers. This study presents the use of kurtosis and skewness estimation, principal component analysis (PCA) and support vector machines (SVMs) for sleeping posture classification using a cost-effective pressure-sensitive mattress, which can help caregivers make correct sleeping posture changes for the prevention of pressure ulcers.
Sensitivity Analysis of Fatigue Crack Growth Model for API Steels in Gaseous Hydrogen.
Amaro, Robert L; Rustagi, Neha; Drexler, Elizabeth S; Slifka, Andrew J
2014-01-01
A model to predict fatigue crack growth of API pipeline steels in high pressure gaseous hydrogen has been developed and is presented elsewhere. The model currently has several parameters that must be calibrated for each pipeline steel of interest. This work provides a sensitivity analysis of the model parameters in order to provide (a) insight to the underlying mathematical and mechanistic aspects of the model, and (b) guidance for model calibration of other API steels.
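A one-at-a-time sensitivity analysis of the kind described can be sketched with a generic Paris-type growth law standing in for the actual calibrated model (which, per the abstract, is presented elsewhere); the parameter values are illustrative, not calibrated API-steel data.

```python
import math

def dadn(C, m, dK):
    """Paris-type crack growth rate da/dN = C * dK**m (generic stand-in)."""
    return C * dK ** m

C0, m0, dK = 1e-11, 3.0, 20.0     # illustrative values only

def norm_sens(param):
    """Normalized sensitivity (p/f) * df/dp via central finite differences:
    the % change in growth rate per % change in the parameter."""
    h = 1e-6
    base = {"C": C0, "m": m0}
    up, dn = dict(base), dict(base)
    up[param] *= 1 + h
    dn[param] *= 1 - h
    df = dadn(up["C"], up["m"], dK) - dadn(dn["C"], dn["m"], dK)
    dp = 2 * h * base[param]
    return base[param] / dadn(C0, m0, dK) * df / dp

# Analytically: sensitivity to C is 1 (linear), to m it is m * ln(dK),
# so calibration effort should concentrate on the exponent-like parameters.
```

Rankings like this are what a model sensitivity study delivers: they show which parameters the predicted crack growth responds to most strongly and therefore which must be calibrated most carefully for each steel.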
Verginelli, Iason; Yao, Yijun; Suuberg, Eric M.
2017-01-01
In this study we present a petroleum vapor intrusion tool implemented in Microsoft® Excel® using Visual Basic for Applications (VBA) and integrated within a graphical interface. The latter helps users easily visualize two-dimensional soil gas concentration profiles and indoor concentrations as a function of site-specific conditions such as source strength and depth, biodegradation reaction rate constant, soil characteristics and building features. This tool is based on a two-dimensional explicit analytical model that combines steady-state diffusion-dominated vapor transport in a homogeneous soil with a piecewise first-order aerobic biodegradation model, in which rate is limited by oxygen availability. As recommended in the recently released United States Environmental Protection Agency's final Petroleum Vapor Intrusion guidance, a sensitivity analysis and a simplified Monte Carlo uncertainty analysis are also included in the spreadsheet. PMID:28163564
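The interplay the tool captures — diffusion-dominated vapor transport attenuated by first-order aerobic biodegradation — can be sketched in one dimension. The closed form below is a 1D simplification (fixed source concentration at depth L, zero concentration at the slab, uniform soil), not the tool's two-dimensional piecewise model; parameter values are assumed.

```python
import math

def flux_attenuation(L, D_eff, k):
    """Slab vapor flux with first-order decay, relative to pure diffusion.

    Steady 1D diffusion-reaction D_eff * C'' = k * C with C(L) = C_source
    at the vapor source and C(0) = 0 at the slab gives C(z) proportional
    to sinh(phi * z / L), so the flux ratio is phi / sinh(phi),
    where phi = L * sqrt(k / D_eff) is a Damkohler-type number.
    """
    if k == 0.0:
        return 1.0
    phi = L * math.sqrt(k / D_eff)
    return phi / math.sinh(phi)

L, D_eff = 3.0, 8.6e-7    # source depth (m), effective diffusivity (m^2/s); assumed
```

With an aerobic rate constant around 1e-5 1/s these numbers give an attenuation of roughly three orders of magnitude, which is why biodegradation dominates petroleum vapor intrusion screening when oxygen is available.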
Singular Spectrum Analysis for Astronomical Time Series: Constructing a Parsimonious Hypothesis Test
NASA Astrophysics Data System (ADS)
Greco, G.; Kondrashov, D.; Kobayashi, S.; Ghil, M.; Branchesi, M.; Guidorzi, C.; Stratta, G.; Ciszak, M.; Marino, F.; Ortolan, A.
We present a data-adaptive spectral method - Monte Carlo Singular Spectrum Analysis (MC-SSA) - and its modification to tackle astrophysical problems. Through numerical simulations we show the ability of MC-SSA to deal with 1/f^β power-law noise affected by photon counting statistics. Such a noise process is simulated by a first-order autoregressive AR(1) process corrupted by intrinsic Poisson noise. In doing so, we statistically estimate a basic stochastic variation of the source and the corresponding fluctuations due to the quantum nature of light. In addition, the MC-SSA test retains its effectiveness even when a significant percentage of the signal falls below a certain level of detection, e.g., caused by the instrument sensitivity. The parsimonious approach presented here may be broadly applied, from the search for extrasolar planets to the extraction of low-intensity coherent phenomena probably hidden in high-energy transients.
Plastic lab-on-a-chip for fluorescence excitation with integrated organic semiconductor lasers.
Vannahme, Christoph; Klinkhammer, Sönke; Lemmer, Uli; Mappes, Timo
2011-04-25
Laser light excitation of fluorescent markers offers highly sensitive and specific analysis for bio-medical or chemical analysis. To profit from these advantages for applications in the field or at the point-of-care, a plastic lab-on-a-chip with integrated organic semiconductor lasers is presented here. First order distributed feedback lasers based on the organic semiconductor tris(8-hydroxyquinoline) aluminum (Alq3) doped with the laser dye 4-dicyanomethylene-2-methyl-6-(p-dimethylaminostyril)-4H-pyrane (DCM), deep ultraviolet induced waveguides, and a nanostructured microfluidic channel are integrated into a poly(methyl methacrylate) (PMMA) substrate. A simple and parallel fabrication process is used comprising thermal imprint, DUV exposure, evaporation of the laser material, and sealing by thermal bonding. The excitation of two fluorescent marker model systems including labeled antibodies with light emitted by integrated lasers is demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukuda, Ryoichi, E-mail: fukuda@ims.ac.jp; Ehara, Masahiro; Elements Strategy Initiative for Catalysts and Batteries
A perturbative approximation of the state-specific polarizable continuum model (PCM) symmetry-adapted cluster-configuration interaction (SAC-CI) method is proposed for efficient calculations of the electronic excitations and absorption spectra of molecules in solution. This first-order PCM SAC-CI method treats the solvent effects on the energies of excited states up to first order using the zeroth-order wavefunctions, thereby avoiding the costly iterative procedure of self-consistent reaction field calculations. The first-order PCM SAC-CI calculations reproduce well the results obtained by the iterative method for various types of excitations of molecules in polar and nonpolar solvents. The first-order contribution is significant for the excitation energies. The results obtained by the zeroth-order PCM SAC-CI, which uses the fixed ground-state reaction field for the excited-state calculations, deviate from the iterative results by about 0.1 eV, and the zeroth-order method cannot predict even the direction of solvent shifts in n-hexane in many cases. The first-order PCM SAC-CI is applied to the solvatochromism of (2,2′-bipyridine)tetracarbonyltungsten [W(CO)4(bpy), bpy = 2,2′-bipyridine] and bis(pentacarbonyltungsten)pyrazine [(OC)5W(pyz)W(CO)5, pyz = pyrazine]. The SAC-CI calculations reveal the detailed character of the excited states and the mechanisms of the solvent shifts. The energies of metal-to-ligand charge transfer states are particularly sensitive to the solvent. The first-order PCM SAC-CI reproduces well the observed absorption spectra of the tungsten carbonyl complexes in several solvents.
Baxter, Suzanne Domel; Smith, Albert F.; Hardin, James W.; Nichols, Michele D.
2008-01-01
Objective: Validation-study data are used to illustrate that conventional energy and macronutrient (protein, carbohydrate, fat) variables, which disregard the accuracy of reported items and amounts, misrepresent reporting accuracy. Reporting-error-sensitive variables are proposed that classify reported items as matches or intrusions, and reported amounts as corresponding or overreported. Methods: 58 girls and 63 boys were each observed eating school meals on 2 days separated by ≥4 weeks, and interviewed the morning after each observation day. One interview per child had forward-order (morning-to-evening) prompts; one had reverse-order prompts. Original food-item-level analyses found a sex × prompt-order interaction for omission rates. Current analyses compared reference (observed) and reported information transformed to energy and macronutrients. Results: Using conventional variables, reported amounts were less than reference amounts (ps<0.001; paired t-tests); report rates were higher for the first than the second interview for energy, protein, and carbohydrate (ps≤0.049; mixed models). Using reporting-error-sensitive variables, correspondence rates were higher for girls with forward- but boys with reverse-order prompts (ps≤0.041; mixed models); inflation ratios were lower with reverse- than forward-order prompts for energy, carbohydrate, and fat (ps≤0.045; mixed models). Conclusions: Conventional variables overestimated reporting accuracy and masked prompt-order and sex effects. Reporting-error-sensitive variables are recommended when assessing accuracy for energy and macronutrients in validation studies. PMID:16959308
Investigation of Grating-Assisted Trimodal Interferometer Biosensors Based on a Polymer Platform.
Liang, Yuxin; Zhao, Mingshan; Wu, Zhenlin; Morthier, Geert
2018-05-10
A grating-assisted trimodal interferometer biosensor is proposed and numerically analyzed. A long-period grating coupler for adjusting the power between the fundamental mode and the second higher-order mode is investigated, and is shown to act like a conventional directional coupler adjusting the power between two arms. The trimodal interferometer achieves maximal fringe visibility when the powers of the two modes are equalized by the grating coupler, which means that a better limit of detection can be expected. In addition, the second higher-order mode typically has a larger evanescent tail than the first higher-order mode used in bimodal interferometers, resulting in a higher sensitivity of the trimodal interferometer. The influence of fabrication tolerances on the performance of the designed interferometer is also investigated. The power difference between the two modes is insensitive to the grating fill factor but highly sensitive to the modulation depth. Finally, a sensitivity of 2050 2π/RIU (refractive index unit) and a 43 dB extinction ratio of the output power are achieved.
Real-time fringe pattern demodulation with a second-order digital phase-locked loop.
Gdeisat, M A; Burton, D R; Lalor, M J
2000-10-10
The use of a second-order digital phase-locked loop (DPLL) to demodulate fringe patterns is presented. The second-order DPLL has better tracking ability and more noise immunity than the first-order loop. Consequently, the second-order DPLL is capable of demodulating a wider range of fringe patterns than the first-order DPLL. A basic analysis of the first- and second-order loops is given, and a performance comparison between the first- and second-order DPLLs in analyzing fringe patterns is presented. The implementation of the second-order loop in real time on a commercial parallel image processing system is described. Fringe patterns are grabbed and processed, and the resultant phase maps are displayed concurrently.
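For orientation, the first-order loop that the second-order DPLL improves upon can be sketched in a few lines: the incoming fringe is multiplied by the quadrature of a numerically controlled oscillator, and the low-frequency part of the product corrects the oscillator phase each sample. This is an illustrative sketch, not the authors' real-time implementation; the carrier frequency and phase offset are synthetic:

```python
import numpy as np

def dpll_first_order(signal, w0, kp=0.1):
    """First-order DPLL: the phase detector multiplies the fringe by the
    quadrature of the local oscillator; the error nudges the oscillator
    phase every sample."""
    theta = 0.0
    phase = np.empty(len(signal))
    for i, s in enumerate(signal):
        phase[i] = theta                  # current phase estimate
        err = -2.0 * s * np.sin(theta)    # low-freq part ~ sin(phase error)
        theta += w0 + kp * err            # free-run at w0 plus correction
    return phase

# Synthetic fringe: known carrier w0 with an unknown phase offset phi0
w0, phi0 = 0.5, 0.8
n = np.arange(2000)
fringe = np.cos(w0 * n + phi0)
est = dpll_first_order(fringe, w0)
# Average the wrapped offset over the locked tail of the run
recovered = float(np.angle(np.mean(np.exp(1j * (est[-500:] - w0 * n[-500:])))))
```

A second-order loop additionally integrates the error so it can track a frequency offset as well as a phase offset, which is why it demodulates a wider range of fringe patterns.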
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
NASA Astrophysics Data System (ADS)
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
77 FR 66841 - The Sherwin-Williams Company; Analysis of Proposed Consent Order To Aid Public Comment
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-07
... include any sensitive personal information, like anyone's Social Security number, date of birth, driver's... make final the agreement's proposed order. This matter involves Sherwin-Williams's marketing and sale... and practices in the future. Part I addresses the marketing of zero VOC paints. It prohibits Sherwin...
78 FR 46950 - Ecobaby Organics, Inc.; Analysis of Proposed Consent Order To Aid Public Comment
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-02
... any sensitive personal information, like anyone's Social Security number, date of birth, driver's... or make final the agreement's proposed order. This matter involves respondent's marketing and sale of... respondent from engaging in similar acts and practices in the future. Part I addresses the marketing of VOC...
Application of ultrasonic signature analysis for fatigue detection in complex structures
NASA Technical Reports Server (NTRS)
Zuckerwar, A. J.
1974-01-01
Ultrasonic signature analysis shows promise of being a singularly well-suited method for detecting fatigue in structures as complex as aircraft. The method employs instrumentation centered about a Fourier analyzer system, which features analog-to-digital conversion, digital data processing, and digital display of cross-correlation functions and cross-spectra. These features are essential to the analysis of ultrasonic signatures according to the procedure described here. In order to establish the feasibility of the method, the initial experiments were confined to simple plates with simulated and fatigue-induced defects respectively. In the first test the signature proved sensitive to the size of a small hole drilled into the plate. In the second test, performed on a series of fatigue-loaded plates, the signature proved capable of indicating both the initial appearance and subsequent growth of a fatigue crack. In view of these encouraging results it is concluded that the method has reached a sufficiently advanced stage of development to warrant application to small-scale structures or even actual aircraft.
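The cross-correlation functions and cross-spectra computed by the Fourier analyzer can be reproduced with an FFT via the correlation theorem. A minimal sketch (the broadband ping and 10-sample delay are synthetic):

```python
import numpy as np

def cross_spectrum(x, y):
    """Cross-spectrum X(f) * conj(Y(f)) of two real signals."""
    return np.fft.rfft(x) * np.conj(np.fft.rfft(y))

def circular_xcorr(x, y):
    """Circular cross-correlation via the correlation theorem:
    c[k] = sum_n x[n] * y[(n - k) mod N]."""
    return np.fft.irfft(cross_spectrum(x, y), n=len(x))

# A delayed copy of a broadband ping correlates most strongly at its delay
rng = np.random.default_rng(0)
ping = rng.normal(size=256)
delayed = np.roll(ping, 10)     # simulate a 10-sample propagation delay
corr = circular_xcorr(delayed, ping)
lag = int(np.argmax(corr))      # recovers the delay
```

A change in the position of the correlation peak or in the shape of the cross-spectrum between a baseline and a later signature is the kind of feature such an analysis tracks.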
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence-level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl
2014-03-01
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while retaining a similar accuracy to the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high-order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems.
The tests show consistently good performance, both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15–20), alleviating a well known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation make such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
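The NISP building block — estimating PC coefficients by numerical quadrature — can be illustrated in one dimension with probabilists' Hermite polynomials for a standard normal input. This sketch uses a plain Gauss-Hermite rule rather than the paper's adaptive sparse grids:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def nisp_coeffs(f, order, n_quad=40):
    """Project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials:
    c_k = E[f(xi) * He_k(xi)] / k!, estimated by Gauss-Hermite_e quadrature."""
    x, w = He.hermegauss(n_quad)        # nodes/weights for weight exp(-x^2/2)
    w = w / sqrt(2.0 * pi)              # normalize to the standard normal pdf
    fx = f(x)
    return [float(np.sum(w * fx * He.hermeval(x, [0.0] * k + [1.0]))) / factorial(k)
            for k in range(order + 1)]

c = nisp_coeffs(np.exp, order=6)        # f(xi) = exp(xi) has a known PCE
pce_mean = c[0]                         # should equal E[exp(xi)] = e^0.5
pce_var = sum(factorial(k) * c[k] ** 2 for k in range(1, 7))
```

For exp(ξ) the exact coefficients are e^{1/2}/k!, so the truncated variance Σ_{k≥1} k!·c_k² approaches e² − e; sparse-grid and basis-adaptive variants aim to recover the same coefficients with far fewer model evaluations in high dimension.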
Methodology for environmental assessments of oil and hazardous substance spills
NASA Astrophysics Data System (ADS)
Davis, W. P.; Scott, G. I.; Getter, C. D.; Hayes, M. O.; Gundlach, E. R.
1980-03-01
Scientific assessment of the complex environmental consequences of large spills of oil or other hazardous substances has stimulated development of improved strategies for rapid and valid collection and processing of ecological data. The combination of coastal-process and geological measurements developed by Hayes & Gundlach (1978), together with selected field biological and chemical observations and measurements, provides an ecosystem impact assessment approach termed the “integrated zonal method of ecological impact assessment.” Ecological assessment of oil and hazardous material spills is divided into three distinct phases: (1) first-order response studies — conducted at the time of the initial spill event, which gather data to document acute impacts and assist decision-makers in prioritizing cleanup efforts and protecting ecologically sensitive habitats; (2) second-order response studies — conducted two months to one year post-spill, which document any delayed mortality and attempt to identify potential sublethal impacts in sensitive species; and (3) third-order response studies — conducted one to three years post-spill, to document chronic impacts (both lethal and sublethal) to specific indicator species. Data collected during first-order response studies are gathered in a quantitative manner so that the initial assessment may become a baseline for later, more detailed, post-spill scientific efforts. First- and second-order response studies of the “Peck Slip” oil spill in Puerto Rico illustrate the usefulness of this method. The need for contingency planning before a spill is discussed along with the use of the Vulnerability Index, a method in which coastal environments are classified on a scale of 1 to 10 based upon their potential susceptibility to oiling. A study of the lower Cook Inlet section of the Alaskan coast illustrates the practical application of this method.
Mean-field analysis of an inductive reasoning game: Application to influenza vaccination
NASA Astrophysics Data System (ADS)
Breban, Romulus; Vardavas, Raffaele; Blower, Sally
2007-09-01
Recently we have introduced an inductive reasoning game of voluntary yearly vaccination to establish whether or not a population of individuals acting in their own self-interest would be able to prevent influenza epidemics. Here, we analyze our model to describe the dynamics of the collective yearly vaccination uptake. We discuss the mean-field equations of our model and first order effects of fluctuations. We explain why our model predicts that severe epidemics are periodically expected even without the introduction of pandemic strains. We find that fluctuations in the collective yearly vaccination uptake induce severe epidemics with an expected periodicity that depends on the number of independent decision makers in the population. The mean-field dynamics also reveal that there are conditions for which the dynamics become robust to the fluctuations. However, the transition between fluctuation-sensitive and fluctuation-robust dynamics occurs for biologically implausible parameters. We also analyze our model when incentive-based vaccination programs are offered. When a family-based incentive is offered, the expected periodicity of severe epidemics is increased. This results from the fact that the number of independent decision makers is reduced, increasing the effect of the fluctuations. However, incentives based on the number of years of prepayment of vaccination may yield fluctuation-robust dynamics where severe epidemics are prevented. In this case, depending on prepayment, the transition between fluctuation-sensitive and fluctuation-robust dynamics may occur for biologically plausible parameters. Our analysis provides a practical method for identifying how many years of free vaccination should be provided in order to successfully ameliorate influenza epidemics.
Textural content in 3T MR: an image-based marker for Alzheimer's disease
NASA Astrophysics Data System (ADS)
Bharath Kumar, S. V.; Mullick, Rakesh; Patil, Uday
2005-04-01
In this paper, we propose a study that investigates the first-order and second-order distributions of T2 images from a magnetic resonance (MR) scan for an age-matched data set of 24 Alzheimer's disease and 17 normal patients. The study is motivated by the desire to analyze brain iron uptake in the hippocampus of Alzheimer's patients, which is captured by low T2 values. Since excess iron deposition occurs locally in certain regions of the brain, we are motivated to investigate the spatial distribution of T2, which is captured by higher-order statistics. Based on the first-order and second-order distributions (involving the gray level co-occurrence matrix) of T2, we show that the second-order statistics provide features with sensitivity >90% (at 80% specificity), which in turn capture the textural content in T2 data. Hence, we argue that the different texture characteristics of T2 in the hippocampus of Alzheimer's and normal patients could be used as an early indicator of Alzheimer's disease.
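The second-order statistics referred to here are derived from a gray level co-occurrence matrix (GLCM). A minimal sketch of a GLCM and one texture feature computed from it (the offset, level count, and toy images are illustrative, not the paper's data):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray level co-occurrence matrix for one offset: P[i, j] is the
    probability that level i at (r, c) co-occurs with level j at
    (r + dy, c + dx)."""
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(max(0, -dy), rows - max(0, dy)):
        for c in range(max(0, -dx), cols - max(0, dx)):
            P[img[r, c], img[r + dy, c + dx]] += 1
    return P / P.sum()

def glcm_entropy(P):
    """Second-order (co-occurrence) entropy in bits."""
    nz = P[P > 0]
    return float(-np.sum(nz * np.log2(nz)))

# A flat patch has zero co-occurrence entropy; a noisy patch does not
rng = np.random.default_rng(1)
flat = np.zeros((32, 32), dtype=int)
noisy = rng.integers(0, 4, size=(32, 32))
e_flat = glcm_entropy(glcm(flat, levels=4))
e_noisy = glcm_entropy(glcm(noisy, levels=4))
```

Features such as GLCM entropy, contrast, or homogeneity capture how gray levels co-occur spatially, which first-order histogram statistics ignore.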
Split Node and Stress Glut Methods for Dynamic Rupture Simulations in Finite Elements.
NASA Astrophysics Data System (ADS)
Ramirez-Guzman, L.; Bielak, J.
2008-12-01
I present two numerical techniques to solve the dynamic rupture problem. I revisit and modify the Split Node approach and introduce a Stress Glut-type method. Both algorithms are implemented using an iso-/sub-parametric FEM solver. In the first case, I discuss the formulation and perform a convergence analysis for different orders of approximation for the acoustic case. I describe the algorithm of the second methodology as well as the assumptions made. The key to the new technique is an accurate representation of the traction; thus, I devote part of the discussion to analyzing the tractions for a simple example. The sensitivity of the method is tested by comparing against Split Node solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Markel, D; Hegyi, G
2016-06-15
Purpose: The reliability of computed tomography (CT) textures is an important element of radiomics analysis. This study investigates the dependency of lung CT textures on different breathing phases and changes in CT image acquisition protocols in a realistic phantom setting. Methods: We investigated 11 CT texture features for radiation-induced lung disease from 3 categories (first-order, grey level co-occurrence matrix (GLCM), and Law's filter). A biomechanical swine lung phantom was scanned at two breathing phases (inhale/exhale) and two scanning protocols set for PET/CT and diagnostic CT scanning. Lung volumes acquired from the CT images were divided into 2-dimensional sub-regions with a grid spacing of 31 mm. The distributions of the evaluated texture features from these sub-regions were compared between the two scanning protocols and two breathing phases. The significance of each factor on feature values was tested at the 95% significance level using an analysis of covariance (ANCOVA) model with interaction terms included. Robustness of a feature to a scanning factor was defined as non-significant dependence on the factor. Results: Three GLCM textures (variance, sum entropy, difference entropy) were robust to breathing changes. Two GLCM (variance, sum entropy) and 3 Law's filter textures (S5L5, E5L5, W5L5) were robust to scanner changes. Moreover, the two GLCM textures (variance, sum entropy) were consistent across all 4 scanning conditions. First-order features, especially Hounsfield unit intensity features, presented the most drastic variation, up to 39%. Conclusion: Amongst the studied features, GLCM and Law's filter texture features were more robust than first-order features. However, the majority of the features were modified by either breathing phase or scanner changes, suggesting a need for calibration when retrospectively comparing scans obtained at different conditions.
Further investigation is necessary to identify the sensitivity of individual image acquisition parameters.
Bioethics in Denmark. Moving from first- to second-order analysis?
Nielsen, Morten Ebbe Juul; Andersen, Martin Marchman
2014-07-01
This article examines two current debates in Denmark--assisted suicide and the prioritization of health resources--and proposes that such controversial bioethical issues call for distinct philosophical analyses: first-order examinations, or an applied philosophy approach, and second-order examinations, what might be called a political philosophical approach. The authors argue that although first-order examination plays an important role in teasing out different moral points of view, in contemporary democratic societies, few, if any, bioethical questions can be resolved satisfactorily by means of first-order analyses alone, and that bioethics needs to engage more closely with second-order enquiries and the question of legitimacy in general.
First-Order Parametric Model of Reflectance Spectra for Dyed Fabrics
2016-02-19
This report describes a first-order parametric model of reflectance spectra for dyed fabrics, which provides for both their inverse and direct modeling. The dyes considered contain spectral features that are of interest to the U.S. Navy. An appendix presents dielectric response functions for dyes obtained by inverse analysis.
de Matos, Liana Wermelinger; Carey, Robert J; Carrera, Marinete Pinheiro
2010-09-01
Repeated treatments with psychostimulant drugs generate behavioral sensitization. In the present study we employed a paired/unpaired protocol to assess the effects of repeated apomorphine (2.0 mg/kg) treatments upon locomotion behavior. In the first experiment we assessed the effects of conditioning upon apomorphine sensitization. Neither the extinction of the conditioned response nor a counter-conditioning procedure in which we paired an inhibitory treatment (apomorphine 0.05 mg/kg) with the previously established conditioned stimulus modified the sensitization response. In the second experiment, we administered the paired/unpaired protocol in two phases. In the second phase, we reversed the paired/unpaired protocol. Following the first phase, the paired group alone exhibited conditioned locomotion in the vehicle test and a sensitization response. In the second phase, the initial unpaired group which received 5 paired apomorphine trials during the reversal phase did not develop a conditioned response but developed a potentiated sensitization response. This disassociation of the conditioned response from the sensitization response is attributed to an apomorphine anti-habituation effect that can generate a false positive Pavlovian conditioned response effect. The potentiated sensitization response induced by the treatment reversal protocol points to an important role for the sequential experience of the paired/unpaired protocol in behavioral sensitization.
NASA Astrophysics Data System (ADS)
Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. 
B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D’Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. 
P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. 
B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. 
B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O’Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O’Reilly, B.; O’Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. 
H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. 
S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration
2018-03-01
The first observing run of Advanced LIGO spanned 4 months, from 12 September 2015 to 19 January 2016, during which gravitational waves were directly detected from two binary black hole systems, namely GW150914 and GW151226. Confident detection of gravitational waves requires an understanding of instrumental transients and artifacts that can reduce the sensitivity of a search. Studies of the quality of the detector data yield insights into the causes of instrumental artifacts, and data quality vetoes specific to a search are produced to mitigate the effects of problematic data. In this paper, the systematic removal of noisy data from analysis time is shown to improve the sensitivity of searches for compact binary coalescences. The output of the PyCBC pipeline, a Python-based code package used to search for gravitational wave signals from compact binary coalescences, is used as a metric for improvement. GW150914 was a loud enough signal that removing noisy data did not improve its significance. However, the removal of data with excess noise decreased the false alarm rate of GW151226 by more than two orders of magnitude, from 1 in 770 yr to less than 1 in 186 000 yr.
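The quoted improvement can be checked with a line of arithmetic. A minimal sketch, using only the two false alarm rates given in the abstract, verifies that their ratio indeed exceeds two orders of magnitude:

```python
import math

# False alarm rates for GW151226 quoted in the abstract, before and after
# the removal of noisy data (events per year).
far_before = 1 / 770.0
far_after = 1 / 186_000.0

improvement = far_before / far_after        # ~242x
orders_of_magnitude = math.log10(improvement)
print(f"FAR reduced {improvement:.0f}x (~{orders_of_magnitude:.1f} orders of magnitude)")
```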
Eltiti, Stacy; Wallace, Denise; Ridgewell, Anna; Zougkou, Konstantina; Russo, Riccardo; Sepulveda, Francisco; Mirshekar-Syahkal, Dariush; Rasor, Paul; Deeble, Roger; Fox, Elaine
2007-11-01
Individuals with idiopathic environmental illness with attribution to electromagnetic fields (IEI-EMF) believe they suffer negative health effects when exposed to electromagnetic fields from everyday objects such as mobile phone base stations. This study used both open provocation and double-blind tests to determine if sensitive and control individuals experience more negative health effects when exposed to base station-like signals compared with sham. Fifty-six self-reported sensitive and 120 control participants were tested in an open provocation test. Of these, 12 sensitive and 6 controls withdrew after the first session. The remainder completed a series of double-blind tests. Subjective measures of well-being and symptoms as well as physiological measures of blood volume pulse, heart rate, and skin conductance were obtained. During the open provocation, sensitive individuals reported lower levels of well-being in both the global system for mobile communication (GSM) and universal mobile telecommunications system (UMTS) compared with sham exposure, whereas controls reported more symptoms during the UMTS exposure. During double-blind tests the GSM signal did not have any effect on either group. Sensitive participants did report elevated levels of arousal during the UMTS condition, whereas the number or severity of symptoms experienced did not increase. Physiological measures did not differ across the three exposure conditions for either group. Short-term exposure to a typical GSM base station-like signal did not affect well-being or physiological functions in sensitive or control individuals. Sensitive individuals reported elevated levels of arousal when exposed to a UMTS signal. Further analysis, however, indicated that this difference was likely to be due to the effect of order of exposure rather than the exposure itself.
The weaker effects of First-order mean motion resonances in intermediate inclinations
NASA Astrophysics Data System (ADS)
Chen, YuanYuan; Quillen, Alice C.; Ma, Yuehua; Chinese Scholar Council, the National Natural Science Foundation of China, the Natural Science Foundation of Jiangsu Province, the Minor Planet Foundation of the Purple Mountain Observatory
2017-10-01
During planetary migration, a planet or planetesimal can be captured into a low-order mean motion resonance with another planet. Using a second-order expansion of the disturbing function in eccentricity and inclination, we explore the sensitivity of the capture probability of first-order mean motion resonances to orbital inclination. We find that second-order inclination contributions affect the resonance strengths, reducing them at intermediate inclinations of around 10-40° for major first-order resonances. We also integrated Hamilton's equations with arbitrary initial arguments and obtained the trends of resonance capture probability with orbital inclination for different resonances and different particle or planetary eccentricities. Inclination ranges in which resonances are weaker generally appear where resonance strengths are low, around 10-40°. These weaker ranges disappear at higher particle eccentricity (≳0.05) or planetary eccentricity (≳0.05). Their existence implies that intermediate-inclination objects are less likely to be disturbed by, or captured into, first-order resonances, so a larger fraction of them than of low-inclination objects would have entered the chaotic region around Neptune during the epoch of Neptune's outward migration. This advantage of high-inclination particles makes them more likely to be captured as Neptune Trojans, which might be responsible for the unexpectedly high fraction of high-inclination Neptune Trojans.
Optimal guidance law development for an advanced launch system
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Hodges, Dewey H.
1990-01-01
A regular perturbation analysis is presented. Closed-loop simulations were performed with a first order correction including all of the atmospheric terms. In addition, a method was developed for independently checking the accuracy of the analysis and the rather extensive programming required to implement the complete first order correction with all of the aerodynamic effects included. This amounted to developing an equivalent Hamiltonian computed from the first order analysis. A second order correction was also completed for the neglected spherical Earth and back-pressure effects. Finally, an analysis was begun on a method for dealing with control inequality constraints. The results on including higher order corrections do show some improvement for this application; however, it is not known at this stage if significant improvement will result when the aerodynamic forces are included. The weak formulation for solving optimal control problems was extended in order to account for state inequality constraints. The formulation was tested on three example problems and numerical results were compared to the exact solutions. Development of a general purpose computational environment for the solution of a large class of optimal control problems is under way. An example, along with the necessary input and the output, is given.
Prediction of parturition in bitches utilizing continuous vaginal temperature measurement.
Geiser, B; Burfeind, O; Heuwieser, W; Arlt, S
2014-02-01
The objective of this study was to determine the sensitivity and specificity of a body temperature decline in bitches for predicting parturition. Temperature loggers were placed into the vaginal cavity of 16 pregnant bitches on days 56-61 after estimated ovulation or first mating. This measurement technique has been validated previously and enabled continuous sampling of body temperature. The temperature loggers were expelled from the vagina before delivery of the first pup. The computed values for specificity (77-92%) were higher than those for sensitivity (53-69%), indicating a more precise prognosis of parturition not occurring. In conclusion, our findings may assist in interpreting vaginal temperature measurements to predict parturition in bitches. © 2013 Blackwell Verlag GmbH.
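Sensitivity and specificity as used here are the standard confusion-matrix ratios. A minimal sketch with made-up counts (the paper reports only the resulting percentage ranges, not the underlying tallies):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec

# Hypothetical counts for illustration only (not from the study):
# 9 imminent parturitions correctly flagged, 7 missed;
# 12 correct negatives, 2 false alarms.
sens, spec = sensitivity_specificity(tp=9, fn=7, tn=12, fp=2)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```

As in the study, specificity comes out higher than sensitivity for these counts.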
Cherry, Simon R; Jones, Terry; Karp, Joel S; Qi, Jinyi; Moses, William W; Badawi, Ramsey D
2018-01-01
PET is widely considered the most sensitive technique available for noninvasively studying physiology, metabolism, and molecular pathways in the living human being. However, the utility of PET, being a photon-deficient modality, remains constrained by factors including low signal-to-noise ratio, long imaging times, and concerns about radiation dose. Two developments offer the potential to dramatically increase the effective sensitivity of PET. First, by increasing the geometric coverage to encompass the entire body, sensitivity can be increased by a factor of about 40 for total-body imaging or a factor of about 4-5 for imaging a single organ such as the brain or heart. The world's first total-body PET/CT scanner is currently under construction to demonstrate how this step change in sensitivity affects the way PET is used both in clinical research and in patient care. Second, there is the future prospect of significant improvements in timing resolution that could lead to further effective sensitivity gains. When combined with total-body PET, this could produce overall sensitivity gains of more than 2 orders of magnitude compared with existing state-of-the-art systems. In this article, we discuss the benefits of increasing body coverage, describe our efforts to develop a first-generation total-body PET/CT scanner, discuss selected application areas for total-body PET, and project the impact of further improvements in time-of-flight PET. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
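The combined gain can be illustrated with simple arithmetic. The geometric factor below is the ~40x quoted in the abstract; the time-of-flight factor is an assumption standing in for the projected timing-resolution improvements, not a figure from the article:

```python
# Illustrative arithmetic for the combined effective sensitivity gain.
geometric_gain = 40   # total-body coverage vs a conventional axial field of view
tof_gain = 3          # assumed additional gain from improved timing resolution

total_gain = geometric_gain * tof_gain
print(f"combined effective sensitivity gain: ~{total_gain}x")
```

Any time-of-flight factor of ~3 or more pushes the combined gain past the "more than 2 orders of magnitude" mentioned above.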
Modeling tree growth and stable isotope ratios of white spruce in western Alaska.
NASA Astrophysics Data System (ADS)
Boucher, Etienne; Andreu-Hayles, Laia; Field, Robert; Oelkers, Rose; D'Arrigo, Rosanne
2017-04-01
Summer temperatures are assumed to exert a dominant control on physiological processes driving forest productivity in interior Alaska. However, despite the recent warming of the last few decades, numerous lines of evidence indicate that the enhancing effect of summer temperatures on high latitude forest populations has been weakening. First, satellite-derived indices of photosynthetic activity, such as the Normalized-Difference Vegetation Index (NDVI, 1982-2005), show overall declines in productivity in the interior boreal forests. Second, some white spruce tree ring series strongly diverge from summer temperatures during the second half of the 20th century, indicating a persistent loss of temperature sensitivity of tree ring proxies. Thus, the physiological response of treeline forests to ongoing climate change cannot be accurately predicted, especially from correlation analysis. Here, we make use of a process-based dendroecological model (MAIDENiso) to elucidate the complex linkages of global warming and increasing atmospheric CO2 concentration ([CO2]) with the response of treeline white spruce stands in interior Alaska (Seward). In order to fully capture the array of processes controlling tree growth in the area, multiple physiological indicators of white spruce productivity are used as target variables: NDVI images, ring widths (RW), maximum density (MXD) and newly measured carbon and oxygen stable isotope ratios from ring cellulose. Based on these data, we highlight the processes and mechanisms responsible for the apparent loss of sensitivity of white spruce trees to recent climate warming and [CO2] increase, in order to assess the sensitivity and vulnerability of these trees to climate change.
Probability techniques for reliability analysis of composite materials
NASA Technical Reports Server (NTRS)
Wetherhold, Robert C.; Ucci, Anthony M.
1994-01-01
Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
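The simple Monte Carlo approach mentioned above can be sketched in a few lines: reliability is estimated as the fraction of samples for which the limit-state function g = strength − stress stays positive. The normal distributions and their parameters below are illustrative assumptions, not values from the report:

```python
import random

random.seed(0)

def failure_probability(n_samples=100_000):
    """Plain Monte Carlo estimate of P(g < 0) for the limit state
    g = strength - stress. Distributions and parameters are assumed
    for illustration only."""
    failures = 0
    for _ in range(n_samples):
        strength = random.gauss(1000.0, 80.0)  # lamina strength, MPa (assumed)
        stress = random.gauss(700.0, 60.0)     # applied stress, MPa (assumed)
        if strength - stress < 0.0:
            failures += 1
    return failures / n_samples

p_f = failure_probability()
reliability = 1.0 - p_f
```

FPI methods approximate the same integral analytically at the most probable failure point; importance sampling concentrates the Monte Carlo draws near that point to reduce variance.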
Wojcieszak, Robert; Raj, Gijo
2014-01-01
CdS quantum dots were grown on mesoporous TiO2 films by successive ionic layer adsorption and reaction processes in order to obtain CdS particles of various sizes. AFM analysis shows that the growth of the CdS particles is a two-step process. The first step is the formation of new crystallites at each deposition cycle. In the next step the pre-deposited crystallites grow to form larger aggregates. Special attention is paid to the estimation of the CdS particle size by X-ray photoelectron spectroscopy (XPS). Among the classical methods of characterization, the XPS model is described in detail. In an attempt to validate the XPS model, the results are compared to those obtained from AFM analysis and to the evolution of the band gap energy of the CdS nanoparticles as obtained by UV-vis spectroscopy. The results showed that the XPS technique is a powerful tool for estimating the CdS particle size. Moreover, a very good correlation was found between the number of deposition cycles and the particle size. PMID:24605274
First-Order or Second-Order Kinetics? A Monte Carlo Answer
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2005-01-01
Monte Carlo computational experiments reveal that the ability to discriminate between first- and second-order kinetics from least-squares analysis of time-dependent concentration data is better than implied in earlier discussions of the problem. The problem is rendered as simple as possible by assuming that the order must be either 1 or 2 and that…
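A minimal version of such a computational experiment can be sketched as follows, using simulated first-order data and a crude grid search in place of a proper nonlinear least-squares routine (all parameter values are assumed for illustration):

```python
import math
import random

random.seed(1)

# Simulate noisy concentration-vs-time data from a true first-order decay.
C0, k_true = 1.0, 0.3
times = [0.5 * i for i in range(20)]
data = [C0 * math.exp(-k_true * t) + random.gauss(0.0, 0.01) for t in times]

def first_order(t, k):
    return C0 * math.exp(-k * t)          # C(t) = C0 * exp(-k t)

def second_order(t, k):
    return C0 / (1.0 + k * C0 * t)        # C(t) = C0 / (1 + k C0 t)

def best_rss(model):
    """Crude 1-D grid search for the least-squares rate constant."""
    def rss(k):
        return sum((c - model(t, k)) ** 2 for t, c in zip(times, data))
    return min(rss(0.01 * j) for j in range(1, 200))

rss1, rss2 = best_rss(first_order), best_rss(second_order)
# With modest noise, the correct (first-order) model fits far better.
```

Repeating this over many noise realizations, and comparing the two residual sums each time, is the essence of the Monte Carlo discrimination experiment.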
NASA Astrophysics Data System (ADS)
Masoomi, Mohsen; Katbab, Ali Asghar; Nazockdast, Hossein
2006-09-01
Attempts have been made for the first time to prepare a friction material with a thermally sensitive modulus by including thermoplastic elastomers (TPEs) as viscoelastic polymeric materials in the formulation in order to increase the damping behavior of the cured friction material. Styrene butadiene styrene (SBS), styrene ethylene butylene styrene (SEBS), and a nitrile rubber/polyvinyl chloride (NBR/PVC) blend system were used as TPE materials. A dynamic mechanical analyzer (DMA) was used to evaluate viscoelastic parameters of the friction material such as the loss factor (tan δ) and storage modulus (E′). Natural frequencies and mode shapes of the friction material and brake disc were determined by modal analysis. NBR/PVC and SEBS were found to be much more effective in improving damping behavior. The results of this comparative study suggest that the damping characteristics of commercial friction materials can be strongly affected by TPE ingredients. The investigation also confirmed that specimens with high TPE content had low noise propensity.
Practical steganalysis of digital images: state of the art
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2002-04-01
Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover documents, such as digital images. Detection of steganography, estimation of message length, and its extraction belong to the field of steganalysis. Steganalysis has recently received a great deal of attention both from law enforcement and the media. In our paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis - visual detection, detection based on first order statistics (histogram analysis), dual statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases, such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography - the bit-replacement or bit substitution - is inherently insecure with safe capacities far smaller than previously thought.
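The "first order statistics" mentioned above are histogram-level features. A toy sketch of the pair-of-values idea underlying histogram attacks on LSB replacement, using deliberately artificial data (real detectors such as the chi-square attack are considerably more elaborate):

```python
import random

random.seed(42)

def pair_imbalance(pixels):
    """Sum of |count(2k) - count(2k+1)| over all 128 pairs of values.

    LSB replacement tends to equalize the two counts within each pair,
    so heavily embedded images show a much smaller imbalance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    return sum(abs(hist[2 * k] - hist[2 * k + 1]) for k in range(128))

# Artificial "cover" with maximally unbalanced pairs (even values only).
cover = [2 * random.randrange(128) for _ in range(10_000)]
# Simulated full-capacity LSB embedding: each LSB replaced by a random bit.
stego = [(p & ~1) | random.getrandbits(1) for p in cover]

cover_imb, stego_imb = pair_imbalance(cover), pair_imbalance(stego)
```

Embedding collapses the imbalance toward the binomial noise floor, which is the statistical signature these detectors exploit.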
Cell Electrosensitization Exists Only in Certain Electroporation Buffers.
Dermol, Janja; Pakhomova, Olga N; Pakhomov, Andrei G; Miklavčič, Damijan
2016-01-01
Electroporation-induced cell sensitization was described as the occurrence of a delayed hypersensitivity to electric pulses caused by pretreating cells with electric pulses. It was achieved by increasing the duration of the electroporation treatment at the same cumulative energy input. It could be exploited in electroporation-based treatments such as electrochemotherapy and tissue ablation with irreversible electroporation. The mechanisms responsible for cell sensitization, however, have not yet been identified. We investigated cell sensitization dynamics in five different electroporation buffers. We split a pulse train into two trains varying the delay between them and measured the propidium uptake by fluorescence microscopy. By fitting the first-order model to the experimental results, we determined the uptake due to each train (i.e. the first and the second) and the corresponding resealing constant. Cell sensitization was observed in the growth medium but not in other tested buffers. The effect of pulse repetition frequency, cell size change, cytoskeleton disruption and calcium influx do not adequately explain cell sensitization. Based on our results, we can conclude that cell sensitization is a sum of several processes and is buffer dependent. Further research is needed to determine its generality and to identify underlying mechanisms.
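The first-order model referred to above can be written as an exponential approach to a plateau. A minimal sketch with illustrative parameter values (not the paper's fitted constants):

```python
import math

def uptake(t, f_inf=100.0, tau=120.0):
    """First-order uptake model F(t) = F_inf * (1 - exp(-t / tau)).

    f_inf is the plateau fluorescence contributed by one pulse train and
    tau the membrane resealing time constant; both values are assumed
    here for illustration (seconds, arbitrary fluorescence units)."""
    return f_inf * (1.0 - math.exp(-t / tau))

# After several resealing time constants the contribution has saturated.
early, late = uptake(30.0), uptake(600.0)
```

Fitting this form separately to the uptake following each of the two pulse trains, as in the study, yields the per-train uptake amplitudes and resealing constants that are then compared across buffers.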
NASA Astrophysics Data System (ADS)
Qian, Qingkai; Zhang, Zhaofu; Chen, Kevin J.
2018-04-01
Acoustic-phonon Raman scattering, as a defect-induced second-order Raman scattering process (with incident photon scattered by one acoustic phonon at the Brillouin-zone edge and the momentum conservation fulfilled by defect scattering), is used as a sensitive tool to study the defects of transition-metal dichalcogenides (TMDs). Moreover, second-order Raman scattering processes are closely related to the valley depolarization of single-layer TMDs in potential valleytronic applications. Here, the layer dependence of second-order Raman intensity of MoS2 and WSe2 is studied. The electronic band structures of MoS2 and WSe2 are modified by the layer thicknesses; hence, the resonance conditions for both first-order and second-order Raman scattering processes are tuned. In contrast to the first-order Raman scattering, second-order Raman scattering of MoS2 and WSe2 involves additional intervalley scattering of electrons by phonons with large momenta. As a result, the electron states that contribute most to the second-order Raman intensity are different from those contributing to the first-order process. A weaker layer-tuned resonance enhancement of second-order Raman intensity is observed for both MoS2 and WSe2. Specifically, when the incident laser has photon energy close to the optical band gap and the Raman spectra are normalized by the first-order Raman peaks, single-layer MoS2 or WSe2 has the strongest second-order Raman intensity. This layer-dependent second-order Raman intensity can be further utilized as an indicator to identify the layer number of MoS2 and WSe2.
Compact integrated dc SQUID gradiometer
NASA Astrophysics Data System (ADS)
de Waal, V. J.; Klapwijk, T. M.
1982-10-01
An all-niobium integrated system comprising a first-order gradiometer and a dc superconducting quantum interference device (SQUID) has been developed. It is relatively simple to fabricate, has an overall size of 17×12 mm, and achieves a sensitivity of 3.5×10^-12 T m^-1 Hz^-1/2.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
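The variogram analogy at the heart of VARS can be illustrated in one dimension: for a toy response function (assumed here, not taken from the paper), the variogram at small lags reflects local, derivative-like (Morris-style) sensitivity, while large lags approach the variance-based (Sobol-style) picture.

```python
import math

def response(x):
    """Arbitrary toy model response along one factor on [0, 1]."""
    return math.sin(6.0 * x) + 0.5 * x

xs = [i / 200.0 for i in range(201)]      # evenly spaced factor samples
ys = [response(x) for x in xs]

def variogram(lag_steps):
    """gamma(h) = 0.5 * mean of squared response differences at lag h."""
    diffs = [(ys[i + lag_steps] - ys[i]) ** 2
             for i in range(len(ys) - lag_steps)]
    return 0.5 * sum(diffs) / len(diffs)

# Small lags probe local sensitivity; large lags, global variability.
gamma_small, gamma_large = variogram(1), variogram(50)
```

In the full framework the variogram is evaluated across the multi-dimensional factor space from star-based samples, rather than along a single axis as in this sketch.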
Double dissociation between first- and second-order processing.
Allard, Rémy; Faubert, Jocelyn
2007-04-01
To study the difference in sensitivity to luminance- (LM) and contrast-modulated (CM) stimuli, we compared LM and CM detection thresholds in LM- and CM-noise conditions. The results showed a double dissociation (no or little inter-attribute interaction) between the processing of these stimuli, which implies that both stimuli must be processed, at least at some point, by separate mechanisms and that both stimuli are not merged after a rectification process. A second experiment showed that the internal equivalent noise limiting the CM sensitivity was greater than the one limiting the carrier sensitivity, which suggests that the internal noise occurring before the rectification process is not limiting the CM sensitivity. These results support the hypothesis that a suboptimal rectification process partially explains the difference in LM and CM sensitivity.
Differential metabolome analysis of field-grown maize kernels in response to drought stress
USDA-ARS?s Scientific Manuscript database
Drought stress constrains maize kernel development and can exacerbate aflatoxin contamination. In order to identify drought responsive metabolites and explore pathways involved in kernel responses, a metabolomics analysis was conducted on kernels from a drought tolerant line, Lo964, and a sensitive ...
Statkiewicz-Barabach, Gabriela; Olszewski, Jacek; Mergo, Pawel; Urbanczyk, Waclaw
2017-01-01
We present a comprehensive study of an in-line Mach-Zehnder intermodal interferometer fabricated in a boron-doped two-mode highly birefringent microstructured fiber. We observed different interference signals at the output of the interferometer, related to the intermodal interference of the fundamental and the first order modes of the orthogonal polarizations and a beating of the polarimetric signal related to the difference in the group modal birefringence between the fundamental and the first order modes, respectively. The proposed interferometer was tested for measurements of hydrostatic pressure and temperature for different alignments of the input polarizer with no analyzer at the output. The sensitivities to hydrostatic pressure of the intermodal interference signals for x- and y-polarizations had opposite signs and were equal to 0.229 nm/MPa and −0.179 nm/MPa, respectively, while the temperature sensitivities for both polarizations were similar, equal to 0.020 nm/°C and 0.019 nm/°C. In the case of pressure, for the simultaneous excitation of both polarization modes, we observed a displacement of the intermodal fringes with a sensitivity depending on the azimuth of the input polarization state, as well as a displacement of their envelope with a sensitivity of 2.14 nm/MPa, accompanied by a change in fringe visibility. Such properties of the proposed interferometer allow for convenient adjustment of the pressure sensitivity of the intermodal fringes and possible applications for the simultaneous interrogation of temperature and pressure. PMID:28718796
Contreras-Hernández, Iris; Mould-Quevedo, Joaquín F; Torres-González, Rubén; Goycochea-Robles, María Victoria; Pacheco-Domínguez, Reyna Lizette; Sánchez-García, Sergio; Mejía-Aranguré, Juan Manuel; Garduño-Espinosa, Juan
2008-11-12
Osteoarthritis (OA) is one of the main causes of disability worldwide, especially in persons >55 years of age. Currently, controversy remains about the best therapeutic alternative for this disease when evaluated from a cost-effectiveness viewpoint. For Social Security Institutions in developing countries, it is very important to assess what drugs may decrease the subsequent use of medical care resources, considering their adverse events that are known to have a significant increase in medical care costs of patients with OA. Three treatment alternatives were compared: celecoxib (200 mg twice daily), non-selective NSAIDs (naproxen, 500 mg twice daily; diclofenac, 100 mg twice daily; and piroxicam, 20 mg/day) and acetaminophen, 1000 mg twice daily. The aim of this study was to identify the most cost-effective first-choice pharmacological treatment for the control of joint pain secondary to OA in patients treated at the Instituto Mexicano del Seguro Social (IMSS). A cost-effectiveness assessment was carried out. A systematic review of the literature was performed to obtain transition probabilities. In order to evaluate analysis robustness, one-way and probabilistic sensitivity analyses were conducted. Estimations were done for a 6-month period. Treatment demonstrating the best cost-effectiveness results [lowest cost-effectiveness ratio $17.5 pesos/patient ($1.75 USD)] was celecoxib. According to the one-way sensitivity analysis, celecoxib would need to markedly decrease its effectiveness in order for it to not be the optimal treatment option. In the probabilistic analysis, both in the construction of the acceptability curves and in the estimation of net economic benefits, the most cost-effective option was celecoxib. From a Mexican institutional perspective and probably in other Social Security Institutions in similar developing countries, the most cost-effective option for treatment of knee and/or hip OA would be celecoxib.
PMID:19014495
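The cost-effectiveness comparison and one-way sensitivity threshold described above can be sketched as follows; the costs and effectiveness values are invented placeholders, not the study's data.

```python
# Hypothetical 6-month costs (pesos) and effectiveness (proportion of patients
# with adequate pain control); all numbers are illustrative placeholders.
treatments = {
    "celecoxib":     {"cost": 1750.0, "effect": 0.80},
    "ns_nsaid":      {"cost": 1500.0, "effect": 0.60},
    "acetaminophen": {"cost": 1200.0, "effect": 0.50},
}

def cer(t):
    """Cost-effectiveness ratio: cost per unit of effectiveness."""
    return t["cost"] / t["effect"]

best = min(treatments, key=lambda name: cer(treatments[name]))

# One-way sensitivity: how far the winner's effectiveness can fall before it
# loses the lowest-CER position to the next-best alternative.
next_best_cer = min(cer(t) for name, t in treatments.items() if name != best)
threshold = treatments[best]["cost"] / next_best_cer
```

With these placeholder numbers the winner keeps its advantage until its effectiveness drops markedly below its base value, mirroring the study's one-way sensitivity finding.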
NASA Astrophysics Data System (ADS)
Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.
2017-12-01
The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|.
The validation exercise indicated a large improvement in model performance with about 40-85% reduction in 1-NSE, and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, the results of which provide useful information that helps to understand the model behaviors and improve the model simulations.
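The MARS-based Sobol' step described above rests on the standard pick-and-freeze estimator of first-order indices. A minimal sketch; the two-input additive test function and sample size are illustrative assumptions, not the study's model.

```python
import numpy as np

def sobol_first_order(f, n_dims, n=10_000, seed=1):
    """Pick-and-freeze estimate of first-order Sobol' indices S_i, using two
    independent sample matrices A, B and hybrid matrices AB_i."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, n_dims))
    B = rng.uniform(size=(n, n_dims))
    fA, fB = f(A), f(B)
    total_var = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_dims)
    for i in range(n_dims):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # column i taken from B, the rest from A
        S[i] = np.mean(fB * (f(ABi) - fA)) / total_var
    return S

# Additive test function with known first-order indices S = (0.8, 0.2).
f = lambda X: X[:, 0] + 0.5 * X[:, 1]
S = sobol_first_order(f, 2)
```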
Aderibigbe, Segun A; Adegoke, Olajire A; Idowu, Olakunle S; Olaleye, Sefiu O
2012-01-01
The study is a description of a sensitive spectrophotometric determination of aceclofenac following azo dye formation with 4-carboxyl-2,6-dinitrobenzenediazonium ion (CDNBD). Spot test and thin layer chromatography revealed the formation of a new compound distinct from CDNBD and aceclofenac. Optimization studies established a reaction time of 5 min at 30 degrees C after vortex mixing the drug/CDNBD for 10 s. An absorption maximum of 430 nm was selected as the analytical wavelength. A linear response was observed over 1.2-4.8 μg/mL of aceclofenac with a correlation coefficient of 0.9983, and the drug combined with CDNBD at a stoichiometric ratio of 2 : 1. The method has a limit of detection of 0.403 μg/mL, a limit of quantitation of 1.22 μg/mL and is reproducible over a three-day assessment. The method gave Sandell's sensitivity of 3.279 ng/cm2. Intra- and inter-day accuracies (in terms of errors) were less than 6% while precisions were of the order of 0.03-1.89% (RSD). The developed spectrophotometric method is of equivalent accuracy (p > 0.05) to the British Pharmacopoeia 2010 potentiometric method. It has the advantages of speed, simplicity, sensitivity and more affordable instrumentation, and could find application as a rapid and sensitive analytical method for aceclofenac. It is the first method described using azo dye derivatization for the analysis of aceclofenac in bulk samples and dosage forms.
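Detection and quantitation limits like those quoted above are conventionally taken from a calibration line as LOD = 3.3*sd/slope and LOQ = 10*sd/slope. A minimal sketch; the calibration points below are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration points (concentration in ug/mL vs. absorbance);
# illustrative values only, not the paper's data.
conc = np.array([1.2, 2.4, 3.6, 4.8])
absorbance = np.array([0.151, 0.298, 0.452, 0.597])

slope, intercept = np.polyfit(conc, absorbance, 1)
residuals = absorbance - (slope * conc + intercept)
sd = residuals.std(ddof=2)       # residual standard deviation of the linear fit

lod = 3.3 * sd / slope           # limit of detection
loq = 10.0 * sd / slope          # limit of quantitation
```

The paper's LOD/LOQ pair (0.403 and 1.22 μg/mL) sits in the same 3.3:10 ratio this convention implies.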
Comparative Sensitivity Analysis of Muscle Activation Dynamics
Günther, Michael; Götz, Thomas
2015-01-01
We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative for a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treating initial conditions as parameters and to calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
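Parameter sensitivities of such activation ODEs can be obtained by the forward sensitivity method: augment the state with s = ∂a/∂p and integrate ds/dt = (∂f/∂a)s + ∂f/∂p alongside the state equation. A sketch for a Zajac-style linear first-order activation model; the constant excitation u, time constant tau and the RK4 integrator are assumptions for illustration, not the paper's setup.

```python
import numpy as np

# Zajac-style first-order activation: da/dt = (u - a)/tau, with constant
# excitation u (an assumption for illustration). The sensitivity s = da/dtau
# obeys ds/dt = (df/da)*s + df/dtau and is integrated alongside the state.
u, tau = 1.0, 0.05

def rhs(z):
    a, s = z
    return np.array([(u - a) / tau,
                     (-1.0 / tau) * s - (u - a) / tau**2])

def rk4(z, dt, n_steps):
    # Classical fourth-order Runge-Kutta integration of the augmented system.
    for _ in range(n_steps):
        k1 = rhs(z)
        k2 = rhs(z + 0.5 * dt * k1)
        k3 = rhs(z + 0.5 * dt * k2)
        k4 = rhs(z + dt * k3)
        z = z + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

a_end, s_end = rk4(np.array([0.0, 0.0]), dt=1e-4, n_steps=2000)  # up to t = 0.2
```

The result can be checked against the closed form a(t) = u(1 - e^(-t/tau)) and s(t) = -u*t*e^(-t/tau)/tau^2.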
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain the preliminarily influential parameters via Analysis of Variance, greatly reducing the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's sensitivity analysis method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a few model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of efficiency criteria did not necessarily indicate good performance on hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well; however, the lowest and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures is still found in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. The work in this study supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of model simulations.
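The ANOVA-based screening step can be sketched as follows: each parameter is pinned at a few levels while the others vary, and the F statistic compares between-level to within-level response variance. The toy response function, levels and replicate counts below are illustrative assumptions, not the DHSVM setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(p):
    # Toy response: strongly controlled by p[0], barely by p[1] (illustrative).
    return 3.0 * p[0] + 0.01 * p[1] + rng.normal(0.0, 0.1)

def anova_f(groups):
    """One-way ANOVA F statistic: between-level vs. within-level variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

levels = [0.0, 0.5, 1.0]
F = {}
for i in range(2):
    groups = []
    for level in levels:
        reps = []
        for _ in range(30):
            p = rng.uniform(size=2)   # the other parameter varies freely
            p[i] = level              # screened parameter pinned at this level
            reps.append(model(p))
        groups.append(np.array(reps))
    F[f"p{i}"] = anova_f(groups)
```

A large F flags a parameter as influential and worth keeping for the variance-based stage; a small F marks it as a candidate for screening out.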
Correlation between experimental human and murine skin sensitization induction thresholds.
Api, Anne Marie; Basketter, David; Lalko, Jon
2015-01-01
Quantitative risk assessment for skin sensitization is directed towards the determination of levels of exposure to known sensitizing substances that will avoid the induction of contact allergy in humans. A key component of this work is the predictive identification of relative skin sensitizing potency, achieved normally by the measurement of the threshold (the "EC3" value) in the local lymph node assay (LLNA). In an extended series of studies, the accuracy of this murine induction threshold as the predictor of the absence of a sensitizing effect has been verified by conduct of a human repeated insult patch test (HRIPT). Murine and human thresholds for a diverse set of 57 fragrance chemicals spanning approximately four orders of magnitude variation in potency have been compared. The results confirm that there is a useful correlation, with the LLNA EC3 value helping particularly to identify stronger sensitizers. Good correlation (within half an order of magnitude) was seen with three-quarters of the dataset. The analysis also helps to identify potential outlier types of (fragrance) chemistry, exemplified by hexyl and benzyl salicylates (an over-prediction) and trans-2-hexenal (an under-prediction).
Saqib, Muhammad; Qi, Liming; Hui, Pan; Nsabimana, Anaclet; Halawa, Mohamed Ibrahim; Zhang, Wei; Xu, Guobao
2018-01-15
N-hydroxyphthalimide (NHPI), a well-known reagent in organic synthesis and biochemical applications, has been developed as a stable and efficient chemiluminescence coreactant for the first time. It reacts with luminol much faster than N-hydroxysuccinimide, eliminating the need for the prereaction coil used in the N-hydroxysuccinimide system. Without a prereaction coil, the chemiluminescence peak intensities of the luminol-NHPI system are about 102 and 26 times greater than those of the luminol-N-hydroxysuccinimide system and the classical luminol-hydrogen peroxide system, respectively. The luminol-NHPI system achieves highly sensitive detection of luminol (LOD = 70 pM) and NHPI (LOD = 910 nM). Based on their excellent quenching efficiencies, superoxide dismutase and uric acid are sensitively detected with LODs of 3 ng/mL and 10 pM, respectively. Co2+ is also detected with a LOD of 30 pM via its remarkable enhancing effect. Noteworthily, our method is at least 4 orders of magnitude more sensitive than previously reported uric acid detection methods, and can detect uric acid in human urine and Co2+ in tap and lake water samples with excellent recoveries in the range of 96.35-102.70%. This luminol-NHPI system can be an important candidate for biochemical, clinical and environmental analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
Input-variable sensitivity assessment for sediment transport relations
NASA Astrophysics Data System (ADS)
Fernández, Roberto; Garcia, Marcelo H.
2017-09-01
A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
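MVFOSM propagates input variances through a first-order Taylor expansion, Var[f] ≈ Σ_i (∂f/∂x_i)² σ_i², so each input's variance contribution can be ranked. A sketch; the power-law transport relation below (coefficient, exponents, means and standard deviations) is invented for illustration and is not one of the equations used in the paper.

```python
import numpy as np

def mvfosm_contributions(f, mean, sigma, h=1e-6):
    """MVFOSM: Var[f] ~= sum_i (df/dx_i)^2 * sigma_i^2, with gradients taken
    by central differences at the mean point. Returns each input's variance
    contribution, which ranks the inputs by impact on the output variance."""
    mean = np.asarray(mean, dtype=float)
    contrib = np.empty(len(mean))
    for i in range(len(mean)):
        dx = h * max(abs(mean[i]), 1.0)
        xp, xm = mean.copy(), mean.copy()
        xp[i] += dx
        xm[i] -= dx
        grad = (f(xp) - f(xm)) / (2.0 * dx)
        contrib[i] = (grad * sigma[i]) ** 2
    return contrib

# Hypothetical power-law relation: q_s = 0.01 * d^-1.2 * S^0.5 * Q^1.3,
# for grain size d (m), slope S (-) and discharge Q (m^3/s).
f = lambda x: 0.01 * x[0] ** -1.2 * x[1] ** 0.5 * x[2] ** 1.3
contrib = mvfosm_contributions(f, mean=[0.02, 0.002, 15.0],
                               sigma=[0.005, 0.0002, 1.5])
```

With these assumed uncertainties (25% on grain size, 10% on slope and discharge), grain size dominates the output variance, echoing the paper's qualitative finding.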
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1991-01-01
Continuing studies associated with the development of the quasi-analytical (QA) sensitivity method for three dimensional transonic flow about wings are presented. Furthermore, initial results using the quasi-analytical approach were obtained and compared to those computed using the finite difference (FD) approach. The basic goals achieved were: (1) carrying out various debugging operations pertaining to the quasi-analytical method; (2) addition of section design variables to the sensitivity equation in the form of multiple right hand sides; (3) reconfiguring the analysis/sensitivity package in order to facilitate the execution of analysis/FD/QA test cases; and (4) enhancing the display of output data to allow careful examination of the results and to permit various comparisons of sensitivity derivatives obtained using the FD/QA methods to be conducted easily and quickly. In addition to discussing the above goals, the results of executing subcritical and supercritical test cases are presented.
Discrete sensitivity derivatives of the Navier-Stokes equations with a parallel Krylov solver
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Taylor, Arthur C., III
1994-01-01
This paper solves an 'incremental' form of the sensitivity equations derived by differentiating the discretized thin-layer Navier Stokes equations with respect to certain design variables of interest. The equations are solved with a parallel, preconditioned Generalized Minimal RESidual (GMRES) solver on a distributed-memory architecture. The 'serial' sensitivity analysis code is parallelized by using the Single Program Multiple Data (SPMD) programming model, domain decomposition techniques, and message-passing tools. Sensitivity derivatives are computed for low and high Reynolds number flows over a NACA 1406 airfoil on a 32-processor Intel Hypercube, and found to be identical to those computed on a single-processor Cray Y-MP. It is estimated that the parallel sensitivity analysis code has to be run on 40-50 processors of the Intel Hypercube in order to match the single-processor processing time of a Cray Y-MP.
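The incremental sensitivity system has the form (∂R/∂Q)(dQ/db) = -(∂R/∂b), one linear solve per design variable. The sketch below runs a minimal serial, unpreconditioned GMRES on a small dense stand-in Jacobian; the matrix and right-hand side are invented, and the paper's parallel preconditioned solver is not reproduced.

```python
import numpy as np

def gmres(A, b, tol=1e-10):
    """Minimal unrestarted GMRES: Arnoldi iteration plus a least-squares solve
    on the small Hessenberg system (serial and unpreconditioned)."""
    n = len(b)
    beta = np.linalg.norm(b)
    Q = np.zeros((n, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = b / beta
    x = np.zeros(n)
    for k in range(n):
        v = A @ Q[:, k]
        for j in range(k + 1):              # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        x = Q[:, :k + 1] @ y
        if np.linalg.norm(A @ x - b) < tol * beta or H[k + 1, k] < 1e-14:
            break
        Q[:, k + 1] = v / H[k + 1, k]
    return x

# Stand-in incremental sensitivity system (dR/dQ) * (dQ/db) = -(dR/db):
J = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])      # toy Jacobian dR/dQ
rhs = np.array([1.0, 0.0, 1.0])     # toy right-hand side for one design variable
dQdb = gmres(J, rhs)
```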
Coelli, Fernando C; Almeida, Renan M V R; Pereira, Wagner C A
2010-12-01
This work develops a cost analysis estimation for a mammography clinic, taking into account resource utilization and equipment failure rates. Two standard clinic models were simulated, the first with one mammography machine, two technicians and one doctor, and the second (based on an actually functioning clinic) with two machines, three technicians and one doctor. Cost data and model parameters were obtained by direct measurements, literature reviews and other hospital data. A discrete-event simulation model was developed, in order to estimate the unit cost (total costs/number of examinations in a defined period) of mammography examinations at those clinics. The cost analysis considered simulated changes in resource utilization rates and in examination failure probabilities (failures on the image acquisition system). In addition, a sensitivity analysis was performed, taking into account changes in the probabilities of equipment failure types. For the two clinic configurations, the estimated mammography unit costs were, respectively, US$ 41.31 and US$ 53.46 in the absence of examination failures. As the examination failures increased up to 10% of total examinations, unit costs approached US$ 54.53 and US$ 53.95, respectively. The sensitivity analysis showed that type 3 (the most serious) failure increases had a very large impact on patient attendance, up to the point of actually making attendance unfeasible. Discrete-event simulation allowed for the definition of the more efficient clinic, contingent on the expected prevalence of resource utilization and equipment failures. © 2010 Blackwell Publishing Ltd.
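The unit-cost measure (total costs / completed examinations) under examination failures can be sketched with a simple stochastic simulation. All parameters below (costs, exam time, failure behavior) are invented stand-ins, not the study's measured data, and the full discrete-event model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical clinic parameters (illustrative placeholders, not measured data):
fixed_cost_per_day = 900.0    # staff plus equipment amortization
cost_per_exam = 5.0           # consumables per attempted examination
exam_minutes = 20.0
day_minutes = 8 * 60.0

def unit_cost(p_fail, n_days=250):
    """Unit cost = total costs / completed examinations over the period;
    a failed image acquisition consumes the slot and must be repeated later."""
    total_cost, completed = 0.0, 0
    for _ in range(n_days):
        total_cost += fixed_cost_per_day
        t = 0.0
        while t + exam_minutes <= day_minutes:
            t += exam_minutes
            total_cost += cost_per_exam
            if rng.random() >= p_fail:
                completed += 1
    return total_cost / completed

base = unit_cost(0.0)            # no examination failures
with_failures = unit_cost(0.10)  # 10% of acquisitions fail
```

As in the study, raising the failure probability raises the unit cost, since the same fixed costs are spread over fewer completed examinations.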
Evaluating uncertainty and parameter sensitivity in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The challenge of examining ever more complex, integrated, higher-order models is a formidab...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-07
... does not include any sensitive personal information, like anyone's Social Security number, date of... the agreement or make final the agreement's proposed order. This matter involves PPG's marketing and... prevent PPG from engaging in similar acts and practices in the future. Part I addresses the marketing of...
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled from the experimental data and implemented along with an existing spiral modeling scheme (SMS) and user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering design. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost-effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered.
This part of the research starts with an introduction to reliability analysis, covering first-order and second-order reliability methods, followed by the simulation techniques performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation incorporating sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and the number of function evaluations. Finally, applications of the reliability analysis concepts and RBDO to finite element 2D truss problems and a planar beam problem are presented and discussed.
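For a linear limit state with normal variables, the first-order reliability index and failure probability have closed forms that a crude Monte Carlo run can cross-check. A sketch with illustrative capacity and demand numbers, which are assumptions, not values from this study.

```python
import numpy as np
from math import erf, sqrt

def std_normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Linear limit state g = R - S (capacity minus demand), R and S normal.
# The means and standard deviations below are illustrative assumptions.
muR, sdR = 300.0, 30.0
muS, sdS = 200.0, 40.0

# First-order reliability: beta = (muR - muS)/sqrt(sdR^2 + sdS^2), pf = Phi(-beta)
beta = (muR - muS) / sqrt(sdR**2 + sdS**2)
pf_form = std_normal_cdf(-beta)

# Crude Monte Carlo cross-check of the failure probability P(g < 0)
rng = np.random.default_rng(0)
n = 200_000
g = rng.normal(muR, sdR, n) - rng.normal(muS, sdS, n)
pf_mc = float(np.mean(g < 0.0))
```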
Van Dessel, E; Fierens, K; Pattyn, P; Van Nieuwenhove, Y; Berrevoet, F; Troisi, R; Ceelen, W
2009-01-01
Approximately 5%-20% of colorectal cancer (CRC) patients present with synchronous potentially resectable liver metastatic disease. Preclinical and clinical studies suggest a benefit of the 'liver first' approach, i.e. resection of the liver metastasis followed by resection of the primary tumour. A formal decision analysis may support a rational choice between several therapy options. Survival and morbidity data were retrieved from relevant clinical studies identified by a Web of Science search. Data were entered into decision analysis software (TreeAge Pro 2009, Williamstown, MA, USA). Transition probabilities, including the risk of death from complications or disease progression associated with individual therapy options, were entered into the model. Sensitivity analysis was performed to evaluate the model's validity under a variety of assumptions. The result of the decision analysis confirms the superiority of the 'liver first' approach. Sensitivity analysis demonstrated that this assumption is valid on condition that the mortality associated with hepatectomy performed first is < 4.5%, and that the mortality of colectomy performed after hepatectomy is < 3.2%. The results of this decision analysis suggest that, in patients with synchronous resectable colorectal liver metastases, the 'liver first' approach is to be preferred. Randomized trials will be needed to confirm the results of this simulation-based analysis.
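The decision model reduces to expected-value roll-back of a small tree over transition probabilities. A sketch; every probability and survival payoff below is a hypothetical stand-in, not a value from the cited studies.

```python
def strategy_value(m1, m2, p_prog, s_complete, s_partial, s_death=0.0):
    """Expected survival (years) for a two-stage surgical strategy: stage-1
    mortality m1, inter-stage progression p_prog (falls back to palliative
    survival s_partial), stage-2 mortality m2. Roll-back of a small tree."""
    v_stage2 = (1.0 - m2) * s_complete + m2 * s_death
    v_after1 = (1.0 - p_prog) * v_stage2 + p_prog * s_partial
    return (1.0 - m1) * v_after1 + m1 * s_death

# Hypothetical inputs (placeholders, not values from the cited studies):
liver_first = strategy_value(m1=0.03, m2=0.02, p_prog=0.10,
                             s_complete=5.0, s_partial=2.0)
primary_first = strategy_value(m1=0.02, m2=0.045, p_prog=0.20,
                               s_complete=5.0, s_partial=2.0)
```

A one-way sensitivity analysis then sweeps, say, m1 until the two strategy values cross, which is how mortality thresholds like the 4.5% condition arise.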
Hori, Yusuke S; Fukuhara, Toru; Aoi, Mizuho; Oda, Kazunori; Shinno, Yoko
2018-06-01
Metastatic glioblastoma is a rare condition, and several studies have reported the involvement of multiple organs including the lymph nodes, liver, and lung. The lung and pleura are reportedly the most frequent sites of metastasis, and diagnosis using less invasive tools such as cytological analysis with fine needle aspiration biopsy is challenging. Cytological analysis of fluid specimens tends to be negative because of the small number of cells obtained, whereas the cell block technique reportedly has higher sensitivity because of a decrease in cellular dispersion. Herein, the authors describe a patient with a history of diffuse astrocytoma who developed intractable, progressive accumulation of pleural fluid. Initial cytological analysis of the pleural effusion obtained by thoracocentesis was negative, but reanalysis using the cell block technique revealed the presence of glioblastoma cells. This is the first report to suggest the effectiveness of the cell block technique in the diagnosis of extracranial glioblastoma using pleural effusion. In patients with a history of glioma, the presence of extremely intractable pleural effusion warrants cytological analysis of the fluid using this technique in order to initiate appropriate chemotherapy.
Exogenous attention enhances 2nd-order contrast sensitivity
Barbot, Antoine; Landy, Michael S.; Carrasco, Marisa
2011-01-01
Natural scenes contain a rich variety of contours that the visual system extracts to segregate the retinal image into perceptually coherent regions. Covert spatial attention helps extract contours by enhancing contrast sensitivity for 1st-order, luminance-defined patterns at attended locations, while reducing sensitivity at unattended locations, relative to neutral attention allocation. However, humans are also sensitive to 2nd-order patterns such as spatial variations of texture, which are predominant in natural scenes and cannot be detected by linear mechanisms. We assess whether and how exogenous attention—the involuntary and transient capture of spatial attention—affects the contrast sensitivity of channels sensitive to 2nd-order, texture-defined patterns. Using 2nd-order, texture-defined stimuli, we demonstrate that exogenous attention increases 2nd-order contrast sensitivity at the attended location, while decreasing it at unattended locations, relative to a neutral condition. By manipulating both 1st- and 2nd-order spatial frequency, we find that the effects of attention depend both on 2nd-order spatial frequency of the stimulus and the observer’s 2nd-order spatial resolution at the target location. At parafoveal locations, attention enhances 2nd-order contrast sensitivity to high, but not to low 2nd-order spatial frequencies; at peripheral locations attention also enhances sensitivity to low 2nd-order spatial frequencies. Control experiments rule out the possibility that these effects might be due to an increase in contrast sensitivity at the 1st-order stage of visual processing. Thus, exogenous attention affects 2nd-order contrast sensitivity at both attended and unattended locations. PMID:21356228
Sensitivity of lod scores to changes in diagnostic status.
Hodge, S E; Greenberg, D A
1992-01-01
This paper investigates effects on lod scores when one individual in a data set changes diagnostic or recombinant status. First we examine the situation in which a single offspring in a nuclear family changes status. The nuclear-family situation, in addition to being of interest in its own right, also has general theoretical importance, since nuclear families are "transparent"; that is, one can track genetic events more precisely in nuclear families than in complex pedigrees. We demonstrate that in nuclear families log10 [(1-theta)/theta] gives an upper limit on the impact that a single offspring's change in status can have on the lod score at that recombination fraction (theta). These limits hold for a fully penetrant dominant condition and fully informative marker, in either phase-known or phase-unknown matings. Moreover, log10 [(1-theta-hat)/theta-hat] (where theta-hat denotes the value of theta at which Zmax occurs) gives an upper limit on the impact of a single offspring's status change on the maximum lod score (Zmax). In extended pedigrees, in contrast to nuclear families, no comparable limit can be set on the impact of a single individual on the lod score. Complex pedigrees are subject to both stabilizing and destabilizing influences, and these are described. Finally, we describe a "sensitivity analysis," in which, after all linkage analysis is completed, every informative individual in the data set is changed, one at a time, to see the effect which each separate change has on the lod scores. The procedure includes identifying "critical individuals," i.e., those who would have the greatest impact on the lod scores, should their diagnostic status in fact change. To illustrate use of the sensitivity analysis, we apply it to the large bipolar pedigree reported by Egeland et al. and Kelsoe et al. We show that the changes in lod scores observed there, on the order of 1.1-1.2 per person, are not unusual.
We recommend that investigators include a sensitivity analysis as a standard part of reporting the results of a linkage analysis. PMID:1570835
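The bound stated above can be evaluated directly. A minimal sketch (the function name is ours, not from the paper):

```python
import math

def lod_change_bound(theta):
    """Upper bound, log10((1 - theta)/theta), on the change in the lod score
    at recombination fraction theta when a single offspring in a nuclear
    family changes diagnostic or recombinant status (fully penetrant
    dominant condition, fully informative marker)."""
    if not 0.0 < theta < 1.0:
        raise ValueError("theta must lie strictly between 0 and 1")
    return math.log10((1.0 - theta) / theta)

# At theta = 0.07 the bound is about 1.12, consistent with the per-person
# lod changes of ~1.1-1.2 that the paper reports as not unusual.
```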
Locality and Word Order in Active Dependency Formation in Bangla.
Chacón, Dustin A; Imtiaz, Mashrur; Dasgupta, Shirsho; Murshed, Sikder M; Dan, Mina; Phillips, Colin
2016-01-01
Research on filler-gap dependencies has revealed that there are constraints on possible gap sites, and that real-time sentence processing is sensitive to these constraints. This work has shown that comprehenders have preferences for potential gap sites, and immediately detect when these preferences are not met. However, neither the mechanisms that select preferred gap sites nor the mechanisms used to detect whether these preferences are met are well-understood. In this paper, we report on three experiments in Bangla, a language in which gaps may occur in either a pre-verbal embedded clause or a post-verbal embedded clause. This word order variation allows us to manipulate whether the first gap linearly available is contained in the same clause as the filler, which allows us to dissociate structural locality from linear locality. In Experiment 1, an untimed ambiguity resolution task, we found a global bias to resolve a filler-gap dependency with the first gap linearly available, regardless of structural hierarchy. In Experiments 2 and 3, which use the filled-gap paradigm, we found sensitivity to disruption only when the blocked gap site is both structurally and linearly local, i.e., the filler and the gap site are contained in the same clause. This suggests that comprehenders may not show sensitivity to the disruption of all preferred gap resolutions.
Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen; Plósz, Benedek Gy
2014-10-15
Current research focuses on predicting and mitigating the impacts of high hydraulic loadings on centralized wastewater treatment plants (WWTPs) under wet-weather conditions. The maximum permissible inflow to WWTPs depends not only on the settleability of activated sludge in secondary settling tanks (SSTs) but also on the hydraulic behaviour of SSTs. The present study investigates the impacts of ideal and non-ideal flow (dry and wet weather) and settling (good settling and bulking) boundary conditions on the sensitivity of WWTP model outputs to uncertainties intrinsic to the one-dimensional (1-D) SST model structures and parameters. We identify the critical sources of uncertainty in WWTP models through global sensitivity analysis (GSA) using the Benchmark simulation model No. 1 in combination with first- and second-order 1-D SST models. The results obtained illustrate that the contribution of settling parameters to the total variance of the key WWTP process outputs significantly depends on the influent flow and settling conditions. The magnitude of the impact is found to vary, depending on which type of 1-D SST model is used. Therefore, we identify and recommend potential parameter subsets for WWTP model calibration, and propose optimal choice of 1-D SST models under different flow and settling boundary conditions. Additionally, the hydraulic parameters in the second-order SST model are found significant under dynamic wet-weather flow conditions. These results highlight the importance of developing a more mechanistic based flow-dependent hydraulic sub-model in second-order 1-D SST models in the future. Copyright © 2014 Elsevier Ltd. All rights reserved.
Study of nitrogen flowing afterglow with mercury vapor injection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazánková, V., E-mail: mazankova@fch.vutbr.cz; Krčma, F.; Trunec, D.
2014-10-21
The reaction kinetics in nitrogen flowing afterglow with mercury vapor addition was studied by optical emission spectroscopy. The DC flowing post-discharge in pure nitrogen was created in a quartz tube at a total gas pressure of 1000 Pa and a discharge power of 130 W. The mercury vapors were added into the afterglow at a distance of 30 cm behind the active discharge. The optical emission spectra were measured along the flow tube. Three nitrogen spectral systems (the first positive, the second positive, and the first negative) were identified and, after the mercury vapor addition, also the mercury resonance line at 254 nm in the second-order spectrum. The measurement of the spatial dependence of the mercury line intensity showed a very slow decay of its intensity, and the decay rate did not depend on the mercury concentration. In order to explain this behavior, a kinetic model for the reactions in the afterglow was developed. This model showed that the Hg(6³P₁) state, which is the upper state of the mercury UV resonance line at 254 nm, is produced by excitation transfer from nitrogen N₂(A³Σᵤ⁺) metastables to mercury atoms. However, the N₂(A³Σᵤ⁺) metastables are also produced by the reactions following N-atom recombination, which limits the decay of the N₂(A³Σᵤ⁺) metastable concentration and results in the very slow decay of the mercury resonance line intensity. It was found that N atoms are the most important particles in this late nitrogen afterglow: their volume recombination starts a chain of reactions that produces excited states of molecular nitrogen. In order to explain the decrease of the N-atom concentration, it was also necessary to include the surface recombination of N atoms in the model. The surface recombination was considered as a first-order reaction, and a wall recombination probability γ = (1.35 ± 0.04) × 10⁻⁶ was determined from the experimental data.
Sensitivity analysis was also applied to the kinetic model in order to reveal its main control parameters.
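The first-order wall-loss kinetics described above can be sketched as follows. The collision-limited expression k = γ⟨v⟩/(2R) for a cylindrical tube is a common low-pressure approximation that we assume here for illustration; it is not a formula given in the abstract, and the numbers below are placeholders:

```python
import math

def n_atom_density(n0, k_wall, t):
    """First-order wall-loss decay: n(t) = n0 * exp(-k_wall * t)."""
    return n0 * math.exp(-k_wall * t)

def k_wall_cylinder(gamma, mean_speed, radius):
    """Assumed collision-limited estimate of the first-order wall-loss rate
    in a cylindrical flow tube, k = gamma * <v> / (2 R); valid only when
    diffusion to the wall is fast compared with the surface reaction."""
    return gamma * mean_speed / (2.0 * radius)

# With gamma ~ 1.35e-6 (the fitted value above) and illustrative values
# <v> = 470 m/s, R = 5 mm, the loss rate is of order 0.06 s^-1, i.e. a
# very slow decay, in line with the observed behavior.
```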
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subject to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
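The variance decomposition underlying this approach can be illustrated on a deterministic toy model with a pick-freeze Monte Carlo estimator of Sobol first-order indices. This is a generic sketch, not the paper's stochastic-simulator decomposition; the function names and the toy model are ours:

```python
import numpy as np

def sobol_first_order(model, n_inputs, n_samples=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of Sobol first-order indices for a
    deterministic model with independent U(0,1) inputs: S_i is estimated as
    Cov(Y_A, Y_AB^i) / Var(Y), where Y_AB^i uses sample B with column i
    frozen at the A-sample values."""
    rng = np.random.default_rng(seed)
    a = rng.random((n_samples, n_inputs))
    b = rng.random((n_samples, n_inputs))
    ya = model(a)
    var = ya.var()
    indices = []
    for i in range(n_inputs):
        ab = b.copy()
        ab[:, i] = a[:, i]                  # freeze input i at A's values
        yab = model(ab)
        indices.append(np.cov(ya, yab)[0, 1] / var)
    return indices

# Toy linear model Y = X1 + 2*X2: the analytic indices are 1/5 and 4/5.
s1, s2 = sobol_first_order(lambda x: x[:, 0] + 2.0 * x[:, 1], 2)
```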
Global DNA methylation analysis using methyl-sensitive amplification polymorphism (MSAP).
Yaish, Mahmoud W; Peng, Mingsheng; Rothstein, Steven J
2014-01-01
DNA methylation is a crucial epigenetic process which helps control gene transcription activity in eukaryotes. Information regarding the methylation status of a regulatory sequence of a particular gene provides important knowledge of this transcriptional control. DNA methylation can be detected using several methods, including sodium bisulfite sequencing and restriction digestion using methylation-sensitive endonucleases. Methyl-Sensitive Amplification Polymorphism (MSAP) is a technique used to study the global DNA methylation status of an organism and hence to distinguish between two individuals based on their differential digestion patterns. This makes the technique a useful method for DNA methylation mapping and positional cloning of differentially methylated genes. In this technique, genomic DNA is first digested with a methylation-sensitive restriction enzyme such as HpaII, and the DNA fragments are then ligated to adaptors in order to facilitate their amplification. Digestion with MspI, a methylation-insensitive isoschizomer of HpaII, is used in a parallel reaction as a loading control in the experiment. Subsequently, these fragments are selectively amplified by fluorescently labeled primers. PCR products from different individuals are compared, and once an interesting polymorphic locus is recognized, the desired DNA fragment can be isolated from a denaturing polyacrylamide gel, sequenced, and identified based on DNA sequence similarity to other sequences available in the database. We use analysis of met1, ddm1, and atmbd9 mutants and of wild-type plants treated with a cytidine analogue (5-azaC or zebularine) to demonstrate how to assess the genetic modulation of DNA methylation in Arabidopsis.
It should be noted that although MSAP is a reliable technique for screening for polymorphic methylated loci, its power is limited to the restriction recognition sites of the enzymes used in the genomic DNA digestion.
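The HpaII/MspI comparison at the heart of MSAP can be sketched as a toy in-silico digestion. The methylation model below is deliberately simplified (HpaII blocked by internal-C CpG methylation, MspI cutting every CCGG); the sequence and function names are ours:

```python
import re

def digest(seq, methylated_sites=(), enzyme="MspI"):
    """Simplified in-silico CCGG digestion for an MSAP-style comparison.

    Toy model of methylation sensitivity: HpaII is blocked at CCGG sites
    whose start position appears in `methylated_sites` (CpG-methylated
    internal C), while MspI cuts every CCGG site. The cut is placed
    between the two Cs (C^CGG), as for both real enzymes."""
    cuts = []
    for m in re.finditer("CCGG", seq):
        if enzyme == "HpaII" and m.start() in methylated_sites:
            continue  # internal-C methylation blocks HpaII
        cuts.append(m.start() + 1)
    fragments, prev = [], 0
    for c in cuts:
        fragments.append(seq[prev:c])
        prev = c
    fragments.append(seq[prev:])
    return fragments

seq = "AAACCGGTTTCCGGAAA"
# MspI cuts both CCGG sites; HpaII skips a site methylated at position 10,
# so the two digests differ -- the polymorphism MSAP detects.
```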
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 of the 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate over interaction effects.
Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi
The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving the Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward-removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes) and, separately, their hydrologic indices/attributes (external hydrologic factors), using a principal component analysis (PCA) and expectation-maximization (EM) based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferable. This classification study provides guidance on identifiable parameters and on parameterization and inverse model design for CLM, but the methodology is applicable to other models.
Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
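The PCA step that precedes the EM-based clustering described above can be sketched generically via SVD of the centered data matrix. This is a toy illustration with made-up data, not the study's code:

```python
import numpy as np

def pca_scores(x, n_components=2):
    """Project rows of x onto the leading principal components, computed
    from the SVD of the centered data; a generic sketch of the PCA step
    used before clustering basins by sensitivity patterns."""
    xc = x - x.mean(axis=0)
    u, s, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:n_components].T

# Toy data whose two columns are perfectly collinear, so the first
# component captures all of the variance.
x = np.array([[2.0, 0.1], [4.0, 0.2], [6.0, 0.3], [8.0, 0.4]])
scores = pca_scores(x, 1)
```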
Kaluarachchi, Udhara S.; Deng, Yuhang; Besser, Matthew F.; ...
2017-06-09
Transport and magnetic studies of PbTaSe₂ under pressure suggest the existence of two superconducting phases, with the low-temperature phase boundary at ~0.25 GPa defined by a very sharp, first-order phase transition. The first-order phase transition line can be followed via pressure-dependent resistivity measurements and is found to be near 0.12 GPa near room temperature. Transmission electron microscopy and x-ray diffraction at elevated temperatures confirm that this first-order phase transition is structural and occurs at ambient pressure near ~425 K. The new high-temperature/high-pressure phase has a similar crystal structure and a slightly lower unit cell volume relative to the ambient-pressure, room-temperature structure. Based on first-principles calculations, this structure is suggested to be obtained by shifting the Pb atoms from the 1a to the 1e Wyckoff position without changing the positions of the Ta and Se atoms. PbTaSe₂ has an exceptionally pressure-sensitive structural phase transition, with ΔTs/ΔP ≈ −1400 K/GPa near room temperature and ≈ −1700 K/GPa near 4 K. This first-order transition causes a ~1 K (~25%) steplike decrease in Tc as pressure is increased through 0.25 GPa.
Cylindrical optical resonators: fundamental properties and bio-sensing characteristics
NASA Astrophysics Data System (ADS)
Khozeymeh, Foroogh; Razaghi, Mohammad
2018-04-01
In this paper, a detailed theoretical analysis of cylindrical resonators is presented. As illustrated, these kinds of resonators can be used as optical bio-sensing devices. The proposed structure is analyzed using an analytical method based on Lam's approximation. This method is systematic and simplifies the tedious process of whispering-gallery mode (WGM) wavelength analysis in optical cylindrical biosensors. By this method, analysis of higher radial orders of high-angular-momentum WGMs becomes possible. Using closed-form analytical equations, resonance wavelengths of higher radial and angular order WGMs of TE and TM polarization waves are calculated. It is shown that high-angular-momentum WGMs are more appropriate for bio-sensing applications. Some of the calculations are done using a numerical nonlinear Newton method. A close match of 99.84% between the analytical and the numerical methods has been achieved. In order to verify the validity of the calculations, Meep simulations based on the finite difference time domain (FDTD) method are performed. In this case, a match of 96.70% between the analytical and FDTD results has been obtained. The analytical predictions are also in good agreement with other experimental work (99.99% match). These results validate the proposed analytical modelling for the fast design of optical cylindrical biosensors. It is shown that by extending the proposed two-layer resonator structure analyzing scheme, it is possible to study a three-layer cylindrical resonator structure as well. Moreover, by this method, fast sensitivity optimization in cylindrical resonator-based biosensors becomes possible. The sensitivity of the WGM resonances is analyzed as a function of the structural parameters of the cylindrical resonators. Based on the results, fourth radial order WGMs with a resonator radius of 50 μm display the highest bulk refractive index sensitivity, 41.50 nm/RIU.
Fractional-order Fourier analysis for ultrashort pulse characterization.
Brunel, Marc; Coetmellec, Sébastien; Lelek, Mickael; Louradour, Frédéric
2007-06-01
We report what we believe to be the first experimental demonstration of ultrashort pulse characterization using fractional-order Fourier analysis. The analysis is applied to the interpretation of spectral interferometry resolved in time (SPIRIT) traces [which are spectral phase interferometry for direct electric field reconstruction (SPIDER)-like interferograms]. First, the fractional-order Fourier transformation is shown to naturally allow the determination of the cubic spectral phase coefficient of pulses to be analyzed. A simultaneous determination of both cubic and quadratic spectral phase coefficients of the pulses using the fractional-order Fourier series expansion is further demonstrated. This latter technique consists of localizing relative maxima in a 2D cartography representing decomposition coefficients. It is further used to reconstruct or filter SPIRIT traces.
Vilar, M J; Ranta, J; Virtanen, S; Korkeala, H
2015-01-01
Bayesian analysis was used to estimate the pig- and herd-level true prevalence of enteropathogenic Yersinia from serum samples collected on Finnish pig farms. The sensitivity and specificity of the diagnostic test were also estimated for the commercially available ELISA used for antibody detection against enteropathogenic Yersinia. The Bayesian analysis was performed in two steps; the first step estimated the prior true prevalence of enteropathogenic Yersinia with data obtained from a systematic review of the literature. In the second step, data on the apparent prevalence (cross-sectional study data), the prior true prevalence (first step), and the estimated sensitivity and specificity of the diagnostic methods were used to build the Bayesian model. The true prevalence of Yersinia in slaughter-age pigs was 67.5% (95% PI 63.2-70.9). The true prevalence of Yersinia in sows was 74.0% (95% PI 57.3-82.4). The estimated sensitivity and specificity of the ELISA were 79.5% and 96.9%.
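The relation between apparent and true prevalence given test sensitivity and specificity can be shown with the frequentist Rogan-Gladen point estimator. This is a simple analogue of, not a substitute for, the paper's two-step Bayesian model; it uses the abstract's point estimates for illustration:

```python
def apparent_from_true(tp, se, sp):
    """Apparent prevalence implied by true prevalence tp, test
    sensitivity se, and specificity sp."""
    return tp * se + (1.0 - tp) * (1.0 - sp)

def rogan_gladen(ap, se, sp):
    """Rogan-Gladen point estimate of true prevalence from apparent
    prevalence; a frequentist analogue of the Bayesian inversion."""
    return (ap + sp - 1.0) / (se + sp - 1.0)

# With the abstract's ELISA estimates (se = 0.795, sp = 0.969), a true
# prevalence of 0.675 in slaughter-age pigs corresponds to an apparent
# seroprevalence of about 0.547.
```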
Wesolowski, Edwin A.
1996-01-01
Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (upstream to downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both the ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen. The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty.
Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to the headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14. During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, the instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient. The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are the reaeration rate, the sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
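The first-order error analysis named above propagates input variances through local sensitivities. A generic delta-method sketch, assuming independent inputs and central-difference sensitivities (not the Enhanced Stream Water Quality Model-Uncertainty Analysis implementation used in the report):

```python
def first_order_variance(model, x0, variances, h=1e-6):
    """First-order (delta-method) error propagation:
    Var(y) ~= sum_i (dy/dx_i)^2 * Var(x_i), with the sensitivities
    dy/dx_i estimated by central differences about x0. Assumes the
    input variables are independent."""
    var_y = 0.0
    for i, vx in enumerate(variances):
        xp, xm = list(x0), list(x0)
        xp[i] += h
        xm[i] -= h
        dydx = (model(xp) - model(xm)) / (2.0 * h)
        var_y += dydx ** 2 * vx
    return var_y

# For a linear model the first-order result is exact:
# y = 3*x1 - 2*x2  =>  Var(y) = 9*0.04 + 4*0.01 = 0.40
v = first_order_variance(lambda x: 3 * x[0] - 2 * x[1], [1.0, 1.0], [0.04, 0.01])
```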
Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul
2013-11-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process analytical technology (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for the sensitivity analysis, global methods including standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact of uncertain parameters related to the nucleation and crystal growth models on the predicted output has been investigated for both a one- and a two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties in the CSD, with the appearance of a secondary peak due to secondary nucleation in both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are the nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in the nucleation and crystal growth model parameters. The latter is a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
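Standardized regression coefficients, one of the global methods named above, can be computed from Monte Carlo samples by least squares on centered data and rescaling by input/output standard deviations. A generic sketch on a toy linear model (not the paper's crystallization model):

```python
import numpy as np

def standardized_regression_coefficients(x, y):
    """Standardized regression coefficients (SRC) from Monte Carlo samples:
    least-squares coefficients of y on x, rescaled by std(x_i)/std(y).
    For a near-linear model, SRC_i^2 approximates the fraction of output
    variance attributable to input i."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xc = x - x.mean(axis=0)
    yc = y - y.mean()
    beta, *_ = np.linalg.lstsq(xc, yc, rcond=None)
    return beta * x.std(axis=0) / y.std()

rng = np.random.default_rng(1)
x = rng.normal(size=(5000, 2))
y = 2.0 * x[:, 0] + 1.0 * x[:, 1]   # toy linear response
src = standardized_regression_coefficients(x, y)
# SRCs are near (0.894, 0.447); their squares sum to ~1 for a linear model.
```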
Sáiz, Jorge; García-Roa, Roberto; Martín, José; Gómara, Belén
2017-09-08
Chemical signaling is a widespread mode of communication among living organisms that is used to establish social organization, territoriality and/or for mate choice. In lizards, femoral and precloacal glands are important sources of chemical signals. These glands exude chemical secretions used to mark territories and also to provide valuable information from the bearer to other individuals. Ecologists have studied these chemical secretions for decades in order to increase the knowledge of chemical communication in lizards. Although several studies have focused on the chemical analysis of these secretions, there is a lack of faster, more sensitive and more selective analytical methodologies for their study. In this work a new GC coupled to tandem triple quadrupole MS (GC-QqQ (MS/MS)) methodology is developed and proposed for the targeted study of 12 relevant compounds often found in lizard secretions (i.e. 1-hexadecanol, palmitic acid, 1-octadecanol, oleic acid, stearic acid, 1-tetracosanol, squalene, cholesta-3,5-diene, α-tocopherol, cholesterol, ergosterol and campesterol). The method baseline-separated the analytes in less than 7 min, with instrumental limits of detection ranging from 0.04 to 6.0 ng/mL. It was possible to identify differences in the composition of the samples from the lizards analyzed, which depended on the species, the habitat occupied and the diet of the individuals. Moreover, α-tocopherol has been determined for the first time in a lizard species that was previously thought to lack it in its chemical secretions. Globally, the methodology has been proven to be a valuable alternative to other published methods, with important improvements in terms of analysis time, sensitivity, and selectivity. Copyright © 2017 Elsevier B.V. All rights reserved.
Receiver operating characteristic analysis of age-related changes in lineup performance.
Humphries, Joyce E; Flowe, Heather D
2015-04-01
In the basic face memory literature, support has been found for the late maturation hypothesis, which holds that face recognition ability is not fully developed until at least adolescence. Support for the late maturation hypothesis in the criminal lineup identification literature, however, has been equivocal because of the analytic approach that has been used to examine age-related changes in identification performance. Recently, receiver operating characteristic (ROC) analysis was applied for the first time in the adult eyewitness memory literature to examine whether memory sensitivity differs across different types of lineup tests. ROC analysis allows for the separation of memory sensitivity from response bias in the analysis of recognition data. Here, we have made the first ROC-based comparison of adults' and children's (5- and 6-year-olds and 9- and 10-year-olds) memory performance on lineups by reanalyzing data from Humphries, Holliday, and Flowe (2012). In line with the late maturation hypothesis, memory sensitivity was significantly greater for adults compared with young children. Memory sensitivity for older children was similar to that for adults. The results indicate that the late maturation hypothesis can be generalized to account for age-related performance differences on an eyewitness memory task. The implications for developmental eyewitness memory research are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.
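The ROC logic described above (sweeping the confidence criterion from strict to lenient, plotting hit rate against false-alarm rate, and summarizing sensitivity as the area under the curve) can be sketched as follows. The rating counts are hypothetical and are not the reanalyzed data.

```python
import numpy as np

# Hypothetical suspect-ID counts binned from high to low confidence
# (not the data reanalyzed from Humphries, Holliday, and Flowe, 2012)
hits_by_conf = np.array([30, 20, 10, 5])   # target-present lineups (100 total)
fas_by_conf = np.array([5, 10, 15, 20])    # target-absent lineups (100 total)

# Each criterion yields one (false-alarm rate, hit rate) point on the ROC:
# cumulate counts from the strictest (highest-confidence) criterion outward
hr = np.cumsum(hits_by_conf) / 100
far = np.cumsum(fas_by_conf) / 100
far = np.concatenate([[0.0], far, [1.0]])
hr = np.concatenate([[0.0], hr, [1.0]])

# Area under the curve (trapezoidal rule): a sensitivity measure that, unlike
# a raw identification rate, is not confounded with response bias
auc = np.sum(np.diff(far) * (hr[1:] + hr[:-1]) / 2)
print(f"AUC = {auc:.4f}")
```

Comparing age groups then amounts to comparing AUCs (or partial AUCs over the observed false-alarm range) rather than single hit/false-alarm points.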
NASA Astrophysics Data System (ADS)
Crook, Nigel P.; Hoon, Stephen R.; Taylor, Kevin G.; Perry, Chris T.
2002-05-01
This study investigates the application of high sensitivity electron spin resonance (ESR) to environmental magnetism in conjunction with the more conventional techniques of magnetic susceptibility, vibrating sample magnetometry (VSM) and chemical compositional analysis. Using these techniques we have studied carbonate sediment samples from Discovery Bay, Jamaica, which has been impacted to varying degrees by a bauxite loading facility. The carbonate sediment samples contain magnetic minerals in moderate to low concentrations. The ESR spectra for all sites essentially contain three components. First, a six-line spectrum centred around g = 2 resulting from Mn²⁺ ions within a carbonate matrix; second, a g = 4.3 signal from isolated Fe³⁺ ions incorporated as impurities within minerals such as gibbsite, kaolinite or quartz; third, a ferrimagnetic resonance with a maximum at 230 mT resulting from the ferrimagnetic minerals present within the bauxite contamination. Depending upon the location of the sites within the embayment these signals vary in their relative amplitude in a systematic manner related to the degree of bauxite input. Analysis of the ESR spectral components reveals linear relationships between the amplitude of the Mn²⁺ and ferrimagnetic signals and total Mn and Fe concentrations. To assist in determining the origin of the ESR signals, coral and bauxite reference samples were employed. Coral representative of the matrix of the sediment was taken remote from the bauxite loading facility whilst pure bauxite was collected from nearby mining facilities. We find ESR to be a very sensitive technique particularly appropriate to magnetic analysis of ferri- and para-magnetic components within environmental samples otherwise dominated by diamagnetic (carbonate) minerals.
When employing typical sample masses of 200 mg the practical detection limit of ESR to ferri- and para-magnetic minerals within a diamagnetic carbonate matrix is of the order of 1 ppm and 1 ppb respectively, approximately 10² and 10⁵ times the sensitivity achievable employing the VSM in our laboratory.
Polarization sensitive spectroscopic optical coherence tomography for multimodal imaging
NASA Astrophysics Data System (ADS)
Strąkowski, Marcin R.; Kraszewski, Maciej; Strąkowska, Paulina; Trojanowski, Michał
2015-03-01
Optical coherence tomography (OCT) is a non-invasive method for 3D and cross-sectional imaging of biological and non-biological objects. OCT measurements are performed in a non-contact way that is completely safe for the tested sample. Nowadays, OCT is widely applied in medical diagnosis, especially in ophthalmology, as well as in dermatology, oncology and many other fields. Despite great progress in OCT measurement, a vast number of issues, such as tissue recognition or imaging contrast enhancement, remain unsolved. Here we present the polarization sensitive spectroscopic OCT system (PS-SOCT), which combines polarization sensitive analysis with time-frequency analysis. Unlike standard polarization sensitive OCT, the PS-SOCT delivers spectral information about the measured quantities, e.g. the variation of the tested object's birefringence over the light spectrum. This solution overcomes the limits of the polarization sensitive analysis applied in standard PS-OCT. Based on spectral data obtained from PS-SOCT, the exact value of birefringence can be calculated even for objects that introduce higher orders of retardation. In this contribution the benefits of combining time-frequency and polarization sensitive analysis are discussed, and the PS-SOCT system features as well as example OCT measurements are presented.
Sensitivity study on durability variables of marine concrete structures
NASA Astrophysics Data System (ADS)
Zhou, Xin'gang; Li, Kefei
2013-06-01
To study the influence of parameters on the durability of marine concrete structures, a parameter sensitivity analysis was performed. Using Fick's 2nd law of diffusion and the deterministic sensitivity analysis (DSA) method, the sensitivity factors of the apparent surface chloride content, the apparent chloride diffusion coefficient, and its time-dependent attenuation factor were analyzed. The results show that the design variables affect concrete durability to different degrees: the sensitivity factors of the chloride diffusion coefficient and its time-dependent attenuation factor are higher than those of the other variables, so a relatively small error in either induces a larger error in concrete durability design and life prediction. Using probabilistic sensitivity analysis (PSA), the influence of the mean value and variance of the durability design variables on the durability failure probability was also studied. The results provide quantitative measures of the importance of the variables used in concrete durability design and life prediction. It is concluded that the chloride diffusion coefficient and its time-dependent attenuation factor have the greatest influence on the durability reliability of marine concrete structures; in durability design and life prediction it is therefore very important to reduce the measurement and statistical errors of these design variables.
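A minimal sketch of the deterministic sensitivity analysis described above, using the error-function solution of Fick's 2nd law with a time-dependent diffusion coefficient. All parameter values (surface chloride content, reference diffusion coefficient, attenuation exponent, cover depth, exposure time) are illustrative assumptions, not the paper's data.

```python
import math

# Error-function solution of Fick's 2nd law with a time-dependent (attenuated)
# diffusion coefficient, a common marine-durability model. Parameter values
# are illustrative only.
def chloride(x_mm, t_yr, Cs=0.6, D0=6e-12, alpha=0.5):
    t0_yr = 28 / 365.25                      # 28-day reference age
    D = D0 * (t0_yr / t_yr) ** alpha         # m^2/s, decays with exposure time
    t_s = t_yr * 365.25 * 24 * 3600
    return Cs * (1 - math.erf(x_mm * 1e-3 / (2 * math.sqrt(D * t_s))))

# Deterministic sensitivity factor as a normalized central difference:
# d(ln C)/d(ln p), evaluated at 50 mm cover depth and 50 years of exposure
def sens(param, rel=1e-4):
    base = {"Cs": 0.6, "D0": 6e-12, "alpha": 0.5}
    up, dn = dict(base), dict(base)
    up[param] *= 1 + rel
    dn[param] *= 1 - rel
    y0 = chloride(50, 50, **base)
    return (chloride(50, 50, **up) - chloride(50, 50, **dn)) / (2 * rel * y0)

for p in ["Cs", "D0", "alpha"]:
    print(f"sensitivity factor of {p}: {sens(p):+.3f}")
```

Even with these placeholder values the ranking matches the abstract's conclusion: the diffusion coefficient and especially its attenuation exponent carry larger normalized sensitivities than the surface chloride content (whose elasticity is exactly 1, since the solution is linear in Cs).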
Gallagher, M; Turner, E C; Kamber, B S
2015-07-01
Precambrian atmospheric and oceanic redox evolution is expressed in the inventory of redox-sensitive trace metals in marine sedimentary rocks. Most of the currently available information was derived from deep-water sedimentary rocks (black shale/banded iron formation). Many of the studied trace metals (e.g. Mo, U, Ni and Co) are sensitive to the composition of the exposed land surface and prevailing weathering style, and their oceanic inventory ultimately depends on the terrestrial flux. The validity of claims for increased/decreased terrestrial fluxes has remained untested as far as the shallow-marine environment is concerned. Here, the first systematic study of trace metal inventories of the shallow-marine environment by analysis of microbial carbonate-hosted pyrite, from ca. 2.65-0.52 Ga, is presented. A petrographic survey revealed a first-order difference in preservation of early diagenetic pyrite. Microbial carbonates formed before the 2.4 Ga great oxygenation event (GOE) are much richer in pyrite and contain pyrite grains of greater morphological variability but lesser chemical substitution than samples deposited after the GOE. This disparity in pyrite abundance and morphology is mirrored by the qualitative degree of preservation of organic matter (largely as kerogen). Thus, it seems that in microbial carbonates, pyrite formation and preservation were related to the presence and preservation of organic C. Several redox-sensitive trace metals show interpretable temporal trends supporting earlier proposals derived from deep-water sedimentary rocks. Most notably, the shallow-water pyrite confirms a rise in the oceanic Mo inventory across the Precambrian-Cambrian boundary, implying the establishment of efficient deep-ocean ventilation.
The carbonate-hosted pyrite also confirms the Neoarchaean and early Palaeoproterozoic ocean had higher Ni concentration, which can now more firmly be attributed to a greater proportion of magnesian volcanic rock on land rather than a stronger hydrothermal flux of Ni. Additionally, systematic trends are reported for Co, As, and Zn, relating to terrestrial flux and oceanic productivity. © 2015 John Wiley & Sons Ltd.
A comparison of zero-order, first-order, and monod biotransformation models
Bekins, B.A.; Warren, E.; Godsy, E.M.
1998-01-01
Under some conditions, a first-order kinetic model is a poor representation of biodegradation in contaminated aquifers. Although it is well known that the assumption of first-order kinetics is valid only when the substrate concentration, S, is much less than the half-saturation constant, Ks, this assumption is often made without verification of this condition. We present a formal error analysis showing that the relative error in the first-order approximation is S/Ks and in the zero-order approximation the error is Ks/S. We then examine the problems that arise when the first-order approximation is used outside the range for which it is valid. A series of numerical simulations comparing results of first- and zero-order rate approximations to Monod kinetics for a real data set illustrates that if concentrations observed in the field are higher than Ks, it may be better to model degradation using a zero-order rate expression. Compared with Monod kinetics, extrapolation of a first-order rate to lower concentrations under-predicts the biotransformation potential, while extrapolation to higher concentrations may grossly over-predict the transformation rate. A summary of solubilities and Monod parameters for aerobic benzene, toluene, and xylene (BTX) degradation shows that the a priori assumption of first-order degradation kinetics at sites contaminated with these compounds is not valid. In particular, of six published values of Ks for toluene, only one is greater than 2 mg/L, indicating that when toluene is present in concentrations greater than about a part per million, the assumption of first-order kinetics may be invalid.
Finally, we apply an existing analytical solution for steady-state one-dimensional advective transport with Monod degradation kinetics to a field data set.
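The error analysis above can be verified numerically: with the generic textbook rate forms below (placeholder vmax and Ks values), the relative error of the first-order approximation works out to exactly S/Ks and that of the zero-order approximation to exactly Ks/S.

```python
# Numerical check of the error analysis: the relative error of the first-order
# approximation to Monod kinetics is S/Ks; that of the zero-order
# approximation is Ks/S. Generic rate forms with placeholder parameters.
def monod(S, vmax=1.0, Ks=1.0):
    return vmax * S / (Ks + S)

def first_order(S, vmax=1.0, Ks=1.0):
    return (vmax / Ks) * S      # valid when S << Ks

def zero_order(S, vmax=1.0):
    return vmax                 # valid when S >> Ks

for S in [0.1, 1.0, 10.0]:
    m = monod(S)
    err1 = (first_order(S) - m) / m   # equals S/Ks
    err0 = (zero_order(S) - m) / m    # equals Ks/S
    print(f"S={S:5.1f}: first-order rel. err = {err1:.2f}, zero-order rel. err = {err0:.2f}")
```

At S = 10 Ks (e.g. ~10 mg/L toluene against a Ks near 1 mg/L), the first-order rate overshoots the Monod rate tenfold, which is the quantitative content of the abstract's warning against extrapolating first-order rates to high concentrations.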
NASA Technical Reports Server (NTRS)
Mulholland, J. Derral; Singer, S. Fred; Oliver, John P.; Weinberg, Jerry L.; Cooke, William J.; Kassel, Philip C.; Wortman, Jim J.; Montague, Nancy L.; Kinard, William H.
1991-01-01
During the first 12 months of the Long Duration Exposure Facility (LDEF) mission, the Interplanetary Dust Experiment (IDE) recorded over 15,000 total impacts on six orthogonal faces with a time resolution on the order of 15 to 20 seconds. When combined with the orbital data and the stabilized configuration of the spacecraft, this permits a detailed analysis of the micro-particulate environment. The functional status of each of the 459 detectors was monitored every 2.4 hours, and post-flight analyses of these data have now permitted an evaluation of the effective active detection area as a function of time, panel by panel and separately for the two sensitivity levels. Thus, total impacts were transformed into areal fluxes, and are presented here for the first time. Also discussed are possible effects of these fluxes on previously announced results: apparent debris events, meteor stream detections, and beta meteoroids in observationally significant numbers.
NASA Technical Reports Server (NTRS)
Eckel, J. S.; Crabtree, M. S.
1984-01-01
Analytical and subjective techniques that are sensitive to the information transmission and processing requirements of individual communications-related tasks are used to assess workload imposed on the aircrew by A-10 communications requirements for civilian transport category aircraft. Communications-related tasks are defined to consist of the verbal exchanges between crews and controllers. Three workload estimating techniques are proposed. The first, an information theoretic analysis, is used to calculate bit values for perceptual, manual, and verbal demands in each communication task. The second, a paired-comparisons technique, obtains subjective estimates of the information processing and memory requirements for specific messages. By combining the results of the first two techniques, a hybrid analytical scale is created. The third, a subjective rank ordering of sequences of communications tasks, provides an overall scaling of communications workload. Recommendations for future research include an examination of communications-induced workload among the air crew and the development of simulation scenarios.
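The information-theoretic analysis described above assigns bit values to the perceptual, manual, and verbal demands of a communications task. A minimal sketch, assuming each channel presents a set of equally likely alternatives; the channels and alternative counts below are hypothetical illustrations, not values from the study.

```python
import math

# Hypothetical decomposition of a single crew/controller exchange into task
# channels, each with an assumed number of equally likely alternatives
channels = {
    "perceptual: recognize own callsign among active aircraft": 16,
    "manual: set one of the selectable radio frequencies": 720,
    "verbal: read back one of the cleared altitude levels": 32,
}

total_bits = 0.0
for name, alternatives in channels.items():
    bits = math.log2(alternatives)   # information content of an equiprobable choice
    total_bits += bits
    print(f"{name}: {bits:.2f} bits")
print(f"total demand for this exchange: {total_bits:.2f} bits")
```

Unequal message probabilities would replace log2(N) with the Shannon entropy of the observed message distribution, lowering the bit totals for routine, predictable exchanges.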
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
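The variogram analogy at the heart of VARS can be illustrated on a toy response surface. This is a sketch of a directional variogram of the model response along each factor, not the VARS implementation itself; the two-factor model is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy response surface: x1 acts smoothly at large scales, x2 contributes
# small high-frequency wiggles
def model(x1, x2):
    return x1**2 + 0.1 * np.sin(50 * x2)

# Directional variogram gamma_i(h) = 0.5 * E[(y(x + h*e_i) - y(x))^2],
# the scale-dependent sensitivity measure the VARS framework builds on
def variogram(axis, h, n=20000):
    x = rng.random((n, 2))
    x[:, axis] *= (1 - h)     # keep both x and x + h inside the unit interval
    xh = x.copy()
    xh[:, axis] += h
    y0 = model(x[:, 0], x[:, 1])
    yh = model(xh[:, 0], xh[:, 1])
    return 0.5 * np.mean((yh - y0) ** 2)

for h in [0.01, 0.1, 0.3]:
    print(f"h={h}:  gamma_x1={variogram(0, h):.5f}  gamma_x2={variogram(1, h):.5f}")
```

The output illustrates the paper's central point: at small perturbation scales the high-frequency factor x2 dominates (as a derivative-based Morris analysis would report), while at large scales the smooth factor x1 dominates (closer to a variance-based Sobol view); the variogram captures the whole spectrum.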
Design and development of a personal alarm monitor for use by first responders
NASA Astrophysics Data System (ADS)
Ehntholt, Daniel J.; Louie, Alan S.; Marenchic, Ingrid G.; Forni, Ronald J.
2004-03-01
This paper describes the design and development of a small, portable alarm device that can be used by first responders to an emergency event to warn of the presence of low levels of a toxic nerve gas. The device consists of a rigid reusable portion and a consumable packet that is sensitive to the presence of acetylcholinesterase inhibitors such as the nerve gases Sarin or Soman. The sensitivity level of the alarm is set at the initial physiological response (miosis), orders of magnitude below lethal concentrations. The AChE enzyme used is specific for nerve-type toxins. A color development reaction is used to demonstrate continued activity of the enzyme over its twelve-hour operational cycle.
A GC-MS method for the detection and quantitation of ten major drugs of abuse in human hair samples.
Orfanidis, A; Mastrogianni, O; Koukou, A; Psarros, G; Gika, H; Theodoridis, G; Raikos, N
2017-03-15
A sensitive analytical method has been developed to identify and quantify major drugs of abuse (DOA), namely morphine, codeine, 6-monoacetylmorphine, cocaine, ecgonine methyl ester, benzoylecgonine, amphetamine, methamphetamine, methylenedioxymethamphetamine and methylenedioxyamphetamine, in human hair. Hair samples were extracted with methanol under ultrasonication at 50°C after a three-step rinsing process to remove external contamination and dirt from the hair. Derivatization with BSTFA was selected in order to increase the detection sensitivity of the GC/MS analysis; the derivatization time, temperature and volume of derivatizing agent were optimized experimentally. Validation of the method included evaluation of linearity, which ranged from 2 to 350 ng/mg of hair mean concentration for all DOA, as well as sensitivity, accuracy, precision and repeatability. Limits of detection ranged from 0.05 to 0.46 ng/mg of hair. The developed method was applied to the analysis of hair samples obtained from three human subjects, which were found positive for cocaine and opiates. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Hemmat Esfe, Mohammad; Saedodin, Seyfolah; Rejvani, Mousa; Shahram, Jalal
2017-06-01
In the present study, the rheological behavior of ZnO/10W40 nano-lubricant is investigated experimentally. First, ZnO nanoparticles of 10-30 nm were dispersed in 10W40 engine oil at solid volume fractions of 0.25-2%; the viscosity of the resulting nano-lubricant was then measured over the temperature range 5-55 °C at various shear rates. The results revealed that both the base oil and the nano-lubricants are non-Newtonian fluids exhibiting shear-thinning behavior. The sensitivity of viscosity to solid volume fraction enhancement was calculated using a new correlation proposed in terms of solid volume fraction and temperature. To obtain an accurate model for predicting the experimental data, an artificial neural network (ANN) with one hidden layer and 5 neurons was designed. This model predicted the experimental dynamic viscosity data with considerable accuracy (R-squared of 0.9999 and average absolute relative deviation, AARD, of 0.0502%).
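The sensitivity measure used in nanofluid studies of this kind is typically the percentage change in viscosity caused by a fixed relative increase in solid volume fraction. A minimal sketch, using an assumed quadratic relative-viscosity correlation; the coefficients are illustrative, not the paper's fitted values.

```python
# Sensitivity of nano-lubricant viscosity to a 10% increase in solid volume
# fraction, computed from an assumed quadratic correlation for relative
# viscosity (coefficients a, b are illustrative, not the paper's fit).
def rel_viscosity(phi_pct, a=0.25, b=0.05):
    return 1 + a * phi_pct + b * phi_pct**2

def sensitivity_pct(phi_pct, bump=0.10):
    base = rel_viscosity(phi_pct)
    bumped = rel_viscosity(phi_pct * (1 + bump))
    return (bumped / base - 1) * 100   # % change in viscosity per +10% in phi

for phi in [0.5, 1.0, 2.0]:
    print(f"phi = {phi:.2f} vol%: sensitivity = {sensitivity_pct(phi):.2f}%")
```

With any correlation that is convex in volume fraction, this sensitivity grows with loading, which is why sensitivity is usually reported across the full concentration range rather than at a single point.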
Individualistic and Time-Varying Tree-Ring Growth to Climate Sensitivity
Carrer, Marco
2011-01-01
The development of dendrochronological time series in order to analyze climate-growth relationships usually involves first a rigorous selection of trees and then the computation of the mean tree-growth measurement series. This study suggests a change in the perspective, passing from an analysis of climate-growth relationships that typically focuses on the mean response of a species to investigating the whole range of individual responses among sample trees. Results highlight that this new approach, tested on a larch and stone pine tree-ring dataset, outperforms, in terms of information obtained, the classical one, with significant improvements regarding the strength, distribution and time-variability of the individual tree-ring growth response to climate. Moreover, a significant change over time of the tree sensitivity to climatic variability has been detected. Accordingly, the best-responder trees at any one time may not always have been the best-responders and may not continue to be so. With minor adjustments to current dendroecological protocol and adopting an individualistic approach, we can improve the quality and reliability of the ecological inferences derived from the climate-growth relationships. PMID:21829523
Reduced-Order Models for the Aeroelastic Analysis of Ares Launch Vehicles
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Vatsa, Veer N.; Biedron, Robert T.
2010-01-01
This document presents the development and application of unsteady aerodynamic, structural dynamic, and aeroelastic reduced-order models (ROMs) for the ascent aeroelastic analysis of the Ares I-X flight test and Ares I crew launch vehicles using the unstructured-grid, aeroelastic FUN3D computational fluid dynamics (CFD) code. The purpose of this work is to perform computationally-efficient aeroelastic response calculations that would be prohibitively expensive via computation of multiple full-order aeroelastic FUN3D solutions. These efficient aeroelastic ROM solutions provide valuable insight regarding the aeroelastic sensitivity of the vehicles to various parameters over a range of dynamic pressures.
Ebesutani, Chad; Kim, Mirihae; Park, Hee-Hoon
2016-08-01
The present study was the first to examine the applicability of the bifactor structure underlying the Anxiety Sensitivity Index-3 (ASI-3) in an East Asian (South Korean) sample and to determine which factors in the bifactor model were significantly associated with anxiety, depression, and negative affect. Using a sample of 289 South Korean university students, we compared (a) the original 3-factor AS model, (b) a 3-group bifactor AS model, and (c) a 2-group bifactor AS model (with only the physical and social concern group factors present). Results revealed that the 2-group bifactor AS model fit the ASI-3 data the best. Relatedly, although all ASI-3 items loaded on the general AS factor, the Cognitive Concern group factor was not defined in the bifactor model and may therefore need to be omitted in order to accurately model AS when conducting factor analysis and structural equation modeling (SEM) in cross-cultural contexts. SEM results also revealed that the general AS factor was the only factor from the 2-group bifactor model that significantly predicted anxiety, depression, and negative affect. The implications and importance of this new bifactor structure of Anxiety Sensitivity in East Asian samples are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
dos Anjos, Nislanha Ana; Schulze, Tobias; Brack, Werner; Val, Adalberto Luis; Schirmer, Kristin; Scholz, Stefan
2011-05-01
In order to monitor potential contamination deriving from exploration and transport of oil in the Urucu region (Brazil), there is a need to establish suitable biomarkers for native Amazonian fish. Therefore, the transcript expression of various potentially sensitive genes (ahr2(1), cyp1a, hmox1, hsp70, maft, mt, nfe212, gstp1 and nqo1) in fish exposed to water soluble fractions of oil (WSF) was compared. The analysis was first performed in an established laboratory model, the zebrafish embryo. The cyp1a gene proved to be the most sensitive and robust marker for oil contamination and, hence, was selected to study the effect of oil-derived contaminants in the Amazonian cichlid Astronotus ocellatus. Induction of cyp1a transcript expression was observed for ≥0.0061% (v/v) WSFs. In liver samples of fish, collected from different lakes in the Urucu oil mining area, no elevated expression of cyp1a transcripts was observed. The data demonstrate the high sensitivity of cyp1a as indicator of oil exposure; further studies should be considered to test its usefulness at known contaminated sites and to evaluate influential factors by, e.g. mesocosm experiments. Copyright © 2011 Elsevier B.V. All rights reserved.
Flexible aircraft dynamic modeling for dynamic analysis and control synthesis
NASA Technical Reports Server (NTRS)
Schmidt, David K.
1989-01-01
The linearization and simplification of a nonlinear, literal model for flexible aircraft is highlighted. Areas of model fidelity that are critical if the model is to be used for control system synthesis are developed and several simplification techniques that can deliver the necessary model fidelity are discussed. These techniques include both numerical and analytical approaches. An analytical approach, based on first-order sensitivity theory is shown to lead not only to excellent numerical results, but also to closed-form analytical expressions for key system dynamic properties such as the pole/zero factors of the vehicle transfer-function matrix. The analytical results are expressed in terms of vehicle mass properties, vibrational characteristics, and rigid-body and aeroelastic stability derivatives, thus leading to the underlying causes for critical dynamic characteristics.
Micropollutants throughout an integrated urban drainage model: Sensitivity and uncertainty analysis
NASA Astrophysics Data System (ADS)
Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare
2017-11-01
The paper presents the sensitivity and uncertainty analysis of an integrated urban drainage model which includes micropollutants. Specifically, a bespoke integrated model developed in previous studies has been modified in order to include micropollutant assessment (namely, sulfamethoxazole - SMX). The model also takes into account the interactions between the three components of the system: the sewer system (SS), the wastewater treatment plant (WWTP) and the receiving water body (RWB). The analysis has been applied to an experimental catchment near Palermo (Italy): the Nocella catchment. Overall, five scenarios, each characterized by a different uncertainty combination of the sub-systems (i.e., SS, WWTP and RWB), have been considered, applying the Extended-FAST method for the sensitivity analysis in order to select the key factors affecting the RWB quality and to design a reliable/useful experimental campaign. Results have demonstrated that sensitivity analysis is a powerful tool for increasing operator confidence in the modelling results. The approach adopted here can be used to fix some non-identifiable factors, thus wisely modifying the structure of the model and reducing the related uncertainty. The model factors related to the SS were found to be the most relevant factors affecting SMX modelling in the RWB when all model factors (scenario 1) or the SS model factors (scenarios 2 and 3) are varied. If only the factors related to the WWTP are varied (scenarios 4 and 5), the SMX concentration in the RWB is mainly influenced (up to 95% of the total variance for S_SMX,max) by the aerobic sorption coefficient. A progressive uncertainty reduction from upstream to downstream was found for the soluble fraction of SMX in the RWB.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao Yang; Luo, Gang; Jiang, Fangming
2010-05-01
Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distributions of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
First demonstration of high-order QAM signal amplification in PPLN-based phase sensitive amplifier.
Umeki, T; Tadanaga, O; Asobe, M; Miyamoto, Y; Takenouchi, H
2014-02-10
We demonstrate the phase sensitive amplification of a high-order quadrature amplitude modulation (QAM) signal using non-degenerate parametric amplification in a periodically poled lithium niobate (PPLN) waveguide. The interaction between the pump, signal, and phase-conjugated idler enables us to amplify arbitrary phase components of the signal. The 16QAM signals are amplified without distortion because of the high gain linearity of the PPLN-based phase sensitive amplifier (PSA). Both the phase and amplitude noise reduction capabilities of the PSA are ensured. Phase noise cancellation is achieved by using the interaction with the phase-conjugated idler. A degraded signal-to-noise ratio (SNR) is restored by using the gain difference between a phase-correlated signal-idler pair and uncorrelated excess noise. The applicability of the simultaneous amplification of multi-carrier signals and the amplification of two independent polarization signals are also confirmed with a view to realizing ultra-high spectrally efficient signal amplification.
[Quantitative surface analysis of Pt-Co, Cu-Au and Cu-Ag alloy films by XPS and AES].
Li, Lian-Zhong; Zhuo, Shang-Jun; Shen, Ru-Xiang; Qian, Rong; Gao, Jie
2013-11-01
In order to improve the accuracy of AES quantitative analysis, we combined XPS with AES and studied how to reduce the error of AES quantitative analysis. Pt-Co, Cu-Au and Cu-Ag binary alloy thin films were selected as samples, and XPS was used to correct the AES quantitative results by adjusting the Auger relative sensitivity factors until the two techniques gave consistent results. The accuracy of AES quantitative analysis with the revised sensitivity factors was then verified on other samples with different composition ratios; the results showed that the corrected relative sensitivity factors reduce the error of AES quantitative analysis to less than 10%. In the integral-spectrum form of AES analysis, peak definition is difficult because choosing the starting and ending points that determine the characteristic Auger peak area involves great uncertainty. To make the analysis easier, the data were also processed in differential-spectrum form, with quantitative analysis based on peak-to-peak height instead of peak area; the relative sensitivity factors were again corrected and the accuracy verified on the other samples with different composition ratios. In this case the error of AES quantitative analysis was reduced to less than 9%. These results show that the accuracy of AES quantitative analysis can be greatly improved by using XPS to correct the Auger sensitivity factors, since matrix effects are thereby taken into account. Good consistency was obtained, proving the feasibility of this method.
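Relative-sensitivity-factor quantification of the kind being corrected here follows a simple normalization: the atomic fraction of element i is its intensity divided by its sensitivity factor, normalized over all elements. The intensities and factor values below are invented for illustration, not measured data.

```python
# Relative-sensitivity-factor quantification as used in AES/XPS:
# atomic fraction of element i = (I_i / S_i) / sum_j (I_j / S_j).
# Intensities and sensitivity factors below are invented for illustration.
def atomic_fractions(intensities, sensitivity_factors):
    weighted = {el: I / sensitivity_factors[el] for el, I in intensities.items()}
    total = sum(weighted.values())
    return {el: w / total for el, w in weighted.items()}

# Hypothetical peak-to-peak intensities for a Cu-Au film
intensities = {"Cu": 1200.0, "Au": 800.0}
handbook_S = {"Cu": 0.20, "Au": 0.12}     # assumed handbook sensitivity factors
corrected_S = {"Cu": 0.20, "Au": 0.135}   # factors adjusted to agree with XPS

print("handbook :", atomic_fractions(intensities, handbook_S))
print("corrected:", atomic_fractions(intensities, corrected_S))
```

The comparison shows the mechanism of the paper's method: raising an element's sensitivity factor (here Au, hypothetically, to match an XPS composition) lowers its computed atomic fraction, absorbing matrix effects into the factors themselves.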
Joshi, Ashish V; Stephens, Jennifer M; Munro, Vicki; Mathew, Prasad; Botteman, Marc F
2006-01-01
To compare the cost-effectiveness of three treatment regimens using recombinant activated Factor VII (rFVIIa), NovoSeven, and activated prothrombin-complex concentrate (APCC), FEIBA VH, for home treatment of minor-to-moderate bleeds in hemophilia patients with inhibitors. A literature-based, decision-analytic model was developed to compare three treatment regimens. The regimens, consisting of first-, second-, and third-line treatments, were: rFVIIa-rFVIIa-rFVIIa; APCC-rFVIIa-rFVIIa; and APCC-APCC-rFVIIa. Patients not responding to first-line treatment were administered second-line treatment, and those failing second-line treatment received third-line treatment. Using literature and expert opinion, the model structure and base-case inputs were adapted to the US from a previously published analysis. The percentages of evaluable bleeds controlled with rFVIIa and APCC were obtained from the published literature. Drug costs (2005 US$) based on average wholesale price were included in the base-case model. Univariate and probabilistic sensitivity analyses (second-order Monte Carlo simulation) were conducted by varying the efficacy, re-bleeding rates, patient weight, and dosing to ascertain the robustness of the model. In the base-case analysis, the average cost per resolved bleed using rFVIIa as first-, second-, and third-line treatment was $28 076. Using APCC as first-line and rFVIIa as second- and third-line treatment resulted in an average cost per resolved bleed of $30 883, whereas the regimen using APCC as first- and second-line, and rFVIIa as third-line treatment was the most expensive, with an average cost per resolved bleed of $32 150. Cost offsets occurred for the rFVIIa-only regimen through avoidance of second and third lines of treatment. In probabilistic sensitivity analyses, the rFVIIa-only strategy was the least expensive strategy more than 68% of the time.
The management of minor-to-moderate bleeds extends beyond the initial line of treatment, and should include the economic impact of re-bleeding and failures over multiple lines of treatment. In the majority of cases, the rFVIIa-only regimen appears to be a less expensive treatment option in inhibitor patients with minor-to-moderate bleeds over three lines of treatment.
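The probabilistic ("second-order" Monte Carlo) sensitivity analysis described above can be illustrated with a deliberately simplified decision tree: a first-line agent either resolves the bleed or is followed by a rescue line, and the uncertain efficacies are re-drawn on every simulation. Every number below (efficacies, costs, distribution parameters) is a hypothetical placeholder, not an input of the published model.

```python
import random

# Hedged sketch of a probabilistic sensitivity analysis for a simplified
# two-regimen decision tree. Efficacies are drawn per simulation
# ("second-order" Monte Carlo); all inputs are illustrative assumptions.

def expected_cost(p_first_line_success, cost_first, cost_rescue):
    # Expected cost = first-line cost plus rescue cost when first line fails.
    return cost_first + (1.0 - p_first_line_success) * cost_rescue

random.seed(0)
n_sims = 10_000
wins = 0
for _ in range(n_sims):
    # Draw uncertain efficacies from illustrative beta distributions.
    p_a = random.betavariate(80, 20)   # regimen A first-line efficacy
    p_b = random.betavariate(60, 40)   # regimen B first-line efficacy
    cost_a = expected_cost(p_a, 25_000.0, 15_000.0)
    cost_b = expected_cost(p_b, 20_000.0, 25_000.0)
    wins += cost_a < cost_b
print(f"regimen A is cheaper in {wins / n_sims:.0%} of simulations")
```

Reporting "regimen X was least expensive in Y% of simulations" is exactly the kind of statement the abstract's 68% figure summarizes: a frequency over parameter draws rather than a single base-case comparison.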
Imai, Hisao; Minemura, Hiroyuki; Sugiyama, Tomohide; Yamada, Yutaka; Kaira, Kyoichi; Kanazawa, Kenya; Kasai, Takashi; Kaburagi, Takayuki; Minato, Koichi
2018-05-08
Epidermal growth factor receptor-tyrosine kinase inhibitor (EGFR-TKI) is effective as first-line chemotherapy for patients with advanced non-small-cell lung cancer (NSCLC) harboring sensitive EGFR mutations. However, whether the efficacy of second-line cytotoxic drug chemotherapy after first-line EGFR-TKI treatment is similar to that of first-line cytotoxic drug chemotherapy in elderly patients aged ≥ 75 years harboring sensitive EGFR mutations is unclear. Therefore, we aimed to investigate the efficacy and safety of cytotoxic drug chemotherapy after first-line EGFR-TKI treatment in elderly patients with NSCLC harboring sensitive EGFR mutations. We retrospectively evaluated the clinical effects and safety profiles of second-line cytotoxic drug chemotherapy after first-line EGFR-TKI treatment in elderly patients with NSCLC harboring sensitive EGFR mutations (exon 19 deletion/exon 21 L858R mutation). Between April 2008 and December 2015, 78 elderly patients with advanced NSCLC harboring sensitive EGFR mutations received first-line EGFR-TKI at four Japanese institutions. Baseline characteristics, regimens, responses to first- and second-line treatments, whether or not patients received subsequent treatment, and, if not, the reasons for non-administration were recorded. Overall, 20 patients with a median age of 79.5 years (range 75-85 years) were included in our analysis. The overall response rate, disease control rate, median progression-free survival, and median overall survival were 15.0%, 60.0%, 2.4 months, and 13.2 months, respectively. Common adverse events included leukopenia, neutropenia, anemia, thrombocytopenia, malaise, and anorexia. Major grade 3 or 4 toxicities included leukopenia (25.0%) and neutropenia (45.0%). No treatment-related deaths were noted.
Second-line cytotoxic drug chemotherapy after first-line EGFR-TKI treatment among elderly patients with NSCLC harboring sensitive EGFR mutations was effective and safe and showed equivalent outcomes to first-line cytotoxic drug chemotherapy.
Cassiday, Pamela K.; Pawloski, Lucia C.; Tatti, Kathleen M.; Martin, Monte D.; Briere, Elizabeth C.; Tondella, M. Lucia; Martin, Stacey W.
2018-01-01
Introduction The appropriate use of clinically accurate diagnostic tests is essential for the detection of pertussis, a poorly controlled vaccine-preventable disease. The purpose of this study was to estimate the sensitivity and specificity of different diagnostic criteria including culture, multi-target polymerase chain reaction (PCR), anti-pertussis toxin IgG (IgG-PT) serology, and the use of a clinical case definition. An additional objective was to describe the optimal timing of specimen collection for the various tests. Methods Clinical specimens were collected from patients with cough illness at seven locations across the United States between 2007 and 2011. Nasopharyngeal and blood specimens were collected from each patient during the enrollment visit. Patients who had been coughing for ≤ 2 weeks were asked to return in 2–4 weeks for collection of a second, convalescent blood specimen. Sensitivity and specificity of each diagnostic test were estimated using three methods—pertussis culture as the “gold standard,” composite reference standard analysis (CRS), and latent class analysis (LCA). Results Overall, 868 patients were enrolled and 13.6% were B. pertussis positive by at least one diagnostic test. In a sample of 545 participants with non-missing data on all four diagnostic criteria, culture was 64.0% sensitive, PCR was 90.6% sensitive, and both were 100% specific by LCA. CRS and LCA methods increased the sensitivity estimates for convalescent serology and the clinical case definition over the culture-based estimates. Culture and PCR were most sensitive when performed during the first two weeks of cough; serology was optimally sensitive after the second week of cough. Conclusions Timing of specimen collection in relation to onset of illness should be considered when ordering diagnostic tests for pertussis. 
Consideration should be given to including IgG-PT serology as a confirmatory test in the Council of State and Territorial Epidemiologists (CSTE) case definition for pertussis. PMID:29652945
Sensitivity and specificity of dosing alerts for dosing errors among hospitalized pediatric patients
Stultz, Jeremy S; Porter, Kyle; Nahata, Milap C
2014-01-01
Objectives To determine the sensitivity and specificity of a dosing alert system for dosing errors and to compare the sensitivity of a proprietary system with and without institutional customization at a pediatric hospital. Methods A retrospective analysis of medication orders, orders causing dosing alerts, reported adverse drug events, and dosing errors during July, 2011 was conducted. Dosing errors with and without alerts were identified, and the sensitivity of the system with and without customization was compared. Results There were 47 181 inpatient pediatric orders during the studied period; 257 dosing errors were identified (0.54%). The sensitivity of the system for identifying dosing errors was 54.1% (95% CI 47.8% to 60.3%) if customization had not occurred and increased to 60.3% (CI 54.0% to 66.3%) with customization (p=0.02). The sensitivity of the system for underdoses was 49.6% without customization and 60.3% with customization (p=0.01). Specificity of the customized system for dosing errors was 96.2% (CI 96.0% to 96.3%) with a positive predictive value of 8.0% (CI 6.8% to 9.3%). All dosing errors had an alert over-ridden by the prescriber, and 40.6% of dosing errors with alerts were administered to the patient. The lack of indication-specific dose ranges was the most common reason why an alert did not occur for a dosing error. Discussion Advances in dosing alert systems should aim to improve the sensitivity and positive predictive value of the system for dosing errors. Conclusions The dosing alert system had a low sensitivity and positive predictive value for dosing errors, but might have prevented dosing errors from reaching patients. Customization increased the sensitivity of the system for dosing errors. PMID:24496386
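The reported statistics all follow from a standard 2×2 confusion matrix over orders. The sketch below computes sensitivity, specificity, and positive predictive value from illustrative counts chosen to be consistent with the rates in the abstract; they are not the study's raw data.

```python
# Sketch of the screening statistics for a dosing-alert system, computed
# from a 2x2 confusion matrix. The counts are illustrative placeholders
# chosen to be consistent with the reported rates, not the study's data.

def screening_stats(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # alerted orders among true dosing errors
    specificity = tn / (tn + fp)   # non-alerted orders among error-free orders
    ppv = tp / (tp + fp)           # true errors among alerted orders
    return sensitivity, specificity, ppv

# Hypothetical counts: 155 of 257 dosing errors triggered an alert, with
# 1,780 false alerts among 47,181 total orders.
sens, spec, ppv = screening_stats(tp=155, fp=1780, fn=102, tn=45144)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} PPV={ppv:.1%}")
```

Note how a specificity above 96% still yields a PPV of only about 8%: because dosing errors are rare (0.54% of orders), even a small false-alert rate swamps the true alerts.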
Design sensitivity analysis of boundary element substructures
NASA Technical Reports Server (NTRS)
Kane, James H.; Saigal, Sunil; Gallagher, Richard H.
1989-01-01
The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.
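The exact condensation of a substructure can be sketched in generic linear-algebra terms as a Schur complement on a partitioned system: eliminate the condensed unknowns and solve a smaller system for the retained ones. This is a minimal numpy illustration of the idea, not the authors' BEA formulation; the matrix here is a random, well-conditioned stand-in.

```python
import numpy as np

# Generic sketch of exact substructure condensation: partition K x = f into
# retained (r) and condensed (c) unknowns and eliminate the condensed block
# via the Schur complement. Plain linear algebra, not the authors' BEA code.

rng = np.random.default_rng(0)
n_r, n_c = 3, 4
K = rng.random((n_r + n_c, n_r + n_c)) + np.eye(n_r + n_c) * (n_r + n_c)
f = rng.random(n_r + n_c)

K_rr, K_rc = K[:n_r, :n_r], K[:n_r, n_r:]
K_cr, K_cc = K[n_r:, :n_r], K[n_r:, n_r:]
f_r, f_c = f[:n_r], f[n_r:]

# Condensed (reduced) system for the retained unknowns only.
K_red = K_rr - K_rc @ np.linalg.solve(K_cc, K_cr)
f_red = f_r - K_rc @ np.linalg.solve(K_cc, f_c)
x_r = np.linalg.solve(K_red, f_red)

# The reduced solution matches the retained part of the full solution.
x_full = np.linalg.solve(K, f)
print(np.allclose(x_r, x_full[:n_r]))  # True
```

In an optimization loop, only the retained (design-dependent) block changes, so the expensive factorization of the condensed block can be reused across iterations, which is the computational economy the abstract refers to.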
Knoeferle, Pia; Crocker, Matthew W
2009-12-01
Reading times for the second conjunct of and-coordinated clauses are faster when the second conjunct parallels the first conjunct in its syntactic or semantic (animacy) structure than when its structure differs (Frazier, Munn, & Clifton, 2000; Frazier, Taft, Roeper, & Clifton, 1984). What remains unclear, however, is the time course of parallelism effects, their scope, and the kinds of linguistic information to which they are sensitive. Findings from the first two eye-tracking experiments revealed incremental constituent order parallelism across the board: both during structural disambiguation (Experiment 1) and in sentences with unambiguously case-marked constituent order (Experiment 2), as well as for both marked and unmarked constituent orders (Experiments 1 and 2). Findings from Experiment 3 revealed effects of both constituent order and subtle semantic (noun phrase similarity) parallelism. Together our findings provide evidence for an across-the-board account of parallelism for processing and-coordinated clauses, in which both constituent order and semantic aspects of representations contribute towards incremental parallelism effects. We discuss our findings in the context of existing findings on parallelism and priming, as well as mechanisms of sentence processing.
Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie
2013-06-04
Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
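The screening step named above, the method of elementary effects, can be sketched from its textbook definition: take one-at-a-time steps of size delta from random base points and rank parameters by the mean absolute effect (mu*). The toy model and all settings below are illustrative, not the authors' LCA code.

```python
import numpy as np

def elementary_effects(model, n_params, n_base_points=50, delta=0.1, seed=0):
    """mu* screening: mean |(f(x + delta*e_i) - f(x)) / delta| over base points."""
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_base_points):
        x = rng.random(n_params) * (1 - delta)   # keep x + delta inside [0, 1]
        for i in range(n_params):
            x_step = x.copy()
            x_step[i] += delta
            effects[i].append((model(x_step) - model(x)) / delta)
    return [float(np.mean(np.abs(e))) for e in effects]

# Toy model: x0 strong and nonlinear, x1 weak, x2 inert.
model = lambda x: 10 * x[0] ** 2 + 0.5 * x[1] + 0.0 * x[2]
mu_star = elementary_effects(model, 3)
print(mu_star)  # ranking: x0 >> x1 > x2 (x2 is exactly 0)
```

Only the few parameters with large mu* would then be passed to the more expensive contribution-to-variance step, which is the point of the two-step design.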
Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-31
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of the winding products, a mechanical performance measure (tensile strength) and a physical property (void content), were calculated. Thereafter, the paper presents an integrated methodology that combines multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding product manufacturing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donnelly, H.; Fullwood, R.; Glancy, J.
This is the second volume of a two-volume report on the VISA method for evaluating safeguards at fixed-site facilities. This volume contains appendices that support the description of the VISA concept and the initial working version of the method, VISA-1, presented in Volume I. The information is separated into four appendices, each describing details of one of the four analysis modules that comprise the analysis sections of the method. The first appendix discusses the Path Analysis methodology, applies it to a Model Fuel Facility, and describes the computer codes that are being used. Introductory material on Path Analysis is given in Chapter 3.2.1 and Chapter 4.2.1 of Volume I. The second appendix deals with Detection Analysis, specifically the schemes used in VISA-1 for classifying adversaries and the methods proposed for evaluating individual detection mechanisms in order to build the data base required for detection analysis. Examples of evaluations on identity-access systems, SNM portal monitors, and intrusion devices are provided. The third appendix describes the Containment Analysis overt-segment path ranking, the Monte Carlo engagement model, the network simulation code, the delay mechanism data base, and the results of a sensitivity analysis. The last appendix presents general equations used in Interruption Analysis for combining covert-overt segments and compares them with the equations given in Volume I, Chapter 3.
A one-dimensional interactive soil-atmosphere model for testing formulations of surface hydrology
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Eagleson, Peter S.
1990-01-01
A model representing a soil-atmosphere column in a GCM is developed for off-line testing of GCM soil hydrology parameterizations. Repeating three representative GCM sensitivity experiments with this one-dimensional model demonstrates that, to first order, the model reproduces a GCM's sensitivity to imposed changes in parameterization and therefore captures the essential physics of the GCM. The experiments also show that by allowing feedback between the soil and atmosphere, the model improves on off-line tests that rely on prescribed precipitation, radiation, and other surface forcing.
NASA Astrophysics Data System (ADS)
Patel, Dhananjay; Singh, Vinay Kumar; Dalal, U. D.
2016-07-01
This work addresses the analytical and numerical investigation of the transmission performance of an optical Single Sideband (SSB) modulation technique generated by a Mach Zehnder Modulator (MZM) with a 90° and a 120° hybrid coupler. It takes into account the problem of chromatic dispersion in single mode fibers in Passive Optical Networks (PON), which severely degrades the performance of the system. Considering the transmission length of the fiber, the SSB modulation generated by maintaining a phase shift of π/2 between the two electrodes of the MZM provides better receiver sensitivity. However, the power of the higher-order harmonics generated by the nonlinearity of the MZM is directly proportional to the modulation index, making the SSB signal resemble a quasi-double-sideband (DSB) signal and causing power fading due to chromatic dispersion. To eliminate one of the second-order harmonics, the SSB signal based on an MZM with a 120° hybrid coupler is simulated. An analytical model of conventional SSB using 90° and 120° hybrid couplers is established. The latter suppresses the unwanted (upper/lower) first-order and (lower/upper) second-order sidebands. For the analysis, a varying quadrature amplitude modulation (QAM) Orthogonal Frequency Division Multiplexing (OFDM) signal with a data rate of 5 Gb/s is upconverted using both of the SSB techniques and is transmitted over a distance of 75 km in Single Mode Fiber (SMF). The simulation results show that the SSB with the 120° hybrid coupler proves to be more immune to chromatic dispersion than the conventional SSB technique. This agrees with the theoretical analysis presented in the article.
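The quasi-DSB behaviour from residual second-order harmonics can be reproduced numerically. The sketch below simulates a dual-drive MZM biased in quadrature with its electrodes driven 90° apart and inspects the output spectrum: the lower first-order sideband cancels, while second-order harmonics survive on both sides of the carrier. The modulation index and frequencies are illustrative choices, not the article's parameters.

```python
import numpy as np

# Numerical sketch of SSB generation with a 90-deg hybrid: arm 1 driven with
# cos, arm 2 with sin, plus a 90-deg optical bias (factor 1j). One first-order
# sideband cancels; both second-order harmonics remain ("quasi-DSB").

f_rf = 10            # RF drive frequency in FFT-bin units (illustrative)
m = 0.6              # modulation index (illustrative)
n = 4096
t = np.arange(n) / n # unit window => tones land on integer bins, no leakage

field = np.exp(1j * m * np.cos(2 * np.pi * f_rf * t)) \
      + 1j * np.exp(1j * m * np.sin(2 * np.pi * f_rf * t))
spectrum = np.abs(np.fft.fft(field)) / n

upper_1st = spectrum[f_rf]          # +f_rf: surviving first-order sideband
lower_1st = spectrum[n - f_rf]      # -f_rf: cancelled first-order sideband
upper_2nd = spectrum[2 * f_rf]      # second-order harmonics survive on
lower_2nd = spectrum[n - 2 * f_rf]  # BOTH sides of the carrier
print(upper_1st, lower_1st, upper_2nd, lower_2nd)
```

The residual second-order lines grow with the modulation index, which is the trade-off the text describes; the 120° hybrid variant targets exactly one of these second-order terms.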
SEP thrust subsystem performance sensitivity analysis
NASA Technical Reports Server (NTRS)
Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.
1973-01-01
This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.
Improvements in absolute seismometer sensitivity calibration using local earth gravity measurements
Anthony, Robert E.; Ringler, Adam; Wilson, David
2018-01-01
The ability to determine both absolute and relative seismic amplitudes is fundamentally limited by the accuracy and precision with which scientists are able to calibrate seismometer sensitivities and characterize their response. Currently, across the Global Seismic Network (GSN), errors in midband sensitivity exceed 3% at the 95% confidence interval and are the least‐constrained response parameter in seismic recording systems. We explore a new methodology utilizing precise absolute Earth gravity measurements to determine the midband sensitivity of seismic instruments. We first determine the absolute sensitivity of Kinemetrics EpiSensor accelerometers to 0.06% at the 99% confidence interval by inverting them in a known gravity field at the Albuquerque Seismological Laboratory (ASL). After the accelerometer is calibrated, we install it in its normal configuration next to broadband seismometers and subject the sensors to identical ground motions to perform relative calibrations of the broadband sensors. Using this technique, we are able to determine the absolute midband sensitivity of the vertical components of Nanometrics Trillium Compact seismometers to within 0.11% and Streckeisen STS‐2 seismometers to within 0.14% at the 99% confidence interval. The technique enables absolute calibrations from first principles that are traceable to National Institute of Standards and Technology (NIST) measurements while providing nearly an order of magnitude more precision than step‐table calibrations.
First principles DFT study of dye-sensitized CdS quantum dots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Kalpna; Singh, Kh. S.; Kishor, Shyam, E-mail: shyam387@gmail.com
2014-04-24
Dye-sensitized quantum dots (QDs) are considered promising candidates for dye-sensitized solar cells. In order to maximize their efficiency, detailed theoretical studies are important. Here, we report a first-principles density functional theory (DFT) investigation of experimentally realized dye-sensitized QD/ligand systems, viz., Cd₁₆S₁₆, capped with acetate molecules and a coumarin dye. The hybrid B3LYP functional and a 6-311+G(d,p)/LANL2DZ basis set are used to study the geometric, energetic, and electronic properties of these clusters. There is significant structural rearrangement in all the clusters studied: on the surface for the bare QD, and in the positions of the acetate/dye ligands for the ligated QDs. The density of states (DOS) of the bare QD shows states in the band gap, which disappear on surface passivation with the acetate molecules. Interestingly, in the dye-sensitized QD, the HOMO is found to be localized mainly on the dye molecule, while the LUMO is on the QD, as required for photo-induced electron injection from the dye to the QD.
The trait of sensory processing sensitivity and neural responses to changes in visual scenes
Xu, Xiaomeng; Aron, Arthur; Aron, Elaine; Cao, Guikang; Feng, Tingyong; Weng, Xuchu
2011-01-01
This exploratory study examined the extent to which individual differences in sensory processing sensitivity (SPS), a temperament/personality trait characterized by social, emotional and physical sensitivity, are associated with neural response in visual areas in response to subtle changes in visual scenes. Sixteen participants completed the Highly Sensitive Person questionnaire, a standard measure of SPS. Subsequently, they were tested on a change detection task while undergoing functional magnetic resonance imaging (fMRI). SPS was associated with significantly greater activation in brain areas involved in high-order visual processing (i.e. right claustrum, left occipitotemporal, bilateral temporal and medial and posterior parietal regions) as well as in the right cerebellum, when detecting minor (vs major) changes in stimuli. These findings remained strong and significant after controlling for neuroticism and introversion, traits that are often correlated with SPS. These results provide the first evidence of neural differences associated with SPS, the first direct support for the sensory aspect of this trait that has been studied primarily for its social and affective implications, and preliminary evidence for heightened sensory processing in individuals high in SPS. PMID:20203139
Identification of stochastic interactions in nonlinear models of structural mechanics
NASA Astrophysics Data System (ADS)
Kala, Zdeněk
2017-07-01
In this paper, a polynomial approximation is presented with which the Sobol sensitivity analysis can be evaluated, including all sensitivity indices. The nonlinear FEM model is approximated by the polynomial. The input space is mapped using simulation runs of the Latin Hypercube Sampling method. The domain of the approximation polynomial is chosen so that a large number of Latin Hypercube Sampling simulation runs can be applied. The presented method also makes it possible to evaluate higher-order sensitivity indices, which could not be identified with the nonlinear FEM model directly.
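The idea of evaluating Sobol indices cheaply on a polynomial stand-in for an expensive model can be sketched with a standard sampling-based (pick-freeze) estimator. The linear test polynomial below has exact first-order indices 1/14, 4/14, and 9/14, so the estimate can be checked; the model and sample size are illustrative, not the paper's FEM surrogate.

```python
import numpy as np

# Pick-freeze estimator of first-order Sobol indices:
# S_i = Cov(f(A), f(AB_i)) / Var(f), where AB_i equals sample B with
# column i copied from sample A. Toy polynomial model; all settings
# are illustrative.

def sobol_first_order(model, n_params, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.random((n, n_params))
    b = rng.random((n, n_params))
    f_a = model(a)
    var = f_a.var()
    indices = []
    for i in range(n_params):
        ab_i = b.copy()
        ab_i[:, i] = a[:, i]          # freeze parameter i from sample A
        indices.append(float(np.mean(f_a * (model(ab_i) - model(b))) / var))
    return indices

# Linear polynomial surrogate: exact indices are 1/14, 4/14, 9/14.
model = lambda x: x[:, 0] + 2 * x[:, 1] + 3 * x[:, 2]
s = sobol_first_order(model, 3)
print([round(v, 2) for v in s])
```

Because the surrogate is cheap to evaluate, the estimator can afford the very large sample counts that Sobol analysis needs, which is precisely why a polynomial approximation of the FEM model is built first.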
Párta, László; Zalai, Dénes; Borbély, Sándor; Putics, Akos
2014-02-01
The application of dielectric spectroscopy has frequently been investigated as an on-line cell culture monitoring tool; however, it still requires supporting data and experience in order to become a robust technique. In this study, dielectric spectroscopy was used to predict viable cell density (VCD) at industrially relevant high levels in a concentrated fed-batch culture of Chinese hamster ovary cells producing a monoclonal antibody for pharmaceutical purposes. For on-line dielectric spectroscopy measurements, capacitance was scanned within a wide range of frequency values (100-19,490 kHz) in six parallel cell cultivation batches. Prior to detailed mathematical analysis of the collected data, principal component analysis (PCA) was applied to compare the dielectric behavior of the cultivations. The PCA analysis resulted in detecting measurement disturbances. Using the measured spectroscopic data, partial least squares regression (PLS), Cole-Cole, and linear modeling were applied and compared in order to predict VCD. The Cole-Cole and the PLS model provided reliable prediction over the entire cultivation, including both the early and decline phases of cell growth, while the linear model failed to estimate VCD in the later, declining cultivation phase. With regard to sensitivity to measurement error, remarkable differences were observed among PLS, Cole-Cole, and linear modeling. VCD prediction accuracy could be improved in the runs with measurement disturbances by first-derivative pre-treatment in PLS and by parameter optimization of the Cole-Cole modeling.
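The PLS approach can be sketched with a minimal NIPALS PLS1 regression written out in numpy: regress VCD on a multi-frequency capacitance scan. The synthetic "spectra" below stand in for real dielectric data; the frequencies, spectral shape, and noise level are illustrative assumptions, not the study's measurements.

```python
import numpy as np

# Minimal NIPALS PLS1 (single response). Fits latent components that
# maximize covariance between the capacitance spectra X and the VCD y,
# then returns a predictor in the original X space.

def pls1(X, y, n_comp):
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc                      # weight: covariance direction
        w /= np.linalg.norm(w)
        t = Xc @ w                         # score
        tt = t @ t
        p, qk = Xc.T @ t / tt, (yc @ t) / tt
        Xc, yc = Xc - np.outer(t, p), yc - qk * t   # deflate
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)    # regression vector in X space
    return lambda X_new: (X_new - x_mean) @ b + y_mean

rng = np.random.default_rng(0)
vcd = rng.uniform(1.0, 20.0, 60)                 # e.g. 1e6 cells/mL
profile = np.exp(-np.linspace(0.0, 3.0, 25))     # dispersion-like shape
spectra = np.outer(vcd, profile) + rng.normal(0.0, 0.05, (60, 25))

predict = pls1(spectra[:40], vcd[:40], n_comp=2)
pred = predict(spectra[40:])
print(np.corrcoef(pred, vcd[40:])[0, 1])         # close to 1
```

Using a small number of latent components is what makes PLS robust to correlated frequencies and, compared with a plain linear fit on one frequency, less sensitive to measurement disturbances in individual channels.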
NASA Astrophysics Data System (ADS)
Saha, Ardhendu; Datta, Arijit; Kaman, Surjit
2018-03-01
A proposal for enhancing the sensitivity of a multimode interference-based fiber optic liquid-level sensor is explored analytically using a zero-order Bessel-Gauss (BG) beam as the input source. The sensor head consists of a suitable length of no-core fiber (NCF) sandwiched between two specialty high-order mode fibers. The coupling efficiency of the various order modes inside the sensor structure is assessed using guided-mode propagation analysis, and the performance of the proposed sensor is benchmarked against a conventional sensor using a Gaussian beam. Furthermore, the study is corroborated using a finite-difference beam propagation method in Lumerical's Mode Solutions software to investigate the propagation of the zero-order BG beam inside the sensor structure. Based on the simulation outcomes, the proposed scheme yields a maximum absolute sensitivity of up to 3.551 dB/mm and a sensing resolution of 2.816 × 10⁻³ mm through the choice of an appropriate length of NCF at an operating wavelength of 1.55 μm. Owing to this superior sensing performance, the reported sensing technology opens an avenue to devising a high-performance fiber optic level sensor with broad applicability in physical, biological, and chemical sensing.
NASA Astrophysics Data System (ADS)
Lebedev, Sergei; Adam, Joanne; Meier, Thomas
2013-04-01
Seismic surface waves have been used to study the Earth's crust since the early days of modern seismology. In the last decade, surface-wave crustal imaging has been rejuvenated by the emergence of new, array techniques (ambient-noise and teleseismic interferometry). The strong sensitivity of both Rayleigh and Love waves to the Moho is evident from a mere visual inspection of their dispersion curves or waveforms. Yet, strong trade-offs between the Moho depth and crustal and mantle structure in surface-wave inversions have prompted doubts regarding their capacity to resolve the Moho. Although the Moho depth has been an inversion parameter in numerous surface-wave studies, the resolution of Moho properties yielded by a surface-wave inversion is still somewhat uncertain and controversial. We use model-space mapping in order to elucidate surface waves' sensitivity to the Moho depth and the resolution of their inversion for it. If seismic wavespeeds within the crust and upper mantle are known, then Moho-depth variations of a few kilometres produce large (over 1 per cent) perturbations in phase velocities. However, in inversions of surface-wave data with no a priori information (wavespeeds not known), strong Moho-depth/shear-speed trade-offs will mask about 90 per cent of the Moho-depth signal, with remaining phase-velocity perturbations 0.1-0.2 per cent only. In order to resolve the Moho with surface waves alone, errors in the data must thus be small (up to 0.2 per cent for resolving continental Moho). If the errors are larger, Moho-depth resolution is not warranted and depends on error distribution with period, with errors that persist over broad period ranges particularly damaging. 
An effective strategy for the inversion of surface-wave data alone for the Moho depth is to, first, constrain the crustal and upper-mantle structure by inversion in a broad period range and then determine the Moho depth in inversion in a narrow period range most sensitive to it, with the first-step results used as reference. We illustrate this strategy with an application to data from the Kaapvaal Craton. Prior information on crustal and mantle structure reduces the trade-offs and thus enables resolving the Moho depth with noisier data; such information should be sought and used whenever available (as has been done, explicitly or implicitly, in many previous studies). Joint analysis or inversion of surface-wave and other data (receiver functions, topography, gravity) can reduce uncertainties further and facilitate Moho mapping. Alone or as a part of multi-disciplinary datasets, surface-wave data offer unique sensitivity to the crustal and upper-mantle structure and are becoming increasingly important in the seismic imaging of the crust and the Moho. Reference Lebedev, S., J. Adam, T. Meier. Mapping the Moho with seismic surface waves: A review, resolution analysis, and recommended inversion strategies. Tectonophysics, "Moho" special issue, 10.1016/j.tecto.2012.12.030, 2013.
Practitioners' Perspectives on Cultural Sensitivity in Latina/o Teen Pregnancy Prevention
ERIC Educational Resources Information Center
Wilkinson-Lee, Ada M.; Russell, Stephen T.; Lee, Faye C. H.
2006-01-01
This study examined practitioners' understandings of cultural sensitivity in the context of pregnancy prevention programs for Latina teens. Fifty-eight practitioners from teen pregnancy prevention programs in California were interviewed in a guided conversation format. Three themes emerged in our analysis. First, practitioners' definitions of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayer, Andrew M.; Hsu, C.; Bettenhausen, Corey
Cases of absorbing aerosols above clouds (AAC), such as smoke or mineral dust, are omitted from most routinely-processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar
Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet
2010-10-24
Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive while values for protein cooperativities are not, and provide insights into how these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and the forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models.
Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
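The local, derivative-based half of such a sensitivity analysis can be sketched in a few lines. The following is a generic illustration, not the authors' code; the toy model and parameter names ("eff", "coop") are hypothetical stand-ins for a repressor-efficiency and a cooperativity parameter:

```python
def local_sensitivity(model, params, rel_step=1e-6):
    """Normalized local sensitivity S_i = (p_i / y0) * dy/dp_i,
    estimated with central finite differences."""
    y0 = model(params)
    sens = {}
    for name, p in params.items():
        h = rel_step * (abs(p) if p else 1.0)
        up, dn = dict(params), dict(params)
        up[name], dn[name] = p + h, p - h
        dy_dp = (model(up) - model(dn)) / (2.0 * h)
        sens[name] = p * dy_dp / y0
    return sens

# Hypothetical toy model (not from the paper): an expression output that is
# linear in the efficiency parameter and quadratic in the cooperativity term.
toy = lambda p: p["eff"] * p["coop"] ** 2
S = local_sensitivity(toy, {"eff": 2.0, "coop": 3.0})
# For a power law, the normalized sensitivities equal the exponents.
```

A parameter with a normalized sensitivity near zero, as the paper warns, cannot be reliably estimated from the data regardless of the fitting procedure.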
Near-Earth Object Astrometric Interferometry
NASA Technical Reports Server (NTRS)
Werner, Martin R.
2005-01-01
Using astrometric interferometry on near-Earth objects (NEOs) poses many interesting and difficult challenges. Poor reflectance properties and potentially no significant active emissions lead to NEOs having intrinsically low visual magnitudes. Using worst-case estimates for signal reflection properties leads to NEOs having visual magnitudes of 27 and higher. Today the most sensitive interferometers in operation have limiting magnitudes of 20 or less. The main reason for this limit is the atmosphere: turbulence affects the light coming from the target, limiting the sensitivity of the interferometer. In this analysis, the interferometer designs assume no atmosphere, meaning they would be placed at a location somewhere in space. Interferometer configurations and operational uncertainties are examined in order to parameterize the requirements necessary to achieve measurements of low visual magnitude NEOs. This analysis provides a preliminary estimate of what will be required in order to take high resolution measurements of these objects using interferometry techniques.
Efficient computation paths for the systematic analysis of sensitivities
NASA Astrophysics Data System (ADS)
Greppi, Paolo; Arato, Elisabetta
2013-01-01
A systematic sensitivity analysis requires computing the model on all points of a multi-dimensional grid covering the domain of interest, defined by the ranges of variability of the inputs. The key issues in performing such analyses efficiently on algebraic models are handling solution failures within and close to the feasible region and minimizing the total iteration count. Scanning the domain in the obvious order is sub-optimal in terms of total iterations and is likely to cause many solution failures. The problem of choosing a better order can be translated geometrically into finding Hamiltonian paths on certain grid graphs. This work proposes two such paths: one based on a mixed-radix Gray code, and the other a quasi-spiral path produced by a novel heuristic algorithm. Some simple, easy-to-visualize examples are presented, followed by performance results for the quasi-spiral algorithm and the practical application of the different paths in a process simulation tool.
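The mixed-radix Gray-code idea can be sketched as follows; this is a generic reflected-Gray construction assumed to capture the ordering described (the paper's quasi-spiral heuristic is not reproduced). Consecutive grid points differ in exactly one input level, so each model solve can be warm-started from a nearby converged neighbour:

```python
def gray_path(radices):
    """Enumerate all points of a grid (radices[i] levels in dimension i)
    so that consecutive points differ in exactly one coordinate by one
    step -- a Hamiltonian path built as a mixed-radix reflected Gray code."""
    if not radices:
        return [[]]
    tail = gray_path(radices[1:])
    path = []
    for i in range(radices[0]):
        # Reflect the sub-path on every other level so that the last point
        # of one block is adjacent to the first point of the next.
        block = tail if i % 2 == 0 else tail[::-1]
        path.extend([i] + rest for rest in block)
    return path

# Example: scan order for a hypothetical 3 x 4 x 2 grid of input levels.
scan_order = gray_path([3, 4, 2])
```

Compared with a naive nested-loop scan, this order never jumps across the grid, which is the property the paper exploits to reduce solver failures near the feasible-region boundary.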
Interpreting the results of chemical stone analysis in the era of modern stone analysis techniques
Gilad, Ron; Williams, James C.; Usman, Kalba D.; Holland, Ronen; Golan, Shay; Ruth, Tor; Lifshitz, David
2017-01-01
Introduction and Objective Stone analysis should be performed in all first-time stone formers. The preferred analytical procedures are Fourier-transform infrared spectroscopy (FT-IR) or X-ray diffraction (XRD). However, due to limited resources, chemical analysis (CA) is still in use throughout the world. The aim of the study was to compare FT-IR and CA on well-matched stone specimens and to characterize the pros and cons of CA. Methods In a prospective bi-center study, urinary stones were retrieved from 60 consecutive endoscopic procedures. In order to ensure that identical stone samples were sent for analysis, the samples were initially analyzed by micro-computed tomography to assess the uniformity of each specimen before being submitted for FT-IR and CA. Results Overall, the results of CA did not match the FT-IR results in 56% of the cases. In 16% of the cases CA missed the major stone component, and in 40% the minor stone component. 37 of the 60 specimens contained CaOx as the major component by FT-IR, and CA reported major CaOx in 47/60, resulting in high sensitivity but very poor specificity. CA was relatively accurate for UA and cystine. CA missed struvite and calcium phosphate as a major component in all cases. In mixed stones the sensitivity of CA for the minor component was poor, generally less than 50%. Conclusions Urinary stone analysis using CA provides only limited data that should be interpreted carefully. Urinary stone analysis using CA is likely to result in clinically significant errors in its assessment of stone composition. Although the monetary costs of CA are relatively modest, this method does not provide the level of analytical specificity required for proper management of patients with metabolic stones. PMID:26956131
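The "high sensitivity, very poor specificity" pattern reported for CaOx follows directly from the standard definitions. A minimal sketch, using hypothetical confusion counts (the abstract reports only the marginals: 37/60 major-CaOx stones by FT-IR and 47/60 called major CaOx by CA, so the split below is illustrative, not the paper's raw data):

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with the reported marginals: CA flags
# almost every true CaOx stone but also mislabels many non-CaOx stones.
sens, spec = sensitivity_specificity(tp=36, fp=11, fn=1, tn=12)
```

With such counts the sensitivity exceeds 0.9 while the specificity falls near 0.5, matching the qualitative conclusion of the study.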
NASA Astrophysics Data System (ADS)
Kala, Zdeněk; Kala, Jiří
2011-09-01
The main focus of the paper is the analysis of the influence of residual stress on the ultimate limit state of a hot-rolled member in compression. The member was modelled using thin-walled elements of type SHELL181 and meshed in the programme ANSYS. A geometrically and materially non-linear analysis was used. The influence of residual stress was studied using variance-based sensitivity analysis. In order to obtain more general results, the non-dimensional slenderness was selected as a study parameter. A comparison of the influence of residual stress with the influence of other dominant imperfections is presented in the conclusion of the paper. All input random variables were considered according to the results of experimental research.
Sensitivity towards fear of electric shock in passive threat situations.
Ring, Patrick; Kaernbach, Christian
2015-01-01
Human judgment and decision-making (JDM) requires an assessment of different choice options. While traditional theories of choice argue that cognitive processes are the main driver in reaching a decision, growing evidence highlights the importance of emotion in decision-making. Following these findings, it appears relevant to understand how individuals assess the attractiveness or riskiness of a situation in terms of emotional processes. The following study aims at a better understanding of the psychophysiological mechanisms underlying threat sensitivity by measuring skin conductance responses (SCRs) in passive threat situations. While previous studies demonstrate the role of outcome magnitude in emotional body reactions preceding an outcome, this study focuses on probability. In order to analyze emotional body reactions preceding negative events with varying probability of occurrence, we had our participants play a two-stage card game. The first stage of the card game reveals the probability of receiving an unpleasant electric shock. The second stage applies the electric shock with the previously announced probability. For the analysis, we focus on the time interval between the first and second stages. We observe a linear relation between SCRs in anticipation of receiving an electric shock and shock probability. This finding indicates that SCRs are able to code the likelihood of negative events. We outline how this coding function of SCRs during the anticipation of negative events might add to an understanding of human JDM.
SKA weak lensing- II. Simulated performance and survey design considerations
NASA Astrophysics Data System (ADS)
Bonaldi, Anna; Harrison, Ian; Camera, Stefano; Brown, Michael L.
2016-12-01
We construct a pipeline for simulating weak lensing cosmology surveys with the Square Kilometre Array (SKA), taking as inputs telescope sensitivity curves; correlated source flux, size and redshift distributions; a simple ionospheric model; and source redshift and ellipticity measurement errors. We then use this simulation pipeline to optimize a 2-yr weak lensing survey performed with the first deployment of the SKA (SKA1). Our assessments are based on the total signal-to-noise of the recovered shear power spectra, a metric that we find to correlate very well with a standard dark energy figure of merit. We first consider the choice of frequency band, trading off increases in number counts at lower frequencies against poorer resolution; our analysis strongly prefers the higher frequency Band 2 (950-1760 MHz) channel of the SKA-MID telescope to the lower frequency Band 1 (350-1050 MHz). Best results would be obtained by allowing the centre of Band 2 to shift towards lower frequency, around 1.1 GHz. We then move on to consider survey size, finding that an area of 5000 deg2 is optimal for most SKA1 instrumental configurations. Finally, we forecast the performance of a weak lensing survey with the second deployment of the SKA. The increased survey size (3π steradians) and sensitivity improve both the signal-to-noise and the dark energy metrics by two orders of magnitude.
Analysis of higher order harmonics with holographic reflection gratings
NASA Astrophysics Data System (ADS)
Mas-Abellan, P.; Madrigal, R.; Fimia, A.
2017-05-01
Silver halide emulsions are considered among the most energy-sensitive materials for holographic applications. Nonlinear recording effects in holographic reflection gratings recorded on silver halide emulsions have been studied by different authors, with excellent experimental results. In this communication we focus our investigation specifically on the effects of refractive index modulation, aiming for high levels of overmodulation that produce high-order harmonics. We studied the influence of overmodulation and its effects on the transmission spectra over a wide exposure range, using 9 μm thick films of ultrafine-grain emulsion BB640 exposed to single collimated beams from a red He-Ne laser (wavelength 632.8 nm) in a Denisyuk configuration, giving a spatial frequency of 4990 l/mm recorded on the emulsion. The experimental results show that high overmodulation levels of the refractive index produce second-order harmonics with high diffraction efficiency (higher than 75%) and a narrow grating bandwidth (12.5 nm). The results also show that overmodulation deforms the diffraction spectrum of the second-order harmonic, transforming it from sinusoidal to an approximately square shape at very high overmodulation. By increasing the level of overmodulation of the refractive index we obtained still higher-order harmonics, including a third-order harmonic with diffraction efficiency up to 23% and a narrower grating bandwidth (5 nm). This study is the first step in developing a new, simple technique for obtaining narrow spectral filters based on the use of high-index-modulation reflection gratings.
Asymptotic analysis of discrete schemes for non-equilibrium radiation diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Xia, E-mail: cui_xia@iapcm.ac.cn; Yuan, Guang-wei; Shen, Zhi-jun
Motivated by providing well-behaved fully discrete schemes in practice, this paper extends the asymptotic analysis of time integration methods for non-equilibrium radiation diffusion in [2] to space discretizations. Therein, studies were carried out on a two-temperature model with Larsen's flux-limited diffusion operator, and both the implicitly balanced (IB) and linearly implicit (LI) methods were shown to be asymptotic-preserving. In this paper, we focus on asymptotic analysis for space discrete schemes in one and two dimensions. First, in the construction of the schemes, in contrast to traditional first-order approximations, asymmetric second-order accurate spatial approximations are devised for flux-limiters on the boundary, and discrete schemes with second-order accuracy on the global spatial domain are consequently acquired. Then, by employing formal asymptotic analysis, the first-order asymptotic-preserving property for these schemes, and furthermore for the fully discrete schemes, is shown. Finally, with the help of manufactured solutions, numerical tests are performed, which demonstrate quantitatively that the fully discrete schemes with IB time evolution indeed have the accuracy and asymptotic convergence the theory predicts, and hence are well qualified for both non-equilibrium and equilibrium radiation diffusion. Highlights: • Provide AP fully discrete schemes for non-equilibrium radiation diffusion. • Propose second-order accurate schemes by an asymmetric approach for the boundary flux-limiter. • Show the first-order AP property of spatially and fully discrete schemes with IB evolution. • Devise subtle artificial solutions; verify accuracy and the AP property quantitatively. • Ideas can be generalized to 3-dimensional problems and higher order implicit schemes.
A model to estimate insulin sensitivity in dairy cows.
Holtenius, Paul; Holtenius, Kjell
2007-10-11
Impairment of the insulin regulation of energy metabolism is considered to be a key etiologic component of metabolic disturbances. Methods for studying insulin sensitivity are thus highly topical. There are clear indications that reduced insulin sensitivity contributes to the metabolic disturbances that occur especially among obese lactating cows. Direct measurements of insulin sensitivity are laborious and not suitable for epidemiological studies. We have therefore adopted an indirect method, originally developed for humans, to estimate insulin sensitivity in dairy cows. The method, the "Revised Quantitative Insulin Sensitivity Check Index" (RQUICKI), is based on plasma concentrations of glucose, insulin and free fatty acids (FFA), and it generates good, linear correlations with different estimates of insulin sensitivity in human populations. We hypothesized that the RQUICKI method could be used as an index of insulin function in lactating dairy cows. We calculated RQUICKI in 237 apparently healthy dairy cows from 20 commercial herds. All cows included were in their first 15 weeks of lactation. RQUICKI was not affected by the homeorhetic adaptations in energy metabolism that occurred during the first 15 weeks of lactation. In a cohort of 24 experimental cows fed in order to obtain different body condition at parturition, RQUICKI was lower in early lactation in cows with a high body condition score, suggesting disturbed insulin function in obese cows. The results indicate that RQUICKI might be used to identify lactating cows with disturbed insulin function.
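The index itself is a simple closed form. The sketch below uses the form commonly stated in the literature, with base-10 logarithms and the conventional units (glucose in mg/dl, insulin in μU/ml, FFA in mmol/l); treat both the form and the units as assumptions to check against the paper, and the numeric values as purely illustrative:

```python
import math

def rquicki(glucose_mg_dl, insulin_uU_ml, ffa_mmol_l):
    """Revised Quantitative Insulin Sensitivity Check Index, in the form
    commonly quoted (assumed here):
        RQUICKI = 1 / [log10(glucose) + log10(insulin) + log10(FFA)]
    Lower values suggest poorer insulin sensitivity."""
    return 1.0 / (math.log10(glucose_mg_dl)
                  + math.log10(insulin_uU_ml)
                  + math.log10(ffa_mmol_l))

# Illustrative values only: with glucose held fixed, higher insulin and FFA
# enlarge the denominator and therefore lower the index.
cow_a = rquicki(glucose_mg_dl=54.0, insulin_uU_ml=10.0, ffa_mmol_l=0.3)
cow_b = rquicki(glucose_mg_dl=54.0, insulin_uU_ml=25.0, ffa_mmol_l=0.6)
```

The direction of the comparison (cow_b below cow_a) mirrors the paper's finding that cows with high body condition scores had lower RQUICKI in early lactation.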
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measure of the safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are first derived in this paper, based on the derivatives of the optimal solution. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
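The MPP search and the first-order probability estimate can be illustrated with the textbook HL-RF iteration; this is a generic sketch in standard-normal space, not the paper's derivative-based sensitivity formulation, and the linear limit state below is a made-up example:

```python
import math

def _grad(g, u, h=1e-6):
    # Central-difference gradient of the limit-state function g at u.
    out = []
    for i in range(len(u)):
        up, dn = list(u), list(u)
        up[i] += h
        dn[i] -= h
        out.append((g(up) - g(dn)) / (2.0 * h))
    return out

def form_mpp(g, n, tol=1e-10, maxit=100):
    """HL-RF iteration for the most probable point of g(u) = 0 in
    standard-normal space; returns (u*, beta), beta being the minimum
    distance from the origin to the limit-state surface."""
    u = [0.0] * n
    for _ in range(maxit):
        gu = g(u)
        grad = _grad(g, u)
        norm2 = sum(c * c for c in grad)
        scale = (sum(c * x for c, x in zip(grad, u)) - gu) / norm2
        u_new = [scale * c for c in grad]
        if max(abs(a - b) for a, b in zip(u, u_new)) < tol:
            return u_new, math.sqrt(sum(x * x for x in u_new))
        u = u_new
    return u, math.sqrt(sum(x * x for x in u))

def failure_probability(beta):
    # First-order (FORM) estimate: Pf = Phi(-beta).
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

# Hypothetical linear limit state g(u) = 3 - u1 - u2:
# the MPP is (1.5, 1.5) and beta = 3 / sqrt(2).
u_star, beta = form_mpp(lambda u: 3.0 - u[0] - u[1], n=2)
```

For a linear limit state the FORM estimate is exact, which makes this a convenient sanity check before applying the same machinery to nonlinear problems.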
Lin, Liping; Zhao, Juanjuan; Hu, Jiazhu; Huang, Fuxi; Han, Jianjun; He, Yan; Cao, Xiaolong
2018-01-01
Purpose The aim of this study was to evaluate the impact of weight loss at presentation on the treatment outcomes of first-line EGFR-tyrosine kinase inhibitors (EGFR-TKIs) in EGFR-TKI-sensitive mutant NSCLC patients. Methods We retrospectively analyzed the clinical outcomes of 75 consecutive advanced NSCLC patients with EGFR-TKI-sensitive mutations (exon 19 deletion or exon 21 L858R) who received first-line gefitinib or erlotinib therapy, according to weight-loss status at presentation, in our single center. Results Of the 75 EGFR-TKI-sensitive mutant NSCLC patients, 49 (65.3%) had no weight loss and 26 (34.7%) had weight loss at presentation; the objective response rate (ORR) to EGFR-TKI treatment was similar between the two groups (79.6% vs. 76.9%, p = 0.533). Patients without weight loss at presentation had significantly longer median progression-free survival (PFS) (12.4 months vs. 7.6 months; hazard ratio [HR] 0.356, 95% confidence interval [CI] 0.212-0.596, p < 0.001) and overall survival (OS) (28.5 months vs. 20.7 months; HR 0.408, 95% CI 0.215-0.776, p = 0.006) than those with weight loss at presentation; moreover, the analysis stratified by EGFR-TKI-sensitive mutation type found a similar trend between these two groups, except for OS in EGFR exon 21 L858R mutation patients. Multivariate analysis identified weight loss at presentation and EGFR-TKI-sensitive mutation type as independent predictive factors for PFS and OS. Conclusions Weight loss at presentation had a detrimental impact on PFS and OS in EGFR-TKI-sensitive mutant advanced NSCLC patients treated with first-line EGFR-TKIs. It should be considered as an important factor in treatment decisions and in the design of EGFR-TKI clinical trials.
Barbot, Antoine; Landy, Michael S.; Carrasco, Marisa
2012-01-01
The visual system can use a rich variety of contours to segment visual scenes into distinct perceptually coherent regions. However, successfully segmenting an image is a computationally expensive process. Previously we have shown that exogenous attention—the more automatic, stimulus-driven component of spatial attention—helps extract contours by enhancing contrast sensitivity for second-order, texture-defined patterns at the attended location, while reducing sensitivity at unattended locations, relative to a neutral condition. Interestingly, the effects of exogenous attention depended on the second-order spatial frequency of the stimulus. At parafoveal locations, attention enhanced second-order contrast sensitivity to relatively high, but not to low second-order spatial frequencies. In the present study we investigated whether endogenous attention—the more voluntary, conceptually-driven component of spatial attention—affects second-order contrast sensitivity, and if so, whether its effects are similar to those of exogenous attention. To that end, we compared the effects of exogenous and endogenous attention on the sensitivity to second-order, orientation-defined, texture patterns of either high or low second-order spatial frequencies. The results show that, like exogenous attention, endogenous attention enhances second-order contrast sensitivity at the attended location and reduces it at unattended locations. However, whereas the effects of exogenous attention are a function of the second-order spatial frequency content, endogenous attention affected second-order contrast sensitivity independent of the second-order spatial frequency content. This finding supports the notion that both exogenous and endogenous attention can affect second-order contrast sensitivity, but that endogenous attention is more flexible, benefitting performance under different conditions. PMID:22895879
Sensitivity of Forecast Skill to Different Objective Analysis Schemes
NASA Technical Reports Server (NTRS)
Baker, W. E.
1979-01-01
Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.
Optical diagnosis of cervical cancer by higher order spectra and boosting
NASA Astrophysics Data System (ADS)
Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Barman, Ritwik; Pratiher, Souvik; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2017-03-01
In this contribution, we report the application of higher-order statistical moments with decision-tree and ensemble-based learning methodologies to the development of diagnostic algorithms for the optical diagnosis of cancer. The classification results were compared to those obtained with independent feature extractors such as linear discriminant analysis (LDA). The methodology using higher-order statistics as features for a boosted classifier achieves higher specificity and sensitivity while being much faster than other time-frequency-domain-based methods.
NASA Astrophysics Data System (ADS)
Erskine, David J.; Edelstein, Jerry; Wishnow, Edward H.; Sirk, Martin; Muirhead, Philip S.; Muterspaugh, Matthew W.; Lloyd, James P.; Ishikawa, Yuzo; McDonald, Eliza A.; Shourt, William V.; Vanderburg, Andrew M.
2016-04-01
High-resolution broadband spectroscopy at near-infrared wavelengths (950 to 2450 nm) has been performed using externally dispersed interferometry (EDI) at the Hale telescope at Mt. Palomar. Observations of stars were performed with the "TEDI" interferometer mounted within the central hole of the 200-in. primary mirror in series with the comounted TripleSpec near-infrared echelle spectrograph. These are the first multidelay EDI demonstrations on starlight, as earlier measurements used a single delay or laboratory sources. We demonstrate a very high (10×) resolution boost, from the original 2700 to 27,000 with the current set of delays (up to 3 cm), well beyond the classical limits enforced by the slit width and detector pixel Nyquist limit. Significantly, the EDI used with multiple delays rather than a single delay as used previously yields an order of magnitude or more improvement in the stability against native spectrograph point spread function (PSF) drifts along the dispersion direction. We observe a dramatic (20×) reduction in sensitivity to PSF shift using our standard processing. A recently realized method of further reducing the PSF shift sensitivity to zero is described theoretically and demonstrated in a simple simulation which produces a 350× reduction. We demonstrate superb rejection of fixed pattern noise due to bad detector pixels: EDI only responds to changes in pixel intensity synchronous with the applied dithering. This part 1 describes data analysis, results, and instrument noise. A section on theoretical photon-limited sensitivity is in a companion paper, part 2.
Analyses of a heterogeneous lattice hydrodynamic model with low and high-sensitivity vehicles
NASA Astrophysics Data System (ADS)
Kaur, Ramanpreet; Sharma, Sapna
2018-06-01
The basic lattice model is extended to study heterogeneous traffic by considering the optimal current difference effect on a unidirectional single-lane highway. Heterogeneous traffic consisting of low- and high-sensitivity vehicles is modeled, and its impact on the stability of mixed traffic flow is examined through linear stability analysis. The stability of the flow is investigated in five distinct regions of the neutral stability diagram, corresponding to the proportion of high-sensitivity vehicles on the road. In order to investigate the propagating behavior of density waves, nonlinear analysis is performed and, near the critical point, the kink-antikink soliton is obtained by deriving the mKdV equation. The effect of the fraction parameter corresponding to high-sensitivity vehicles is investigated, and the results indicate that stability improves as the fraction parameter increases. The theoretical findings are verified via direct numerical simulation.
Pohlit, Merlin; Eibisch, Paul; Akbari, Maryam; Porrati, Fabrizio; Huth, Michael; Müller, Jens
2016-11-01
Alongside the development of artificially created magnetic nanostructures, micro-Hall magnetometry has proven to be a versatile tool to obtain high-resolution hysteresis loop data and access dynamical properties. Here we explore the application of First Order Reversal Curves (FORC), a technique well established in the field of paleomagnetism for studying grain-size and interaction effects in magnetic rocks, to individual and dipolar-coupled arrays of magnetic nanostructures using micro-Hall sensors. A proof-of-principle experiment performed on a macroscopic piece of a floppy disk, a reference sample well known in the literature, demonstrates that the FORC diagrams obtained by magnetic stray field measurements using home-built magnetometers are in good agreement with magnetization data obtained by a commercial vibrating sample magnetometer. We discuss in detail the FORC diagrams, and their interpretation, for three different representative magnetic systems prepared by the direct-write Focused Electron Beam Induced Deposition (FEBID) technique: (1) an isolated Co nanoisland showing a simple square-shaped hysteresis loop, (2) a more complex CoFe-alloy nanoisland exhibiting a wasp-waist-type hysteresis, and (3) a cluster of interacting Co nanoislands. Our findings reveal that the combination of FORC and micro-Hall magnetometry is a promising tool to investigate complex magnetization reversal processes within individual or small ensembles of nanomagnets grown by FEBID or other fabrication methods. The method provides sub-μm spatial resolution and extends FORC analysis, commonly used for studying macroscopic samples and rather large arrays, to studies of small ensembles of interacting nanoparticles with the high moment sensitivity inherent to micro-Hall magnetometry.
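The quantity plotted in a FORC diagram is the mixed second derivative of the magnetization surface, ρ(Ha, Hb) = -½ ∂²M/∂Ha∂Hb, where Ha is the reversal field and Hb the measurement field. A minimal numerical sketch, using a synthetic analytic surface rather than measured reversal-curve data (real FORC data are defined only for Hb ≥ Ha and require smoothing):

```python
def forc_density(M, ha, hb, h=1e-3):
    """Estimate rho(Ha, Hb) = -1/2 * d^2 M / (dHa dHb) with a central
    mixed finite difference on a magnetization surface M(Ha, Hb)."""
    mixed = (M(ha + h, hb + h) - M(ha + h, hb - h)
             - M(ha - h, hb + h) + M(ha - h, hb - h)) / (4.0 * h * h)
    return -0.5 * mixed

# Purely illustrative surface: M = Ha * Hb has a constant mixed
# derivative of 1, so rho should be -0.5 everywhere.
rho = forc_density(lambda a, b: a * b, ha=0.2, hb=0.7)
```

On measured micro-Hall data the same stencil would be applied (after smoothing) to the grid of stray-field values recorded along each first-order reversal curve.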
Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J
2014-04-01
The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was then conducted. The resulting estimated model factors accurately predicted membrane performance.
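The screening idea behind the Morris method is to average absolute "elementary effects" of one-at-a-time factor perturbations over many random base points. The sketch below is a simplified radial one-at-a-time variant for illustration, not the revised Morris method used in the paper; the toy model and its coefficients are hypothetical:

```python
import random

def morris_mu_star(f, k, repetitions=20, delta=0.25, seed=7):
    """Simplified Morris-style screening on the unit hypercube: for each
    random base point x, perturb each of the k factors by delta and record
    the elementary effect EE_i = (f(x + delta*e_i) - f(x)) / delta.
    Returns mu* (the mean of |EE_i|), which ranks factor influence."""
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(repetitions):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        f0 = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta
            effects[i].append(abs((f(xp) - f0) / delta))
    return [sum(e) / len(e) for e in effects]

# Hypothetical model with one influential and one nearly inert factor:
mu_star = morris_mu_star(lambda x: 10.0 * x[0] + 0.1 * x[1], k=2)
```

Factors whose μ* is negligible (like the second factor here) are candidates to be fixed at nominal values, which is exactly how the paper prunes the 14-factor model down to the six factors worth calibrating.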
Image defects from surface and alignment errors in grazing incidence telescopes
NASA Technical Reports Server (NTRS)
Saha, Timo T.
1989-01-01
The rigid body motions and low-frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn, in which the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expansion correspond to rigid body motions (decenter and tilt) and low-spatial-frequency surface errors of the mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximate first-order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes to rigid body motions and surface deformations. The rms spot diameters calculated from this theory and from the OSAC ray-tracing code agree very well. The theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.
Sequential experimental design based generalised ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-07-01
Over the last decade, surrogate modelling techniques have gained wide popularity in uncertainty quantification, optimization, model exploration and sensitivity analysis. The approach relies on an experimental design to generate training points and on regression/interpolation to build the surrogate. In this work, it is argued that a conventional experimental design may render a surrogate model inefficient. To address this issue, this paper presents a novel distribution-adaptive sequential experimental design (DA-SED). The proposed DA-SED is coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for the first two statistical moments of the response, which are used in predicting the probability of failure, have also been developed. The proposed approach is applied to predicting the probability of failure in three structural mechanics problems, and is observed to yield accurate and computationally efficient estimates of the failure probability.
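One reason polynomial chaos expansions are attractive in this setting is that the first two moments of the response follow directly from the coefficients in an orthonormal basis: the mean is the zeroth coefficient and the variance is the sum of squares of the rest. A small one-dimensional Hermite-chaos sketch (the toy response is illustrative, not the paper's G-ANOVA formulation):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def pce_moments(coeffs):
    """Mean and variance of Y from an *orthonormal* PCE Y = sum_k c_k Psi_k:
    E[Y] = c_0 and Var[Y] = sum_{k>0} c_k^2."""
    coeffs = np.asarray(coeffs, float)
    return coeffs[0], float(np.sum(coeffs[1:] ** 2))

# Fit a 1-D Hermite PCE to Y = xi^2 + 2*xi with xi ~ N(0, 1) by least squares.
rng = np.random.default_rng(1)
xi = rng.standard_normal(4000)
y = xi ** 2 + 2 * xi
deg = 3
# Probabilists' Hermite polynomials He_k have norm sqrt(k!) under the N(0, 1)
# weight, so dividing the Vandermonde columns by sqrt(k!) makes the basis orthonormal.
norms = np.sqrt([math.factorial(k) for k in range(deg + 1)])
V = hermevander(xi, deg) / norms
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
mean, var = pce_moments(coeffs)   # analytic values: E[Y] = 1, Var[Y] = 6
```

Because the toy response lies exactly in the span of the basis, the regression recovers the analytic moments to machine precision.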
NASA Astrophysics Data System (ADS)
Sakai, Takamasa; Kohno, Motohiro; Hirae, Sadao; Nakatani, Ikuyoshi; Kusuda, Tatsufumi
1993-09-01
In this paper, we discuss a novel approach to semiconductor surface inspection based on analysis of the C-V curve measured in a noncontact manner by the metal-air-semiconductor (MAIS) technique. A new gap-sensing method using the so-called Goos-Haenchen effect was developed to achieve the noncontact C-V measurement. The MAIS technique exhibits sensitivity and repeatability comparable to those of conventional C-V measurement and hence good reproducibility and resolution for quantifying electrically active impurities on the order of 1×10^9/cm^2, which is better than most spectrometric techniques, such as secondary ion mass spectroscopy (SIMS), electron spectroscopy for chemical analysis (ESCA) and Auger electron spectroscopy (AES), which are time-consuming and destructive. This measurement, which requires no metal contact electrode, suggests for the first time the possibility of measuring an intrinsic characteristic of the semiconductor surface, as demonstrated with a concrete example.
Combined Use of Integral Experiments and Covariance Data
NASA Astrophysics Data System (ADS)
Palmiotti, G.; Salvatores, M.; Aliberti, G.; Herman, M.; Hoblit, S. D.; McKnight, R. D.; Obložinský, P.; Talou, P.; Hale, G. M.; Hiruta, H.; Kawano, T.; Mattoon, C. M.; Nobre, G. P. A.; Palumbo, A.; Pigni, M.; Rising, M. E.; Yang, W.-S.; Kahler, A. C.
2014-04-01
In the frame of a US-DOE sponsored project, ANL, BNL, INL and LANL have performed a joint multidisciplinary research activity exploring the combined use of integral experiments and covariance data, with the twin objectives of giving quantitative indications of possible improvements to the ENDF evaluated data files and of reducing crucial reactor design parameter uncertainties. Methods developed over the last four decades for these purposes have been improved by new developments that also benefited from continuous exchanges with international groups working in similar areas. The major new developments that allowed significant progress are found in several specific domains: a) new science-based covariance data; b) integral experiment covariance data assessment and improved experiment analysis, e.g., of sample irradiation experiments; c) sensitivity analysis, where several improvements were necessary despite the generally good understanding of these techniques, e.g., to account for fission spectrum sensitivity; d) a critical approach to the analysis of statistical adjustment performance, both a priori and a posteriori; e) generalization of the assimilation method, now applied for the first time not only to multigroup cross section data but also to nuclear model parameters (the "consistent" method). This article describes the major results obtained in each of these areas; a large-scale nuclear data adjustment, based on approximately one hundred high-accuracy integral experiments, is reported along with a significant example of the application of the new "consistent" method of data assimilation.
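The statistical adjustment step described above is, in its textbook form, a generalized-least-squares (Kalman-type) update of the nuclear data given integral experiments, their covariance, and a sensitivity matrix. A schematic sketch with illustrative numbers (not the actual ENDF adjustment, which involved on the order of one hundred experiments and nuclear model parameters as well):

```python
import numpy as np

def gls_adjust(sigma, M, S, E, C, V):
    """Generalized-least-squares data adjustment: update parameters sigma with
    prior covariance M, given measured integral values E, calculated values C,
    sensitivity matrix S = dC/dsigma, and experimental covariance V."""
    K = M @ S.T @ np.linalg.inv(S @ M @ S.T + V)   # Kalman-type gain
    sigma_new = sigma + K @ (E - C)
    M_new = M - K @ S @ M                          # reduced posterior covariance
    return sigma_new, M_new

# Toy case: two parameters, one integral experiment (illustrative numbers).
sigma = np.array([1.0, 2.0])
M = np.diag([0.04, 0.09])        # prior parameter covariance
S = np.array([[1.0, 1.0]])       # sensitivity of the integral response
C = S @ sigma                    # calculated response: 3.0
E = np.array([3.3])              # measured response
V = np.array([[0.01]])           # experimental variance
sigma_adj, M_post = gls_adjust(sigma, M, S, E, C, V)
```

The adjusted parameters move the calculated response toward the measurement, and the posterior covariance shrinks, which is the mechanism behind the uncertainty reductions discussed in the abstract.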
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Clifford W.; Martin, Curtis E.
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of First Solar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
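The propagation scheme described here (sample each model's empirical residual distribution and push the draws through the model chain) can be sketched generically. The two stages and residual sets below are illustrative placeholders, not the paper's PV models:

```python
import numpy as np

def propagate(models, residuals, x0, n=10000, seed=0):
    """Push an input through a chain of models, perturbing each stage's output
    with draws (bootstrap resampling) from that stage's empirical residuals.
    Returns n Monte Carlo samples of the final output."""
    rng = np.random.default_rng(seed)
    out = np.full(n, float(x0))
    for model, res in zip(models, residuals):
        out = model(out) + rng.choice(res, size=n, replace=True)
    return out

# Two illustrative stages and toy residual sets (not the paper's PV chain).
stages = [lambda x: 0.9 * x, lambda x: x - 5.0]
resids = [np.array([-1.0, 0.0, 1.0]), np.array([-0.5, 0.5])]
samples = propagate(stages, resids, x0=100.0)   # nominal chain output: 85.0
```

The spread of `samples` is the empirical output uncertainty; freezing one stage's residuals at zero and repeating the run gives that stage's contribution, which is how dominant models can be identified.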
Cantrill, Richard C
2008-01-01
Methods of analysis for products of modern biotechnology are required for national and international trade in seeds, grain and food in order to meet the labeling or import/export requirements of different nations and trading blocs. Although many methods were developed by the originators of transgenic events, governments, universities, and testing laboratories, trade is less complicated if there exists a set of international consensus-derived analytical standards. In any analytical situation, multiple methods may exist for testing for the same analyte; these methods may be supported by regional preferences and regulatory requirements. However, tests need to be sensitive enough to determine low levels of these traits in commodity grain for regulatory purposes and also to indicate the purity of seeds containing these traits. The International Organization for Standardization (ISO) and its European counterpart have worked to produce a suite of standards through open, balanced and consensus-driven processes. Presently, these standards are approaching the time for their first review; in fact, ISO 21572, the "protein standard", has already been circulated for systematic review. In order to expedite the review and revision of the nucleic acid standards, an ISO Technical Specification (ISO/TS 21098) was drafted to set the criteria for the inclusion of precision data from collaborative studies into the annexes of these standards.
Locality and Word Order in Active Dependency Formation in Bangla
Chacón, Dustin A.; Imtiaz, Mashrur; Dasgupta, Shirsho; Murshed, Sikder M.; Dan, Mina; Phillips, Colin
2016-01-01
Research on filler-gap dependencies has revealed that there are constraints on possible gap sites, and that real-time sentence processing is sensitive to these constraints. This work has shown that comprehenders have preferences for potential gap sites, and immediately detect when these preferences are not met. However, neither the mechanisms that select preferred gap sites nor the mechanisms used to detect whether these preferences are met are well-understood. In this paper, we report on three experiments in Bangla, a language in which gaps may occur in either a pre-verbal embedded clause or a post-verbal embedded clause. This word order variation allows us to manipulate whether the first gap linearly available is contained in the same clause as the filler, which allows us to dissociate structural locality from linear locality. In Experiment 1, an untimed ambiguity resolution task, we found a global bias to resolve a filler-gap dependency with the first gap linearly available, regardless of structural hierarchy. In Experiments 2 and 3, which use the filled-gap paradigm, we found sensitivity to disruption only when the blocked gap site is both structurally and linearly local, i.e., the filler and the gap site are contained in the same clause. This suggests that comprehenders may not show sensitivity to the disruption of all preferred gap resolutions. PMID:27610090
Rodrigues, Domingos M C; Lopes, Rafaela N; Franco, Marcos A R; Werneck, Marcelo M; Allil, Regina C S B
2017-12-19
Conventional pathogen detection methods require trained personnel and specialized laboratories, and can take days to provide a result. Thus, portable biosensors with rapid detection response are vital for current needs for in-loco quality assays. In this work the authors analyze the characteristics of an immunosensor based on the evanescent field in plastic optical fibers with macro curvature, by comparing experimental with simulated results. The work studies different shapes of evanescent-wave-based fiber optic sensors, adopting computational modeling to identify the probes with the best sensitivity. The simulation showed that for a U-shaped sensor, the best results are achieved with a 980 µm diameter and a 5.0 mm curvature for refractive index sensing, whereas a meander-shaped sensor 250 μm in diameter with a 1.5 mm radius of curvature showed better sensitivity for both bacteria and refractive index (RI) sensing. An immunosensor was then developed, first to measure refractive index and then functionalized to detect Escherichia coli. Based on the simulation results, we conducted studies with a real sensor for RI measurement and for Escherichia coli detection, aiming to establish the diameter and curvature radius that give an optimized sensor. On comparing the experimental results with predictions from the modelling, good agreement was obtained. The simulations performed allowed the evaluation of new geometric configurations of biosensors that can be easily constructed and promise improved sensitivity.
Pinjari, Rahul V; Delcey, Mickaël G; Guo, Meiyuan; Odelius, Michael; Lundberg, Marcus
2016-02-15
The restricted active-space (RAS) approach can accurately simulate metal L-edge X-ray absorption spectra of first-row transition metal complexes without the use of any fitting parameters. These characteristics provide a unique capability to identify unknown chemical species and to analyze their electronic structure. To find the best balance between cost and accuracy, the sensitivity of the simulated spectra with respect to the method variables has been tested for two models, [FeCl6](3-) and [Fe(CN)6](3-). For these systems, the reference calculations give deviations, when compared with experiment, of ≤1 eV in peak positions, ≤30% for the relative intensity of major peaks, and ≤50% for minor peaks. When compared with these deviations, the simulated spectra are sensitive to the number of final states, the inclusion of dynamical correlation, and the ionization potential-electron affinity shift, in addition to the selection of the active space. The spectra are less sensitive to the quality of the basis set, and even a double-ζ basis gives reasonable results. The inclusion of dynamical correlation through second-order perturbation theory can be done efficiently using the state-specific formalism without correlating the core orbitals. Although these observations are not directly transferable to other systems, they can, together with a cost analysis, aid in the design of RAS models and help to extend the use of this powerful approach to a wider range of transition metal systems. © 2015 Wiley Periodicals, Inc.
Sekundo, Walter; Gertnere, Jana; Bertelmann, Thomas; Solomatin, Igor
2014-05-01
To report one-year results of the first cohort of routine refractive lenticule extraction through a small incision (ReLEx SMILE) for correction of myopia and myopic astigmatism. Fifty-four eyes of 27 patients with a spherical equivalent of -4.68 ± 1.29 D who underwent routine ReLEx SMILE by a single surgeon were prospectively followed up for 1 year. We used the VisuMax femtosecond laser system (Carl Zeiss Meditec AG, Germany) with a 500 kHz repetition rate. Follow-up intervals were 1 day, 1 week, and 1, 3, 6, and 12 months after surgery. We obtained the following parameters: uncorrected (UDVA) and distance-corrected visual acuity (CDVA), contrast sensitivity, and wavefront measurements. We also recorded all complications. Because of suction loss in one eye, 12-month results were obtained in 53 eyes, as follows. After 1 year, 88% of eyes with a plano target had UDVA of 20/20 or better. Twelve percent of eyes lost 1 line of CDVA, while 31% gained 1 line and 3% gained 2 lines. The mean SE after 1 year was -0.19 ± 0.19 D. The mean refraction change between months 1 and 12 was 0.08 D. Neither mesopic nor photopic contrast sensitivity showed any significant changes. The higher-order aberrations (HOA) increased from 0.17 to 0.27 μm (Malacara notation). No vision-threatening complications were observed. In this first cohort, ReLEx SMILE produced satisfactory refractive outcomes with moderate induction of HOA and unaffected contrast sensitivity after 1 year.
Perturbation Selection and Local Influence Analysis for Nonlinear Structural Equation Model
ERIC Educational Resources Information Center
Chen, Fei; Zhu, Hong-Tu; Lee, Sik-Yum
2009-01-01
Local influence analysis is an important statistical method for studying the sensitivity of a proposed model to model inputs. One of its important issues is related to the appropriate choice of a perturbation vector. In this paper, we develop a general method to select an appropriate perturbation vector and a second-order local influence measure…
Upgrade for Phase II of the Gerda experiment
NASA Astrophysics Data System (ADS)
Agostini, M.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Baudis, L.; Bauer, C.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode, T.; Borowicz, D.; Brudanin, V.; Brugnera, R.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; D'Andrea, V.; Demidova, E. V.; Di Marco, N.; Domula, A.; Doroshkevich, E.; Egorov, V.; Falkenstein, R.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Hakenmüller, J.; Hegai, A.; Heisel, M.; Hemmer, S.; Hiller, R.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Ioannucci, L.; Janicskó Csáthy, J.; Jochum, J.; Junker, M.; Kazalov, V.; Kermaïdic, Y.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Kish, A.; Klimenko, A.; Kneißl, R.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Lindner, M.; Lippi, I.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Majorovits, B.; Maneschg, W.; Medinaceli, E.; Miloradovic, M.; Mingazheva, R.; Misiaszek, M.; Moseev, P.; Nemchenok, I.; Nisi, S.; Panas, K.; Pandola, L.; Pelczar, K.; Pullia, A.; Ransom, C.; Riboldi, S.; Rumyantseva, N.; Sada, C.; Salamida, F.; Salathe, M.; Schmitt, C.; Schneider, B.; Schönert, S.; Schreiner, J.; Schütz, A.-K.; Schulz, O.; Schwingenheuer, B.; Selivanenko, O.; Shevchik, E.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Vanhoefer, L.; Vasenko, A. A.; Veresnikova, A.; von Sturm, K.; Wagner, V.; Wegmann, A.; Wester, T.; Wiesinger, C.; Wojcik, M.; Yanovich, E.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zsigmond, A. J.; Zuber, K.; Zuzel, G.
2018-05-01
The Gerda collaboration is performing a sensitive search for neutrinoless double beta decay of ^{76}Ge at the INFN Laboratori Nazionali del Gran Sasso, Italy. The upgrade of the Gerda experiment from Phase I to Phase II has been concluded in December 2015. The first Phase II data release shows that the goal to suppress the background by one order of magnitude compared to Phase I has been achieved. Gerda is thus the first experiment that will remain "background-free" up to its design exposure (100 kg year). It will reach thereby a half-life sensitivity of more than 10^{26} year within 3 years of data collection. This paper describes in detail the modifications and improvements of the experimental setup for Phase II and discusses the performance of individual detector components.
Critical ignition conditions in exothermically reacting systems: first-order reactions
NASA Astrophysics Data System (ADS)
Filimonov, Valeriy Yu.
2017-10-01
In this paper, a comparative analysis of the thermal explosion (TE) critical conditions in the temperature-conversion degree and temperature-time planes was conducted. It was established that the two ignition criteria are almost identical only at relatively small values of the Todes parameter; otherwise, the results of a critical-conditions analysis in the temperature-conversion degree plane may be wrong. An asymptotic method for calculating the critical conditions of first-order reactions was proposed, taking reactant consumption into account, and the degeneration conditions of TE were determined. The critical conditions were calculated for a specific first-order reaction. Comparison of the analytical results with numerical calculations and experimental data showed good agreement.
Sensitivity and response time of three common Antarctic marine copepods to metal exposure.
Zamora, Lara Marcus; King, Catherine K; Payne, Sarah J; Virtue, Patti
2015-02-01
Understanding the sensitivity of Antarctic marine organisms to metals is essential in order to manage environmental contamination risks. To date, toxicity studies conducted on Antarctic marine species are limited. This study is the first to examine the acute effects of copper and cadmium on three common coastal Antarctic copepods: the calanoids Paralabidocera antarctica and Stephos longipes, and the cyclopoid Oncaea curvata. These copepods responded slowly to metal exposure (4-7 d), emphasising that the exposure period of 48-96 h commonly used in toxicity tests with temperate and tropical species is not appropriate for polar organisms. We found that a longer 7 d exposure period was the minimum duration appropriate for Antarctic copepods. Although sensitivity to metal exposure varied between species, copper was more toxic than cadmium in all three species. P. antarctica was the most sensitive, with 7 d LC50 values for copper and cadmium of 20 μg L(-1) and 237 μg L(-1), respectively. Sensitivities to copper were similar for O. curvata (LC50=64 μg L(-1)) and S. longipes (LC50=56 μg L(-1)), while O. curvata was more sensitive to cadmium (LC50=901 μg L(-1)) than S. longipes (LC50=1250 μg L(-1)). In comparison to copepods from lower latitudes, Antarctic copepods were more sensitive to copper and of similar or lower sensitivity to cadmium. This study highlights the need for longer exposure periods in toxicity tests with slow-responding Antarctic biota in order to generate relevant sensitivity data for inclusion in site-specific environmental quality guidelines for Antarctica. Copyright © 2014 Elsevier Ltd. All rights reserved.
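LC50 values like those reported above are typically obtained by fitting a sigmoidal dose-response curve to mortality counts. A self-contained sketch using a two-parameter log-logistic model and a brute-force likelihood scan (the bioassay counts are hypothetical, not the study's data; real analyses usually use probit/logit regression software):

```python
import numpy as np

def fit_lc50(conc, n_total, n_dead):
    """Grid-scan maximum likelihood for the log-logistic dose-response
    p(c) = 1 / (1 + (lc50 / c)**slope), with binomial mortality counts.
    Returns the best (lc50, slope) pair on the grid."""
    conc = np.asarray(conc, float)
    n_total = np.asarray(n_total, float)
    n_dead = np.asarray(n_dead, float)
    best, best_ll = (None, None), -np.inf
    for lc in np.geomspace(conc.min(), conc.max(), 200):
        for slope in np.linspace(0.5, 10.0, 100):
            p = np.clip(1.0 / (1.0 + (lc / conc) ** slope), 1e-9, 1 - 1e-9)
            ll = np.sum(n_dead * np.log(p) + (n_total - n_dead) * np.log1p(-p))
            if ll > best_ll:
                best, best_ll = (lc, slope), ll
    return best

# Hypothetical copper bioassay counts (illustrative only).
lc50, slope = fit_lc50(conc=[5, 10, 20, 40, 80],   # ug/L
                       n_total=[20] * 5,
                       n_dead=[1, 5, 10, 15, 19])
```

Repeating the fit on counts recorded at 2, 4 and 7 d would show the time dependence of the LC50 that motivates the longer exposure periods recommended above.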
Millimeter wave micro-CPW integrated antenna
NASA Astrophysics Data System (ADS)
Tzuang, Ching-Kuang C.; Lin, Ching-Chyuan
1996-12-01
This paper presents the latest result of applying the microstrip's leaky mode to a millimeter-wave active integrated antenna design. In contrast to the use of the first higher-order leaky mode, the second higher-order leaky mode of even symmetry is employed in the new approach, which allows a larger dimension for the leaky-wave antenna design and thereby reduces its performance sensitivity to photolithographic tolerance. The new active integrated antenna, operating at a frequency of about 34 GHz, comprises a microstrip and a coplanar waveguide stacked on top of each other, and is named the millimeter-wave micro-CPW integrated antenna. The feed is through the CPW, which would be connected to the active uniplanar millimeter-wave (M)MICs. Our experimental and theoretical investigations of the new integrated antenna show good input-matching characteristics for such a highly directive leaky-wave antenna, achieved with first-pass success.
NASA Astrophysics Data System (ADS)
Elsmann, Tino; Habisreuther, Tobias; Graf, Albrecht; Rothhardt, Manfred; Bartelt, Hartmut
2013-05-01
We demonstrate the inscription of fiber Bragg gratings in single-crystalline sapphire using the second harmonic of a Ti:Sa-amplified femtosecond laser system. With the laser wavelength of 400 nm, first-order gratings were fabricated. The interferometric inscription was performed using a Talbot interferometer; in this way, not only single gratings but also multiplexed sensor arrays were realized. For evaluating the sensor signals, an adapted multimodal interrogation setup was built, because the sapphire fiber is an extremely multimodal air-clad fiber. Due to the multimodal reflection spectrum, different peak functions were tested to evaluate the thermal properties of the grating. The temperature sensors were tested for high-temperature applications up to 1200°C, with a thermal sensitivity on the order of 25 pm/K, more than double that reached with Bragg gratings in conventional silica fibers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, Kirk W.; Oue, Mariko; Kollias, Pavlos
2017-08-04
The US Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program's Southern Great Plains (SGP) site includes a heterogeneous distributed scanning Doppler radar network suitable for collecting coordinated Doppler velocity measurements in deep convective clouds. The surrounding National Weather Service (NWS) Next Generation Weather Surveillance Radar 1988 Doppler (NEXRAD WSR-88D) further supplements this network. Radar velocity measurements are assimilated in a three-dimensional variational (3DVAR) algorithm that retrieves horizontal and vertical air motions over a large analysis domain (100 km × 100 km) at storm-scale resolutions (250 m). For the first time, direct evaluation of retrieved vertical air velocities with those from collocated 915 MHz radar wind profilers is performed. Mean absolute and root-mean-square differences between the two sources are of the order of 1 and 2 m s -1, respectively, and time–height correlations are of the order of 0.5. An empirical sensitivity analysis is done to determine a range of 3DVAR constraint weights that adequately satisfy the velocity observations and anelastic mass continuity. It is shown that the vertical velocity spread over this range is of the order of 1 m s -1. The 3DVAR retrievals are also compared to those obtained from an iterative upwards integration technique. Lastly, the results suggest that the 3DVAR technique provides a robust, stable solution for cases in which integration techniques have difficulty satisfying velocity observations and mass continuity simultaneously.
Multiple-foil microabrasion package (A0023)
NASA Technical Reports Server (NTRS)
Mcdonnell, J. A. M.; Ashworth, D. G.; Carey, W. C.; Flavill, R. P.; Jennison, R. C.
1984-01-01
The specific scientific objectives of this experiment are to measure the spatial distribution, size, velocity, radiance, and composition of microparticles in near-Earth space. The technological objectives are to measure erosion rates resulting from microparticle impacts and to evaluate thin-foil meteor 'bumpers'. The combinations of sensitivity and reliability in this experiment will provide up to 1000 impacts per month for laboratory analysis and will extend current sensitivity limits by 5 orders of magnitude in mass.
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2015-12-01
Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
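The variogram analogy at the heart of VARS can be illustrated directly: the directional variogram of the model response along one factor measures how much the output changes over a perturbation scale h. A minimal sketch of this underlying quantity (not the STAR-VARS star-based sampling strategy itself; the toy model is illustrative):

```python
import numpy as np

def directional_variogram(f, base_points, factor, h_values):
    """gamma_i(h) = 0.5 * E[(f(x + h * e_i) - f(x))^2], estimated by averaging
    squared output differences over a set of base points."""
    gammas = []
    for h in h_values:
        diffs = []
        for x in base_points:
            x_step = x.copy()
            x_step[factor] += h
            diffs.append(f(x_step) - f(x))
        gammas.append(0.5 * float(np.mean(np.square(diffs))))
    return np.array(gammas)

rng = np.random.default_rng(0)
base = rng.random((300, 2)) * 0.7      # keep x + h inside [0, 1]^2 for h <= 0.3
f = lambda x: 5.0 * x[0] + x[1]        # toy model: factor 0 dominates
g0 = directional_variogram(f, base, factor=0, h_values=[0.1, 0.2, 0.3])
g1 = directional_variogram(f, base, factor=1, h_values=[0.1, 0.2, 0.3])
```

For this linear toy model the variograms are exactly 0.5*(5h)^2 and 0.5*h^2; the VARS point is that the whole curve gamma(h), rather than a single number, characterizes sensitivity across scales, with derivative- and variance-based indices recoverable as limiting cases.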
Asymptotic symmetries and geometry on the boundary in the first order formalism
NASA Astrophysics Data System (ADS)
Korovin, Yegor
2018-03-01
Proper understanding of the geometry on the boundary of a spacetime is a critical step toward extending holography to spaces with non-AdS asymptotics. In general the boundary cannot be described in terms of Riemannian geometry, and, as we show, the first-order formalism is more appropriate. We analyze the asymptotic symmetries in the first-order formalism for large classes of theories on AdS, Lifshitz or flat space. In all cases the asymptotic symmetry algebra is realized on the first-order variables as a gauged symmetry algebra. The first-order formalism geometrizes and simplifies the analysis. We apply our framework to the issue of scale versus conformal invariance in AdS/CFT and obtain a new perspective on the structure of asymptotic expansions for AdS and flat spaces.
Navigation and Dispersion Analysis of the First Orion Exploration Mission
NASA Technical Reports Server (NTRS)
Zanetti, Renato; D'Souza, Christopher
2015-01-01
This paper seeks to present the Orion EM-1 Linear Covariance Analysis for the DRO mission. The delta V statistics for each maneuver are presented. Included in the memo are several sensitivity analyses: variation in the time of OTC-1 (the first outbound correction maneuver), variation in the accuracy of the trans-Lunar injection, and variation in the length of the optical navigation passes.
Sensitivity to First-Order Relations of Facial Elements in Infant Rhesus Macaques
ERIC Educational Resources Information Center
Paukner, Annika; Bower, Seth; Simpson, Elizabeth A.; Suomi, Stephen J.
2013-01-01
Faces are visually attractive to both human and nonhuman primates. Human neonates are thought to have a broad template for faces at birth and prefer face-like to non-face-like stimuli. To better compare developmental trajectories of face processing phylogenetically, here, we investigated preferences for face-like stimuli in infant rhesus macaques…
Polarization Sensitive Coherent Anti-Stokes Raman Spectroscopy of DCVJ in Doped Polymer
NASA Astrophysics Data System (ADS)
Ujj, Laszlo
2014-05-01
Coherent Raman microscopy is an emerging technique for imaging biological samples such as living cells by recording vibrational fingerprints of molecules with high spatial resolution. The race is on to record an entire image in the shortest time possible in order to increase the time resolution of recorded cellular events. The electronically enhanced, polarization-sensitive version of coherent anti-Stokes Raman scattering (CARS) is one of the methods that can shorten recording time and sharpen images by enhancing the signal level of specific molecular vibrational modes. To demonstrate the effectiveness of the method, a model system, the highly fluorescent dye DCVJ in a polymer matrix, is investigated. Polarization-sensitive resonance CARS spectra are recorded and analyzed, and vibrational signatures are extracted with model-independent methods. Details of the measurements and data analysis will be presented. The author gratefully acknowledges the UWF for financial support.
Investigating and understanding fouling in a planar setup using ultrasonic methods.
Wallhäusser, E; Hussein, M A; Becker, T
2012-09-01
Fouling is an unwanted deposit on heat transfer surfaces and occurs regularly in food-industry heat exchangers. Fouling causes high costs because heat exchangers must be cleaned and cleaning success cannot easily be monitored; as a result, the cleaning cycles used in the food industry are usually longer than necessary, which adds further cost. In this paper, a setup is described with which it is possible, first, to produce dairy protein fouling similar to that found in industrial heat exchangers and, second, to detect the presence and absence of such fouling using an ultrasound-based measuring method. The developed setup resembles a planar heat exchanger in which fouling can be produced and cleaned reproducibly. Fouling presence, absence, and cleaning progress can be monitored with an ultrasonic detection unit. The setup is described theoretically in terms of electrical and mechanical lumped circuits to derive the wave equation and the transfer function for a sensitivity analysis. The sensitivity analysis identified the influencing quantities and showed that fouling is measurable. First experimental results are also compared with the results of the sensitivity analysis.
Sensitivity analysis of Monju using ERANOS with JENDL-4.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamagno, P.; Van Rooijen, W. F. G.; Takeda, T.
2012-07-01
This paper deals with sensitivity analysis using JENDL-4.0 nuclear data applied to the Monju reactor. In 2010 the Japan Atomic Energy Agency (JAEA) released a new set of nuclear data: JENDL-4.0. This new evaluation is expected to contain improved data on actinides and covariance matrices. Covariance matrices are a key point in the quantification of uncertainties due to basic nuclear data. For the sensitivity analysis, the well-established ERANOS [1] code was chosen because of its integrated modules that allow users to perform a sensitivity analysis of complex reactor geometries. A JENDL-4.0 cross-section library is not available for ERANOS. Therefore a cross-section library had to be made from the original nuclear data set, available as ENDF formatted files. This was achieved by using the following codes: NJOY, CALENDF, MERGE and GECCO, in order to create a library for the ECCO cell code (part of ERANOS). To verify the accuracy of the new ECCO library, two benchmark experiments were analyzed: the MZA and MZB cores of the MOZART program measured at the ZEBRA facility in the UK. These were chosen because of their similarity to the Monju core. Using the JENDL-4.0 ECCO library we have analyzed the criticality of Monju during the restart in 2010 and obtained good agreement with the measured criticality. Perturbation calculations have been performed between JENDL-3.3 and JENDL-4.0 based models. The isotopes 239Pu, 238U, 241Am and 241Pu account for a major part of the observed differences. (authors)
NASA Astrophysics Data System (ADS)
Ren, Luchuan
2015-04-01
A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters. Luchuan Ren, Jianwei Tian, Mingli Hong (Institute of Disaster Prevention, Sanhe, Hebei Province, 065201, P.R. China). The uncertainties of the maximum tsunami wave heights in offshore areas come partly from uncertainties in the potential seismic tsunami source parameters. A global sensitivity analysis method of the maximum tsunami wave heights with respect to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated with COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake with magnitude MW 8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated maximum tsunami wave heights at specific offshore sites to verify the validity of the proposed method. To rank the importance of the uncertainties of the potential seismic source parameters (the earthquake magnitude, the focal depth, the strike angle, dip angle, slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to these parameters, and give several qualitative descriptions of their nonlinear or linear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters, and the interaction effects among the parameters, by means of the extended FAST method.
The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle and the dip angle; the interaction effects between the sensitive parameters are very pronounced at specific offshore sites; and the importance ordering of the same group of parameters differs between offshore sites. These results are helpful for a deeper understanding of the relationship between tsunami wave heights and seismic tsunami source parameters. Keywords: Global sensitivity analysis; Tsunami wave height; Potential seismic tsunami source parameter; Morris method; Extended FAST method
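The Morris screening step used above ranks factors by the statistics of one-at-a-time "elementary effects". A hedged sketch on a toy surrogate (a simplified randomized OAT scheme, not the tsunami model or the full Morris trajectory design):

```python
import numpy as np

def morris_elementary_effects(f, dim, n_traj=50, delta=0.1, seed=0):
    """Mean absolute elementary effect (mu*) and its spread (sigma) per factor.

    EE_i = (f(x + delta*e_i) - f(x)) / delta, over random base points in [0, 1)^dim.
    """
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(dim)]
    for _ in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, size=dim)
        y0 = f(x)
        for i in range(dim):
            xp = x.copy()
            xp[i] += delta
            effects[i].append((f(xp) - y0) / delta)
    effects = np.array(effects)
    mu_star = np.abs(effects).mean(axis=1)  # overall importance
    sigma = effects.std(axis=1)             # nonlinearity / interaction signal
    return mu_star, sigma

# Toy "wave height" surrogate: x0 (magnitude-like, nonlinear) dominates x1 (dip-like, linear)
f = lambda x: 5.0 * x[0] ** 2 + 0.3 * x[1]
mu_star, sigma = morris_elementary_effects(f, 2)
```

Large mu* flags an important factor; large sigma flags nonlinear or interacting behavior, which is the kind of qualitative description Morris provides before the quantitative extended-FAST step.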
Effects of blur and repeated testing on sensitivity estimates with frequency doubling perimetry.
Artes, Paul H; Nicolela, Marcelo T; McCormick, Terry A; LeBlanc, Raymond P; Chauhan, Balwantray C
2003-02-01
To investigate the effect of blur and repeated testing on sensitivity with frequency doubling technology (FDT) perimetry. One eye of 12 patients with glaucoma (mean deviation [MD] mean, -2.5 dB, range +0.5 to -4.3 dB) and 11 normal control subjects underwent six consecutive tests with the FDT N30 threshold program in each of two sessions. In session 1, blur was induced by trial lenses (-6.00, -3.00, 0.00, +3.00, and +6.00 D, in random order). In session 2, only the effects of repeated testing were evaluated. The MD and pattern standard deviation (PSD) indices were evaluated as functions of blur and of test order. By correcting the data of session 1 for the reduction of sensitivity with repeated testing (session 2), the effect of blur on FDT sensitivities was established, and its clinical consequences evaluated on total- and pattern-deviation probability maps. FDT sensitivities decreased with blur (by <0.5 dB/D) and with repeated testing (by approximately 2 dB between the first and sixth tests). Blur and repeated testing independently led to larger numbers of locations with significant total and pattern deviation. Sensitivity reductions were similar in normal control subjects and patients with glaucoma, at central and peripheral test locations and at locations with high and low sensitivities. However, patients with glaucoma showed larger deterioration in the total-deviation-probability maps. To optimize the performance of the device, refractive errors should be corrected and immediate retesting avoided. Further research is needed to establish the cause of sensitivity loss with repeated FDT testing.
Mori, Amani T; Ngalesoni, Frida; Norheim, Ole F; Robberstad, Bjarne
2014-09-15
Dihydroartemisinin-piperaquine (DhP) is highly recommended for the treatment of uncomplicated malaria. This study aims to compare the costs, health benefits and cost-effectiveness of DhP and artemether-lumefantrine (AL) alongside "do-nothing" as a baseline comparator in order to consider the appropriateness of DhP as a first-line anti-malarial drug for children in Tanzania. A cost-effectiveness analysis was performed using a Markov decision model, from a provider's perspective. The study used cost data from Tanzania and secondary effectiveness data from a review of articles from sub-Saharan Africa. Probabilistic sensitivity analysis was used to incorporate uncertainties in the model parameters. In addition, sensitivity analyses were used to test plausible variations of key parameters and the key assumptions were tested in scenario analyses. The model predicts that DhP is more cost-effective than AL, with an incremental cost-effectiveness ratio (ICER) of US$ 12.40 per DALY averted. This result relies on the assumption that compliance to treatment with DhP is higher than that with AL due to its relatively simple once-a-day dosage regimen. When compliance was assumed to be identical for the two drugs, AL was more cost-effective than DhP with an ICER of US$ 12.54 per DALY averted. DhP is, however, slightly more likely to be cost-effective compared to a willingness-to-pay threshold of US$ 150 per DALY averted. Dihydroartemisinin-piperaquine is a very cost-effective anti-malarial drug. The findings support its use as an alternative first-line drug for treatment of uncomplicated malaria in children in Tanzania and other sub-Saharan African countries with similar healthcare infrastructures and epidemiology of malaria.
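The decision rule in such analyses is the incremental cost-effectiveness ratio (ICER) compared against a willingness-to-pay threshold. A minimal sketch with purely illustrative numbers (not the study's data):

```python
def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of strategy A over comparator B:
    extra cost per extra unit of health effect (here, DALYs averted)."""
    return (cost_a - cost_b) / (effect_a - effect_b)

def is_cost_effective(icer_value, wtp_threshold):
    """A strategy is deemed cost-effective if its ICER falls below the threshold."""
    return icer_value <= wtp_threshold

# Hypothetical per-cohort figures: new drug vs comparator, WTP of US$150 per DALY averted
ratio = icer(cost_a=1500.0, effect_a=12.0, cost_b=1000.0, effect_b=2.0)
```

Here the new strategy costs US$500 more but averts 10 additional DALYs, giving an ICER of US$50 per DALY averted, well under a US$150 threshold. A probabilistic sensitivity analysis repeats this calculation over sampled parameter values to estimate the probability of cost-effectiveness.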
Tidal analysis of surface currents in the Porsanger fjord in northern Norway
NASA Astrophysics Data System (ADS)
Stramska, Malgorzata; Jankowski, Andrzej; Cieszyńska, Agata
2016-04-01
In this presentation we describe surface currents in the Porsanger fjord (Porsangerfjorden), located in the European Arctic near the Barents Sea. Our analysis is based on data collected in the summer of 2014 using a High Frequency radar system. Our interest in this fjord comes from the fact that it is a region of high climatic sensitivity. One of our long-term goals is to develop an improved understanding of the ongoing changes and the interactions between this fjord and large-scale atmospheric and oceanic conditions. To better understand these changes, one must first improve knowledge of the physical processes that shape the fjord environment; the present study is a first step in this direction. Our main objective in this presentation is to evaluate the importance of tidal forcing. Tides in the Porsanger fjord are substantial, with a tidal range on the order of 3 meters. Tidal analysis attributes about 99% of the variance in the sea level time series recorded in Honningsvåg to tides. The most important tidal component in the sea level data is M2 (amplitude of ~90 cm). The S2 and N2 components (amplitudes of ~20 cm) also play a significant role in the semidiurnal sea level oscillations, and the most important diurnal component is K1, with an amplitude of about 8 cm. Tidal analysis led us to the conclusion that the most important tidal component in the observed surface currents is also M2, followed by S2. Our results indicate that, in contrast to sea level, only about 10-20% of the variance in surface currents can be attributed to tidal currents; the remaining 80-90% can be credited to wind-induced and geostrophic currents. This work was funded by the Norway Grants (NCBR contract No. 201985, project NORDFLUX). Partial support for MS comes from the Institute of Oceanology (IO PAN).
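Constituent amplitudes like the M2, S2, and K1 values quoted above are typically obtained by least-squares harmonic analysis of the record. A self-contained sketch on a synthetic sea level series (amplitudes chosen to echo the abstract; this is not the authors' analysis code):

```python
import numpy as np

# Periods of well-known tidal constituents, in hours
PERIODS = {"M2": 12.4206, "S2": 12.0, "K1": 23.9345}

def fit_tides(t_hours, eta, periods=PERIODS):
    """Least-squares fit of a cos/sin pair per constituent; returns amplitudes."""
    cols = [np.ones_like(t_hours)]  # mean sea level term
    names = list(periods)
    for p in names:
        w = 2.0 * np.pi / periods[p]
        cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, eta, rcond=None)
    amps = {}
    for k, p in enumerate(names):
        a, b = coef[1 + 2 * k], coef[2 + 2 * k]
        amps[p] = float(np.hypot(a, b))  # amplitude from the cos/sin pair
    return amps

# Synthetic 30-day record, half-hour sampling, M2-dominated (meters)
t = np.arange(0.0, 30 * 24, 0.5)
eta = (0.90 * np.cos(2 * np.pi * t / PERIODS["M2"])
       + 0.20 * np.cos(2 * np.pi * t / PERIODS["S2"] + 1.0)
       + 0.08 * np.cos(2 * np.pi * t / PERIODS["K1"] - 0.5))
amps = fit_tides(t, eta)
```

A month-long record is enough to separate M2 from S2 (their beat period is about 14.8 days), so the fit recovers the input amplitudes closely. The same fit applied to the radar current components, instead of sea level, yields the tidal fraction of current variance.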
Recent activities within the Aeroservoelasticity Branch at the NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Noll, Thomas E.; Perry, Boyd, III; Gilbert, Michael G.
1989-01-01
The objective of research in aeroservoelasticity at the NASA Langley Research Center is to enhance the modeling, analysis, and multidisciplinary design methodologies for obtaining multifunction digital control systems for application to flexible flight vehicles. Recent accomplishments are discussed, and a status report on current activities within the Aeroservoelasticity Branch is presented. In the area of modeling, improvements to the Minimum-State Method of approximating unsteady aerodynamics are shown to provide precise, low-order aeroservoelastic models for design and simulation activities. Analytical methods based on Matched Filter Theory and Random Process Theory to provide efficient and direct predictions of the critical gust profile and the time-correlated gust loads for linear structural design considerations are also discussed. Two research projects leading towards improved design methodology are summarized. The first program is developing an integrated structure/control design capability based on hierarchical problem decomposition, multilevel optimization and analytical sensitivities. The second program provides procedures for obtaining low-order, robust digital control laws for aeroelastic applications. In terms of methodology validation and application the current activities associated with the Active Flexible Wing project are reviewed.
NASA Astrophysics Data System (ADS)
Meng, Fei; Shi, Peng; Karimi, Hamid Reza; Zhang, Hui
2016-02-01
The main objective of this paper is to investigate the sensitivity analysis and optimal design of a proportional solenoid valve (PSV) operated pressure reducing valve (PRV) for heavy-duty automatic transmission clutch actuators. The nonlinear electro-hydraulic valve model is developed based on fluid dynamics. In order to implement the sensitivity analysis and optimization for the PRV, the PSV model is validated by comparing the results with data obtained from a real test-bench. The sensitivity of the PSV pressure response with regard to the structural parameters is investigated by using Sobol's method. Finally, simulations and experimental investigations are performed on the optimized prototype and the results reveal that the dynamical characteristics of the valve have been improved in comparison with the original valve.
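Sobol's method, as used for the valve parameters above, decomposes output variance into per-factor contributions. A minimal pick-freeze estimator of first-order indices on a toy surrogate (the electro-hydraulic valve model itself is not reproduced here):

```python
import numpy as np

def sobol_first_order(f, dim, n=20000, seed=0):
    """First-order Sobol indices via a pick-freeze (Saltelli-style) estimator.

    S_i = E[y_A * (f(AB_i) - y_B)] / Var(y), where AB_i takes column i from
    sample A and all other columns from an independent sample B.
    """
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    yA, yB = f(A), f(B)
    var = yA.var()
    S = np.empty(dim)
    for i in range(dim):
        ABi = B.copy()
        ABi[:, i] = A[:, i]  # freeze factor i from A, resample the rest
        S[i] = np.mean(yA * (f(ABi) - yB)) / var
    return S

# Toy response surrogate: x0 matters most, x1 less, x2 not at all
f = lambda X: 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.0 * X[:, 2]
S = sobol_first_order(f, 3)
```

For this additive model the exact indices are 0.9, 0.1, and 0, so the estimator's ranking identifies which structural parameters deserve design attention.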
Arrindell, Willem A; Urbán, Róbert; Carrozzino, Danilo; Bech, Per; Demetrovics, Zsolt; Roozen, Hendrik G
2017-09-01
To fully understand the dimensionality of an instrument in a certain population, rival bi-factor models should be routinely examined and tested against oblique first-order and higher-order structures. The present study is among the very few studies that have carried out such a comparison in relation to the Symptom Checklist-90-R. In doing so, it utilized a sample comprising 2593 patients with substance use and impulse control disorders. The study also included a test of a one-dimensional model of general psychological distress. Oblique first-order factors were based on the original a priori 9-dimensional model advanced by Derogatis (1977); and on an 8-dimensional model proposed by Arrindell and Ettema (2003)-Agoraphobia, Anxiety, Depression, Somatization, Cognitive-performance deficits, Interpersonal sensitivity and mistrust, Acting-out hostility, and Sleep difficulties. Taking individual symptoms as input, three higher-order models were tested with at the second-order levels either (1) General psychological distress; (2) 'Panic with agoraphobia', 'Depression' and 'Extra-punitive behavior'; or (3) 'Irritable-hostile depression' and 'Panic with agoraphobia'. In line with previous studies, no support was found for the one-factor model. Bi-factor models were found to fit the dataset best relative to the oblique first-order and higher-order models. However, oblique first-order and higher-order factor models also fit the data fairly well in absolute terms. Higher-order solution (2) provided support for R.F. Krueger's empirical model of psychopathology which distinguishes between fear, distress, and externalizing factors (Krueger, 1999). The higher-order model (3), which combines externalizing and distress factors (Irritable-hostile depression), fit the data numerically equally well. Overall, findings were interpreted as supporting the hypothesis that the prevalent forms of symptomatology addressed have both important common and unique features. 
Proposals were made to improve the Depression subscale, as its scores reflect the common distress construct captured by the severity (total) scale more than the specific construct the subscale purports to assess: symptoms of depression. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Vergnenegre, Alain; Massuti, Bartomeu; de Marinis, Filippo; Carcereny, Enric; Felip, Enriqueta; Do, Pascal; Sanchez, Jose Miguel; Paz-Ares, Luis; Chouaid, Christos; Rosell, Rafael
2016-06-01
The cost-effectiveness of first-line tyrosine kinase inhibitor therapy in epidermal growth factor receptor gene (EGFR)-mutated advanced-stage non-small cell lung cancer (NSCLC) is poorly documented. We therefore conducted a cost-effectiveness analysis of first-line treatment with erlotinib versus standard chemotherapy in European patients with advanced-stage EGFR-mutated NSCLC who were enrolled in the European Erlotinib versus Chemotherapy trial. The European Erlotinib versus Chemotherapy study was a multicenter, open-label, randomized phase III trial performed mainly in Spain, France, and Italy. We based our economic analysis on clinical data and data on resource consumption (drugs, drug administration, adverse events, and second-line treatments) collected during this trial. Utility values were derived from the literature. Incremental cost-effectiveness ratios were calculated for the first-line treatment phase and for the overall strategy from the perspective of the three participating countries. Sensitivity analyses were performed by selecting the main cost drivers. Compared with standard first-line chemotherapy, the first-line treatment with erlotinib was cost saving (€7807, €17,311, and €19,364 for Spain, Italy and France, respectively) and yielded a gain of 0.117 quality-adjusted life-years. A probabilistic sensitivity analysis indicated that, given a willingness to pay at least €90,000 for 1 quality-adjusted life-year, the probability that a strategy of first-line erlotinib would be cost-effective was 100% in France, 100% in Italy, and 99.8% in Spain. This economic analysis shows that first-line treatment with erlotinib, versus standard chemotherapy, is a dominant strategy for EGFR-mutated advanced-stage NSCLC in three European countries. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-03-04
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster-Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster-Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. 
The WLC operation yielded poor results.
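Of the three GIS-MCDA methods, weighted linear combination is the simplest, and the Monte Carlo step amounts to propagating uncertain criterion weights through it. A hedged sketch with made-up criterion scores and Dirichlet-sampled weights (the AHP-derived weights and real landslide criteria are not reproduced):

```python
import numpy as np

def wlc(criteria, weights):
    """Weighted linear combination: score = sum_j w_j * c_j, weights normalized to 1."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return np.asarray(criteria, float) @ w

def wlc_uncertainty(criteria, weight_samples):
    """Monte Carlo over weight vectors -> per-cell mean score and its spread."""
    scores = np.array([wlc(criteria, w) for w in weight_samples])
    return scores.mean(axis=0), scores.std(axis=0)

# Two map cells scored on three standardized criteria (e.g. slope, lithology, land use)
cells = np.array([[0.9, 0.7, 0.8],
                  [0.2, 0.3, 0.1]])
rng = np.random.default_rng(1)
w_samples = rng.dirichlet([5.0, 3.0, 2.0], size=1000)  # uncertain expert weights
mean, std = wlc_uncertainty(cells, w_samples)
```

The per-cell standard deviation expresses how much the susceptibility class depends on the weighting, which is the kind of weight-driven error propagation the article examines.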
NASA Astrophysics Data System (ADS)
Lauvernet, Claire; Noll, Dorothea; Muñoz-Carpena, Rafael; Carluer, Nadia
2014-05-01
In Europe, environmental agencies regularly report significant concentrations of contaminants in surface water, partly due to pesticide applications. Vegetative filter strips (VFS), often located along rivers, are a common tool among other buffer zones to reduce non-point-source pollution of water by reducing surface runoff. However, to be efficient they need to be adapted to the agro-pedo-climatic conditions, both in position and in size. This is one of the roles of the TOPPS-PROWADIS project, which involves European experts and stakeholders in developing and recommending Best Management Practices (BMPs) to reduce pesticide transfer by drift or runoff in several European countries. In this context, Irstea developed a guide accompanying the use of different tools, which supports VFS design by simulating their efficiency at limiting transfers. The user must define both a scenario of incoming surface runoff and the buffer zone characteristics. First, the contributive zone (surface, length, slope) is derived from the topography by a GIS tool, HydroDem. Second, the runoff hydrograph entering the buffer zone is generated from a rainfall hyetograph typical of the area, using Curve Number theory and taking soil characteristics into account. The VFS's optimal width is then deduced for a given target efficiency (for example, 70% runoff reduction) using the VFSMOD model, which simulates the transfer of water, suspended matter (and pesticides) within a vegetative filter strip. The results also indicate whether this kind of buffer zone is relevant in that situation (if the required width is too large, another type of buffer zone, for example a constructed wetland, may be more appropriate). This method assumes that the user supplies substantial field knowledge and data, which are not always readily available.
To compensate for the lack of real data, a set of virtual scenarios was tested, intended to cover a large range of agro-pedo-climatic conditions in Europe and considering both the upslope agricultural field and the VFS characteristics. These scenarios are based on: two types of climate (north and south-west of France), different rainfall intensities and durations, different hillslope lengths and slopes, different moisture conditions, four soil types (silt loam, sandy loam, clay loam, sandy clay loam) and two crops (wheat and corn) for the contributive area, and two water table depths (1 m and 2.5 m) and four soil types for the VFS. The sizing method was applied to all these scenarios, and a sensitivity analysis of the VFS optimal length was performed for all input parameters in order to understand their influence and to identify those requiring special care. Based on that sensitivity analysis, a metamodel was developed. The idea is to simplify the whole toolchain and make it possible to perform buffer sizing with a single tool and a smaller set of parameters, given the information available to end users. We first compared several mathematical methods for computing the metamodel, and then validated them on an agricultural watershed with real data in the north-west of France.
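The runoff-generation step relies on the standard SCS Curve Number relation, which can be sketched as follows (illustrative CN values, not those used in the scenario set):

```python
def scs_runoff(p_mm, cn, lam=0.2):
    """SCS Curve Number direct runoff Q (mm) for storm rainfall P (mm).

    Potential retention S = 25400/CN - 254 (mm); initial abstraction Ia = lam*S.
    Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0.
    """
    s = 25400.0 / cn - 254.0
    ia = lam * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Illustrative comparison: 50 mm storm on a permeable vs a sealed/compacted soil
q_permeable = scs_runoff(50.0, cn=70)
q_compacted = scs_runoff(50.0, cn=90)
```

A higher curve number (wetter antecedent conditions, heavier soils, less cover) shifts more of the storm into the runoff hydrograph entering the VFS, which is why soil type and moisture conditions feature prominently in the sensitivity analysis.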