Sample records for second-order sensitivity analysis

  1. Optimum sensitivity derivatives of objective functions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.

    1983-01-01

    The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression is derived whose application to linear programming is presented.

  2. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.

  3. Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cacuci, Dan G.; Favorite, Jeffrey A.

    This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.

  4. Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses

    DOE PAGES

    Cacuci, Dan G.; Favorite, Jeffrey A.

    2018-04-06

    This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.
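
    The 2nd-ASAM results above are benchmarked against central finite differences. As a generic sketch of that benchmark (not the PARTISN/2nd-ASAM workflow, and with a hypothetical stand-in for the detector response), central differencing needs 2n runs for the n first-order sensitivities plus roughly 2n(n-1) further runs for the mixed second-order ones, which is the kind of scaling behind the run counts quoted above.

```python
import numpy as np

def response(p):
    """Hypothetical smooth stand-in for a detector-response calculation."""
    return np.exp(-p[0]) / p[1] ** 2 + np.sin(p[2]) * p[0]

def central_diff_sensitivities(f, p, h=1e-3):
    """First- and second-order sensitivities by central differences.

    Returns (gradient, Hessian, number of model evaluations):
    1 base run, 2n runs for the gradient and Hessian diagonal,
    and 4 runs per off-diagonal pair, i.e. 2n(n-1) more.
    """
    p = np.asarray(p, dtype=float)
    n = p.size
    grad, hess = np.zeros(n), np.zeros((n, n))
    f0, n_evals = f(p), 1

    fp, fm = np.zeros(n), np.zeros(n)
    for i in range(n):
        e = np.zeros(n); e[i] = h
        fp[i], fm[i] = f(p + e), f(p - e)
        n_evals += 2
        grad[i] = (fp[i] - fm[i]) / (2.0 * h)
        hess[i, i] = (fp[i] - 2.0 * f0 + fm[i]) / h ** 2

    for i in range(n):
        for j in range(i + 1, n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            hess[i, j] = hess[j, i] = (
                f(p + ei + ej) - f(p + ei - ej)
                - f(p - ei + ej) + f(p - ei - ej)) / (4.0 * h ** 2)
            n_evals += 4

    return grad, hess, n_evals

grad, hess, n_evals = central_diff_sensitivities(response, [1.0, 2.0, 0.5])
print(n_evals)   # 19 runs for n = 3; for n = 18 this is already 1 + 36 + 612 = 649
```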

  5. First- and Second-Order Sensitivity Analysis of a P-Version Finite Element Equation Via Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    1998-01-01

    Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among many methods available for sensitivity analysis, automatic differentiation has been proven through many applications in fluid dynamics and structural mechanics to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require a large amount of memory. This project will apply an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order thermal derivatives, respectively. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirement, computational efficiency, and accuracy.
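
    ADIFOR transforms Fortran source; as a language-agnostic illustration of the forward-mode idea it automates, the sketch below propagates dual numbers through a small, hypothetical function in Python. The memory and run-time costs noted above arise because every intermediate quantity of a large analysis code carries such extra derivative components.

```python
import math

class Dual:
    """Minimal dual number a + b*eps with eps**2 = 0 (forward-mode AD)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# Hypothetical response: every arithmetic operation also propagates a derivative.
def f(x, y):
    return x * x * y + sin(x)

x, y = Dual(1.5, 1.0), Dual(2.0, 0.0)   # seed the derivative w.r.t. x
out = f(x, y)
print(out.val, out.der)                  # out.der equals 2*x*y + cos(x)
```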

  6. Sensitivity analysis of periodic errors in heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
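
    As a generic illustration of the variance-based step described above (using a placeholder model and unit-interval inputs rather than the interferometer error model), the sketch below estimates first-order and total Sobol' indices with the standard Saltelli/Jansen Monte Carlo estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Placeholder for the periodic-error model: y = f(x1..xk)."""
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

k, N = 3, 20000
A = rng.uniform(0.0, 1.0, size=(N, k))   # two independent sample matrices
B = rng.uniform(0.0, 1.0, size=(N, k))

fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

S1, ST = np.zeros(k), np.zeros(k)
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # A with column i taken from B
    fABi = model(ABi)
    S1[i] = np.mean(fB * (fABi - fA)) / var          # first-order (Saltelli 2010)
    ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-order (Jansen 1999)

print("first-order:", S1.round(3))
print("total-order:", ST.round(3))
```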

  7. Cascaded Amplitude Modulations in Sound Texture Perception

    PubMed Central

    McWalter, Richard; Dau, Torsten

    2017-01-01

    Sound textures, such as crackling fire or chirping crickets, represent a broad class of sounds defined by their homogeneous temporal structure. It has been suggested that the perception of texture is mediated by time-averaged summary statistics measured from early auditory representations. In this study, we investigated the perception of sound textures that contain rhythmic structure, specifically second-order amplitude modulations that arise from the interaction of different modulation rates, previously described as “beating” in the envelope-frequency domain. We developed an auditory texture model that utilizes a cascade of modulation filterbanks that capture the structure of simple rhythmic patterns. The model was examined in a series of psychophysical listening experiments using synthetic sound textures—stimuli generated using time-averaged statistics measured from real-world textures. In a texture identification task, our results indicated that second-order amplitude modulation sensitivity enhanced recognition. Next, we examined the contribution of the second-order modulation analysis in a preference task, where the proposed auditory texture model was preferred over a range of model deviants that lacked second-order modulation rate sensitivity. Lastly, the discriminability of textures that included second-order amplitude modulations appeared to be perceived using a time-averaging process. Overall, our results demonstrate that the inclusion of second-order modulation analysis generates improvements in the perceived quality of synthetic textures compared to the first-order modulation analysis considered in previous approaches. PMID:28955191

  8. Cascaded Amplitude Modulations in Sound Texture Perception.

    PubMed

    McWalter, Richard; Dau, Torsten

    2017-01-01

    Sound textures, such as crackling fire or chirping crickets, represent a broad class of sounds defined by their homogeneous temporal structure. It has been suggested that the perception of texture is mediated by time-averaged summary statistics measured from early auditory representations. In this study, we investigated the perception of sound textures that contain rhythmic structure, specifically second-order amplitude modulations that arise from the interaction of different modulation rates, previously described as "beating" in the envelope-frequency domain. We developed an auditory texture model that utilizes a cascade of modulation filterbanks that capture the structure of simple rhythmic patterns. The model was examined in a series of psychophysical listening experiments using synthetic sound textures-stimuli generated using time-averaged statistics measured from real-world textures. In a texture identification task, our results indicated that second-order amplitude modulation sensitivity enhanced recognition. Next, we examined the contribution of the second-order modulation analysis in a preference task, where the proposed auditory texture model was preferred over a range of model deviants that lacked second-order modulation rate sensitivity. Lastly, the discriminability of textures that included second-order amplitude modulations appeared to be perceived using a time-averaging process. Overall, our results demonstrate that the inclusion of second-order modulation analysis generates improvements in the perceived quality of synthetic textures compared to the first-order modulation analysis considered in previous approaches.

  9. Differential effects of exogenous and endogenous attention on second-order texture contrast sensitivity

    PubMed Central

    Barbot, Antoine; Landy, Michael S.; Carrasco, Marisa

    2012-01-01

    The visual system can use a rich variety of contours to segment visual scenes into distinct perceptually coherent regions. However, successfully segmenting an image is a computationally expensive process. Previously we have shown that exogenous attention—the more automatic, stimulus-driven component of spatial attention—helps extract contours by enhancing contrast sensitivity for second-order, texture-defined patterns at the attended location, while reducing sensitivity at unattended locations, relative to a neutral condition. Interestingly, the effects of exogenous attention depended on the second-order spatial frequency of the stimulus. At parafoveal locations, attention enhanced second-order contrast sensitivity to relatively high, but not to low second-order spatial frequencies. In the present study we investigated whether endogenous attention—the more voluntary, conceptually-driven component of spatial attention—affects second-order contrast sensitivity, and if so, whether its effects are similar to those of exogenous attention. To that end, we compared the effects of exogenous and endogenous attention on the sensitivity to second-order, orientation-defined, texture patterns of either high or low second-order spatial frequencies. The results show that, like exogenous attention, endogenous attention enhances second-order contrast sensitivity at the attended location and reduces it at unattended locations. However, whereas the effects of exogenous attention are a function of the second-order spatial frequency content, endogenous attention affected second-order contrast sensitivity independent of the second-order spatial frequency content. This finding supports the notion that both exogenous and endogenous attention can affect second-order contrast sensitivity, but that endogenous attention is more flexible, benefitting performance under different conditions. PMID:22895879

  10. The effect of displacement on sensitivity to first- and second-order global motion in 5-year-olds and adults.

    PubMed

    Ellemberg, D; Lewis, T L; Maurer, D; Lee, B; Ledgeway, T; Guilemot, J P; Lepore, F

    2010-01-01

    We compared the development of sensitivity to first- versus second-order global motion in 5-year-olds (n=24) and adults (n=24) tested at three displacements (0.1, 0.5 and 1.0 degrees). Sensitivity was measured with Random-Gabor Kinematograms (RGKs) formed with luminance-modulated (first-order) or contrast-modulated (second-order) concentric Gabor patterns. Five-year-olds were less sensitive than adults to the direction of both first- and second-order global motion at every displacement tested. In addition, the immaturity was smallest at the smallest displacement, which required the least spatial integration, and smaller for first-order than for second-order global motion at the middle displacement. The findings suggest that the development of sensitivity to global motion is limited by the development of spatial integration and by different rates of development of sensitivity to first- versus second-order signals.

  11. First- and second-order sensitivity analysis of linear and nonlinear structures

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Mroz, Z.

    1986-01-01

    This paper employs the principle of virtual work to derive sensitivity derivatives of structural response with respect to stiffness parameters using both direct and adjoint approaches. The computations required are based on additional load conditions characterized by imposed initial strains, body forces, or surface tractions. As such, they are equally applicable to numerical or analytical solution techniques. The relative efficiency of various approaches for calculating first and second derivatives is assessed. It is shown that for the evaluation of second derivatives the most efficient approach is one that makes use of both the first-order sensitivities and adjoint vectors. Two example problems are used for demonstrating the various approaches.
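
    The paper's conclusion that second derivatives are cheapest when both the first-order sensitivities and the adjoint vectors are reused can be checked on a small linear-static analogue (a generic K(p) u = f system with response g = c^T u, not the virtual-work formulation of the paper). With the adjoint vector from K^T lambda = c and the first-order state sensitivities u_i already available, every second derivative follows from matrix-vector products only: d2g/dp_i dp_j = -lambda^T (K_i u_j + K_j u_i) when K is linear in the parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 2                                   # dofs, design parameters

K0 = np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, -1.0), 1) \
     + np.diag(np.full(n - 1, -1.0), -1)
Kp = [rng.standard_normal((n, n)) for _ in range(m)]
Kp = [0.05 * (M + M.T) for M in Kp]           # symmetric parameter matrices
f = np.ones(n)
c = rng.standard_normal(n)

def K(p):                                     # stiffness, linear in p
    return K0 + sum(pi * Ki for pi, Ki in zip(p, Kp))

p = np.array([0.3, -0.2])
u = np.linalg.solve(K(p), f)                  # state
lam = np.linalg.solve(K(p).T, c)              # adjoint: K^T lam = c

# First order: du/dp_i = -K^{-1} K_i u,  dg/dp_i = -lam^T K_i u
du = [np.linalg.solve(K(p), -Ki @ u) for Ki in Kp]
dg = np.array([-lam @ (Ki @ u) for Ki in Kp])

# Second order from adjoint + first-order sensitivities (K linear in p)
d2g = np.array([[-lam @ (Kp[i] @ du[j] + Kp[j] @ du[i]) for j in range(m)]
                for i in range(m)])

# Finite-difference check of dg/dp_0
h = 1e-6
g = lambda q: c @ np.linalg.solve(K(q), f)
print(dg[0], (g(p + [h, 0]) - g(p - [h, 0])) / (2 * h))
print(d2g)
```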

  12. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2001-01-01

    An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric-shape, angle-of-attack, and freestream Mach number.
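
    In modern automatic-differentiation terms, the forward-plus-reverse recipe above is forward-over-reverse differentiation. The sketch below shows the same pattern on a toy functional using JAX instead of ADIFOR and a flow solver: one reverse pass yields the gradient, and forward-mode differentiation of that gradient yields the complete Hessian without further iteration.

```python
import jax
import jax.numpy as jnp

def lift_like(x):
    """Toy scalar functional standing in for a lift or drag coefficient."""
    return jnp.sin(x[0]) * x[1] ** 2 + jnp.exp(0.1 * x[0] * x[2])

grad_f = jax.grad(lift_like)           # reverse (adjoint) mode: full gradient
hess_f = jax.jacfwd(grad_f)            # forward mode over the gradient: Hessian

x0 = jnp.array([0.3, 1.2, 2.0])
print(grad_f(x0))                      # first-order sensitivities
print(hess_f(x0))                      # complete, symmetric Hessian matrix
```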

  13. Pavlovian second-order conditioned analgesia.

    PubMed

    Ross, R T

    1986-01-01

    Three experiments with rat subjects assessed conditioned analgesia in a Pavlovian second-order conditioning procedure by using inhibition of responding to thermal stimulation as an index of pain sensitivity. In Experiment 1, rats receiving second-order conditioning showed longer response latencies during a test of pain sensitivity in the presence of the second-order conditioned stimulus (CS) than rats receiving appropriate control procedures. Experiment 2 found that extinction of the first-order CS had no effect on established second-order conditioned analgesia. Experiment 3 evaluated the effects of post second-order conditioning pairings of morphine and the shock unconditioned stimulus (US). Rats receiving paired morphine-shock presentations showed significantly shorter response latencies during a hot-plate test of pain sensitivity in the presence of the second-order CS than did groups of rats receiving various control procedures; second-order analgesia was attenuated. These data extend the associative account of conditioned analgesia to second-order conditioning situations and are discussed in terms of the mediation of both first- and second-order analgesia by an association between the CS and a representation or expectancy of the US, which may directly activate endogenous pain inhibition systems.

  14. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1994-01-01

    The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.

  15. (U) Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses Using Ray-Tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Favorite, Jeffrey A.

    The Second-Level Adjoint Sensitivity System (2nd-LASS) that yields the second-order sensitivities of a response of uncollided particles with respect to isotope densities, cross sections, and source emission rates is derived in Refs. 1 and 2. In Ref. 2, we solved problems for the uncollided leakage from a homogeneous sphere and a multiregion cylinder using the PARTISN multigroup discrete-ordinates code. In this memo, we derive solutions of the 2nd-LASS for the particular case when the response is a flux or partial current density computed at a single point on the boundary, and the inner products are computed using ray-tracing. Both the PARTISN approach and the ray-tracing approach are implemented in a computer code, SENSPG. The next section of this report presents the equations of the 1st- and 2nd-LASS for uncollided particles and the first- and second-order sensitivities that use the solutions of the 1st- and 2nd-LASS. Section III presents solutions of the 1st- and 2nd-LASS equations for the case of ray-tracing from a detector point. Section IV presents specific solutions of the 2nd-LASS and derives the ray-trace form of the inner products needed for second-order sensitivities. Numerical results for the total leakage from a homogeneous sphere are presented in Sec. V and for the leakage from one side of a two-region slab in Sec. VI. Section VII is a summary and conclusions.
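
    For orientation, the sketch below uses the simplest uncollided-flux kernel, an isotropic point source viewed through a homogeneous medium, phi = S exp(-Sigma_t r) / (4 pi r^2), for which the first- and second-order sensitivities to the total cross section are analytic. It is only a toy analogue of the ray-traced inner products described in the memo, with made-up parameter values.

```python
import numpy as np

def uncollided_flux(S, sigma_t, r):
    """Uncollided flux at distance r from an isotropic point source S
    in a homogeneous medium with total macroscopic cross section sigma_t."""
    return S * np.exp(-sigma_t * r) / (4.0 * np.pi * r**2)

S, sigma_t, r = 1.0e6, 0.25, 10.0      # hypothetical source, cm^-1, cm

phi = uncollided_flux(S, sigma_t, r)
dphi_analytic = -r * phi               # first-order sensitivity d(phi)/d(sigma_t)
d2phi_analytic = r**2 * phi            # second-order sensitivity

# Central-difference check of both derivatives
h = 1e-5
fp = uncollided_flux(S, sigma_t + h, r)
fm = uncollided_flux(S, sigma_t - h, r)
print(dphi_analytic, (fp - fm) / (2 * h))
print(d2phi_analytic, (fp - 2 * phi + fm) / h**2)
```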

  16. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2003-01-01

    An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.

  17. The Development of Expert Face Processing: Are Infants Sensitive to Normal Differences in Second-Order Relational Information?

    ERIC Educational Resources Information Center

    Hayden, Angela; Bhatt, Ramesh S.; Reed, Andrea; Corbly, Christine R.; Joseph, Jane E.

    2007-01-01

    Sensitivity to second-order relational information (i.e., spatial relations among features such as the distance between eyes) is a vital part of achieving expertise with face processing. Prior research is unclear on whether infants are sensitive to second-order differences seen in typical human populations. In the current experiments, we examined…

  18. Kinematic sensitivity of robot manipulators

    NASA Technical Reports Server (NTRS)

    Vuskovic, Marko I.

    1989-01-01

    Kinematic sensitivity vectors and matrices for open-loop, n degrees-of-freedom manipulators are derived. First-order sensitivity vectors are defined as partial derivatives of the manipulator's position and orientation with respect to its geometrical parameters. The four-parameter kinematic model is considered, as well as the five-parameter model in case of nominally parallel joint axes. Sensitivity vectors are expressed in terms of coordinate axes of manipulator frames. Second-order sensitivity vectors, the partial derivatives of first-order sensitivity vectors, are also considered. It is shown that second-order sensitivity vectors can be expressed as vector products of the first-order sensitivity vectors.

  19. Processing Deficits of Motion of Contrast-Modulated Gratings in Anisometropic Amblyopia

    PubMed Central

    Liu, Zhongjian; Hu, Xiaopeng; Yu, Yong-Qiang; Zhou, Yifeng

    2014-01-01

    Several studies have indicated substantial processing deficits for static second-order stimuli in amblyopia. However, less is known about the perception of second-order moving gratings. To investigate this issue, we measured the contrast sensitivity for second-order (contrast-modulated) moving gratings in seven anisometropic amblyopes and ten normal controls. The measurements were performed with non-equated carriers and a series of equated carriers. For comparison, the sensitivity for first-order motion and static second-order stimuli was also measured. Most of the amblyopic eyes (AEs) showed reduced sensitivity for second-order moving gratings relative to their non-amblyopic eyes (NAEs) and the dominant eyes (CEs) of normal control subjects, even when the detectability of the noise carriers was carefully controlled, suggesting substantial processing deficits of motion of contrast-modulated gratings in anisometropic amblyopia. In contrast, the non-amblyopic eyes of the anisometropic amblyopes were relatively spared. As a group, NAEs showed statistically comparable performance to CEs. We also found that contrast sensitivity for static second-order stimuli was strongly impaired in AEs and part of the NAEs of anisometropic amblyopes, consistent with previous studies. In addition, some amblyopes showed impaired performance in perception of static second-order stimuli but not in that of second-order moving gratings. These results may suggest a dissociation between the processing of static and moving second-order gratings in anisometropic amblyopia. PMID:25409477

  20. An investigation of using an RQP based method to calculate parameter sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many of our modern day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second order information about the Lagrangian at the current point, and (2) the estimates assume no change in the active set of constraints. The first of these two problems is addressed here and a new algorithm is proposed that does not require explicit calculation of second order information.

  1. Computer program for analysis of imperfection sensitivity of ring stiffened shells of revolution

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1971-01-01

    A FORTRAN 4 digital computer program is presented for the initial postbuckling and imperfection sensitivity analysis of bifurcation buckling modes for ring-stiffened orthotropic multilayered shells of revolution. The boundary value problem for the second-order contribution to the buckled state was solved by the forward integration technique using the Runge-Kutta method. The effects of nonlinear prebuckling states and live pressure loadings are included.

  2. Extensions and applications of a second-order landsurface parameterization

    NASA Technical Reports Server (NTRS)

    Andreou, S. A.; Eagleson, P. S.

    1983-01-01

    Extensions and applications of a second-order land surface parameterization proposed by Andreou and Eagleson are developed. Procedures for evaluating the near-surface storage depth used in one-cell land surface parameterizations are suggested and tested by using the model. A sensitivity analysis with respect to the key soil parameters is performed. A case study involving comparison with an "exact" numerical model and another simplified parameterization, under very dry climatic conditions and for two different soil types, is also incorporated.

  3. Practical considerations for a second-order directional hearing aid microphone system

    NASA Astrophysics Data System (ADS)

    Thompson, Stephen C.

    2003-04-01

    First-order directional microphone systems for hearing aids have been available for several years. Such a system uses two microphones and has a theoretical maximum free-field directivity index (DI) of 6.0 dB. A second-order microphone system using three microphones could provide a theoretical increase in free-field DI to 9.5 dB. These theoretical maximum DI values assume that the microphones have exactly matched sensitivities at all frequencies of interest. In practice, the individual microphones in the hearing aid always have slightly different sensitivities. For the small microphone separation necessary to fit in a hearing aid, these sensitivity matching errors degrade the directivity from the theoretical values, especially at low frequencies. This paper shows that, for first-order systems the directivity degradation due to sensitivity errors is relatively small. However, for second-order systems with practical microphone sensitivity matching specifications, the directivity degradation below 1 kHz is not tolerable. A hybrid order directive system is proposed that uses first-order processing at low frequencies and second-order directive processing at higher frequencies. This hybrid system is suggested as an alternative that could provide improved directivity index in the frequency regions that are important to speech intelligibility.
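
    A rough numerical check of the mismatch argument is sketched below for an idealized two-element (first-order) endfire differential pair in the free field, with the rear microphone's sensitivity off by a small factor. The spacing, delay, and mismatch values are illustrative rather than taken from the paper, and the three-microphone second-order case compounds the low-frequency degradation shown here.

```python
import numpy as np

c = 343.0            # speed of sound, m/s
d = 0.01             # port spacing, m (illustrative)
tau = d / c          # internal delay chosen for a cardioid pattern
eps = 0.02           # ~2% sensitivity mismatch on the rear microphone

def directivity_index(f, eps):
    """Free-field DI (dB) of a two-element endfire differential pair."""
    w = 2.0 * np.pi * f
    theta = np.linspace(0.0, np.pi, 4001)
    H = 1.0 - (1.0 + eps) * np.exp(-1j * w * (tau + d * np.cos(theta) / c))
    p2 = np.abs(H) ** 2
    # power response averaged over the sphere (pattern is axisymmetric)
    mean_p2 = 0.5 * np.sum(p2 * np.sin(theta)) * (theta[1] - theta[0])
    return 10.0 * np.log10(p2[0] / mean_p2)

for f in (125.0, 500.0, 2000.0):
    print(f, round(directivity_index(f, 0.0), 2), round(directivity_index(f, eps), 2))
```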

  4. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.

  5. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
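
    The first- and second-order moment approximations referred to above can be sketched generically, with a scalar toy function standing in for the CFD output and independent normal inputs as stated in the abstract: the output mean picks up a second-order correction from the diagonal second derivatives, the output variance is built from first-order derivatives, and both are checked against Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    """Toy stand-in for a CFD output as a function of the uncertain inputs."""
    x = np.asarray(x)
    return x[..., 0] ** 2 * np.sin(x[..., 1]) + 3.0 * x[..., 2]

mu = np.array([1.0, 0.6, 2.0])       # input means
sigma = np.array([0.05, 0.02, 0.1])  # input standard deviations (independent, normal)

# First and diagonal second derivatives at the mean by central differences
h = 1e-4
f0 = f(mu)
grad, diag2 = np.zeros(3), np.zeros(3)
for i in range(3):
    e = np.zeros(3); e[i] = h
    fp, fm = f(mu + e), f(mu - e)
    grad[i] = (fp - fm) / (2.0 * h)
    diag2[i] = (fp - 2.0 * f0 + fm) / h ** 2

mean_first = f0                                      # first-order mean
mean_second = f0 + 0.5 * np.sum(diag2 * sigma ** 2)  # second-order mean correction
std_first = np.sqrt(np.sum(grad ** 2 * sigma ** 2))  # first-order standard deviation

# Monte Carlo reference
X = mu + sigma * rng.standard_normal((200_000, 3))
fX = f(X)
print(mean_first, mean_second, fX.mean())
print(std_first, fX.std())
```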

  6. Negative axial strain sensitivity in gold-coated eccentric fiber Bragg gratings

    PubMed Central

    Chah, Karima; Kinet, Damien; Caucheteur, Christophe

    2016-01-01

    A new dual temperature and strain sensor has been designed using eccentric second-order fiber Bragg gratings produced in standard single-mode optical fiber by a point-by-point direct writing technique with tight focusing of 800 nm femtosecond laser pulses. With a thin gold coating at the grating location, we experimentally show that such gratings exhibit a transmitted amplitude spectrum composed of the Bragg and cladding-mode resonances that extends over a wide spectral range exceeding one octave. An overlap of the first-order and second-order spectra is then observed. High-order cladding modes belonging to the first-order Bragg resonance coupling lie close to the second-order Bragg resonance; they show a negative axial strain sensitivity (−0.55 pm/με) compared to the Bragg resonance (1.20 pm/με) and the same temperature sensitivity (10.6 pm/°C). With this well-conditioned system, temperature and strain can be determined independently with high sensitivity, in a wavelength range limited to a few nanometers. PMID:27901059
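
    Using the sensitivities quoted above, the dual measurement reduces to a well-conditioned 2 x 2 linear system. The sketch below inverts it for an assumed pair of wavelength shifts; the shift values are invented for illustration, only the coefficients come from the abstract.

```python
import numpy as np

# Sensitivity matrix from the abstract:
# rows: Bragg resonance, high-order cladding mode
# columns: strain (pm per microstrain), temperature (pm per degC)
K = np.array([[ 1.20, 10.6],
              [-0.55, 10.6]])

# Hypothetical measured wavelength shifts (pm) for the two resonances
dlam = np.array([150.0, 45.0])

strain, dT = np.linalg.solve(K, dlam)
print(f"strain = {strain:.1f} microstrain, dT = {dT:.2f} degC")
print("condition number:", np.linalg.cond(K))  # opposite-sign strain terms keep this low
```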

  7. Perturbation Selection and Local Influence Analysis for Nonlinear Structural Equation Model

    ERIC Educational Resources Information Center

    Chen, Fei; Zhu, Hong-Tu; Lee, Sik-Yum

    2009-01-01

    Local influence analysis is an important statistical method for studying the sensitivity of a proposed model to model inputs. One of its important issues is related to the appropriate choice of a perturbation vector. In this paper, we develop a general method to select an appropriate perturbation vector and a second-order local influence measure…

  8. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.

  9. Spatial interactions reveal inhibitory cortical networks in human amblyopia.

    PubMed

    Wong, Erwin H; Levi, Dennis M; McGraw, Paul V

    2005-10-01

    Humans with amblyopia have a well-documented loss of sensitivity for first-order, or luminance defined, visual information. Recent studies show that they also display a specific loss of sensitivity for second-order, or contrast defined, visual information; a type of image structure encoded by neurons found predominantly in visual area A18/V2. In the present study, we investigate whether amblyopia disrupts the normal architecture of spatial interactions in V2 by determining the contrast detection threshold of a second-order target in the presence of second-order flanking stimuli. Adjacent flanks facilitated second-order detectability in normal observers. However, in marked contrast, they suppressed detection in each eye of the majority of amblyopic observers. Furthermore, strabismic observers with no loss of visual acuity show a similar pattern of detection suppression. We speculate that amblyopia results in predominantly inhibitory cortical interactions between second-order neurons.

  10. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.

  11. First and Higher Order Effects on Zero Order Radiative Transfer Model

    NASA Astrophysics Data System (ADS)

    Neelam, M.; Mohanty, B.

    2014-12-01

    Microwave radiative transfer models are valuable tools for understanding complex land surface interactions. Past literature has largely focused on local sensitivity analysis for factor prioritization, ignoring the interactions between the variables and the uncertainties around them. Since land surface interactions are largely nonlinear, uncertainties, heterogeneities, and interactions always exist; it is therefore important to quantify them to draw accurate conclusions. In this effort, we used global sensitivity analysis (GSA) to address variable uncertainty, higher-order interactions, factor prioritization, and factor fixing for a zero-order radiative transfer (ZRT) model. With the to-be-launched Soil Moisture Active Passive (SMAP) mission of NASA, it is very important to have a complete understanding of ZRT for soil moisture retrieval to direct future research and cal/val field campaigns. This is a first attempt to use a GSA technique to quantify first-order and higher-order effects on brightness temperature from a ZRT model. Our analyses reflect conditions observed during the growing agricultural season for corn and soybeans in two different regions: Iowa, U.S.A. and Winnipeg, Canada. We found that for corn fields in Iowa there exist significant second-order interactions between soil moisture, surface roughness parameters (RMS height and correlation length), and vegetation parameters (vegetation water content, structure, and scattering albedo), whereas in Winnipeg second-order interactions are mainly due to soil moisture and vegetation parameters. For soybean fields in both Iowa and Winnipeg, we found significant interactions only between soil moisture and surface roughness parameters.

  12. Adjoint-based sensitivity analysis of low-order thermoacoustic networks using a wave-based approach

    NASA Astrophysics Data System (ADS)

    Aguilar, José G.; Magri, Luca; Juniper, Matthew P.

    2017-07-01

    Strict pollutant emission regulations are pushing gas turbine manufacturers to develop devices that operate in lean conditions, with the downside that combustion instabilities are more likely to occur. Methods to predict and control unstable modes inside combustion chambers have been developed in the last decades but, in some cases, they are computationally expensive. Sensitivity analysis aided by adjoint methods provides valuable sensitivity information at a low computational cost. This paper introduces adjoint methods and their application in wave-based low order network models, which are used as industrial tools, to predict and control thermoacoustic oscillations. Two thermoacoustic models of interest are analyzed. First, in the zero Mach number limit, a nonlinear eigenvalue problem is derived, and continuous and discrete adjoint methods are used to obtain the sensitivities of the system to small modifications. Sensitivities to base-state modification and feedback devices are presented. Second, a more general case with non-zero Mach number, a moving flame front and choked outlet, is presented. The influence of the entropy waves on the computed sensitivities is shown.
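
    A minimal discrete analogue of the adjoint idea (a plain linear eigenvalue problem A x = lambda x rather than the wave-based nonlinear one treated in the paper): the first-order change of an eigenvalue under a perturbation dA is (y^H dA x) / (y^H x), where y is the corresponding left, i.e. adjoint, eigenvector, so a single eigensolution gives the sensitivity to every possible modification of A.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))                    # stand-in for a network operator

w, V = np.linalg.eig(A)                            # right eigenvectors
wl, U = np.linalg.eig(A.conj().T)                  # eigenvectors of A^H, i.e. left eigenvectors
k = np.argmax(w.real)                              # "least stable" mode
x = V[:, k]
y = U[:, np.argmin(np.abs(wl.conj() - w[k]))]      # left eigenvector paired with w[k]

dA = 1e-6 * rng.standard_normal((n, n))            # small modification (e.g. a feedback device)

dlam_pred = (y.conj() @ dA @ x) / (y.conj() @ x)   # adjoint-based first-order prediction

w_pert = np.linalg.eig(A + dA)[0]
dlam_true = w_pert[np.argmin(np.abs(w_pert - w[k]))] - w[k]
print(dlam_pred, dlam_true)
```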

  13. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.

  14. Low-Energy Defibrillation Failure Correction is Possible Through Nonlinear Analysis of Spatiotemporal Arrhythmia Data

    NASA Astrophysics Data System (ADS)

    Simonotto, Jennifer; Furman, Michael; Beaver, Thomas; Spano, Mark; Kavanagh, Katherine; Iden, Jason; Hu, Gang; Ditto, William

    2004-03-01

    Explanted porcine hearts were Langendorff-perfused, administered a voltage-sensitive fluorescent dye (Di-4-ANEPPS), and illuminated with an Nd:YAG laser (532 nm); the change in fluorescence resulting from electrical activity on the heart surface was recorded with an 80 x 80 pixel CCD camera at 1000 frames per second. The heart was put into fibrillation with rapid ventricular pacing and shocks were administered close to the defibrillation threshold. Defibrillation failure data were analyzed using synchronization, space-time volume plots, and recurrence quantification. Preliminary spatiotemporal synchronization results reveal a short window of time (about 1 second) after defibrillation failure in which the disordered electrical activity becomes ordered; this ordered period occurs 4-5 seconds after the defibrillation shock. Recurrence analysis of a single time series confirmed these results, thus opening the avenue for dynamic defibrillators that can detect an optimal window for cardioversion.

  15. Performance analysis of higher mode spoof surface plasmon polariton for terahertz sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Haizi; Tu, Wanli; Zhong, Shuncong, E-mail: zhongshuncong@hotmail.com

    2015-04-07

    We investigated the spoof surface plasmon polaritons (SSPPs) on a 1D grooved metal surface for terahertz sensing of the refractive index of the filling analyte through a prism-coupling attenuated total reflection setup. From the dispersion relation analysis and the finite element method-based simulation, we revealed that the dispersion curve of the SSPP is suppressed as the filling refractive index increases, which causes the coupling resonance frequency to redshift in the reflection spectrum. The simulated results for testing various refractive indexes demonstrated that the incident angle of the terahertz radiation has a great effect on the sensing performance. A smaller incident angle results in more sensitive sensing with a narrower detection range. Similarly, the higher-order-mode SSPP-based sensing has a higher sensitivity with a narrower detection range. The maximum sensitivity is 2.57 THz/RIU for second-order mode sensing at a 45° internal incident angle. The proposed SSPP-based method has great potential for highly sensitive terahertz sensing.

  16. Collagen morphology and texture analysis: from statistics to classification

    PubMed Central

    Mostaço-Guidolin, Leila B.; Ko, Alex C.-T.; Wang, Fei; Xiang, Bo; Hewko, Mark; Tian, Ganghong; Major, Arkady; Shiomi, Masashi; Sowa, Michael G.

    2013-01-01

    In this study we present an image analysis methodology capable of quantifying morphological changes in tissue collagen fibril organization caused by pathological conditions. Texture analysis based on first-order statistics (FOS) and second-order statistics such as gray level co-occurrence matrix (GLCM) was explored to extract second-harmonic generation (SHG) image features that are associated with the structural and biochemical changes of tissue collagen networks. Based on these extracted quantitative parameters, multi-group classification of SHG images was performed. With combined FOS and GLCM texture values, we achieved reliable classification of SHG collagen images acquired from atherosclerosis arteries with >90% accuracy, sensitivity and specificity. The proposed methodology can be applied to a wide range of conditions involving collagen re-modeling, such as in skin disorders, different types of fibrosis and muscular-skeletal diseases affecting ligaments and cartilage. PMID:23846580
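
    A compressed version of the feature-extraction step described above, on a synthetic image rather than an SHG micrograph, might look like the following; it assumes scikit-image, where the co-occurrence functions are named graycomatrix/graycoprops in recent releases (greycomatrix/greycoprops in older ones).

```python
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

rng = np.random.default_rng(4)
img = (rng.random((128, 128)) * 255).astype(np.uint8)  # stand-in for an SHG collagen image

# First-order statistics (histogram-based)
fos = {
    "mean": img.mean(),
    "variance": img.var(),
    "skewness": skew(img.ravel()),
    "kurtosis": kurtosis(img.ravel()),
}

# Second-order statistics from the gray-level co-occurrence matrix
glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
glcm_feats = {prop: graycoprops(glcm, prop).mean()
              for prop in ("contrast", "homogeneity", "energy", "correlation")}

features = {**fos, **glcm_feats}   # combined feature vector for a classifier
print(features)
```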

  17. On the sensitivity analysis of porous material models

    NASA Astrophysics Data System (ADS)

    Ouisse, Morvan; Ichchou, Mohamed; Chedly, Slaheddine; Collet, Manuel

    2012-11-01

    Porous materials are used in many vibroacoustic applications. Different available models describe their behaviors according to materials' intrinsic characteristics. For instance, in the case of porous material with rigid frame, and according to the Champoux-Allard model, five parameters are employed. In this paper, an investigation of this model's sensitivity to its parameters as a function of frequency is conducted. Sobol and FAST algorithms are used for sensitivity analysis. A strong parametric frequency dependent hierarchy is shown. Sensitivity investigations confirm that resistivity is the most influential parameter when acoustic absorption and surface impedance of porous materials with rigid frame are considered. The analysis is first performed on a wide category of porous materials, and then restricted to a polyurethane foam analysis in order to illustrate the impact of the reduction of the design space. In a second part, a sensitivity analysis is performed using the Biot-Allard model with nine parameters including mechanical effects of the frame and conclusions are drawn through numerical simulations.

  18. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    PubMed

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values will greatly improve simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating net primary productivity (NPP) of Larix olgensis forest in Wangqing, Jilin Province. First, with a contrastive analysis between field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could well simulate the NPP of L. olgensis forest in the sample plot. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result as well as the interactions between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the interaction effects of the other parameters.
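
    A compact stand-in for the Morris screening step described above (a simple radial one-at-a-time design on a placeholder model, not BIOME-BGC) summarizes each parameter's elementary effects by mu* for overall influence and sigma for nonlinearity and interactions.

```python
import numpy as np

rng = np.random.default_rng(6)

def model(p):
    """Placeholder for a process-model output (e.g. simulated NPP)."""
    return p[0] ** 2 + 0.5 * p[1] + p[0] * p[2] + 0.1 * p[3]

k, r, delta = 4, 50, 0.1              # factors, repetitions, step on the unit cube
EE = np.zeros((r, k))

for t in range(r):
    base = rng.uniform(0.0, 1.0 - delta, size=k)
    y0 = model(base)
    for i in range(k):                # radial one-at-a-time perturbations
        x = base.copy()
        x[i] += delta
        EE[t, i] = (model(x) - y0) / delta   # elementary effect of factor i

mu_star = np.abs(EE).mean(axis=0)     # overall influence (screening measure)
sigma = EE.std(axis=0)                # spread: nonlinearity and interactions
print("mu*  :", mu_star.round(3))
print("sigma:", sigma.round(3))
```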

  19. Lorentz-Symmetry Test at Planck-Scale Suppression With a Spin-Polarized 133Cs Cold Atom Clock.

    PubMed

    Pihan-Le Bars, H; Guerlin, C; Lasseri, R-D; Ebran, J-P; Bailey, Q G; Bize, S; Khan, E; Wolf, P

    2018-06-01

    We present the results of a local Lorentz invariance (LLI) test performed with the 133 Cs cold atom clock FO2, hosted at SYRTE. Such a test, relating the frequency shift between 133 Cs hyperfine Zeeman substates with the Lorentz violating coefficients of the standard model extension (SME), has already been realized by Wolf et al. and led to state-of-the-art constraints on several SME proton coefficients. In this second analysis, we used an improved model, based on a second-order Lorentz transformation and a self-consistent relativistic mean field nuclear model, which enables us to extend the scope of the analysis from purely proton to both proton and neutron coefficients. We have also become sensitive to the isotropic coefficient , another SME coefficient that was not constrained by Wolf et al. The resulting limits on SME coefficients improve by up to 13 orders of magnitude the present maximal sensitivities for laboratory tests and reach the generally expected suppression scales at which signatures of Lorentz violation could appear.

  20. Second-order relational face processing is applied to faces of different race and photographic contrast.

    PubMed

    Matheson, H E; Bilsbury, T G; McMullen, P A

    2012-03-01

    A large body of research suggests that faces are processed by a specialized mechanism within the human visual system. This specialized mechanism is made up of subprocesses (Maurer, LeGrand, & Mondloch, 2002). One subprocess, called second-order relational processing, analyzes the metric distances between face parts. Importantly, it is well established that other-race faces and contrast-reversed faces are associated with impaired performance on numerous face processing tasks. Here, we investigated the specificity of second-order relational processing by testing how this process is applied to faces of different race and photographic contrast. Participants completed a feature displacement discrimination task, directly measuring the sensitivity to second-order relations between face parts. Across three experiments we show that, despite absolute differences in sensitivity in some conditions, inversion impaired performance in all conditions. The presence of robust inversion effects for all faces suggests that second-order relational processing can be applied to faces of different race and photographic contrast.

  1. A-posteriori error estimation for second order mechanical systems

    NASA Astrophysics Data System (ADS)

    Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter

    2012-06-01

    One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended to error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be applied to moment-matching-based, Gramian-matrix-based, or modal-based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.

  2. The novel application of Benford's second order analysis for monitoring radiation output in interventional radiology.

    PubMed

    Cournane, S; Sheehy, N; Cooke, J

    2014-06-01

    Benford's law is an empirical observation which predicts the expected frequency of digits in naturally occurring datasets spanning multiple orders of magnitude, with the law having been most successfully applied as an audit tool in accountancy. This study investigated the sensitivity of the technique in identifying system output changes using simulated changes in interventional radiology Dose-Area-Product (DAP) data, with any deviations from Benford's distribution identified using z-statistics. The radiation output for interventional radiology X-ray equipment is monitored annually during quality control testing; however, for a considerable portion of the year an increased output of the system, potentially caused by engineering adjustments or spontaneous system faults, may go unnoticed, leading to a potential increase in the radiation dose to patients. In normal operation, recorded examination radiation outputs vary over multiple orders of magnitude, rendering the application of normal statistics ineffective for detecting systematic changes in the output. In this work, the annual DAP datasets complied with Benford's law for the first digit, the second digit, and combinations of the first and second digits. Further, a continuous 'rolling' second order technique was devised for trending simulated changes over shorter timescales. This distribution analysis, the first employment of the method for radiation output trending, detected the significant changes simulated on the original data, demonstrating the usefulness of the technique in this case. The potential is demonstrated for implementation of this novel analysis for monitoring and identifying change in suitable datasets for the purpose of system process control. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
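
    As a rough illustration of the digit-frequency testing described above, the sketch below checks synthetic, multi-decade DAP-like data against Benford's first-digit law with per-digit z-statistics, and adds a "second-order" pass on the differences between sorted values (the common accounting-literature definition, assumed here to correspond to the technique in the abstract). All data are simulated.

```python
# Benford-type conformity check with z-statistics on synthetic DAP-like data.
# The "second-order" step follows the accounting-literature convention (an assumption).
import numpy as np

def first_digits(x):
    x = np.asarray(x, dtype=float)
    x = x[x > 0]
    return (x / 10.0 ** np.floor(np.log10(x))).astype(int)  # leading digit 1..9

def benford_z(data):
    """z-statistic per leading digit vs. Benford's first-digit law."""
    d = first_digits(data)
    n = d.size
    p_exp = np.log10(1.0 + 1.0 / np.arange(1, 10))
    p_obs = np.array([(d == k).mean() for k in range(1, 10)])
    z = (np.abs(p_obs - p_exp) - 1.0 / (2 * n)) / np.sqrt(p_exp * (1 - p_exp) / n)
    return p_obs, p_exp, z

rng = np.random.default_rng(0)
dap = rng.lognormal(mean=3.0, sigma=1.5, size=5000)      # synthetic multi-decade DAP data

# First-order test on the raw data.
_, _, z1 = benford_z(dap)
print("first-order z per digit:", np.round(z1, 2))

# Second-order test: leading digits of differences between adjacent sorted values.
diffs = np.diff(np.sort(dap))
_, _, z2 = benford_z(diffs[diffs > 0])
print("second-order z per digit:", np.round(z2, 2))

# A simulated output change on part of the record would shift these digit frequencies;
# digits with |z| > 1.96 flag a significant deviation at the 5% level.
```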

  3. Theory of Mind and Sensitivity to Teacher and Peer Criticism among Japanese Children

    ERIC Educational Resources Information Center

    Mizokawa, Ai

    2015-01-01

    This study investigated sensitivity to teacher and peer criticism among 89 Japanese 6-year-olds and examined the connection between sensitivity to criticism and first-order and second-order theory of mind separately. Participants completed a common test battery that included tasks assessing sensitivity to criticism (teacher or peer condition), the…

  4. Sensitivity analysis of complex coupled systems extended to second and higher order derivatives

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    In the design of engineering systems, "what if" questions often arise, such as: what will be the change in aircraft payload if the wing aspect ratio is incremented by 10 percent? Answers to such questions are commonly sought by incrementing the pertinent variable and re-evaluating the major disciplinary analyses involved. These analyses come from engineering disciplines that are usually coupled, as are aerodynamics, structures, and performance in the context of the question above. The "what if" questions can be answered precisely by computation of the derivatives. A method for calculation of the first derivatives has been developed previously. An algorithm is presented for calculation of the second and higher order derivatives.
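
    A minimal sketch of how such "what if" questions can be answered from derivatives: central differences provide first- and second-order derivatives of a black-box coupled analysis, and a second-order Taylor expansion predicts the effect of a 10 percent change without a full re-analysis. The payload function below is a toy stand-in, not an aircraft model.

```python
# "What if" via first/second derivatives and a second-order Taylor expansion.
# payload(aspect_ratio) is a toy stand-in for a coupled aero/structures analysis.
import numpy as np

def payload(ar):
    """Toy coupled response (illustrative only)."""
    return 1000.0 * np.sqrt(ar) - 15.0 * ar ** 2

def central_derivs(f, x, h=1e-3):
    f0, fp, fm = f(x), f(x + h), f(x - h)
    d1 = (fp - fm) / (2 * h)              # first derivative
    d2 = (fp - 2 * f0 + fm) / h ** 2      # second derivative
    return f0, d1, d2

ar0 = 8.0
f0, d1, d2 = central_derivs(payload, ar0)
dar = 0.10 * ar0                           # "aspect ratio incremented by 10 percent"
taylor2 = f0 + d1 * dar + 0.5 * d2 * dar ** 2
print("baseline payload          :", f0)
print("2nd-order Taylor prediction:", taylor2)
print("exact re-analysis          :", payload(ar0 + dar))
```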

  5. [Analysis and experimental verification of sensitivity and SNR of laser warning receiver].

    PubMed

    Zhang, Ji-Long; Wang, Ming; Tian, Er-Ming; Li, Xiao; Wang, Zhi-Bin; Zhang, Yue

    2009-01-01

    In order to counter the increasingly serious threat from hostile lasers in modern warfare, research on laser warning technology and systems is urgent, and the sensitivity and signal-to-noise ratio (SNR) are two important performance parameters of a laser warning system. In the present paper, based on signal statistical detection theory, a method for calculating the sensitivity and SNR of a coherent-detection laser warning receiver (LWR) is proposed. First, the probability distributions of the laser signal and receiver noise were analyzed. Second, based on threshold detection theory and the Neyman-Pearson criterion, the signal current equation was established by introducing the detection probability and false alarm rate factors; then, the mathematical expressions for sensitivity and SNR were deduced. Finally, using this method, the sensitivity and SNR of the sinusoidal grating laser warning receiver developed by our group were analyzed, and the theoretical calculations and experimental results indicate that the SNR analysis method is feasible and can be used in performance analysis of LWRs.
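
    The threshold-detection reasoning above (the false alarm rate fixes the threshold, and the threshold plus the required detection probability fix the minimum detectable signal) can be sketched as follows under an additive Gaussian noise assumption; the numbers and single-sample model are illustrative, not the receiver analyzed in the paper.

```python
# Neyman-Pearson threshold detection sketch, assuming additive Gaussian noise.
# All numbers are illustrative assumptions.
import math
from scipy.stats import norm

sigma = 1.0e-9           # noise RMS at the detector output (arbitrary units)
p_fa  = 1.0e-6           # required false alarm probability
p_d   = 0.99             # required detection probability

# Threshold set from the false alarm requirement (noise-only hypothesis).
threshold = sigma * norm.isf(p_fa)

# Minimum signal amplitude that still meets the detection requirement,
# i.e. the receiver sensitivity for this (p_fa, p_d) pair.
a_min = sigma * (norm.isf(p_fa) - norm.isf(p_d))
snr_required_db = 20 * math.log10(a_min / sigma)

print(f"threshold        : {threshold:.3e}")
print(f"minimum amplitude: {a_min:.3e}")
print(f"required SNR     : {snr_required_db:.1f} dB")
```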

  6. Comparison between two methodologies for urban drainage decision aid.

    PubMed

    Moura, P M; Baptista, M B; Barraud, S

    2006-01-01

    The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis, the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results prove that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.

  7. Age-related changes in perception of movement in driving scenes.

    PubMed

    Lacherez, Philippe; Turner, Laura; Lester, Robert; Burns, Zoe; Wood, Joanne M

    2014-07-01

    Age-related changes in motion sensitivity have been found to relate to reductions in various indices of driving performance and safety. The aim of this study was to investigate the basis of this relationship in terms of determining which aspects of motion perception are most relevant to driving. Participants included 61 regular drivers (age range 22-87 years). Visual performance was measured binocularly. Measures included visual acuity, contrast sensitivity and motion sensitivity assessed using four different approaches: (1) threshold minimum drift rate for a drifting Gabor patch, (2) Dmin from a random dot display, (3) threshold coherence from a random dot display, and (4) threshold drift rate for a second-order (contrast modulated) sinusoidal grating. Participants then completed the Hazard Perception Test (HPT) in which they were required to identify moving hazards in videos of real driving scenes, and also a Direction of Heading task (DOH) in which they identified deviations from normal lane keeping in brief videos of driving filmed from the interior of a vehicle. In bivariate correlation analyses, all motion sensitivity measures significantly declined with age. Motion coherence thresholds, and minimum drift rate threshold for the first-order stimulus (Gabor patch) both significantly predicted HPT performance even after controlling for age, visual acuity and contrast sensitivity. Bootstrap mediation analysis showed that individual differences in DOH accuracy partly explained these relationships, where those individuals with poorer motion sensitivity on the coherence and Gabor tests showed decreased ability to perceive deviations in motion in the driving videos, which related in turn to their ability to detect the moving hazards. The ability to detect subtle movements in the driving environment (as determined by the DOH task) may be an important contributor to effective hazard perception, and is associated with age, and an individuals' performance on tests of motion sensitivity. The locus of the processing deficits appears to lie in first-order, rather than second-order motion pathways. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  8. Development and Application of Modern Optimal Controllers for a Membrane Structure Using Vector Second Order Form

    NASA Astrophysics Data System (ADS)

    Ferhat, Ipar

    With increasing advancement in material science and computational power of current computers that allows us to analyze high dimensional systems, very light and large structures are being designed and built for aerospace applications. One example is a reflector of a space telescope that is made of membrane structures. These reflectors are light and foldable which makes the shipment easy and cheaper unlike traditional reflectors made of glass or other heavy materials. However, one of the disadvantages of membranes is that they are very sensitive to external changes, such as thermal load or maneuvering of the space telescope. These effects create vibrations that dramatically affect the performance of the reflector. To overcome vibrations in membranes, in this work, piezoelectric actuators are used to develop distributed controllers for membranes. These actuators generate bending effects to suppress the vibration. The actuators attached to a membrane are relatively thick which makes the system heterogeneous; thus, an analytical solution cannot be obtained to solve the partial differential equation of the system. Therefore, the Finite Element Model is applied to obtain an approximate solution for the membrane actuator system. Another difficulty that arises with very flexible large structures is the dimension of the discretized system. To obtain an accurate result, the system needs to be discretized using smaller segments which makes the dimension of the system very high. This issue will persist as long as the improving technology will allow increasingly complex and large systems to be designed and built. To deal with this difficulty, the analysis of the system and controller development to suppress the vibration are carried out using vector second order form as an alternative to vector first order form. In vector second order form, the number of equations that need to be solved are half of the number equations in vector first order form. Analyzing the system for control characteristics such as stability, controllability and observability is a key step that needs to be carried out before developing a controller. This analysis determines what kind of system is being modeled and the appropriate approach for controller development. Therefore, accuracy of the system analysis is very crucial. The results of the system analysis using vector second order form and vector first order form show the computational advantages of using vector second order form. Using similar concepts, LQR and LQG controllers, that are developed to suppress the vibration, are derived using vector second order form. To develop a controller using vector second order form, two different approaches are used. One is reducing the size of the Algebraic Riccati Equation to half by partitioning the solution matrix. The other approach is using the Hamiltonian method directly in vector second order form. Controllers are developed using both approaches and compared to each other. Some simple solutions for special cases are derived for vector second order form using the reduced Algebraic Riccati Equation. The advantages and drawbacks of both approaches are explained through examples. System analysis and controller applications are carried out for a square membrane system with four actuators. Two different systems with different actuator locations are analyzed. One system has the actuators at the corners of the membrane, the other has the actuators away from the corners. 
The structural and control effect of actuator locations are demonstrated with mode shapes and simulations. The results of the controller applications and the comparison of the vector first order form with the vector second order form demonstrate the efficacy of the controllers.
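
    For orientation, the sketch below shows the conventional first-order (state-space) LQR design for a single mass-spring-damper mode using SciPy's Riccati solver; the dissertation's point is that the same design can be carried out directly in vector second order form with a Riccati equation of half the size. All numerical values are illustrative.

```python
# Baseline LQR in first-order (state-space) form for one structural mode.
# Values are illustrative; the second-order-form formulation is not shown here.
import numpy as np
from scipy.linalg import solve_continuous_are

m, c, k = 1.0, 0.02, 40.0                      # modal mass, damping, stiffness (illustrative)
# Second-order model m*qdd + c*qd + k*q = u rewritten as xdot = A x + B u, x = [q, qd].
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0],
              [1.0 / m]])

Q = np.diag([100.0, 1.0])                      # penalize displacement more than velocity
R = np.array([[0.1]])                          # control effort penalty

P = solve_continuous_are(A, B, Q, R)           # solves A'P + PA - P B R^-1 B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)                # optimal gain, u = -K x

print("LQR gain K:", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```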

  9. (U) Analytic First and Second Derivatives of the Uncollided Leakage for a Homogeneous Sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Favorite, Jeffrey A.

    2017-04-26

    The second-order adjoint sensitivity analysis methodology (2nd-ASAM), developed by Cacuci, has been applied by Cacuci to derive second derivatives of a response with respect to input parameters for uncollided particles in an inhomogeneous transport problem. In this memo, we present an analytic benchmark for verifying the derivatives of the 2nd-ASAM. The problem is a homogeneous sphere, and the response is the uncollided total leakage. This memo does not repeat the formulas given in Ref. 2. We are preparing a journal article that will include the derivation of Ref. 2 and the benchmark of this memo.
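
    The verification pattern used in such benchmarks, comparing analytic first and second derivatives against finite-difference estimates, can be illustrated with a toy attenuation response; the formula below is a stand-in, not the sphere-leakage expression of the memo.

```python
# Analytic vs. central-difference first/second derivatives of a toy attenuation
# response R(Sigma) = S*exp(-Sigma*r). Illustrative stand-in, not the memo's formula.
import numpy as np

S, r = 1.0e6, 5.0                   # source strength and chord length (illustrative)

def response(sigma):
    return S * np.exp(-sigma * r)

def d_analytic(sigma):
    R = response(sigma)
    return -r * R, r * r * R        # dR/dSigma, d2R/dSigma2

sigma0, h = 0.3, 1e-4
d1_a, d2_a = d_analytic(sigma0)
d1_fd = (response(sigma0 + h) - response(sigma0 - h)) / (2 * h)
d2_fd = (response(sigma0 + h) - 2 * response(sigma0) + response(sigma0 - h)) / h**2

print("first derivative  analytic vs FD:", d1_a, d1_fd)
print("second derivative analytic vs FD:", d2_a, d2_fd)
```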

  10. Adaptive interference cancel filter for evoked potential using high-order cumulants.

    PubMed

    Lin, Bor-Shyh; Lin, Bor-Shing; Chong, Fok-Ching; Lai, Feipei

    2004-01-01

    This paper presents evoked potential (EP) processing using an adaptive interference cancel (AIC) filter with second- and higher-order cumulants. In the conventional ensemble averaging method, experiments must be repeated many times to record the required data. Recently, the use of the AIC structure with second-order statistics in EP processing has proved more efficient than the traditional averaging method, but it is sensitive to both the reference signal statistics and the choice of step size. Thus, we propose a higher-order-statistics-based AIC method to overcome these disadvantages. The study was carried out on somatosensory EPs corrupted with EEG. A gradient-type algorithm is used in the AIC method. Comparisons of the AIC filter using second-, third-, and fourth-order statistics are also presented in this paper. We observed that the AIC filter with third-order statistics has better convergence performance for EP processing and is not sensitive to the selection of step size or reference input.

  11. Forced oscillations of cracked beam under the stochastic cyclic loading

    NASA Astrophysics Data System (ADS)

    Matsko, I.; Javors'kyj, I.; Yuzefovych, R.; Zakrzewski, Z.

    2018-05-01

    An analysis of the forced oscillations of a cracked beam using statistical methods for periodically correlated random processes is presented. The oscillation realizations are obtained on the basis of numerical solutions of second-order differential equations for the case when the applied force is described by a sum of a harmonic and a stationary random process. It is established that, due to crack appearance, the forced oscillations acquire the properties of second-order periodic non-stationarity. It is shown that in a super-resonance regime the covariance and spectral characteristics, which describe the non-stationary structure of the forced oscillations, are more sensitive to crack growth than the characteristics of the oscillations' deterministic part. Using diagnostic indicators formed on their basis allows the detection of small cracks.
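
    A minimal numerical sketch of this setup: a single-mode oscillator with bilinear ("breathing crack") stiffness driven by a harmonic plus stationary random force, integrated with SciPy. Parameter values are illustrative only.

```python
# Single-mode cracked-beam oscillator with bilinear stiffness under harmonic + random
# forcing. All parameters are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
m, c, k = 1.0, 0.05, 100.0          # mass, damping, intact stiffness
crack = 0.15                        # fractional stiffness loss when the crack opens
omega, f0 = 8.0, 1.0                # drive frequency and amplitude

t_noise = np.linspace(0.0, 200.0, 20001)
noise = 0.3 * rng.standard_normal(t_noise.size)   # stationary random forcing samples

def force(t):
    return f0 * np.cos(omega * t) + np.interp(t, t_noise, noise)

def rhs(t, y):
    x, v = y
    k_eff = k * (1.0 - crack) if x > 0.0 else k   # bilinear stiffness: crack open for x > 0
    return [v, (force(t) - c * v - k_eff * x) / m]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0], max_step=0.01)
x = sol.y[0]
# The crack-induced asymmetry shows up as a nonzero mean offset and extra harmonic
# content, i.e. the second-order periodic non-stationarity discussed in the abstract.
print("mean displacement:", x.mean())
```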

  12. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    DOE PAGES

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-31

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past the application of sensitivity analysis, such as Degree of Rate Control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. Here in this study we present an efficient and robust three stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In a first step, we utilize the Fisher Information Matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally we adopt a method for sampling coupled finite differences for evaluating the sensitivity measure of lattice based models. This allows efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano scale design of heterogeneous catalysts.

  13. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis.

    PubMed

    Hoffmann, Max J; Engelmann, Felix; Matera, Sebastian

    2017-01-28

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using the CO oxidation on RuO 2 (110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on the linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  14. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past the application of sensitivity analysis, such as Degree of Rate Control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. Here in this study we present an efficient and robust three stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In a first step, we utilize the Fisher Information Matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally we adopt a method for sampling coupled finite differences for evaluating the sensitivity measure of lattice based models. This allows efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano scale design of heterogeneous catalysts.

  15. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-01

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using the CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on the linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.
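
    The degree-of-rate-control sensitivity measure discussed in these records can be illustrated, without any stochastic simulation, on a toy two-step steady-state model: X_i = d ln(TOF)/d ln(k_i), estimated here by central finite differences in log space. The model and rate constants are illustrative assumptions.

```python
# Degree of rate control by central finite differences on a toy steady-state model
# (no kinetic Monte Carlo noise to manage here). Everything is illustrative.
import numpy as np

def tof(rate_constants):
    """Toy steady-state turnover frequency for two sequential steps on one site."""
    k1, k2 = rate_constants
    return 1.0 / (1.0 / k1 + 1.0 / k2)

def degree_of_rate_control(model, k, rel_step=0.05):
    """X_i = d ln(TOF) / d ln(k_i), estimated by central differences."""
    k = np.asarray(k, dtype=float)
    drc = np.zeros_like(k)
    for i in range(k.size):
        up, dn = k.copy(), k.copy()
        up[i] *= 1.0 + rel_step
        dn[i] *= 1.0 - rel_step
        drc[i] = (np.log(model(up)) - np.log(model(dn))) / (np.log(up[i]) - np.log(dn[i]))
    return drc

k = np.array([1.0, 10.0])          # step 1 is rate limiting here
print("TOF:", tof(k))
print("degree of rate control:", degree_of_rate_control(tof, k))   # ~[0.91, 0.09]
```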

  16. Highly sensitive quantitation of pesticides in fruit juice samples by modeling four-way data gathered with high-performance liquid chromatography with fluorescence excitation-emission detection.

    PubMed

    Montemurro, Milagros; Pinto, Licarion; Véras, Germano; de Araújo Gomes, Adriano; Culzoni, María J; Ugulino de Araújo, Mário C; Goicoechea, Héctor C

    2016-07-01

    A study regarding the acquisition and analytical utilization of four-way data acquired by monitoring excitation-emission fluorescence matrices at different elution time points in a fast HPLC procedure is presented. The data were modeled with three well-known algorithms: PARAFAC, U-PLS/RTL and MCR-ALS, the latter conveniently adapted to model third-order data. The second-order advantage was exploited when analyzing samples containing uncalibrated components. The best results were furnished by the U-PLS/RTL algorithm. This fact indicates both the absence of peak time shifts among samples and high collinearity among the spectra. Moreover, this latent-variable-structured algorithm is better able to handle the need for high sensitivity in the analysis of one of the analytes. In addition, a significant enhancement in both predictions and analytical figures of merit was observed for carbendazim, thiabendazole, fuberidazole, carbofuran, carbaryl and 1-naphtol when going from second- to third-order data. LODs ranged between 0.02 and 2.4 μg L(-1). Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Fluorescent quantification of terazosin hydrochloride content in human plasma and tablets using second-order calibration based on both parallel factor analysis and alternating penalty trilinear decomposition.

    PubMed

    Zou, Hong-Yan; Wu, Hai-Long; OuYang, Li-Qun; Zhang, Yan; Nie, Jin-Fang; Fu, Hai-Yan; Yu, Ru-Qin

    2009-09-14

    Two second-order calibration methods, based on the parallel factor analysis (PARAFAC) and the alternating penalty trilinear decomposition (APTLD) algorithms, have been utilized for the direct determination of terazosin hydrochloride (THD) in human plasma samples, coupled with excitation-emission matrix fluorescence spectroscopy. Meanwhile, the two algorithms, combined with standard addition procedures, have been applied to the determination of terazosin hydrochloride in tablets, and the results were validated by high-performance liquid chromatography with fluorescence detection. These second-order calibrations all adequately exploited the second-order advantage. For human plasma samples, the average recoveries by the PARAFAC and APTLD algorithms with a factor number of 2 (N=2) were 100.4+/-2.7% and 99.2+/-2.4%, respectively. The accuracy of the two algorithms was also evaluated through elliptical joint confidence region (EJCR) tests and t-tests. It was found that both algorithms gave accurate results, with the performance of APTLD being slightly better than that of PARAFAC. Figures of merit, such as sensitivity (SEN), selectivity (SEL) and limit of detection (LOD), were also calculated to compare the performances of the two strategies. For tablets, the average concentrations of THD were 63.5 and 63.2 ng mL(-1) using the PARAFAC and APTLD algorithms, respectively. The accuracy was evaluated by t-test, and both algorithms again gave accurate results.
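
    A small synthetic illustration of second-order calibration with PARAFAC, in the spirit of the two abstracts above, using the tensorly library: a rank-2 trilinear decomposition of stacked excitation-emission matrices recovers the analyte's sample-mode scores even in the presence of an uncalibrated interferent (the second-order advantage). Spectra, concentrations, and the rank-2 model are synthetic assumptions.

```python
# Synthetic second-order calibration with PARAFAC (tensorly). All data are simulated.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(2)
ex = np.linspace(0, 1, 30)
em = np.linspace(0, 1, 40)
gauss = lambda x, mu, s: np.exp(-0.5 * ((x - mu) / s) ** 2)

# Pure excitation/emission profiles: analyte and an unexpected interferent.
A_ex, A_em = gauss(ex, 0.35, 0.08), gauss(em, 0.55, 0.10)
I_ex, I_em = gauss(ex, 0.60, 0.08), gauss(em, 0.30, 0.10)

c_std = np.array([0.0, 1.0, 2.0, 3.0])           # calibration standards (analyte only)
c_test_true, c_interf = 1.7, 2.5                 # test sample: analyte + interferent

eems = [c * np.outer(A_ex, A_em) for c in c_std]
eems.append(c_test_true * np.outer(A_ex, A_em) + c_interf * np.outer(I_ex, I_em))
X = np.stack(eems) + 0.01 * rng.standard_normal((5, ex.size, em.size))

# Rank-2 trilinear decomposition of the (sample x excitation x emission) array.
cp = parafac(tl.tensor(X), rank=2, n_iter_max=500, tol=1e-10)
scores, ex_load, em_load = cp.factors            # sample scores and spectral loadings

# Identify the analyte component by correlation with its excitation profile.
comp = int(np.argmax([abs(np.corrcoef(ex_load[:, r], A_ex)[0, 1]) for r in range(2)]))
s = scores[:, comp] * cp.weights[comp]

# Pseudo-univariate calibration line from the standards, then predict the test sample.
slope, intercept = np.polyfit(c_std, s[:4], 1)
c_pred = (s[4] - intercept) / slope
print(f"predicted analyte concentration: {c_pred:.2f} (true {c_test_true})")
```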

  18. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Günther, Michael; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative for a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treating initial conditions as parameters and to calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Other than Zajac's model, Hatze's model can, however, reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
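
    The kind of parameter sensitivity computed in this study can be sketched on a deliberately simplified, Zajac-like linear activation ODE, da/dt = (u - a)/tau; first- and second-order sensitivities of the solution with respect to tau are estimated by central differences. This is an illustrative reduction, not Hatze's or Zajac's full formulation.

```python
# Parameter sensitivity of a simplified linear activation ODE (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

def activation(tau, u=1.0, a0=0.05, t_end=0.5, n=101):
    t_eval = np.linspace(0.0, t_end, n)
    sol = solve_ivp(lambda t, a: (u - a) / tau, (0.0, t_end), [a0], t_eval=t_eval)
    return t_eval, sol.y[0]

tau0, h = 0.040, 0.002
t, a_base = activation(tau0)
_, a_p = activation(tau0 + h)
_, a_m = activation(tau0 - h)

dadtau   = (a_p - a_m) / (2 * h)                   # first-order sensitivity da/dtau
d2adtau2 = (a_p - 2 * a_base + a_m) / h**2         # second-order sensitivity
rel_sens = dadtau * tau0 / np.maximum(a_base, 1e-9)  # dimensionless (relative) sensitivity

print("peak |relative sensitivity| to tau:", np.max(np.abs(rel_sens)).round(3))
```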

  19. Sensitivity Analysis for some Water Pollution Problem

    NASA Astrophysics Data System (ADS)

    Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff

    2014-05-01

    Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observation appears only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a method to carry out sensitivity analysis in general. The method is demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • identification of unknown parameters, and • identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.

  20. VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.

    2016-12-01

    VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics, including (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL is also enabled with two novel features; the first one being a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second feature is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features in conjunction with bootstrapping enable the user to monitor the stability, robustness, and convergence of GSA with the increase in sample size for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.

  1. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
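
    A hedged sketch of the workflow described above: a second-order response surface is fitted to a handful of evaluations of a (toy) buckling-strength function, and Monte Carlo sampling on the surrogate then estimates the failure probability and a corresponding generalized reliability index. The strength function, distributions, and load level are illustrative assumptions.

```python
# Quadratic response-surface surrogate + Monte Carlo reliability estimate.
# Strength model, distributions, and load are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def strength(E, t):
    """Toy axial buckling strength (arbitrary units), a stand-in for the real analysis."""
    return 6.0 * E * t ** 2 / (1.0 + 0.1 * t)

def quad_design(X):
    E, t = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(E), E, t, E * t, E ** 2, t ** 2])

# Small grid of design points and a least-squares fit of the quadratic surrogate.
X_doe = np.array([(E, t) for E in np.linspace(8, 12, 5) for t in np.linspace(0.8, 1.2, 5)])
y_doe = strength(X_doe[:, 0], X_doe[:, 1])
coef, *_ = np.linalg.lstsq(quad_design(X_doe), y_doe, rcond=None)

# Monte Carlo reliability on the surrogate: failure when predicted strength < load.
n = 200_000
E_s = rng.normal(10.0, 0.5, n)        # elastic modulus in fiber direction (random)
t_s = rng.normal(1.0, 0.03, n)        # wall thickness (random)
load = rng.normal(45.0, 4.0, n)       # applied axial load (random)
g = quad_design(np.column_stack([E_s, t_s])) @ coef - load    # g < 0 means failure
pf = np.mean(g < 0.0)
print(f"P(failure) ~ {pf:.4f}, generalized reliability index ~ {-norm.ppf(pf):.2f}")
```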

  2. Optimization of Paclitaxel Containing pH-Sensitive Liposomes By 3 Factor, 3 Level Box-Behnken Design.

    PubMed

    Rane, Smita; Prabhakar, Bala

    2013-07-01

    The aim of this study was to investigate the combined influence of 3 independent variables in the preparation of paclitaxel containing pH-sensitive liposomes. A 3 factor, 3 levels Box-Behnken design was used to derive a second order polynomial equation and construct contour plots to predict responses. The independent variables selected were molar ratio phosphatidylcholine:diolylphosphatidylethanolamine (X1), molar concentration of cholesterylhemisuccinate (X2), and amount of drug (X3). Fifteen batches were prepared by thin film hydration method and evaluated for percent drug entrapment, vesicle size, and pH sensitivity. The transformed values of the independent variables and the percent drug entrapment were subjected to multiple regression to establish full model second order polynomial equation. F was calculated to confirm the omission of insignificant terms from the full model equation to derive a reduced model polynomial equation to predict the dependent variables. Contour plots were constructed to show the effects of X1, X2, and X3 on the percent drug entrapment. A model was validated for accurate prediction of the percent drug entrapment by performing checkpoint analysis. The computer optimization process and contour plots predicted the levels of independent variables X1, X2, and X3 (0.99, -0.06, 0, respectively), for maximized response of percent drug entrapment with constraints on vesicle size and pH sensitivity.

  3. Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter

    2015-04-01

    Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimation of potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative-based global sensitivity measures (Sobol' & Kucherenko '09) can be used in practice to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging and usually increases the computational complexity linearly with the number of inputs appearing in the models. In this study we show how algorithmic differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative-based sensitivities, as is done in various other domains such as meteorology or aerodynamics, without a significant increase in the computational complexity relative to the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic PSHA case using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground motion attenuation relation based on a stochastic method for simulating strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
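
    The core idea of algorithmic differentiation can be illustrated with a self-contained forward-mode sketch based on dual numbers, applied to a made-up attenuation-style relation; a real PSHA code would apply an AD tool to the actual models, so everything below is an assumption for illustration.

```python
# Forward-mode algorithmic differentiation with dual numbers (toy illustration).
import math

class Dual:
    """Number a + b*eps with eps**2 = 0; carries value and derivative together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __rsub__(self, o):
        return Dual(o - self.val, -self.dot)

def dlog(x):
    """Natural log for Dual numbers."""
    return Dual(math.log(x.val), x.dot / x.val)

def ln_gm(m, r):
    """Made-up attenuation-style relation: ln(GM) = c0 + c1*m - c2*ln(r + c3); r is a Dual."""
    return 1.0 + 1.2 * m - 1.6 * dlog(r + 10.0)

m0, r0 = 6.5, 30.0
# Seed the input of interest with derivative 1 to obtain d ln(GM)/dr exactly.
d_dr = ln_gm(m0, Dual(r0, 1.0)).dot
print("d ln(GM)/dr (AD):", d_dr)
print("analytic        :", -1.6 / (r0 + 10.0))
```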

  4. Second order gradiometer and dc SQUID integrated on a planar substrate

    NASA Astrophysics Data System (ADS)

    van Nieuwenhuyzen, G. J.; de Waal, V. J.

    1985-02-01

    An integrated system of a thin-film niobium dc superconducting quantum interference device (SQUID) and a second order gradiometer on a planar substrate is described. The system consists of a dc SQUID with eight loops in parallel, each sensitive to the second derivative ∂²Bz/∂x² of the magnetic field. The calculated SQUID inductance is 1.3 nH. With an overall size of 16×16.5 mm², a sensitivity of 1.5×10⁻⁹ T m⁻² Hz⁻¹/² is obtained. The measured transfer function for uniform fields perpendicular to the plane of the gradiometer is 2.1×10⁻⁷ T Φ₀⁻¹.

  5. Second-order distributed-feedback surface plasmon resonator for single-mode fiber end-facet biosensing

    NASA Astrophysics Data System (ADS)

    Lei, Zeyu; Zhou, Xin; Yang, Jie; He, Xiaolong; Wang, Yalin; Yang, Tian

    2017-04-01

    Integrating surface plasmon resonance (SPR) devices upon single-mode fiber (SMF) end facets renders label-free biosensing systems that have a dip-and-read configuration, high compatibility with fiber-optic techniques, and in vivo monitoring capability, but which face the challenge of matching the performance of their free-space counterparts. We report a second-order distributed feedback (DFB) SPR cavity on an SMF end facet and its application in protein interaction analysis. In our device, a periodic array of nanoslits in a gold film is used to couple fiber guided lightwaves to surface plasmon polaritons (SPPs) with its first order spatial Fourier component, while the second order spatial Fourier component provides DFB to SPP propagation and produces an SPP bandgap. A phase shift section in the DFB structure introduces an SPR defect state within the SPP bandgap, whose mode profile is optimized to match that of the SMF to achieve a reasonable coupling efficiency. We report an experimental refractive index sensitivity of 628 nm RIU⁻¹, a figure-of-merit of 80 RIU⁻¹, and a limit of detection of 7 × 10⁻⁶ RIU. The measurement of the real-time interaction between human immunoglobulin G molecules and their antibodies is demonstrated.

  6. Lessons Learned from the Frenchman Flat Flow and Transport Modeling External Peer Review

    NASA Astrophysics Data System (ADS)

    Becker, N. M.; Crowe, B. M.; Ruskauff, G.; Kwicklis, E. M.; Wilborn, B.

    2011-12-01

    The objective of the U.S. Department of Energy's Underground Test Area Program is to forecast, using computer modeling, the contaminant boundary at the Nevada National Security Site within which radionuclide concentrations in groundwater exceed Safe Drinking Water Act standards after 1000 years. This objective is defined within the Federal Facilities Agreement and Consent Order between the Department of Energy, the Department of Defense, and the State of Nevada Division of Environmental Protection. At one of the Corrective Action Units, Frenchman Flat, a Phase I flow and transport model underwent peer review in 1999 to determine whether the model approach, assumptions, and results were adequate for use as a decision tool and as a basis to negotiate a compliance boundary with the Nevada Division of Environmental Protection. The external peer review decision was that the model had not been fully tested under a full suite of possible conceptual models, including boundary conditions, flow mechanisms, other transport processes, hydrological framework models, sensitivity and uncertainty analysis, etc. The program went back to collect more data, expand the modeling to consider other alternatives that had not been adequately tested, and conduct sensitivity and uncertainty analysis. A second external peer review was held in August 2010. Its conclusion was that the new Frenchman Flat flow and transport modeling analyses were adequate as a decision tool and that the model was ready to advance to the next step in the Federal Facilities Agreement and Consent Order strategy. We will discuss the changes to the modeling that occurred between the first and second peer reviews, and then present the second peer review's general comments. Finally, we present the lessons learned from the overall model acceptance process required for federal regulatory compliance.

  7. Automatic small bowel tumor diagnosis by using multi-scale wavelet-based analysis in wireless capsule endoscopy images.

    PubMed

    Barbosa, Daniel C; Roupar, Dalila B; Ramos, Jaime C; Tavares, Adriano C; Lima, Carlos S

    2012-01-11

    Wireless capsule endoscopy has been introduced as an innovative, non-invasive diagnostic technique for evaluation of the gastrointestinal tract, reaching places where conventional endoscopy is unable to. However, the output of this technique is an 8-hour video, whose analysis by an expert physician is very time consuming. Thus, a computer-assisted diagnosis tool to help physicians evaluate capsule endoscopy (CE) exams faster and more accurately is an important technical challenge and an excellent economic opportunity. The set of features proposed in this paper to code textural information is based on statistical modeling of second-order textural measures extracted from co-occurrence matrices. To cope with both the joint and marginal non-Gaussianity of the second-order textural measures, higher-order moments are used. These statistical moments are taken from the two-dimensional color-scale feature space, where two different scales are considered. Second- and higher-order moments of the textural measures are computed from co-occurrence matrices obtained from images synthesized by the inverse wavelet transform of the wavelet transform containing only the selected scales for the three color channels. The dimensionality of the data is reduced by using Principal Component Analysis. The proposed textural features are then used as the input of a classifier based on artificial neural networks. Classification performances of 93.1% specificity and 93.9% sensitivity are achieved on real data. These promising results open the path towards a deeper study of the applicability of this algorithm in computer-aided diagnosis systems to assist physicians in their clinical practice.
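
    The second-order (co-occurrence) texture measures at the heart of this method can be sketched in a few lines of NumPy; the wavelet and color-scale machinery of the paper is omitted, and the toy 8-level image below is only an illustration.

```python
# Gray-level co-occurrence matrix (GLCM) and a few second-order texture statistics.
# Toy quantized image; the paper's wavelet/color-scale steps are omitted.
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized co-occurrence matrix for pixel pairs offset by (dy, dx)."""
    P = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    a = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(P, (a.ravel(), b.ravel()), 1.0)
    return P / P.sum()

def texture_features(P):
    i, j = np.indices(P.shape)
    return {
        "contrast":    np.sum(P * (i - j) ** 2),
        "energy":      np.sum(P ** 2),
        "homogeneity": np.sum(P / (1.0 + np.abs(i - j))),
    }

rng = np.random.default_rng(4)
img = rng.integers(0, 8, size=(64, 64))      # toy quantized image (8 gray levels)
print(texture_features(glcm(img, dx=1, dy=0)))
```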

  8. A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves

    NASA Astrophysics Data System (ADS)

    Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.

    2012-04-01

    The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which are the input data that most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually conducted with a Monte Carlo approach which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using a development of the model outputs in a neighborhood of the reference parameter values with a second-order approximation. The comparison of the three sensitivity indices proved that the approximation of the non-linear model with a second-order expansion is sufficient to show some differences between the local and the global indices. As a general result, the sensitivity analysis showed that most of the model outcomes are mainly sensitive to the present-day surface temperature and accumulation, which, in principle, can be measured more easily (e.g., with remote sensing techniques) than the other input parameters considered. On the other hand, the parameters to which the model resulted less sensitive are the basal sliding coefficient and the mean ice shelves viscosity.

  9. Simulations of the HDO and H2O-18 atmospheric cycles using the NASA GISS general circulation model - Sensitivity experiments for present-day conditions

    NASA Technical Reports Server (NTRS)

    Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.

    1991-01-01

    Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.

  10. Late Bilinguals Are Sensitive to Unique Aspects of Second Language Processing: Evidence from Clitic Pronouns Word-Order

    PubMed Central

    Rossi, Eleonora; Diaz, Michele; Kroll, Judith F.; Dussias, Paola E.

    2017-01-01

    In two self-paced reading experiments we asked whether late, highly proficient, English–Spanish bilinguals are able to process language-specific morpho-syntactic information in their second language (L2). The processing of Spanish clitic pronouns’ word order was tested in two sentential constructions. Experiment 1 showed that English–Spanish bilinguals performed similarly to Spanish–English bilinguals and revealed sensitivity to word order violations for a grammatical structure unique to the L2. Experiment 2 replicated the pattern observed for native speakers in Experiment 1 with a group of monolingual Spanish speakers, demonstrating the stability of processing clitic pronouns in the native language. Taken together, the results show that late bilinguals can process aspects of grammar that are encoded in L2-specific linguistic constructions even when the structure is relatively subtle and not affected for native speakers by the presence of a second language. PMID:28367130

  11. A modification of the U.S. Geological Survey one-sixth order semiquantitative spectrographic method for the analysis of geologic materials that improves limits of determination of some volatile to moderately volatile elements

    USGS Publications Warehouse

    Detra, D.E.; Cooley, Elmo F.

    1988-01-01

    A modification of the one-sixth order semi-quantitative emission spectrographic method for the analysis of 30 elements in geologic materials (Grimes and Marranzino 1968) improves the limits of determination of some volatile to moderately volatile elements. The modification uses a compound-pendulum-mounted filter to regulate the amount of emitted light passing into the spectrograph. One hundred percent transmission of emitted light is allowed during the initial 20 seconds of the burn, then continually reduced to 40 percent over the next 32 seconds using the pendulum-mounted filter, and followed by an additional 68 seconds of burn time. The reduction of light transmission during the latter part of the burn decreases spectral background and the line emission of less volatile elements commonly responsible for problem-causing interferences. The sensitivity of the method for some geochemically important trace elements commonly determined in mineral exploration (Ag, As, Au, Be, Bi, Cd, Cr, Cu, Pb, Sb, Sn, and Zn) is improved up to five-fold under ideal conditions without compromising precision or accuracy

  12. Probability techniques for reliability analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Wetherhold, Robert C.; Ucci, Anthony M.

    1994-01-01

    Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservativism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
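
    The Monte Carlo side of this comparison can be sketched for a simple strength-minus-stress limit state: crude Monte Carlo versus an importance-sampling estimate that samples from densities shifted toward the failure region and reweights by the likelihood ratio. The distributions are illustrative assumptions, not the composite failure criteria of the report.

```python
# Crude Monte Carlo vs. importance sampling for P(R - S < 0). Illustrative distributions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
muR, sdR = 100.0, 10.0      # strength
muS, sdS = 60.0, 10.0       # stress
n = 100_000

# Crude Monte Carlo.
R = rng.normal(muR, sdR, n)
S = rng.normal(muS, sdS, n)
pf_mc = np.mean(R - S < 0.0)

# Importance sampling: draw from densities shifted toward the failure region and
# reweight each sample by the likelihood ratio original/shifted.
muR_s, muS_s = 80.0, 80.0
Ris = rng.normal(muR_s, sdR, n)
Sis = rng.normal(muS_s, sdS, n)
w = (norm.pdf(Ris, muR, sdR) / norm.pdf(Ris, muR_s, sdR)) * \
    (norm.pdf(Sis, muS, sdS) / norm.pdf(Sis, muS_s, sdS))
pf_is = np.mean(w * (Ris - Sis < 0.0))

pf_exact = norm.cdf(-(muR - muS) / np.hypot(sdR, sdS))   # exact for this linear case
print(f"crude MC: {pf_mc:.5f}  importance sampling: {pf_is:.5f}  exact: {pf_exact:.5f}")
```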

  13. Study of water based nanofluid flows in annular tubes using numerical simulation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Siadaty, Moein; Kazazi, Mohsen

    2018-04-01

    The convective heat transfer, entropy generation, and pressure drop of two water-based nanofluids (Cu-water and Al2O3-water) in horizontal annular tubes are scrutinized by means of computational fluid dynamics (CFD), response surface methodology (RSM), and sensitivity analysis. First, a central composite design is used to set up a series of numerical experiments with the diameter ratio, length-to-diameter ratio, Reynolds number, and solid volume fraction as factors. Then, CFD is used to calculate the Nusselt number, Euler number, and entropy generation. After that, RSM is applied to fit second-order polynomials to the responses. Finally, sensitivity analysis is conducted for the above-mentioned parameters inside the tube. In total, 62 different cases are examined. The CFD results show that Cu-water and Al2O3-water have the highest and lowest heat transfer rates, respectively. In addition, analysis of variance indicates that an increase in solid volume fraction increases the dimensionless pressure drop for Al2O3-water. Moreover, it has a significant negative effect on the Cu-water Nusselt number and an insignificant effect on its Euler number. Analysis of the Bejan number indicates that frictional and thermal entropy generation are the dominant irreversibilities in the Al2O3-water and Cu-water flows, respectively. The sensitivity analysis indicates that the sensitivity of the dimensionless pressure drop to tube length for Cu-water is independent of the diameter ratio at different Reynolds numbers.

  14. Derivatives of buckling loads and vibration frequencies with respect to stiffness and initial strain parameters

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Cohen, Gerald A.; Mroz, Zenon

    1990-01-01

    A uniform variational approach to sensitivity analysis of vibration frequencies and bifurcation loads of nonlinear structures is developed. Two methods of calculating the sensitivities of bifurcation buckling loads and vibration frequencies of nonlinear structures, with respect to stiffness and initial strain parameters, are presented. A direct method requires calculation of derivatives of the prebuckling state with respect to these parameters. An adjoint method bypasses the need for these derivatives by using instead the strain field associated with the second-order postbuckling state. An operator notation is used and the derivation is based on the principle of virtual work. The derivative computations are easily implemented in structural analysis programs. This is demonstrated by examples using a general purpose, finite element program and a shell-of-revolution program.

  15. [Absorption spectrum of Quasi-continuous laser modulation demodulation method].

    PubMed

    Shao, Xin; Liu, Fu-Gui; Du, Zhen-Hui; Wang, Wei

    2014-05-01

    A software lock-in amplifier demodulation method is proposed to properly demodulate the second harmonic (2f) signal of quasi-continuous laser wavelength modulation spectroscopy (WMS), based on an analysis of its signal characteristics. By judging the validity of the measurement data and applying filtering, phase-sensitive detection, digital filtering, and other processing, the method achieves sensitive detection of the quasi-continuous signal. The method was verified with carbon dioxide detection experiments, in which the WMS-2f signals obtained by the software lock-in amplifier and by a high-performance lock-in amplifier (SR844) were compared simultaneously. The results show that the Allan variance of the WMS-2f signal demodulated by the software lock-in amplifier is one order of magnitude smaller than that demodulated by the SR844, corresponding to a detection limit two orders of magnitude lower. The method also solves the loss-of-lock problem caused by the small duty cycle of the quasi-continuous modulation signal, with only small signal waveform distortion.
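
    The phase-sensitive (lock-in) demodulation step can be sketched as follows: the detector signal is multiplied by quadrature references at 2f and low-pass filtered to recover the WMS-2f amplitude. The synthetic signal below ignores the duty-cycle gating of a real quasi-continuous laser.

```python
# Software lock-in (2f) demodulation sketch on a synthetic WMS-like signal.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, f_mod = 100_000.0, 1_000.0                 # sample rate and modulation frequency (Hz)
t = np.arange(0, 0.2, 1.0 / fs)
rng = np.random.default_rng(6)

# Synthetic detector signal: residual 1f, the absorption-induced 2f term, and noise.
sig = 0.50 * np.sin(2 * np.pi * f_mod * t) \
    + 0.02 * np.cos(2 * np.pi * 2 * f_mod * t + 0.3) \
    + 0.05 * rng.standard_normal(t.size)

# Quadrature demodulation at 2f followed by a low-pass filter (the lock-in step).
ref_i = np.cos(2 * np.pi * 2 * f_mod * t)
ref_q = np.sin(2 * np.pi * 2 * f_mod * t)
sos = butter(4, 100.0, btype="low", fs=fs, output="sos")
I = sosfiltfilt(sos, sig * ref_i)
Q = sosfiltfilt(sos, sig * ref_q)
amp_2f = 2.0 * np.hypot(I, Q)                  # demodulated 2f amplitude estimate

print("recovered 2f amplitude:", round(float(amp_2f[t.size // 2]), 4), "(true 0.02)")
```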

  16. An analysis of the transit times of TrES-1b

    NASA Astrophysics Data System (ADS)

    Steffen, Jason H.; Agol, Eric

    2005-11-01

    The presence of a second planet in a known, transiting-planet system will cause the time between transits to vary. These variations can be used to constrain the orbital elements and mass of the perturbing planet. We analyse the set of transit times of the TrES-1 system given in Charbonneau et al. We find no convincing evidence for a second planet in the TrES-1 system from those data. By further analysis, we constrain the mass that a perturbing planet could have as a function of the semi-major axis ratio of the two planets and the eccentricity of the perturbing planet. Near low-order, mean-motion resonances (within ~1 per cent fractional deviation), we find that a secondary planet must generally have a mass comparable to or less than the mass of the Earth - showing that these data are the first to have sensitivity to sub-Earth-mass planets. We compare the sensitivity of this technique to the mass of the perturbing planet with future, high-precision radial velocity measurements.

  17. Design and development of second order MEMS sound pressure gradient sensor

    NASA Astrophysics Data System (ADS)

    Albahri, Shehab

    The design and development of a second-order MEMS sound pressure gradient sensor is presented in this dissertation. Inspired by the directional hearing of the parasitoid fly Ormia ochracea, a novel first-order directional microphone that mimics the mechanical structure of the fly's ears and detects the sound pressure gradient has been developed. While first-order directional microphones can be beneficial in a large number of applications, there is great potential for remarkable improvements in performance through the use of second-order systems. The second-order directional microphone can provide a theoretical improvement in signal-to-noise ratio (SNR) of 9.5 dB, compared with the maximum of 6 dB for the first-order system. Although the second-order microphone is more sensitive to the angle of incidence of sound, the nature of the design and fabrication process introduces several factors that can degrade its performance. The first Ormia ochracea second-order directional microphone was designed in 2004 and fabricated in 2006 at Binghamton University. The results of the tested parts indicate that this microphone performs mostly as an omnidirectional microphone. In this work, the previous design is reexamined and analyzed to explain the unexpected results. A more sophisticated tool implementing the finite element package ANSYS is used to examine the previous design response. This new tool is used to study factors that were ignored in the previous design, mainly response mismatch and fabrication uncertainty. A continuous model based on Hamilton's principle is introduced to verify the results of the new method. Both models agree well and suggest a new way of optimizing the second-order directional microphone through geometric manipulation. This work also introduces a new fabrication process flow to increase the fabrication yield. The newly suggested method uses the layered shell analysis capability in ANSYS; the developed models simulate the fabricated chips at different stages, with the stress in each layer introduced using thermal loading. The results motivate a new fabrication process flow that increases the rigidity of the composite layers and counters the deformation caused by the high stress in the thermal oxide layer.

  18. Significance of settling model structures and parameter subsets in modelling WWTPs under wet-weather flow and filamentous bulking conditions.

    PubMed

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen; Plósz, Benedek Gy

    2014-10-15

    Current research focuses on predicting and mitigating the impacts of high hydraulic loadings on centralized wastewater treatment plants (WWTPs) under wet-weather conditions. The maximum permissible inflow to WWTPs depends not only on the settleability of activated sludge in secondary settling tanks (SSTs) but also on the hydraulic behaviour of SSTs. The present study investigates the impacts of ideal and non-ideal flow (dry and wet weather) and settling (good settling and bulking) boundary conditions on the sensitivity of WWTP model outputs to uncertainties intrinsic to the one-dimensional (1-D) SST model structures and parameters. We identify the critical sources of uncertainty in WWTP models through global sensitivity analysis (GSA) using the Benchmark simulation model No. 1 in combination with first- and second-order 1-D SST models. The results obtained illustrate that the contribution of settling parameters to the total variance of the key WWTP process outputs significantly depends on the influent flow and settling conditions. The magnitude of the impact is found to vary, depending on which type of 1-D SST model is used. Therefore, we identify and recommend potential parameter subsets for WWTP model calibration, and propose an optimal choice of 1-D SST model under different flow and settling boundary conditions. Additionally, the hydraulic parameters in the second-order SST model are found to be significant under dynamic wet-weather flow conditions. These results highlight the importance of developing a more mechanistically based, flow-dependent hydraulic sub-model in second-order 1-D SST models in the future. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Method development for the determination of coumarin compounds by capillary electrophoresis with indirect laser-induced fluorescence detection.

    PubMed

    Wang, Weiping; Tang, Jianghong; Wang, Shumin; Zhou, Lei; Hu, Zhide

    2007-04-27

    A capillary zone electrophoresis (CZE) method with indirect laser-induced fluorescence detection (ILIFD) is described for the simultaneous determination of esculin, esculetin, isofraxidin, genistein, naringin and sophoricoside. Baseline separation was achieved within 5 min with a running buffer (pH 9.4) composed of 5 mM borate, 20% methanol (v/v) as organic modifier and 10^-7 M fluorescein sodium as background fluorophore, at an applied voltage of 20 kV and a cartridge temperature of 30 degrees C. Good linear relationships (correlation coefficients >0.9900) between the second-order derivative peak heights (RFU) and the concentrations of the analytes (mol L^-1) were obtained. The detection limits for all analytes in second-order derivative electropherograms were in the range of 3.8-15 microM. The intra-day RSDs for migration times and second-order derivative peak heights were less than 0.95% and 5.02%, respectively. The developed method was applied to the analysis of the coumarin compounds in herb plants with recoveries in the range of 94.7-102.1%. Although the detection sensitivity is lower than that of direct LIF, the method extends the application range of LIF detection.
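
    Where this record relies on second-order derivative peak heights for calibration, that step might be sketched in Python as below: a smoothed second derivative of the electropherogram is computed with a Savitzky-Golay filter and the magnitude of the derivative peak is read off for a linear calibration. The window length, polynomial order and synthetic peak are assumptions for illustration only.

      import numpy as np
      from scipy.signal import savgol_filter

      t = np.linspace(0, 5, 2000)                          # migration time axis (min)
      peak = 1.0 * np.exp(-0.5 * ((t - 2.5) / 0.03) ** 2)  # synthetic analyte peak (RFU)
      baseline = 0.2 + 0.05 * t                            # slowly varying background
      signal = peak + baseline + 0.005 * np.random.randn(t.size)

      # Smoothed second derivative: the slowly varying background is largely removed,
      # and the derivative peak height scales linearly with concentration.
      d2 = savgol_filter(signal, window_length=51, polyorder=3, deriv=2,
                         delta=t[1] - t[0])
      peak_height = -d2.min()                              # magnitude of the (negative) derivative peak
      print(peak_height)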

  20. Sensitivity of Optimal Solutions to Control Problems for Second Order Evolution Subdifferential Inclusions.

    PubMed

    Bartosz, Krzysztof; Denkowski, Zdzisław; Kalita, Piotr

    In this paper the sensitivity of optimal solutions to control problems described by second-order evolution subdifferential inclusions under perturbations of the state relations and of the cost functionals is investigated. First, we establish a new existence result for a class of such inclusions. Then, based on the theory of sequential Γ-convergence, we recall the abstract scheme concerning convergence of minimal values and minimizers. The abstract scheme works provided we can establish two properties: the Kuratowski convergence of the solution sets for the state relations and some complementary Γ-convergence of the cost functionals. Finally, these two properties are established for the case considered.

  1. Proceedings of the Second Annual Symposium for Nondestructive Evaluation of Bond Strength

    NASA Technical Reports Server (NTRS)

    Roberts, Mark J. (Compiler)

    1999-01-01

    Ultrasonics, microwaves, optically stimulated electron emission (OSEE), and computational chemistry approaches have shown relevance to bond strength determination. Nonlinear ultrasonic nondestructive evaluation methods, however, have proved the most effective of these for adhesive bond analysis. Changes in higher-order material properties arising from microstructural changes, measured with nonlinear ultrasonics, have been shown to correlate with bond strength. Nonlinear ultrasonic energy is an order of magnitude more sensitive than linear ultrasound to these material parameter changes and to acoustic velocity changes caused by the acoustoelastic effect when a bond is prestressed. Correlations between nonlinear ultrasonic measurements and the initiation of bond failures have also been measured. This paper reviews bond strength research efforts presented by university and industry experts at the Second Annual Symposium for Nondestructive Evaluation of Bond Strength organized by the NDE Sciences Branch at NASA Langley in November 1998.

  2. Probabilistic structural analysis using a general purpose finite element program

    NASA Astrophysics Data System (ADS)

    Riha, D. S.; Millwater, H. R.; Thacker, B. H.

    1992-07-01

    This paper presents an accurate and efficient method to predict the probabilistic response for structural response quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis and fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation with excellent accuracy.

  3. The Development of Perceptual Sensitivity to Second-Order Facial Relations in Children

    ERIC Educational Resources Information Center

    Baudouin, Jean-Yves; Gallay, Mathieu; Durand, Karine; Robichon, Fabrice

    2010-01-01

    This study investigated children's perceptual ability to process second-order facial relations. In total, 78 children in three age groups (7, 9, and 11 years) and 28 adults were asked to say whether the eyes were the same distance apart in two side-by-side faces. The two faces were similar on all points except the space between the eyes, which was…

  4. Fast calculation of the sensitivity matrix in magnetic induction tomography by tetrahedral edge finite elements and the reciprocity theorem.

    PubMed

    Hollaus, K; Magele, C; Merwa, R; Scharfetter, H

    2004-02-01

    Magnetic induction tomography of biological tissue is used to reconstruct the changes in the complex conductivity distribution by measuring the perturbation of an alternating primary magnetic field. To facilitate the sensitivity analysis and the solution of the inverse problem a fast calculation of the sensitivity matrix, i.e. the Jacobian matrix, which maps the changes of the conductivity distribution onto the changes of the voltage induced in a receiver coil, is needed. The use of finite differences to determine the entries of the sensitivity matrix does not represent a feasible solution because of the high computational costs of the basic eddy current problem. Therefore, the reciprocity theorem was exploited. The basic eddy current problem was simulated by the finite element method using symmetric tetrahedral edge elements of second order. To test the method various simulations were carried out and discussed.

  5. Efficient Analysis of Complex Structures

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.

    2000-01-01

    The various accomplishments achieved during this project are: (1) A survey of Neural Network (NN) applications using the MATLAB NN Toolbox in structural engineering, especially on equivalent continuum models (Appendix A). (2) Application of NNs and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B). (3) Development of an equivalent plate-model analysis (EPA) method for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars and ribs, with calculation of a range of test cases and comparison with measurements or FEA results (Appendix C). (4) Basic work on using second-order sensitivities to simulate wing modal response, a discussion of sensitivity evaluation approaches, and some results (Appendix D). (5) Establishment of a general methodology for simulating modal responses by direct application of NNs and by sensitivity techniques, in a design space composed of a number of design points; the two methods are compared through examples (Appendix E). (6) Establishment of a general methodology for efficient analysis of complex wing structures by indirect application of NNs: the NN-aided equivalent plate analysis. The neural networks are trained for this purpose over several design spaces, which can be applicable to the actual design of complex wings (Appendix F).

  6. Efficiency of unconstrained minimization techniques in nonlinear analysis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.; Knight, N. F., Jr.

    1978-01-01

    Unconstrained minimization algorithms have been critically evaluated for their effectiveness in solving structural problems involving geometric and material nonlinearities. The algorithms have been categorized as being zeroth, first, or second order depending upon the highest derivative of the function required by the algorithm. The sensitivity of these algorithms to the accuracy of derivatives clearly suggests using analytically derived gradients instead of finite difference approximations. The use of analytic gradients results in better control of the number of minimizations required for convergence to the exact solution.

  7. Intrathecal treatment with MK-801 suppresses thermal nociceptive responses and prevents c-fos immunoreactivity induced in rat lumbar spinal cord neurons.

    PubMed

    Huang, W; Simpson, R K

    1999-09-01

    Sensitization of the second-order neurons in the spinal dorsal horn after somatic noxious stimuli is partly mediated by the N-methyl-D-aspartate (NMDA) subtype of the glutamate receptor. These neurons also express c-Fos immunoreactivity in response to the somatic noxious stimuli. The present study assessed the influence of intrathecal pre-treatment with MK-801, a non-competitive antagonist of the NMDA receptor, on thermal sensitization following peripheral noxious heat stimulation. In addition, the influence of MK-801 on c-Fos immunoreactivity in the rat lumbar spinal cord neurons after the peripheral noxious heat was examined. Sprague-Dawley rats were subjected to intrathecal catheterization and administration of MK-801 or saline before and after noxious heat (52 degrees C) stimulation of the rat hindpaws. Thermal sensitization was tested after MK-801 (0.1 micromol per 10 microliters). Fos-like immunoreactivity was evaluated 2 h after noxious stimulation in a separate group of animals. MK-801 significantly increased the thermal withdrawal threshold by 60% following noxious heat stimulation and reduced c-Fos immunoreactivity in the second-order neurons by 70% in the dorsal horn. The study suggests that glutamate plays a pivotal role in the thermal nociceptive pathway and indicates that the NMDA receptor is necessary to maintain normal thermal sensitization, possibly by regulating c-fos gene expression in second-order neurons.

  8. Polarization rotation enhancement and scattering mechanisms in waveguide magnetophotonic crystals

    NASA Astrophysics Data System (ADS)

    Levy, Miguel; Li, Rong

    2006-09-01

    Intermodal coupling in photonic band gap optical channels in magnetic garnet films is found to leverage the nonreciprocal polarization rotation. Forward fundamental-mode to high-order mode backscattering yields the largest rotations. The underlying mechanism is traced to the dependence of the grating-coupling constant on the modal refractive index and profile of the propagating beam. Large changes in polarization near the band edges are observed in first and second orders. Extreme sensitivity to linear birefringence exists in second order.

  9. Layer-dependent second-order Raman intensity of MoS2 and WSe2: Influence of intervalley scattering

    NASA Astrophysics Data System (ADS)

    Qian, Qingkai; Zhang, Zhaofu; Chen, Kevin J.

    2018-04-01

    Acoustic-phonon Raman scattering, as a defect-induced second-order Raman scattering process (with the incident photon scattered by one acoustic phonon at the Brillouin-zone edge and momentum conservation fulfilled by defect scattering), is used as a sensitive tool to study the defects of transition-metal dichalcogenides (TMDs). Moreover, second-order Raman scattering processes are closely related to the valley depolarization of single-layer TMDs in potential valleytronic applications. Here, the layer dependence of the second-order Raman intensity of MoS2 and WSe2 is studied. The electronic band structures of MoS2 and WSe2 are modified by the layer thicknesses; hence, the resonance conditions for both first-order and second-order Raman scattering processes are tuned. In contrast to the first-order Raman scattering, second-order Raman scattering of MoS2 and WSe2 involves additional intervalley scattering of electrons by phonons with large momenta. As a result, the electron states that contribute most to the second-order Raman intensity are different from those contributing to the first-order process. A weaker layer-tuned resonance enhancement of second-order Raman intensity is observed for both MoS2 and WSe2. Specifically, when the incident laser has photon energy close to the optical band gap and the Raman spectra are normalized by the first-order Raman peaks, single-layer MoS2 or WSe2 has the strongest second-order Raman intensity. This layer-dependent second-order Raman intensity can be further utilized as an indicator to identify the layer number of MoS2 and WSe2.

  10. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    PubMed

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
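
    As an illustration of the local sensitivity analysis this record discusses, the sketch below computes normalized local sensitivity coefficients of a toy model output with respect to each parameter by central finite differences. The model, its parameters and the perturbation size are placeholders, not the thermodynamic transcription models of the study.

      import numpy as np

      def model(theta):
          """Toy stand-in for a model output (e.g. a predicted expression level)."""
          k_act, k_rep, coop = theta
          return k_act ** coop / (k_act ** coop + k_rep)

      def local_sensitivities(model, theta, rel_step=1e-4):
          theta = np.asarray(theta, dtype=float)
          y0 = model(theta)
          sens = np.empty(theta.size)
          for i in range(theta.size):
              d = rel_step * theta[i]
              up, down = theta.copy(), theta.copy()
              up[i] += d
              down[i] -= d
              # Normalized coefficient: (dy / y0) / (dtheta_i / theta_i)
              sens[i] = (model(up) - model(down)) / (2 * d) * theta[i] / y0
          return sens

      print(local_sensitivities(model, [2.0, 1.0, 1.5]))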

  11. A comparison of second order derivative based models for time domain reflectometry wave form analysis

    USDA-ARS?s Scientific Manuscript database

    Adaptive waveform interpretation with Gaussian filtering (AWIGF) and the second-order bounded mean oscillation operator Z^2(u,t,r) are TDR analysis methods based on second-order differentiation. AWIGF was originally designed for relatively long-probe (greater than 150 mm) TDR waveforms, while Z s...

  12. First- and second-order contrast sensitivity functions reveal disrupted visual processing following mild traumatic brain injury.

    PubMed

    Spiegel, Daniel P; Reynaud, Alexandre; Ruiz, Tatiana; Laguë-Beauvais, Maude; Hess, Robert; Farivar, Reza

    2016-05-01

    Vision is disrupted by traumatic brain injury (TBI), with vision-related complaints being amongst the most common in this population. Based on the neural responses of early visual cortical areas, injury to the visual cortex would be predicted to affect both 1st-order and 2nd-order contrast sensitivity functions (CSFs): the height and/or the cut-off of the CSF are expected to be affected by TBI. Previous studies have reported disruptions only in 2nd-order contrast sensitivity, but using a narrow range of parameters and divergent methodologies; no study has characterized the effect of TBI on the full CSF for both 1st- and 2nd-order stimuli. Such information is needed to properly understand the effect of TBI on contrast perception, which underlies all visual processing. Using a unified framework based on the quick contrast sensitivity function, we measured full CSFs for static and dynamic 1st- and 2nd-order stimuli. Our results provide a unique dataset showing alterations in sensitivity for both 1st- and 2nd-order visual stimuli. In particular, we show that TBI patients have increased sensitivity for 1st-order motion stimuli and decreased sensitivity to orientation-defined and contrast-defined 2nd-order stimuli. In addition, our data suggest that TBI patients' sensitivity for both 1st-order stimuli and 2nd-order contrast-defined stimuli is shifted towards higher spatial frequencies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. Chemometrics-enhanced high performance liquid chromatography-diode array detection strategy for simultaneous determination of eight co-eluted compounds in ten kinds of Chinese teas using second-order calibration method based on alternating trilinear decomposition algorithm.

    PubMed

    Yin, Xiao-Li; Wu, Hai-Long; Gu, Hui-Wen; Zhang, Xiao-Hua; Sun, Yan-Mei; Hu, Yong; Liu, Lu; Rong, Qi-Ming; Yu, Ru-Qin

    2014-10-17

    In this work, an attractive chemometrics-enhanced high performance liquid chromatography-diode array detection (HPLC-DAD) strategy was proposed for the simultaneous and fast determination of eight co-eluted compounds, including gallic acid, caffeine and six catechins, in ten kinds of Chinese teas by using a second-order calibration method based on the alternating trilinear decomposition (ATLD) algorithm. This new strategy proved to be a useful tool for handling the co-eluted peaks, uncalibrated interferences and baseline drifts existing in the chromatographic separation, benefiting from the "second-order advantage" and enabling the determination of gallic acid, caffeine and six catechins in tea infusions within 8 min under simple mobile phase conditions. The average recoveries of the analytes in two selected tea samples ranged from 91.7 to 103.1%, with standard deviations (SD) ranging from 1.9 to 11.9%. Figures of merit including sensitivity (SEN), selectivity (SEL), root-mean-square error of prediction (RMSEP) and limit of detection (LOD) were calculated to validate the accuracy of the proposed method. To further confirm the reliability of the method, a multiple reaction monitoring (MRM) method based on LC-MS/MS was employed for comparison, and the results of both methods were consistent with each other. Furthermore, as a universal strategy, the newly proposed analytical method was applied to the determination of gallic acid, caffeine and catechins in several other kinds of Chinese teas, including different grades and varieties. Finally, based on the quantitative results, principal component analysis (PCA) was used to conduct a cluster analysis of these Chinese teas. The green tea, Oolong tea and Pu-erh raw tea samples were classified successfully. All results demonstrated that the proposed method is accurate, sensitive, fast, universal and ideal for the rapid, routine analysis and discrimination of gallic acid, caffeine and catechins in Chinese tea samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Assessment of Fiber Chromatic Dispersion Based on Elimination of Second-Order Harmonics in Optical OFDM Single Sideband Modulation Using Mach Zehnder Modulator

    NASA Astrophysics Data System (ADS)

    Patel, Dhananjay; Singh, Vinay Kumar; Dalal, U. D.

    2016-07-01

    This work presents analytical and numerical investigations of the transmission performance of an optical Single Sideband (SSB) modulation technique generated by a Mach Zehnder Modulator (MZM) with a 90° and 120° hybrid coupler. It takes into account the problem of chromatic dispersion in single mode fibers in Passive Optical Networks (PON), which severely degrades the performance of the system. Considering the transmission length of the fiber, the SSB modulation generated by maintaining a phase shift of π/2 between the two electrodes of the MZM provides better receiver sensitivity. However, the power of higher-order harmonics generated due to the nonlinearity of the MZM is directly proportional to the modulation index, making the SSB look like a quasi-double sideband (DSB) and causing power fading due to chromatic dispersion. To eliminate one of the second-order harmonics, the SSB signal based on an MZM with a 120° hybrid coupler is simulated. An analytical model of conventional SSB using 90° and 120° hybrid couplers is established. The latter suppresses the unwanted (upper/lower) first-order and (lower/upper) second-order sidebands. For the analysis, a varying quadrature amplitude modulation (QAM) Orthogonal Frequency Division Multiplexing (OFDM) signal with a data rate of 5 Gb/s is upconverted using both SSB techniques and is transmitted over a distance of 75 km in Single Mode Fiber (SMF). The simulation results show that the SSB with the 120° hybrid coupler is more immune to chromatic dispersion than the conventional SSB technique, consistent with the theoretical analysis presented in the article.

  15. keV-Scale sterile neutrino sensitivity estimation with time-of-flight spectroscopy in KATRIN using self-consistent approximate Monte Carlo

    NASA Astrophysics Data System (ADS)

    Steinbrink, Nicholas M. N.; Behrens, Jan D.; Mertens, Susanne; Ranitzsch, Philipp C.-O.; Weinheimer, Christian

    2018-03-01

    We investigate the sensitivity of the Karlsruhe Tritium Neutrino Experiment (KATRIN) to keV-scale sterile neutrinos, which are promising dark matter candidates. Since the active-sterile mixing would lead to a second component in the tritium β-spectrum with a weak relative intensity of order sin^2θ ≲ 10^-6, additional experimental strategies are required to extract this small signature and to eliminate systematics. A possible strategy is to run the experiment in an alternative time-of-flight (TOF) mode, yielding differential TOF spectra in contrast to the integrating standard mode. In order to estimate the sensitivity from a reduced sample size, a new analysis method, called self-consistent approximate Monte Carlo (SCAMC), has been developed. The simulations show that an ideal TOF mode would be able to achieve a statistical sensitivity of sin^2θ ≈ 5 × 10^-9 at one σ, improving on the standard mode by approximately a factor of two. This relative benefit grows significantly if additional exemplary systematics are considered. A possible implementation of the TOF mode with existing hardware, called gated filtering, is investigated, which, however, comes at the price of a reduced average signal rate.

  16. Probabilistic durability assessment of concrete structures in marine environments: Reliability and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Yu, Bo; Ning, Chao-lie; Li, Bing

    2017-03-01

    A probabilistic framework for durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties under the environmental, material, structural and executional conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in various governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. Then the Nataf transformation was adopted to transform the non-normal random variables from the original physical space into the independent standard Normal space. After that the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparing with the second-order reliability method and the Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in the decision-making algorithm for the maintenance and repair of deteriorating concrete structures in marine environments.
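
    As a sketch of the first-order reliability method (FORM) step described in this record, the following Python code runs the standard HL-RF iteration in standard normal space for a simple, made-up limit state function. The limit state, its parameters and the convergence settings are illustrative assumptions, not the chloride-ingress durability model of the paper.

      import numpy as np
      from scipy.stats import norm

      def form_hlrf(g, n_vars, tol=1e-6, max_iter=100, h=1e-6):
          """HL-RF iteration: returns the reliability index beta and failure probability."""
          u = np.zeros(n_vars)                       # start at the origin of standard normal space
          for _ in range(max_iter):
              gu = g(u)
              grad = np.array([(g(u + h * e) - g(u - h * e)) / (2 * h)
                               for e in np.eye(n_vars)])
              u_new = grad * (grad @ u - gu) / (grad @ grad)
              if np.linalg.norm(u_new - u) < tol:
                  u = u_new
                  break
              u = u_new
          beta = np.linalg.norm(u)
          return beta, norm.cdf(-beta)

      # Illustrative limit state: "resistance minus load", expressed in u-space.
      def g(u):
          r = 4.0 + 1.0 * u[0]                       # resistance, mean 4, sd 1
          s = 2.0 + 0.8 * u[1]                       # load, mean 2, sd 0.8
          return r - s

      beta, pf = form_hlrf(g, n_vars=2)
      print(beta, pf)                                # analytic beta = 2/sqrt(1 + 0.64) ≈ 1.56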

  17. Development of the High-Order Decoupled Direct Method in Three Dimensions for Particulate Matter: Enabling Advanced Sensitivity Analysis in Air Quality Models

    EPA Science Inventory

    The high-order decoupled direct method in three dimensions for particulate matter (HDDM-3D/PM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to enable advanced sensitivity analysis. The major effort of this work is to develop high-order DDM sensitivity...

  18. Mothers' and Fathers' Sensitivity with Their Two Children: A Longitudinal Study from Infancy to Early Childhood

    ERIC Educational Resources Information Center

    Hallers-Haalboom, Elizabeth T.; Groeneveld, Marleen G.; van Berkel, Sheila R.; Endendijk, Joyce J.; van der Pol, Lotte D.; Linting, Mariëlle; Bakermans-Kranenburg, Marian J.; Mesman, Judi

    2017-01-01

    To examine the effects of child age and birth order on sensitive parenting, 364 families with 2 children were visited when the second-born children were 12, 24, and 36 months old, and their older siblings were on average 2 years older. Mothers showed higher levels of sensitivity than fathers at all assessments. Parental sensitivity increased from…

  19. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.

  20. DGSA: A Matlab toolbox for distance-based generalized sensitivity analysis of geoscientific computer experiments

    NASA Astrophysics Data System (ADS)

    Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef

    2016-12-01

    Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications, and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters, aiming to minimally affect uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.

  1. Three Studies on Configural Face Processing by Chimpanzees

    ERIC Educational Resources Information Center

    Parr, Lisa A.; Heintz, Matthew; Akamagwuna, Unoma

    2006-01-01

    Previous studies have demonstrated the sensitivity of chimpanzees to facial configurations. Three studies further these findings by showing this sensitivity to be specific to second-order relational properties. In humans, this type of configural processing requires prolonged experience and enables subordinate-level discriminations of many…

  2. Second-Order Factor Analysis as a Validity Assessment Tool: A Case Study Example Involving Perceptions of Stereotypic Love.

    ERIC Educational Resources Information Center

    Borrello, Gloria M.; Thompson, Bruce

    The calculation of second-order results in the validity assessment of measures and some useful interpretation aids are presented. First-order and second-order results give different and informative pictures of data dynamics. Several aspects of good practice in interpretation of second-order results are presented using data from 487 subjects…

  3. A Systematic Approach to Combustion Model Reduction and Lumping.

    DTIC Science & Technology

    1991-08-01

    very large values. In order to eliminate the unspecified threshold on the sensitivities, Thomas and Bowes (1961) and Adler and Enig (1964) proposed ... way to the stable node. The geometric definition of thermal runaway due to Adler and Enig (1964) and both sensitivity-based definitions by Boddington ... where V denotes the time of the temperature maximum. Instead of looking for a positive second-order derivative before V (Adler and Enig, 1964

  4. Reduced basis technique for evaluating the sensitivity coefficients of the nonlinear tire response

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Tanner, John A.; Peters, Jeanne M.

    1992-01-01

    An efficient reduced-basis technique is proposed for calculating the sensitivity of nonlinear tire response to variations in the design variables. The tire is modeled using a 2-D, moderate rotation, laminated anisotropic shell theory, including the effects of variation in material and geometric parameters. The vector of structural response and its first-order and second-order sensitivity coefficients are each expressed as a linear combination of a small number of basis vectors. The effectiveness of the basis vectors used in approximating the sensitivity coefficients is demonstrated by a numerical example involving the Space Shuttle nose-gear tire, which is subjected to uniform inflation pressure.
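
    A minimal numerical sketch of the reduced-basis idea used in this record: a full-order response vector, and analogously its first- and second-order sensitivity vectors, is approximated as a linear combination of a few basis vectors by least-squares projection. The matrices below are random placeholders standing in for finite element quantities, not the shell model of the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      n_dof, n_basis = 500, 6
      Phi = np.linalg.qr(rng.normal(size=(n_dof, n_basis)))[0]    # orthonormal basis vectors

      # Full-order response and one first-order sensitivity vector (placeholders).
      u_full = Phi @ rng.normal(size=n_basis) + 1e-3 * rng.normal(size=n_dof)
      du_full = Phi @ rng.normal(size=n_basis) + 1e-3 * rng.normal(size=n_dof)

      # Reduced coordinates via least squares, then reconstruction from the basis.
      q_u = Phi.T @ u_full             # with an orthonormal basis this is the LSQ solution
      q_du = Phi.T @ du_full
      print(np.linalg.norm(u_full - Phi @ q_u) / np.linalg.norm(u_full))
      print(np.linalg.norm(du_full - Phi @ q_du) / np.linalg.norm(du_full))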

  5. Probabilistic biosphere modeling for the long-term safety assessment of geological disposal facilities for radioactive waste using first- and second-order Monte Carlo simulation.

    PubMed

    Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald

    2018-10-01

    In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the influence of the shape of the probability distribution function (empirical distribution functions versus fitted lognormal functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter, as well as the role of interacting parameters, is studied. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for the input parameters suggest that a lognormal approximation is possibly not always an adequate representation of the aleatory uncertainty of a radioecological parameter. For the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the epistemic uncertainty is described using uncertain moments (mean, variance) while the distribution itself represents the aleatory uncertainty of the parameter. From the results obtained, the solution space of second-order Monte Carlo simulation is much larger than that from first-order Monte Carlo simulation. Therefore, the influence of the epistemic uncertainty of a radioecological parameter on the output is much larger than that caused by its aleatory uncertainty. Parameter interactions have a significant influence only in the upper percentiles of the distribution of results and only in the region of the upper percentiles of the model parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
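
    A compact Python sketch of the nested (second-order) Monte Carlo scheme discussed here: an outer loop samples the epistemically uncertain moments of a lognormal radioecological parameter, an inner loop samples the aleatory variability given those moments, and each outer draw yields one possible distribution of a toy "biosphere dose conversion factor". All distributions and the toy model are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(42)
      n_outer, n_inner = 200, 1000

      bdcf_curves = np.empty((n_outer, n_inner))
      for j in range(n_outer):
          # Epistemic layer: uncertain moments of the (log of the) parameter.
          mu = rng.normal(loc=0.0, scale=0.3)          # uncertain log-mean
          sigma = rng.uniform(0.3, 0.8)                # uncertain log-standard deviation
          # Aleatory layer: variability of the parameter given those moments.
          transfer = rng.lognormal(mean=mu, sigma=sigma, size=n_inner)
          bdcf_curves[j] = 1.0e-8 * transfer           # toy dose conversion model

      # Each row is one possible aleatory distribution; the spread across rows reflects
      # epistemic uncertainty, e.g. shown as a band of 95th percentiles.
      p95 = np.percentile(bdcf_curves, 95, axis=1)
      print(p95.min(), np.median(p95), p95.max())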

  6. Dynamics of neurons controlling movements of a locust hind leg. III. Extensor tibiae motor neurons.

    PubMed

    Newland, P L; Kondoh, Y

    1997-06-01

    Imposed movements of the apodeme of the femoral chordotonal organ (FeCO) of the locust hind leg elicit resistance reflexes in extensor and flexor tibiae motor neurons. The synaptic responses of the fast and slow extensor tibiae motor neurons (FETi and SETi, respectively) and the spike responses of SETi were analyzed with the use of the Wiener kernel white noise method to determine their response properties. The first-order Wiener kernels computed from soma recordings were essentially monophasic, or low passed, indicating that the motor neurons were primarily sensitive to the position of the tibia about the femorotibial joint. The responses of both extensor motor neurons had large nonlinear components. The second-order kernels of the synaptic responses of FETi and SETi had large on-diagonal peaks with two small off-diagonal valleys. That of SETi had an additional elongated valley on the diagonal, which was accompanied by two off-diagonal depolarizing peaks at a cutoff frequency of 58 Hz. These second-order components represent a half-wave rectification of the position-sensitive depolarizing response in FETi and SETi, and a delayed inhibitory input to SETi, indicating that both motor neurons were directionally sensitive. Model predictions of the responses of the motor neurons showed that the first-order (linear) characterization poorly predicted the actual responses of FETi and SETi to FeCO stimulation, whereas the addition of the second-order (nonlinear) term markedly improved the performance of the model. Simultaneous recordings from the soma and a neuropilar process of FETi showed that its synaptic responses to FeCO stimulation were phase delayed by about -30 degrees at 20 Hz, and reduced in amplitude by 30-40% when recorded in the soma. Similar configurations of the first and second-order kernels indicated that the primary process of FETi acted as a low-pass filter. Cross-correlation between a white noise stimulus and a unitized spike discharge of SETi again produced well-defined first- and second-order kernels that showed that the SETi spike response was also dependent on positional inputs. An elongated negative valley on the diagonal, characteristic of the second-order kernel of the synaptic response in SETi, was absent in the kernel from the spike component, suggesting that information is lost in the spike production process. The functional significance of these results is discussed in relation to the behavior of the locust.
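
    The white-noise (Wiener kernel) characterization used in this record can be sketched in a few lines of Python: the first-order kernel is estimated by cross-correlating the response with the Gaussian white-noise stimulus, and the second-order kernel from the second-order cross-correlation of the mean-subtracted response (Lee-Schetzen method; the diagonal estimate needs a further correction that is omitted here). The synthetic "neuron" below is an arbitrary placeholder, not the locust motor neuron data.

      import numpy as np

      rng = np.random.default_rng(0)
      n, mem = 20000, 40                               # samples and kernel memory (lags)
      x = rng.normal(0.0, 1.0, n)                      # Gaussian white-noise stimulus
      sigma2 = x.var()

      # Placeholder nonlinear system: linear filter followed by a squaring component.
      h = np.exp(-np.arange(mem) / 8.0)
      lin = np.convolve(x, h)[:n]
      y = lin + 0.3 * lin ** 2

      y0 = y.mean()
      k1 = np.array([np.mean(y[mem:] * x[mem - tau: n - tau])
                     for tau in range(mem)]) / sigma2

      k2 = np.empty((mem, mem))
      for t1 in range(mem):
          for t2 in range(mem):
              k2[t1, t2] = np.mean((y[mem:] - y0) *
                                   x[mem - t1: n - t1] * x[mem - t2: n - t2])
      k2 /= 2 * sigma2 ** 2                            # off-diagonal Lee-Schetzen estimate

      print(k1[:5])                                    # should roughly follow the filter h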

  7. Efficient Simulation of Wing Modal Response: Application of 2nd Order Shape Sensitivities and Neural Networks

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Liu, Youhua

    2000-01-01

    At the preliminary design stage of a wing structure, an efficient simulation, one needing little computation but yielding adequately accurate results for various response quantities, is essential in the search for an optimal design in a vast design space. In the present paper, methods using sensitivities up to second order, and the direct application of neural networks, are explored. The example problem is to determine the natural frequencies of a wing given the shape variables of the structure. It is shown that when sensitivities cannot be obtained analytically, the finite difference approach is usually more reliable than a semi-analytical approach provided an appropriate step size is used. The use of second-order sensitivities is shown to yield much better results than the case where only the first-order sensitivities are used. When neural networks are trained to relate the wing natural frequencies to the shape variables, negligible computational effort is needed to accurately determine the natural frequencies of a new design.
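
    The use of second-order sensitivities described here amounts to a quadratic Taylor approximation of each natural frequency in the shape variables, f(x) ≈ f(x0) + g·Δx + ½ Δx'HΔx, with the gradient g and Hessian H obtained by finite differences about the baseline design. The sketch below demonstrates that bookkeeping on an arbitrary placeholder function standing in for a frequency analysis; it is not the wing model of the paper.

      import numpy as np

      def freq(x):
          """Placeholder for an expensive analysis returning one natural frequency."""
          return 10.0 + 2.0 * x[0] - 1.5 * x[1] + 0.8 * x[0] * x[1] + 0.5 * x[1] ** 2

      def taylor2(f, x0, h=1e-3):
          """Quadratic surrogate of f about x0 from central finite differences."""
          x0 = np.asarray(x0, dtype=float)
          k = x0.size
          f0 = f(x0)
          e = np.eye(k) * h
          g = np.array([(f(x0 + e[i]) - f(x0 - e[i])) / (2 * h) for i in range(k)])
          H = np.empty((k, k))
          for i in range(k):
              for j in range(k):
                  H[i, j] = (f(x0 + e[i] + e[j]) - f(x0 + e[i] - e[j])
                             - f(x0 - e[i] + e[j]) + f(x0 - e[i] - e[j])) / (4 * h * h)
          return lambda x: f0 + g @ (x - x0) + 0.5 * (x - x0) @ H @ (x - x0)

      approx = taylor2(freq, x0=[0.0, 0.0])
      x_new = np.array([0.2, -0.1])
      print(freq(x_new), approx(x_new))   # for this quadratic test function the two agree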

  8. MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method

    USGS Publications Warehouse

    Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.

    2003-01-01

    A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
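
    A small Python sketch of the first-order second moment (FOSM) propagation step that this record describes: the head covariance is approximated as J Σ J', where J holds the sensitivities of head to the uncertain inputs and Σ is the input covariance matrix. The sensitivity matrix and covariance values below are placeholders, not MODFLOW 2000 output.

      import numpy as np

      # Sensitivities of head at 3 locations to 2 uncertain inputs (e.g. log-transmissivity
      # and recharge), as would be produced by the model's sensitivity process.
      J = np.array([[0.8, 120.0],
                    [0.5,  60.0],
                    [0.2,  15.0]])

      # Covariance of the uncertain inputs (variances on the diagonal, a cross term off it).
      Sigma = np.array([[0.04, 0.001],
                        [0.001, 1e-4]])

      head_cov = J @ Sigma @ J.T           # first-order (Taylor series) propagation of uncertainty
      head_std = np.sqrt(np.diag(head_cov))
      print(head_std)                      # standard deviation of head at each location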

  9. Investigation of Grating-Assisted Trimodal Interferometer Biosensors Based on a Polymer Platform.

    PubMed

    Liang, Yuxin; Zhao, Mingshan; Wu, Zhenlin; Morthier, Geert

    2018-05-10

    A grating-assisted trimodal interferometer biosensor is proposed and numerically analyzed. A long-period grating coupler, for adjusting the power between the fundamental mode and the second higher-order mode, is investigated and is shown to act like a conventional directional coupler adjusting the power between the two arms. The trimodal interferometer can achieve maximal fringe visibility when the powers of the two modes are adjusted to the same value by the grating coupler, which means that a better limit of detection can be expected. In addition, the second higher-order mode typically has a larger evanescent tail than the first higher-order mode used in bimodal interferometers, resulting in a higher sensitivity of the trimodal interferometer. The influence of fabrication tolerances on the performance of the designed interferometer is also investigated. The power difference between the two modes is insensitive to the fill factor of the grating but highly sensitive to the modulation depth. Finally, a sensitivity of 2050 × 2π/RIU (refractive index unit) and an output power extinction ratio of 43 dB are achieved.

  10. Real-time fringe pattern demodulation with a second-order digital phase-locked loop.

    PubMed

    Gdeisat, M A; Burton, D R; Lalor, M J

    2000-10-10

    The use of a second-order digital phase-locked loop (DPLL) to demodulate fringe patterns is presented. The second-order DPLL has better tracking ability and more noise immunity than the first-order loop. Consequently, the second-order DPLL is capable of demodulating a wider range of fringe patterns than the first-order DPLL. A basic analysis of the first- and the second-order loops is given, and a performance comparison between the first- and the second-order DPLL's in analyzing fringe patterns is presented. The implementation of the second-order loop in real time on a commercial parallel image processing system is described. Fringe patterns are grabbed and processed, and the resultant phase maps are displayed concurrently.
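
    A bare-bones Python sketch of a second-order digital phase-locked loop of the kind this record applies to fringe patterns, here tracking the phase of a noisy one-dimensional sinusoidal "fringe" signal. The proportional-plus-integral loop filter is what makes the loop second order; the gains, sample rate and test signal are illustrative assumptions, not the parameters of the paper.

      import numpy as np

      def second_order_dpll(signal, f0, fs, kp=0.15, ki=0.01):
          """Track the instantaneous phase of `signal`; returns the phase estimate per sample."""
          w0 = 2 * np.pi * f0 / fs          # nominal phase increment per sample
          phase, integ = 0.0, 0.0
          est = np.empty(signal.size)
          for i, s in enumerate(signal):
              err = -s * np.sin(phase)      # phase detector (~phase error for small errors;
                                            # the double-frequency term is attenuated by the loop)
              integ += ki * err             # integral path -> second-order loop
              est[i] = phase
              phase += w0 + kp * err + integ
          return est

      fs, f0 = 1000.0, 17.0                 # assumed sample rate and fringe frequency
      n = np.arange(4000)
      true_phase = 2 * np.pi * f0 / fs * n + 0.8 * np.sin(2 * np.pi * 0.3 * n / fs)
      fringe = np.cos(true_phase) + 0.05 * np.random.randn(n.size)
      est = second_order_dpll(fringe, f0, fs)
      print(np.std((est - true_phase + np.pi) % (2 * np.pi) - np.pi))   # residual phase error (rad)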

  11. Extending high-order flux operators on spherical icosahedral grids and their application in a Shallow Water Model for transporting the Potential Vorticity

    NASA Astrophysics Data System (ADS)

    Zhang, Y.

    2017-12-01

    The unstructured formulation of the third/fourth-order flux operators used by the Advanced Research WRF is extended twofold on spherical icosahedral grids. First, the fifth- and sixth-order flux operators of WRF are further extended, and the nominally second- to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. The fifth-order scheme also exhibits smaller sensitivity to the damping coefficient than the third-order scheme. Overall, the even-order schemes have higher limiter sensitivity than the odd-order schemes. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than second- and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, flux operators with higher damping coefficients, which essentially behaves like the Anticipated Potential Vorticity Method, present optimal results.

  12. Modal Response of Trapezoidal Wing Structures Using Second Order Shape Sensitivities

    NASA Technical Reports Server (NTRS)

    Liu, Youhua; Kapania, Rakesh K.

    2000-01-01

    The modal response of wing structures is very important for assessing their dynamic response, including dynamic aeroelastic instabilities. Moreover, in a recent study an efficient structural optimization approach was developed using structural modes to represent the static aeroelastic wing response (both displacement and stress). In this paper, the modal response of general trapezoidal wing structures is approximated using shape sensitivities up to second order. Different approaches for computing the derivatives are also investigated.

  13. Double dissociation between first- and second-order processing.

    PubMed

    Allard, Rémy; Faubert, Jocelyn

    2007-04-01

    To study the difference of sensitivity to luminance- (LM) and contrast-modulated (CM) stimuli, we compared LM and CM detection thresholds in LM- and CM-noise conditions. The results showed a double dissociation (no or little inter-attribute interaction) between the processing of these stimuli, which implies that both stimuli must be processed, at least at some point, by separate mechanisms and that both stimuli are not merged after a rectification process. A second experiment showed that the internal equivalent noise limiting the CM sensitivity was greater than the one limiting the carrier sensitivity, which suggests that the internal noise occurring before the rectification process is not limiting the CM sensitivity. These results support the hypothesis that a suboptimal rectification process partially explains the difference of LM and CM sensitivity.

  14. Low-order aberration sensitivity of eighth-order coronagraph masks

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart B.; Green, Joseph J.

    2005-01-01

    In a recent paper, Kuchner, Crepp, and Ge describe new image-plane coronagraph mask designs that reject to eighth order the leakage of starlight caused by image motion at the mask, resulting in a substantial relaxation of image centroiding requirements compared to previous fourth-order and second-order masks. They also suggest that the new masks are effective at rejecting leakage caused by low-order aberrations (e.g., focus, coma, and astigmatism). In this paper, we derive the sensitivity of eighth-order masks to aberrations of any order and provide simulations of coronagraph behavior in the presence of optical aberrations. We find that the masks leak light as the fourth power of focus, astigmatism, coma, and trefoil. This has tremendous performance advantages for the Terrestrial Planet Finder Coronagraph.

  15. Sensitivity analysis of conservative and reactive stream transient storage models applied to field data from multiple-reach experiments

    USGS Publications Warehouse

    Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.

    2005-01-01

    The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTM), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate data to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to transport parameters and effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. Similar to conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limb of the concentration breakthrough curve. ?? 2005 Elsevier Ltd. All rights reserved.

  16. Clinical evaluation and validation of laboratory methods for the diagnosis of Bordetella pertussis infection: Culture, polymerase chain reaction (PCR) and anti-pertussis toxin IgG serology (IgG-PT).

    PubMed

    Lee, Adria D; Cassiday, Pamela K; Pawloski, Lucia C; Tatti, Kathleen M; Martin, Monte D; Briere, Elizabeth C; Tondella, M Lucia; Martin, Stacey W

    2018-01-01

    The appropriate use of clinically accurate diagnostic tests is essential for the detection of pertussis, a poorly controlled vaccine-preventable disease. The purpose of this study was to estimate the sensitivity and specificity of different diagnostic criteria including culture, multi-target polymerase chain reaction (PCR), anti-pertussis toxin IgG (IgG-PT) serology, and the use of a clinical case definition. An additional objective was to describe the optimal timing of specimen collection for the various tests. Clinical specimens were collected from patients with cough illness at seven locations across the United States between 2007 and 2011. Nasopharyngeal and blood specimens were collected from each patient during the enrollment visit. Patients who had been coughing for ≤ 2 weeks were asked to return in 2-4 weeks for collection of a second, convalescent blood specimen. Sensitivity and specificity of each diagnostic test were estimated using three methods: pertussis culture as the "gold standard," composite reference standard analysis (CRS), and latent class analysis (LCA). Overall, 868 patients were enrolled and 13.6% were B. pertussis positive by at least one diagnostic test. In a sample of 545 participants with non-missing data on all four diagnostic criteria, culture was 64.0% sensitive, PCR was 90.6% sensitive, and both were 100% specific by LCA. CRS and LCA methods increased the sensitivity estimates for convalescent serology and the clinical case definition over the culture-based estimates. Culture and PCR were most sensitive when performed during the first two weeks of cough; serology was optimally sensitive after the second week of cough. Timing of specimen collection in relation to onset of illness should be considered when ordering diagnostic tests for pertussis. Consideration should be given to including IgG-PT serology as a confirmatory test in the Council of State and Territorial Epidemiologists (CSTE) case definition for pertussis.

  17. Adolescent, but not adult, rats exhibit ethanol-mediated appetitive second-order conditioning

    PubMed Central

    Pautassi, Ricardo Marcos; Myers, Mallory; Spear, Linda Patia; Molina, Juan Carlos; Spear, Norman E.

    2008-01-01

    Background Adolescent rats are less sensitive to the sedative effects of ethanol than older animals. They also seem to perceive the reinforcing properties of ethanol. However, unlike neonates or infants, ethanol-mediated appetitive behavior has yet to be clearly shown in adolescents. Appetitive ethanol reinforcement was assessed in adolescent (postnatal day 33, P33) and adult rats (P71) through second-order conditioning (SOC). Methods On P32 or P70 animals were intragastrically administered ethanol (0.5 or 2.0 g/kg) paired with intraoral pulses of sucrose (CS1, first-order conditioning phase). CS1 delivery took place either 5-20 (Early pairing) or 30-45 (Late pairing) min following ethanol. CS1 exposure and ethanol administration were separated by 240 min in unpaired controls. On P33 or P71, animals were presented the CS1 (second-order conditioning phase) while in a distinctive chamber (CS2). Then, they were tested for CS2 preference. Results Early and late paired adolescents, but not adults, had greater preference for the CS2 than controls, a result indicative of ontogenetic variation in ethanol-mediated reinforcement. During the CS1 - CS2 associative phase, paired adolescents given 2.0 g/kg ethanol wall-climbed more than controls. Blood and brain ethanol levels associated with the 0.5 and 2.0 g/kg doses at the onset of each conditioning phase did not differ substantially across age, with mean BECs of 38 and 112 mg %. Conclusions These data indicate age-related differences between adolescent and adult rats in terms of sensitivity to ethanol’s motivational effects. Adolescents exhibit high sensitivity for ethanol’s appetitive effects. These animals also showed EtOH-mediated behavioral activation during the second-order conditioning phase. The SOC preparation provides a valuable conditioning model for assessing ethanol’s motivational effects across ontogeny. PMID:18782343

  18. Individual variability in behavioral flexibility predicts sign-tracking tendency

    PubMed Central

    Nasser, Helen M.; Chen, Yu-Wei; Fiscella, Kimberly; Calu, Donna J.

    2015-01-01

    Sign-tracking rats show heightened sensitivity to food- and drug-associated cues, which serve as strong incentives for driving reward seeking. We hypothesized that this enhanced incentive drive is accompanied by an inflexibility when incentive value changes. To examine this we tested rats in Pavlovian outcome devaluation or second-order conditioning prior to the assessment of sign-tracking tendency. To assess behavioral flexibility we trained rats to associate a light with a food outcome. After the food was devalued by pairing with illness, we measured conditioned responding (CR) to the light during an outcome devaluation probe test. The level of CR during outcome devaluation probe test correlated with the rats' subsequent tracking tendency, with sign-tracking rats failing to suppress CR to the light after outcome devaluation. To assess Pavlovian incentive learning, we trained rats on first-order (CS+, CS−) and second-order (SOCS+, SOCS−) discriminations. After second-order conditioning, we measured CR to the second-order cues during a probe test. Second-order conditioning was observed across all rats regardless of tracking tendency. The behavioral inflexibility of sign-trackers has potential relevance for understanding individual variation in vulnerability to drug addiction. PMID:26578917

  19. Parametric study and global sensitivity analysis for co-pyrolysis of rape straw and waste tire via variance-based decomposition.

    PubMed

    Xu, Li; Jiang, Yong; Qiu, Rong

    2018-01-01

    In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R2-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlated the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at a 95% confidence interval; F-tests, lack-of-fit tests and normal probability plots of the residuals implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were obtained using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
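
    A hedged sketch of the kind of variance-based (Sobol') sensitivity calculation described above, assuming the SALib package is available. The quadratic response function and the factor names/bounds below are placeholders for the paper's fitted regression models, not the actual equations.

```python
# Sobol' first-, second- and total-order indices for a fitted response surface.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["blend_ratio", "heating_rate", "temperature"],  # illustrative factors
    "bounds": [[0.0, 1.0], [5.0, 40.0], [400.0, 900.0]],
}

def response(x):
    """Placeholder regression surface (e.g., mass loss) in the three factors."""
    b, h, t = x
    return 20 + 30 * b + 0.2 * h + 0.05 * t + 10 * b * h / 40.0

X = saltelli.sample(problem, 1024, calc_second_order=True)
Y = np.apply_along_axis(response, 1, X)

Si = sobol.analyze(problem, Y, calc_second_order=True)
print("First-order :", Si["S1"])
print("Total-order :", Si["ST"])
print("Second-order:", Si["S2"])   # pairwise interaction indices
```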

  20. An Analysis of Second-Order Autoshaping

    ERIC Educational Resources Information Center

    Ward-Robinson, Jasper

    2004-01-01

    Three mechanisms can explain second-order conditioning: (1) The second-order conditioned stimulus (CS2) could activate a representation of the first-order conditioned stimulus (CS1), thereby provoking the conditioned response (CR); The CS2 could enter into an excitatory association with either (2) the representation governing the CR, or (3) with a…

  1. An optimal policy for deteriorating items with time-proportional deterioration rate and constant and time-dependent linear demand rate

    NASA Astrophysics Data System (ADS)

    Singh, Trailokyanath; Mishra, Pandit Jagatananda; Pattanayak, Hadibandhu

    2017-12-01

    In this paper, an economic order quantity (EOQ) inventory model for a deteriorating item is developed with the following characteristics: (i) the demand rate is deterministic and two-staged, i.e., it is constant in the first part of the cycle and a linear function of time in the second part; (ii) the deterioration rate is time-proportional; (iii) shortages are not allowed to occur. The optimal cycle time and the optimal order quantity are derived by minimizing the total average cost. A simple solution procedure is provided to illustrate the proposed model. The article concludes with a numerical example and a sensitivity analysis of various parameters as illustrations of the theoretical results.
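
    A generic numerical sketch of the workflow the abstract describes (minimize a total-average-cost function over the cycle time, then sweep parameters one at a time). The cost function below is a simple classical EOQ-like placeholder, not the paper's deterioration model, and all parameter values are hypothetical.

```python
# Minimize a toy total-average-cost over cycle time T, then do a +/-10% sweep.
from scipy.optimize import minimize_scalar

def total_average_cost(T, demand=100.0, order_cost=50.0, holding_cost=2.0):
    """Toy total cost per unit time for cycle length T (ordering + holding cost)."""
    return order_cost / T + 0.5 * holding_cost * demand * T

base = {"demand": 100.0, "order_cost": 50.0, "holding_cost": 2.0}
opt = minimize_scalar(total_average_cost, bounds=(1e-3, 10.0), method="bounded",
                      args=(base["demand"], base["order_cost"], base["holding_cost"]))
print(f"Optimal cycle time T* = {opt.x:.3f}, cost = {opt.fun:.2f}")

# One-at-a-time sensitivity: perturb each parameter and re-optimize.
for name in base:
    for factor in (0.9, 1.1):
        p = dict(base); p[name] *= factor
        res = minimize_scalar(total_average_cost, bounds=(1e-3, 10.0), method="bounded",
                              args=(p["demand"], p["order_cost"], p["holding_cost"]))
        print(f"{name} x{factor:.1f}: T* = {res.x:.3f}, cost = {res.fun:.2f}")
```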

  2. Modeling of second-harmonic generation of circumferential guided wave propagation in a composite circular tube

    NASA Astrophysics Data System (ADS)

    Li, Mingliang; Deng, Mingxi; Gao, Guangjian; Xiang, Yanxun

    2018-05-01

    This paper investigates the modeling of second-harmonic generation (SHG) of circumferential guided wave (CGW) propagation in a composite circular tube, and then analyzes the influence of interfacial properties on the SHG effect of the primary CGW. Here the effect of SHG of primary CGW propagation is treated as a second-order perturbation to its linear wave response. Due to the convective nonlinearity and the inherent elastic nonlinearity of the material, there are second-order bulk driving forces and surface/interface driving stresses in the interior and at the surface/interface of a composite circular tube when a primary CGW mode propagates along its circumference. Based on the approach of modal expansion analysis for waveguide excitation, these second-order driving forces/stresses are regarded as the excitation sources that generate a series of double-frequency CGW modes constituting the second-harmonic field of the primary CGW propagation. It is found that the modal expansion coefficient of each double-frequency CGW mode is closely related to the interfacial stiffness constants used to describe the interfacial properties between the inner and outer circular parts of the composite tube. Furthermore, changes in the interfacial stiffness constants essentially influence the dispersion relation of CGW propagation, which remarkably affects the efficiency of cumulative SHG of primary CGW propagation. Finite element simulations of the response of cumulative SHG to the interfacial properties have also been performed. Both the theoretical analyses and the numerical simulations indicate that the effect of cumulative SHG is much more sensitive to changes in the interfacial properties than the primary CGW propagation itself. The potential of using the effect of cumulative SHG by primary CGW propagation to characterize a minor change in the interfacial properties is considered.

  3. Analysis of human knee osteoarthritic cartilage using polarization sensitive second harmonic generation microscopy

    NASA Astrophysics Data System (ADS)

    Kumar, Rajesh; Grønhaug, Kirsten M.; Romijn, Elisabeth I.; Drogset, Jon O.; Lilledahl, Magnus B.

    2014-05-01

    Osteoarthritis is one of the most prevalent joint diseases in the world. Although the cause of osteoarthritis is not exactly clear, the disease results in a degradation of the quality of the articular cartilage, including collagen and other extracellular matrix components. We have investigated alterations in the structure of collagen fibers in the cartilage tissue of the human knee using multiphoton microscopy. Due to their inherently high nonlinear susceptibility, ordered collagen fibers present in the cartilage tissue matrix produce strong second harmonic generation (SHG) signals. Significant morphological differences are found between different osteoarthritic grades of cartilage by SHG microscopy. Based on the polarization analysis of the SHG signal, we find that at a few locations hyaline cartilage (mainly type II collagen) is being replaced by fibrocartilage (mainly type I collagen), in agreement with earlier literature. To locate the different collagen types and quantify the alteration in fiber structure, we employ polarization-resolved SHG (p-SHG) microscopic analysis, also referred to as χ(2)-tensor imaging. Analysis of the p-SHG images obtained by excitation polarization measurements represents different tissue constituents with different numerical values at pixel-level resolution.

  4. Thou Shalt Be Reproducible! A Technology Perspective

    PubMed Central

    Mair, Patrick

    2016-01-01

    This article elaborates on reproducibility in psychology from a technological viewpoint. Modern open source computational environments that foster reproducibility throughout the whole research life cycle, and to which emerging psychology researchers should be sensitized, are shown and explained. First, data archiving platforms that make datasets publicly available are presented. Second, R is advocated as the data-analytic lingua franca in psychology for achieving reproducible statistical analysis. Third, dynamic report generation environments for writing reproducible manuscripts that integrate text, data analysis, and statistical outputs such as figures and tables in a single document are described. Supplementary materials are provided in order to get the reader started with these technologies. PMID:27471486

  5. Effects of Second-Order Sum- and Difference-Frequency Wave Forces on the Motion Response of a Tension-Leg Platform Considering the Set-down Motion

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Tang, Yougang; Li, Yan; Cai, Runbo

    2018-04-01

    This paper presents a study on the motion response of a tension-leg platform (TLP) under first- and second-order wave forces, including the mean-drift force and the difference- and sum-frequency forces. The second-order wave force is calculated using the full-field quadratic transfer function (QTF). The coupling between the horizontal motions (surge, sway and yaw) and the set-down motion is taken into consideration through the nonlinear restoring matrix. A time-domain analysis with a 50-yr random sea state is performed. A comparison of the results of different case studies is made to assess the influence of the second-order wave force on the motions of the platform. The analysis shows that the second-order wave force has a major impact on the motions of the TLP. The second-order difference-frequency wave force has an obvious influence on the low-frequency surge and sway motions, and will also induce a large set-down motion, which is an important part of the heave motion. In addition, the second-order sum-frequency force will induce a set of high-frequency roll and pitch motions. However, the second-order wave force has little influence on the yaw motion.

  6. Modeling tree growth and stable isotope ratios of white spruce in western Alaska.

    NASA Astrophysics Data System (ADS)

    Boucher, Etienne; Andreu-Hayles, Laia; Field, Robert; Oelkers, Rose; D'Arrigo, Rosanne

    2017-04-01

    Summer temperatures are assumed to exert a dominant control on the physiological processes driving forest productivity in interior Alaska. However, despite the recent warming of the last few decades, numerous lines of evidence indicate that the enhancing effect of summer temperatures on high-latitude forest populations has been weakening. First, satellite-derived indices of photosynthetic activity, such as the Normalized Difference Vegetation Index (NDVI, 1982-2005), show overall declines in productivity in the interior boreal forests. Second, some white spruce tree-ring series strongly diverge from summer temperatures during the second half of the 20th century, indicating a persistent loss of temperature sensitivity of tree-ring proxies. Thus, the physiological response of treeline forests to ongoing climate change cannot be accurately predicted, especially from correlation analysis alone. Here, we make use of a process-based dendroecological model (MAIDENiso) to elucidate the complex linkages between global warming, increasing atmospheric CO2 concentration ([CO2]), and the response of treeline white spruce stands in interior Alaska (Seward). In order to fully capture the array of processes controlling tree growth in the area, multiple physiological indicators of white spruce productivity are used as target variables: NDVI images, ring widths (RW), maximum density (MXD) and newly measured carbon and oxygen stable isotope ratios from ring cellulose. Based on these data, we highlight the processes and mechanisms responsible for the apparent loss of sensitivity of white spruce trees to recent climate warming and [CO2] increase, in order to elucidate the sensitivity and vulnerability of these trees to climate change.

  7. Whispering gallery mode resonators based on radiation-sensitive materials

    NASA Technical Reports Server (NTRS)

    Savchenkov, Anatoliy (Inventor); Maleki, Lutfollah (Inventor); Ilchenko, Vladimir (Inventor); Handley, Timothy A. (Inventor)

    2005-01-01

    Whispering gallery mode (WGM) optical resonators are formed of radiation-sensitive materials to allow for permanent tuning of their resonance frequencies in a controlled manner. Two WGM resonators may be cascaded to form a composite filter producing a second-order filter function, where at least one WGM resonator is formed of a radiation-sensitive material to allow for proper control of the overlap of the two filter functions.

  8. Emergence of band-pass filtering through adaptive spiking in the owl's cochlear nucleus

    PubMed Central

    MacLeod, Katrina M.; Lubejko, Susan T.; Steinberg, Louisa J.; Köppl, Christine; Peña, Jose L.

    2014-01-01

    In the visual, auditory, and electrosensory modalities, stimuli are defined by first- and second-order attributes. The fast time-pressure signal of a sound, a first-order attribute, is important, for instance, in sound localization and pitch perception, while its slow amplitude-modulated envelope, a second-order attribute, can be used for sound recognition. Ascending the auditory pathway from ear to midbrain, neurons increasingly show a preference for the envelope and are most sensitive to particular envelope modulation frequencies, a tuning considered important for encoding sound identity. The level at which this tuning property emerges along the pathway varies across species, and the mechanism of how this occurs is a matter of debate. In this paper, we target the transition between auditory nerve fibers and the cochlear nucleus angularis (NA). While the owl's auditory nerve fibers simultaneously encode the fast and slow attributes of a sound, one synapse further, NA neurons encode the envelope more efficiently than the auditory nerve. Using in vivo and in vitro electrophysiology and computational analysis, we show that a single-cell mechanism inducing spike threshold adaptation can explain the difference in neural filtering between the two areas. We show that spike threshold adaptation can explain the increased selectivity to modulation frequency, as input level increases in NA. These results demonstrate that a spike generation nonlinearity can modulate the tuning to second-order stimulus features, without invoking network or synaptic mechanisms. PMID:24790170

  9. A cavity radiometer for Earth albedo measurement, phase 1

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Radiometric measurement of the directional albedo of the Earth requires a detector with a flat response from 0.2 to 50 microns, a response time of about 2 seconds, a sensitivity of the order of 0.02 mW/sq cm, and a measurement uncertainty of less than 5 percent. Absolute cavity radiometers easily meet the spectral response and accuracy requirements for Earth albedo measurements, but the radiometers available today lack the necessary sensitivity and response time. The specific innovations addressed were the development of a very low thermal mass cavity and printed/deposited thermocouple sensing elements, which were incorporated into the radiometer design to produce a sensitive, fast-response, absolute radiometer. The cavity is applicable to the measurement of the reflected and radiated fluxes from the Earth's surface and lower atmosphere from low Earth orbit satellites. The effort consisted of requirements and thermal analysis; design, construction, and test of prototype elements of the black cavity and sensor elements to show proof of concept. The results obtained indicate that a black-body cavity sensor with an inherently flat response from 0.2 to 50 microns can be produced, with a sensitivity of at least 0.02 mW/sq cm per microvolt of output and a time constant of less than two seconds. Additional work is required to develop the required thermopile.

  10. Influence of second-order bracket-archwire misalignments on loads generated during third-order archwire rotation in orthodontic treatment.

    PubMed

    Romanyk, Dan L; George, Andrew; Li, Yin; Heo, Giseon; Carey, Jason P; Major, Paul W

    2016-05-01

    To investigate the influence of a rotational second-order bracket-archwire misalignment on the loads generated during third-order torque procedures. Specifically, torque in the second- and third-order directions was considered. An orthodontic torque simulator (OTS) was used to simulate the third-order torque between Damon Q brackets and 0.019 × 0.025-inch stainless steel archwires. Second-order misalignments were introduced in 0.5° increments from a neutral position, 0.0°, up to 3.0° of misalignment. A sample size of 30 brackets was used for each misalignment. The archwire was then rotated in the OTS from its neutral position up to 30° in 3° increments and then unloaded in the same increments. At each position, all forces and torques were recorded. Repeated-measures analysis of variance was used to determine if the second-order misalignments significantly affected torque values in the second- and third-order directions. From statistical analysis of the experimental data, it was found that the only statistically significant differences in third-order torque between a misaligned state and the neutral position occurred for 2.5° and 3.0° of misalignment, with mean differences of 2.54 Nmm and 2.33 Nmm, respectively. In addition, in pairwise comparisons of second-order torque for each misalignment increment, statistical differences were observed in all comparisons except for 0.0° vs 0.5° and 1.5° vs 2.0°. The introduction of a second-order misalignment during third-order torque simulation resulted in statistically significant differences in both second- and third-order torque response; however, the former is arguably clinically insignificant.

  11. Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models

    DTIC Science & Technology

    2015-03-16

    sensitivity value was the maximum uncertainty in that value estimated by the Sobol method. 2.4. Global Sensitivity Analysis of the Reduced Order Coagulation...sensitivity analysis, using the variance-based method of Sobol, to estimate which parameters controlled the performance of the reduced order model [69]. We...Environment. Comput. Sci. Eng. 2007, 9, 90–95. 69. Sobol, I. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates

  12. Comment on “Two statistics for evaluating parameter identifiability and error reduction” by John Doherty and Randall J. Hunt

    USGS Publications Warehouse

    Hill, Mary C.

    2010-01-01

    Doherty and Hunt (2009) present important ideas for first-order, second-moment sensitivity analysis, but five issues are discussed in this comment. First, considering the composite scaled sensitivity (CSS) jointly with parameter correlation coefficients (PCC) in a CSS/PCC analysis addresses the difficulties with CSS mentioned in the introduction. Second, their new parameter identifiability statistic is actually likely to do a poor job of assessing parameter identifiability in common situations. The statistic instead performs the very useful role of showing how model parameters are included in the estimated singular value decomposition (SVD) parameters; its close relation to CSS is shown. Third, the idea from p. 125 that a suitable truncation point for SVD parameters can be identified using the prediction variance is challenged using results from Moore and Doherty (2005). Fourth, the relative error reduction statistic of Doherty and Hunt is shown to belong to an emerging set of statistics here named perturbed calculated variance statistics. Finally, the perturbed calculated variance statistics OPR and PPR mentioned on p. 121 are shown to explicitly include the parameter null-space component of uncertainty. Indeed, OPR and PPR results that account for null-space uncertainty have appeared in the literature since 2000.
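
    A hedged sketch of the two statistics under discussion, computed from a model's sensitivity (Jacobian) matrix: composite scaled sensitivity as the root-mean-square of dimensionless scaled sensitivities, and parameter correlation coefficients from the (weighted) normal-equations covariance. The toy Jacobian, parameter values, and weights are illustrative only.

```python
# Composite scaled sensitivity (CSS) and parameter correlation coefficients (PCC).
import numpy as np

def css_and_pcc(jacobian, params, weights):
    """jacobian[i, j] = d(sim_i)/d(param_j); weights are observation weights."""
    J = np.asarray(jacobian, float)
    b = np.asarray(params, float)
    w = np.asarray(weights, float)
    nd = J.shape[0]
    # Dimensionless scaled sensitivities, then their root-mean-square per parameter.
    scaled = J * b[None, :] * np.sqrt(w)[:, None]
    css = np.sqrt(np.sum(scaled**2, axis=0) / nd)
    # Parameter variance-covariance (up to a common variance factor) -> correlations.
    cov = np.linalg.inv(J.T @ (w[:, None] * J))
    d = np.sqrt(np.diag(cov))
    pcc = cov / np.outer(d, d)
    return css, pcc

J = np.array([[2.0, 0.5], [1.5, 0.4], [0.3, 1.2], [0.1, 1.0]])
css, pcc = css_and_pcc(J, params=[10.0, 5.0], weights=[1.0, 1.0, 0.5, 0.5])
print("CSS:", css)
print("PCC:\n", pcc)
```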

  13. Textural content in 3T MR: an image-based marker for Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Bharath Kumar, S. V.; Mullick, Rakesh; Patil, Uday

    2005-04-01

    In this paper, we propose a study that investigates the first-order and second-order distributions of T2 images from magnetic resonance (MR) scans for an age-matched data set of 24 Alzheimer's disease and 17 normal patients. The study is motivated by the desire to analyze brain iron uptake in the hippocampus of Alzheimer's patients, which is captured by low T2 values. Since excess iron deposition occurs locally in certain regions of the brain, we are motivated to investigate the spatial distribution of T2, which is captured by higher-order statistics. Based on the first-order and second-order distributions of T2 (the latter involving the gray-level co-occurrence matrix), we show that the second-order statistics, which capture the textural content of the T2 data, provide features with sensitivity >90% (at 80% specificity). Hence, we argue that the different texture characteristics of T2 in the hippocampus of Alzheimer's and normal patients could be used as an early indicator of Alzheimer's disease.
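
    A hedged illustration of the second-order (texture) statistics mentioned above, using a gray-level co-occurrence matrix from scikit-image (function names per recent releases; older versions spell them greycomatrix/greycoprops). The random array stands in for a quantized T2 map of a hippocampal region of interest.

```python
# Second-order texture features via a gray-level co-occurrence matrix (GLCM).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
t2_roi = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)  # quantized T2 values in an ROI

glcm = graycomatrix(t2_roi, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)

for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())

# First-order statistics, for comparison with the second-order features above.
print("mean", t2_roi.mean(), "std", t2_roi.std())
```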

  14. Performance of the STIS CCD Dark Rate Temperature Correction

    NASA Astrophysics Data System (ADS)

    Branton, Doug; STScI STIS Team

    2018-06-01

    Since July 2001, the Space Telescope Imaging Spectrograph (STIS) onboard Hubble has operated on its Side-2 electronics due to a failure in the primary Side-1 electronics. While nearly identical, Side-2 lacks a functioning temperature sensor for the CCD, introducing a variability in the CCD operating temperature. Previous analysis utilized the CCD housing temperature telemetry to characterize the relationship between the housing temperature and the dark rate. It was found that a first-order 7%/°C uniform dark correction demonstrated a considerable improvement in the quality of dark subtraction on Side-2 era CCD data, and that value has been used on all Side-2 CCD darks since. In this report, we show how this temperature correction has performed historically. We compare the current 7%/°C value against the ideal first-order correction at a given time (which can vary between ~6%/°C and ~10%/°C) as well as against a more complex second-order correction that applies a unique slope to each pixel as a function of dark rate and time. At worst, the current correction has performed ~1% worse than the second-order correction. Additionally, we present initial evidence suggesting that the variability in pixel temperature-sensitivity is significant enough to warrant a temperature correction that considers pixels individually rather than correcting them uniformly.
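
    A minimal sketch of a uniform first-order temperature scaling of a reference dark frame of the kind described above (7%/°C with housing temperature). The reference temperature, array values, and sign convention are assumptions for illustration, not the STIS pipeline implementation.

```python
# Scale a reference dark frame linearly with CCD housing temperature.
import numpy as np

def scale_dark(dark_ref, t_obs, t_ref=18.0, slope=0.07):
    """Scale a reference dark (e-/s) to the housing temperature of the observation."""
    return dark_ref * (1.0 + slope * (t_obs - t_ref))

dark_ref = np.full((4, 4), 0.01)          # toy reference dark rate, e-/s/pixel
print(scale_dark(dark_ref, t_obs=19.5))   # warmer housing -> higher dark rate
```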

  15. Nonlinear vibrational spectroscopy of surfactants at liquid interfaces

    NASA Astrophysics Data System (ADS)

    Miranda, Paulo Barbeitas

    Surfactants are widely used to modify physical and chemical properties of interfaces. They play an important role in many technological problems. Surfactant monolayers are also of great scientific interest because they are two-dimensional systems that may exhibit a very rich phase transition behavior and can also be considered as a model system for biological interfaces. In this Thesis, we use a second-order nonlinear optical technique (Sum-Frequency Generation - SFG) to obtain vibrational spectra of surfactant monolayers at liquid/vapor and solid/liquid interfaces. The technique has several advantages: it is intrinsically surface-specific, can be applied to buried interfaces, has submonolayer sensitivity and is remarkably sensitive to the conformational order of surfactant monolayers. The first part of the Thesis is concerned with surfactant monolayers at the air/water interface (Langmuir films). Surface crystallization of an alcohol Langmuir film and of liquid alkanes are studied and their phase transition behaviors are found to be of different nature, although driven by similar intermolecular interactions. The effect of crystalline order of Langmuir monolayers on the interfacial water structure is also investigated. It is shown that water forms a well-ordered hydrogen-bonded network underneath an alcohol monolayer, in contrast to a fatty acid monolayer which induces a more disordered structure. In the latter case, ionization of the monolayer becomes more significant with increase of the water pH value, leading to an electric-field-induced ordering of interfacial water molecules. We also show that the orientation and conformation of fairly complicated molecules in a Langmuir monolayer can be completely mapped out using a combination of SFG and second harmonic generation (SHG). For a quantitative analysis of molecular orientation at an interface, local-field corrections must be included. The second part is a study of self-assembled surfactant monolayers at the solid/liquid interface. It is shown that the conformation of a monolayer adsorbed onto a solid substrate and immersed in a liquid is highly dependent on the monolayer surface density and on the nature of intermolecular interactions in the liquid. Fully packed monolayers are well ordered in any environment due to strong surfactant-surfactant interactions and limited liquid penetration into the monolayer. In contrast, loosely packed monolayers are very sensitive to the liquid environment. Non-polar liquids cause a mild increase in the surfactant conformational disorder. Polar liquids induce more disorder and hydrogen-bonding liquids produce highly disordered conformations due to the hydrophobic effect. When immersed in alkanes, under certain conditions the surfactant chains may become highly ordered due to their interaction with the liquid molecules (chain-chain interaction). In the case of long-chain alcohols, competition between the hydrophobic effect and chain-chain interaction is observed.

  16. Among overweight middle-aged men, first-borns have lower insulin sensitivity than second-borns

    PubMed Central

    Albert, Benjamin B.; de Bock, Martin; Derraik, José G. B.; Brennan, Christine M.; Biggs, Janene B.; Hofman, Paul L.; Cutfield, Wayne S.

    2014-01-01

    We aimed to assess whether birth order affects metabolism and body composition in overweight middle-aged men. We studied 50 men aged 45.6 ± 5.5 years, who were overweight (BMI 27.5 ± 1.7 kg/m2) but otherwise healthy in Auckland, New Zealand. These included 26 first-borns and 24 second-borns. Insulin sensitivity was assessed by the Matsuda method from an oral glucose tolerance test. Other assessments included DXA-derived body composition, lipid profiles, 24-hour ambulatory blood pressure, and carotid intima-media thickness. First-born men were 6.9 kg heavier (p = 0.013) and had greater BMI (29.1 vs 27.5 kg/m2; p = 0.004) than second-borns. Insulin sensitivity in first-born men was 33% lower than in second-borns (4.38 vs 6.51; p = 0.014), despite adjustment for fat mass. There were no significant differences in ambulatory blood pressure, lipid profile or carotid intima-media thickness between first- and second-borns. Thus, first-born adults may be at a greater risk of metabolic and cardiovascular diseases. PMID:24503677
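
    For reference, the Matsuda whole-body insulin sensitivity index mentioned above is conventionally computed as 10000 / sqrt(G0 × I0 × Gmean × Imean) from OGTT glucose (mg/dL) and insulin (µU/mL) values; the sketch below uses purely illustrative sample values.

```python
# Matsuda insulin sensitivity index from oral glucose tolerance test (OGTT) data.
import numpy as np

def matsuda_index(glucose, insulin):
    """glucose, insulin: arrays over OGTT time points, index 0 = fasting."""
    g, i = np.asarray(glucose, float), np.asarray(insulin, float)
    return 10000.0 / np.sqrt(g[0] * i[0] * g.mean() * i.mean())

glucose = [95, 160, 150, 130, 110]   # mg/dL at 0, 30, 60, 90, 120 min (illustrative)
insulin = [10, 70, 60, 45, 30]       # microU/mL (illustrative)
print(f"Matsuda index: {matsuda_index(glucose, insulin):.2f}")
```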

  17. An alternative assessment of second-order closure models in turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Speziale, Charles G.; Gatski, Thomas B.

    1994-01-01

    The performance of three recently proposed second-order closure models is tested in benchmark turbulent shear flows. Both homogeneous shear flow and the log-layer of an equilibrium turbulent boundary layer are considered for this purpose. An objective analysis of the results leads to an assessment of these models that stands in contrast to that recently published by other authors. A variety of pitfalls in the formulation and testing of second-order closure models are uncovered by this analysis.

  18. Sensitivity Analysis in Sequential Decision Models.

    PubMed

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
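
    A hedged sketch of the general idea described above, under simplifying assumptions: solve a tiny two-state MDP by value iteration for the base case, then sample the uncertain parameters and record how often the base-case policy remains optimal (the quantity summarized by a policy acceptability curve). The MDP structure, distributions, and names are illustrative, not the authors' case study.

```python
# Probabilistic sensitivity analysis of the optimal policy of a small MDP.
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """P[a, s, s'] transition probs, R[a, s] rewards; returns the optimal policy."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum("asn,n->as", P, V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=0)
        V = V_new

def build_mdp(p_heal, reward_treat):
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],              # action 0: watchful waiting
                  [[p_heal, 1 - p_heal], [0.5, 0.5]]])   # action 1: treat
    R = np.array([[1.0, 0.0],
                  [reward_treat, -0.5]])
    return P, R

base_policy = value_iteration(*build_mdp(p_heal=0.95, reward_treat=0.8))

rng = np.random.default_rng(1)
n_samples, agree = 500, 0
for _ in range(n_samples):
    p_heal = rng.beta(19, 1)                 # uncertainty in a transition probability
    reward_treat = rng.normal(0.8, 0.1)      # uncertainty in a reward parameter
    policy = value_iteration(*build_mdp(p_heal, reward_treat))
    agree += int(np.array_equal(policy, base_policy))

print(f"Base-case policy {base_policy} is optimal in {agree / n_samples:.1%} of samples")
```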

  19. Apomorphine conditioning and sensitization: the paired/unpaired treatment order as a new major determinant of drug conditioned and sensitization effects.

    PubMed

    de Matos, Liana Wermelinger; Carey, Robert J; Carrera, Marinete Pinheiro

    2010-09-01

    Repeated treatments with psychostimulant drugs generate behavioral sensitization. In the present study we employed a paired/unpaired protocol to assess the effects of repeated apomorphine (2.0 mg/kg) treatments upon locomotion behavior. In the first experiment we assessed the effects of conditioning upon apomorphine sensitization. Neither the extinction of the conditioned response nor a counter-conditioning procedure in which we paired an inhibitory treatment (apomorphine 0.05 mg/kg) with the previously established conditioned stimulus modified the sensitization response. In the second experiment, we administered the paired/unpaired protocol in two phases. In the second phase, we reversed the paired/unpaired protocol. Following the first phase, the paired group alone exhibited conditioned locomotion in the vehicle test and a sensitization response. In the second phase, the initial unpaired group which received 5 paired apomorphine trials during the reversal phase did not develop a conditioned response but developed a potentiated sensitization response. This dissociation of the conditioned response from the sensitization response is attributed to an apomorphine anti-habituation effect that can generate a false positive Pavlovian conditioned response effect. The potentiated sensitization response induced by the treatment reversal protocol points to an important role for the sequential experience of the paired/unpaired protocol in behavioral sensitization. Copyright © 2010 Elsevier Inc. All rights reserved.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donnelly, H.; Fullwood, R.; Glancy, J.

    This is the second volume of a two-volume report on the VISA method for evaluating safeguards at fixed-site facilities. This volume contains appendices that support the description of the VISA concept and the initial working version of the method, VISA-1, presented in Volume I. The information is separated into four appendices, each describing details of one of the four analysis modules that comprise the analysis sections of the method. The first appendix discusses the Path Analysis methodology, applies it to a Model Fuel Facility, and describes the computer codes that are being used; introductory material on Path Analysis is given in Chapter 3.2.1 and Chapter 4.2.1 of Volume I. The second appendix deals with Detection Analysis, specifically the schemes used in VISA-1 for classifying adversaries and the methods proposed for evaluating individual detection mechanisms in order to build the data base required for detection analysis. Examples of evaluations on identity-access systems, SNM portal monitors, and intrusion devices are provided. The third appendix describes the Containment Analysis overt-segment path ranking, the Monte Carlo engagement model, the network simulation code, the delay mechanism data base, and the results of a sensitivity analysis. The last appendix presents general equations used in Interruption Analysis for combining covert-overt segments and compares them with equations given in Volume I, Chapter 3.

  1. Retrieval of complex χ(2) parts for quantitative analysis of sum-frequency generation intensity spectra

    PubMed Central

    Hofmann, Matthias J.; Koelsch, Patrick

    2015-01-01

    Vibrational sum-frequency generation (SFG) spectroscopy has become an established technique for in situ surface analysis. While spectral recording procedures and hardware have been optimized, unique data analysis routines have yet to be established. The SFG intensity is related to probing geometries and properties of the system under investigation such as the absolute square of the second-order susceptibility, |χ(2)|². A conventional SFG intensity measurement does not grant access to the complex parts of χ(2) unless further assumptions have been made. It is therefore difficult, sometimes impossible, to establish a unique fitting solution for SFG intensity spectra. Recently, interferometric phase-sensitive SFG or heterodyne detection methods have been introduced to measure real and imaginary parts of χ(2) experimentally. Here, we demonstrate that iterative phase-matching between complex spectra retrieved from maximum entropy method analysis and fitting of intensity SFG spectra (iMEMfit) leads to a unique solution for the complex parts of χ(2) and enables quantitative analysis of SFG intensity spectra. A comparison between complex parts retrieved by iMEMfit applied to intensity spectra and phase sensitive experimental data shows excellent agreement between the two methods. PMID:26450297

  2. Optimization of shallow arches against instability using sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Kamat, Manohar P.

    1987-01-01

    The author discusses the problem of optimization of shallow frame structures, which involve a coupling of axial and bending responses. A shallow arch of a given shape and given weight is optimized such that its limit point load is maximized. The cross-sectional area A(x) and the moment of inertia I(x) of the arch obey the relationship I(x) = ρ[A(x)]^n, with n = 1, 2, 3, where ρ is a specified constant. Analysis of the arch for its limit point calculation involves a geometrically nonlinear analysis, which is performed using a corotational formulation. The optimization is carried out using a second-order projected Lagrangian algorithm, and the sensitivity derivatives of the critical load parameter with respect to the areas of the finite elements of the arch are calculated using implicit differentiation. Results are presented for an arch of a specified rise-to-span ratio under two different loadings, and the limitations of the approach for intermediate-rise arches are addressed.

  3. New ion trap for atomic frequency standard applications

    NASA Technical Reports Server (NTRS)

    Prestage, J. D.; Dick, G. J.; Maleki, L.

    1989-01-01

    A novel linear ion trap that permits storage of a large number of ions with reduced susceptibility to the second-order Doppler effect caused by the radio frequency (RF) confining fields has been designed and built. This new trap should store about 20 times the number of ions a conventional RF trap stores with no corresponding increase in second-order Doppler shift from the confining field. In addition, the sensitivity of this shift to trapping parameters, i.e., RF voltage, RF frequency, and trap size, is greatly reduced.
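
    For orientation, the second-order Doppler effect referred to above is the relativistic time-dilation shift, Δf/f = −⟨v²⟩/(2c²). The tiny sketch below evaluates this fractional shift for an assumed RMS ion velocity; the velocity value is illustrative only.

```python
# Fractional second-order Doppler (time-dilation) shift for a given RMS velocity.
c = 299_792_458.0            # speed of light, m/s

def second_order_doppler(v_rms):
    return -v_rms**2 / (2.0 * c**2)

print(f"{second_order_doppler(v_rms=100.0):.3e}")   # roughly -5.6e-14 for 100 m/s
```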

  4. Pharmacoeconomic analysis of recombinant factor VIIa versus APCC in the treatment of minor-to-moderate bleeds in hemophilia patients with inhibitors.

    PubMed

    Joshi, Ashish V; Stephens, Jennifer M; Munro, Vicki; Mathew, Prasad; Botteman, Marc F

    2006-01-01

    To compare the cost-effectiveness of three treatment regimens using recombinant activated Factor VII (rFVIIa), NovoSeven, and activated prothrombin-complex concentrate (APCC), FEIBA VH, for home treatment of minor-to-moderate bleeds in hemophilia patients with inhibitors. A literature-based, decision-analytic model was developed to compare three treatment regimens. The regimens consisting of first-, second-, and third-line treatments were: rFVIIa-rFVIIa-rFVIIa; APCC-rFVIIa-rFVIIa; and APCC-APCC-rFVIIa. Patients not responding to first-line treatment were administered second-line treatment, and those failing second-line received third-line treatment. Using literature and expert opinion, the model structure and base-case inputs were adapted to the US from a previously published analysis. The percentages of evaluable bleeds controlled with rFVIIa and APCC were obtained from published literature. Drug costs (2005 US$) based on average wholesale price were included in the base-case model. Univariate and probabilistic sensitivity analyses (second-order Monte Carlo simulation) were conducted by varying the efficacy, re-bleeding rates, patient weight, and dosing to ascertain robustness of the model. In the base-case analysis, the average cost per resolved bleed using rFVIIa as first-, second-, and third-line treatment was $28,076. Using APCC as first-line and rFVIIa as second- and third-line treatment resulted in an average cost per resolved bleed of $30,883, whereas the regimen using APCC as first- and second-line, and rFVIIa as third-line treatment was the most expensive, with an average cost per resolved bleed of $32,150. Cost offsets occurred for the rFVIIa-only regimen through avoidance of second and third lines of treatment. In probabilistic sensitivity analyses, the rFVIIa-only strategy was the least expensive strategy more than 68% of the time. The management of minor-to-moderate bleeds extends beyond the initial line of treatment, and should include the economic impact of re-bleeding and failures over multiple lines of treatment. In the majority of cases, the rFVIIa-only regimen appears to be a less expensive treatment option in inhibitor patients with minor-to-moderate bleeds over three lines of treatment.

  5. The cross-cut statistic and its sensitivity to bias in observational studies with ordered doses of treatment.

    PubMed

    Rosenbaum, Paul R

    2016-03-01

    A common practice with ordered doses of treatment and ordered responses, perhaps recorded in a contingency table with ordered rows and columns, is to cut or remove a cross from the table, leaving the outer corners (that is, the high-versus-low dose, high-versus-low response corners) and from these corners to compute a risk or odds ratio. This little remarked but common practice seems to be motivated by the oldest and most familiar method of sensitivity analysis in observational studies, proposed by Cornfield et al. (1959), which says that to explain a population risk ratio purely as bias from an unobserved binary covariate, the prevalence ratio of the covariate must exceed the risk ratio. Quite often, the largest risk ratio, hence the one least sensitive to bias by this standard, is derived from the corners of the ordered table with the central cross removed. Obviously, the corners use only a portion of the data, so a focus on the corners has consequences for the standard error as well as for bias, but sampling variability was not a consideration in this early and familiar form of sensitivity analysis, where point estimates replaced population parameters. Here, this cross-cut analysis is examined with the aid of design sensitivity and the power of a sensitivity analysis. © 2015, The International Biometric Society.

  6. Estimation of groundwater recharge parameters by time series analysis

    USGS Publications Warehouse

    Naff, Richard L.; Gutjahr, Allan L.

    1983-01-01

    A model is proposed that relates water level fluctuations in a Dupuit aquifer to effective precipitation at the top of the unsaturated zone. Effective precipitation, defined herein as the portion of precipitation that becomes recharge, is related to precipitation measured in a nearby gage by a two-parameter function. A second-order stationary assumption is used to connect the spectra of effective precipitation and water level fluctuations. Measured precipitation is assumed to be Gaussian, in order to develop a transfer function that relates the spectra of measured and effective precipitation. A nonlinear least squares technique is proposed for estimating the parameters of the effective-precipitation function. Although sensitivity analyses indicate difficulties that may be encountered in the estimation procedure, the methods developed did yield convergent estimates for two case studies.

  7. Transitioning the second-generation antihistamines to over-the-counter status: a cost-effectiveness analysis.

    PubMed

    Sullivan, Patrick W; Follin, Sheryl L; Nichol, Michael B

    2003-12-01

    A U.S. Food and Drug Administration advisory committee deemed the second-generation antihistamines (SGA) safe for over-the-counter use against the preliminary opposition of the manufacturers. As a result, loratadine is now available over-the-counter. First-generation antihistamines (FGA) are associated with an increased risk of unintentional injuries, fatalities, and reduced productivity. Access to SGA over-the-counter could result in decreased use of FGA, thereby reducing deleterious outcomes. The societal impact of transitioning this class of medications from prescription to over-the-counter status has important policy implications. To examine the cost-effectiveness of transitioning SGA to over-the-counter status from a societal perspective. A simulation model of the decision to transition SGA to over-the-counter status was compared with retaining prescription-only status for a hypothetical cohort of individuals with allergic rhinitis in the United States. Estimates of costs and effectiveness were obtained from the medical literature and national surveys. Sensitivity analysis was performed using a second-order Monte Carlo simulation. Discounted, quality-adjusted life-years saved as a result of amelioration of allergic rhinitis symptoms and avoidance of motor vehicle, occupational, public and home injuries and fatalities; discounted direct and indirect costs. Availability of SGA over-the-counter was associated with annual savings of 4 billion dollars (2.4-5.3 billion dollars) or 100 dollars (64-137 dollars) per allergic rhinitis sufferer and 135,061 time-discounted quality-adjusted life years (84,913-191,802). The sensitivity analysis provides evidence in support of these results. Making SGA available over-the-counter is both cost-saving and more effective for society, largely as a result of reduced adverse outcomes associated with FGA-induced sedation. Further study is needed to determine the differential impact on specific vulnerable populations.

  8. Autonomy-connectedness and internalizing-externalizing personality psychopathology, among outpatients.

    PubMed

    Bachrach, Nathan; Bekker, Marrie H J; Croon, Marcel A

    2013-07-01

    The aims of this research were, first, to investigate gender differences in levels of autonomy-connectedness, Axis I Psychopathology, and the higher-order factors of internalizing and externalizing personality psychopathology and, second, to investigate the associations between these variables. The design of this research is cross-sectional and multicentered. We used self-report questionnaires, factor analysis, and regression analysis. We found evidence for a significant role of autonomy-connectedness in Axis I Psychopathology. This was especially true for women, who were found to be more sensitive to others, and sensitivity to others was strongly associated with Axis I Psychopathology. Possibly due to the research sample, no evidence was found for an association of autonomy-connectedness with externalizing psychopathology. As to the role of autonomy-connectedness in internalizing psychopathology, we found that a lack of self-awareness or of the capacity to manage new situations, combined with a high sensitivity to others, was associated with internalizing psychopathology. Women appeared to be more sensitive to others and to report higher levels of Axis I Psychopathology than men. We conclude that autonomy-connectedness plays an important role in Axis I Psychopathology as well as in internalizing Axis II pathology. Treatment of Axis I and internalizing Axis II psychopathology should therefore also focus on autonomy problems. © 2012 Wiley Periodicals, Inc.

  9. Effect of mesh distortion on the accuracy of transverse shear stresses and their sensitivity coefficients in multilayered composites

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Kim, Yong H.

    1995-01-01

    A study is made of the effect of mesh distortion on the accuracy of transverse shear stresses and their first-order and second-order sensitivity coefficients in multilayered composite panels subjected to mechanical and thermal loads. The panels are discretized by using a two-field degenerate solid element, with the fundamental unknowns consisting of both displacement and strain components, and the displacement components having a linear variation throughout the thickness of the laminate. A two-step computational procedure is used for evaluating the transverse shear stresses. In the first step, the in-plane stresses in the different layers are calculated at the numerical quadrature points for each element. In the second step, the transverse shear stresses are evaluated by using piecewise integration, in the thickness direction, of the three-dimensional equilibrium equations. The same procedure is used for evaluating the sensitivity coefficients of transverse shear stresses. Numerical results are presented showing no noticeable degradation in the accuracy of the in-plane stresses and their sensitivity coefficients with mesh distortion. However, such degradation is observed for the transverse shear stresses and their sensitivity coefficients. The standard of comparison is taken to be the exact solution of the three-dimensional thermoelasticity equations of the panel.
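
    A hedged sketch of the second step described above: recovering a transverse shear stress by integrating the three-dimensional equilibrium equation through the thickness, σ_xz(z) = −∫(∂σ_xx/∂x + ∂σ_xy/∂y) dz′ from the traction-free bottom surface. The in-plane stress-gradient profile below is an arbitrary bending-type placeholder, not output from the element described in the paper.

```python
# Through-thickness integration of equilibrium to recover a transverse shear stress.
import numpy as np
from scipy.integrate import cumulative_trapezoid

h = 2.0                                    # laminate thickness
z = np.linspace(-h / 2, h / 2, 201)        # through-thickness integration points
inplane_gradient = 4.0 * z                 # placeholder d(sigma_xx)/dx + d(sigma_xy)/dy, linear in z

sigma_xz = -cumulative_trapezoid(inplane_gradient, z, initial=0.0)
print("max |sigma_xz| at midplane:", np.abs(sigma_xz).max())
print("top-surface value (should be ~0):", sigma_xz[-1])
```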

  10. Analysis of first and second order binary quantized digital phase-locked loops for ideal and white Gaussian noise inputs

    NASA Technical Reports Server (NTRS)

    Blasche, P. R.

    1980-01-01

    Specific configurations of first and second order all digital phase locked loops are analyzed for both ideal and additive white gaussian noise inputs. In addition, a design for a hardware digital phase locked loop capable of either first or second order operation is presented along with appropriate experimental data obtained from testing of the hardware loop. All parameters chosen for the analysis and the design of the digital phase locked loop are consistent with an application to an Omega navigation receiver although neither the analysis nor the design are limited to this application.

  11. The effect of presentation rate on implicit sequence learning in aging.

    PubMed

    Foster, Chris M; Giovanello, Kelly S

    2017-02-01

    Implicit sequence learning is thought to be preserved in aging when the to-be-learned associations are first-order; however, when the associations are second-order, older adults (OAs) tend to show deficits compared with young adults (YAs). Two experiments were conducted using a first-order (Experiment 1) and a second-order (Experiment 2) serial reaction time task. Stimuli were presented at a constant rate of either 800 milliseconds (fast) or 1200 milliseconds (slow). Results indicate that both age groups learned first-order dependencies equally in both conditions. OAs and YAs also learned second-order dependencies, but the learning of lag-2 information was significantly affected by the rate of presentation for both groups: OAs showed significant lag-2 learning in the slow condition, while YAs showed significant lag-2 learning in the fast condition. The sensitivity of implicit sequence learning to the rate of presentation supports the idea that the different processing speeds of OAs and YAs affect the ability to build complex associations across time and intervening events.

  12. A discourse on sensitivity analysis for discretely-modeled structures

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
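
    As a small illustration of the finite-difference sensitivities the review discusses, the sketch below evaluates central-difference approximations of the first and second derivatives of a response with respect to a design variable, with the step size left as an explicit choice. The response function is a made-up stand-in for a structural response.

```python
# Central finite-difference first and second derivatives of a response u(b).
def central_first(u, b, h):
    return (u(b + h) - u(b - h)) / (2 * h)

def central_second(u, b, h):
    return (u(b + h) - 2 * u(b) + u(b - h)) / (h * h)

# Example response: deflection-like quantity u(b) = 1 / b^3 (stiffness ~ b^3).
u = lambda b: 1.0 / b**3
for h in (1e-1, 1e-2, 1e-3):           # step-size study, as the review recommends
    print(h, central_first(u, 2.0, h), central_second(u, 2.0, h))
```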

  13. Higher-harmonic collective modes in a trapped gas from second-order hydrodynamics

    DOE PAGES

    Lewis, William E.; Romatschke, P.

    2017-02-21

    Utilizing a second-order hydrodynamics formalism, the dispersion relations for the frequencies and damping rates of collective oscillations, as well as the spatial structure of these modes up to the decapole oscillation, are calculated in both two- and three-dimensional gas geometries. In addition to higher-order modes, the formalism also gives rise to purely damped "non-hydrodynamic" modes. We calculate the amplitude of the various modes for both symmetric and asymmetric trap quenches, finding excellent agreement with an exact quantum mechanical calculation. Furthermore, we find that higher-order hydrodynamic modes are more sensitive to the value of the shear viscosity, which may be of interest for the precision extraction of transport coefficients in Fermi gas systems.

  14. Higher-harmonic collective modes in a trapped gas from second-order hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, William E.; Romatschke, P.

    Utilizing a second-order hydrodynamics formalism, the dispersion relations for the frequencies and damping rates of collective oscillations, as well as the spatial structure of these modes up to the decapole oscillation, are calculated in both two- and three-dimensional gas geometries. In addition to higher-order modes, the formalism also gives rise to purely damped "non-hydrodynamic" modes. We calculate the amplitude of the various modes for both symmetric and asymmetric trap quenches, finding excellent agreement with an exact quantum mechanical calculation. Furthermore, we find that higher-order hydrodynamic modes are more sensitive to the value of the shear viscosity, which may be of interest for the precision extraction of transport coefficients in Fermi gas systems.

  15. Pseudo-second order models for the adsorption of safranin onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth

    2007-04-02

    Kinetic experiments were carried out for the sorption of safranin onto activated carbon particles. The kinetic data were fitted to the pseudo-second-order models of Ho, of Sobkowsk and Czerwinski, of Blanchard et al. and of Ritchie by linear and non-linear regression methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second-order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowsk and Czerwinski and the Ritchie pseudo-second-order models are equivalent. Non-linear regression analysis showed that Blanchard et al. and Ho share similar ideas on the pseudo-second-order model but with different assumptions. The best fit of the experimental data to Ho's pseudo-second-order expression by the linear and non-linear regression methods showed that Ho's pseudo-second-order model is a better kinetic expression than the other pseudo-second-order kinetic expressions.
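
    A hedged sketch contrasting the two fitting routes discussed above for the pseudo-second-order model q(t) = k2·qe²·t / (1 + k2·qe·t): a Ho-type linearization (t/q versus t) fitted by ordinary least squares, and a direct non-linear least-squares fit. The synthetic data are illustrative, not the safranin/activated-carbon measurements.

```python
# Linear vs. non-linear fitting of the pseudo-second-order (PSO) kinetic model.
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k2):
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

rng = np.random.default_rng(0)
t = np.array([2, 5, 10, 20, 40, 60, 90, 120], float)          # min
q = pso(t, qe=25.0, k2=0.004) + rng.normal(0, 0.3, t.size)    # mg/g, with noise

# Linearized fit: t/q = 1/(k2*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / q, 1)
qe_lin = 1.0 / slope
k2_lin = 1.0 / (intercept * qe_lin**2)

# Non-linear least-squares fit of the same model
(qe_nl, k2_nl), _ = curve_fit(pso, t, q, p0=[20.0, 0.01])

print(f"linear    : qe = {qe_lin:.2f}, k2 = {k2_lin:.5f}")
print(f"non-linear: qe = {qe_nl:.2f}, k2 = {k2_nl:.5f}")
```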

  16. Regression Analysis and Calibration Recommendations for the Characterization of Balance Temperature Effects

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2018-01-01

    Analysis and use of temperature-dependent wind tunnel strain-gage balance calibration data are discussed in the paper. First, three different methods are presented and compared that may be used to process temperature-dependent strain-gage balance data. The first method uses an extended set of independent variables in order to process the data and predict balance loads. The second method applies an extended load iteration equation during the analysis of balance calibration data. The third method uses temperature-dependent sensitivities for the data analysis. Physical interpretations of the most important temperature-dependent regression model terms are provided that relate temperature compensation imperfections and the temperature-dependent nature of the gage factor to sets of regression model terms. Finally, balance calibration recommendations are listed so that temperature-dependent calibration data can be obtained and successfully processed using the reviewed analysis methods.

  17. Sensitivity analysis of ozone formation and transport for a central California air pollution episode.

    PubMed

    Jin, Ling; Tonse, Shaheen; Cohan, Daniel S; Mao, Xiaoling; Harley, Robert A; Brown, Nancy J

    2008-05-15

    We developed a first- and second-order sensitivity analysis approach with the decoupled direct method to examine spatial and temporal variations of ozone-limiting reagents and the importance of local vs upwind emission sources in the San Joaquin Valley of central California for a 5 day ozone episode (Jul 29th to Aug 3rd, 2000). Despite considerable spatial variations, nitrogen oxides (NO(x)) emission reductions are overall more effective than volatile organic compound (VOC) control for attaining the 8 h ozone standard in this region for this episode, in contrast to the VOC control that works better for attaining the prior 1 h ozone standard. Interbasin source contributions of NO(x) emissions are limited to the northern part of the SJV, while anthropogenic VOC (AVOC) emissions, especially those emitted at night, influence ozone formation in the SJV further downwind. Among model input parameters studied here, uncertainties in emissions of NO(x) and AVOC, and the rate coefficient of the OH + NO2 termination reaction, have the greatest effect on first-order ozone responses to changes in NO(x) emissions. Uncertainties in biogenic VOC emissions only have a modest effect because they are generally not collocated with anthropogenic sources in this region.
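
    One common way first- and second-order sensitivity coefficients of this kind are used is a Taylor-series projection of the ozone response to a fractional emission perturbation. The sketch below shows that arithmetic only; the coefficient values are made up for illustration and are not from the study.

```python
# Taylor projection of an ozone response from first- and second-order sensitivities.
def ozone_response(delta_eps, s1, s2):
    """Projected ozone change (ppb) for a fractional emission perturbation delta_eps."""
    return s1 * delta_eps + 0.5 * s2 * delta_eps**2

s1_nox, s2_nox = 12.0, -20.0     # ppb per unit fractional NOx change (illustrative)
for cut in (-0.25, -0.5):        # 25% and 50% NOx emission reductions
    print(f"NOx {cut:+.0%}: projected ozone change = {ozone_response(cut, s1_nox, s2_nox):+.2f} ppb")
```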

  18. Analysis of higher order harmonics with holographic reflection gratings

    NASA Astrophysics Data System (ADS)

    Mas-Abellan, P.; Madrigal, R.; Fimia, A.

    2017-05-01

    Silver halide emulsions have been considered among the most energy-sensitive materials for holographic applications. Nonlinear recording effects in holographic reflection gratings recorded on silver halide emulsions have been studied by different authors, with excellent experimental results. In this communication we focus our investigation on the effects of refractive index modulation, aiming for high levels of overmodulation that produce high-order harmonics. We studied the influence of overmodulation and its effects on the transmission spectra over a wide exposure range, using 9 μm thick films of ultrafine-grain BB640 emulsion exposed to a single collimated beam from a red He-Ne laser (wavelength 632.8 nm) in a Denisyuk configuration, giving a spatial frequency of 4990 l/mm recorded in the emulsion. The experimental results show that high levels of refractive index overmodulation produce second-order harmonics with high diffraction efficiency (higher than 75%) and a narrow grating bandwidth (12.5 nm). The results also show that overmodulation deforms the diffraction spectrum of the second-order harmonic, transforming its shape from sinusoidal to approximately square at very high overmodulation. By increasing the level of refractive index overmodulation, we have obtained higher-order harmonics, including a third-order harmonic with diffraction efficiency up to 23% and a narrower grating bandwidth (5 nm). This study is the first step toward a new, simple technique for obtaining narrow spectral filters based on high-index-modulation reflection gratings.

  19. Lorentz-symmetry test at Planck-scale suppression with nucleons in a spin-polarized 133Cs cold atom clock

    NASA Astrophysics Data System (ADS)

    Pihan-Le Bars, H.; Guerlin, C.; Lasseri, R.-D.; Ebran, J.-P.; Bailey, Q. G.; Bize, S.; Khan, E.; Wolf, P.

    2017-04-01

    We introduce an improved model that links the frequency shift of the 133Cs hyperfine Zeeman transitions |F=3, mF⟩ ↔ |F=4, mF⟩ to the Lorentz-violating Standard Model extension (SME) coefficients of the proton and neutron. The new model uses Lorentz transformations developed to second order in boost and additionally takes the nuclear structure into account, beyond the simple Schmidt model used previously in Standard Model extension analyses, thereby providing access to both proton and neutron SME coefficients, including the isotropic coefficient c̃_TT. Using this new model in a second analysis of the data delivered by the FO2 dual Cs/Rb fountain at Paris Observatory and previously analyzed in [1], we improve by up to 13 orders of magnitude the present maximum sensitivities for laboratory tests [2] on the c̃_Q, c̃_TJ, and c̃_TT coefficients for the neutron and on the c̃_Q coefficient for the proton, reaching respectively 10^-20, 10^-17, 10^-13, and 10^-15 GeV.

  20. A New Framework for Effective and Efficient Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin

    2015-04-01

    Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex, with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - different SA methods are based on different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem; (2) Computational Cost - the cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficient we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly defined set of sensitivity indices characterizing a range of important properties of the metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly used SA approaches (e.g., Sobol, Morris) are limiting cases of our approach under specific conditions. Multiple case studies are used to demonstrate the value of the new framework. The results show that the new framework provides a fundamental understanding of the underlying sensitivities for any given problem, while requiring orders of magnitude fewer model runs.

  1. Enhanced sensitivity at higher-order exceptional points

    NASA Astrophysics Data System (ADS)

    Hodaei, Hossein; Hassan, Absar U.; Wittek, Steffen; Garcia-Gracia, Hipolito; El-Ganainy, Ramy; Christodoulides, Demetrios N.; Khajavikhan, Mercedeh

    2017-08-01

    Non-Hermitian degeneracies, also known as exceptional points, have recently emerged as a new way to engineer the response of open physical systems, that is, those that interact with the environment. They correspond to points in parameter space at which the eigenvalues of the underlying system and the corresponding eigenvectors simultaneously coalesce. In optics, the abrupt nature of the phase transitions that are encountered around exceptional points has been shown to lead to many intriguing phenomena, such as loss-induced transparency, unidirectional invisibility, band merging, topological chirality and laser mode selectivity. Recently, it has been shown that the bifurcation properties of second-order non-Hermitian degeneracies can provide a means of enhancing the sensitivity (frequency shifts) of resonant optical structures to external perturbations. Of particular interest is the use of even higher-order exceptional points (greater than second order), which in principle could further amplify the effect of perturbations, leading to even greater sensitivity. Although a growing number of theoretical studies have been devoted to such higher-order degeneracies, their experimental demonstration in the optical domain has so far remained elusive. Here we report the observation of higher-order exceptional points in a coupled cavity arrangement—specifically, a ternary, parity-time-symmetric photonic laser molecule—with a carefully tailored gain-loss distribution. We study the system in the spectral domain and find that the frequency response associated with this system follows a cube-root dependence on induced perturbations in the refractive index. Our work paves the way for utilizing non-Hermitian degeneracies in fields including photonics, optomechanics, microwaves and atomic physics.
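
    The cube-root response reported here can be mimicked with a toy linear-algebra example: a 3x3 Jordan block is a matrix with a third-order exceptional point, and a small perturbation eps splits its degenerate eigenvalue by an amount proportional to eps**(1/3), rather than eps. This sketch is only a generic illustration of the scaling, not a model of the photonic laser molecule studied in the paper.

      import numpy as np

      # 3x3 Jordan block: all three eigenvalues and eigenvectors coalesce (a third-order EP).
      J = np.array([[0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0],
                    [0.0, 0.0, 0.0]])
      for eps in (1e-6, 1e-4, 1e-2):
          P = np.zeros((3, 3))
          P[2, 0] = eps                      # small perturbation coupling the modes
          split = np.max(np.abs(np.linalg.eigvals(J + P)))
          print(f"eps = {eps:.0e}   eigenvalue splitting = {split:.3e}   eps**(1/3) = {eps**(1/3):.3e}")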

  2. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks

    PubMed Central

    Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only to estimate the sensitivities of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy quickly discovers and discards the insensitive parameters and accurately estimates the sensitivities of the remaining, potentially sensitive ones. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio of the total number of parameters to the number of sensitive parameters. PMID:26161544
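
    A minimal sketch of the two-step idea, assuming a screening bound of the simple form sqrt(Var(Q) * I_ii), built from the variance of the quantity of interest and the diagonal of a Fisher Information Matrix; the actual bound, estimators, and coupling-based finite differences in the paper are more elaborate. The toy model and all parameter values below are hypothetical.

      import numpy as np

      def screen_then_fd(model, theta, fisher_diag, var_q, tol=1e-3, h=1e-4):
          """Two-step sketch: (1) screen out parameters whose (assumed) sensitivity bound
          sqrt(Var(Q) * I_ii) falls below tol; (2) estimate finite-difference sensitivities
          only for the surviving, potentially sensitive parameters."""
          theta = np.asarray(theta, dtype=float)
          bounds = np.sqrt(var_q * np.asarray(fisher_diag))
          keep = np.where(bounds >= tol)[0]            # indices that pass the screen
          sens = {}
          q0 = model(theta)
          for i in keep:
              tp = theta.copy()
              tp[i] += h * max(1.0, abs(theta[i]))     # scaled forward difference
              sens[i] = (model(tp) - q0) / (tp[i] - theta[i])
          return bounds, sens

      # Toy model with one clearly insensitive parameter (index 2).
      model = lambda th: th[0] ** 2 + 3.0 * th[1] + 1e-8 * th[2]
      bounds, sens = screen_then_fd(model, [1.0, 2.0, 5.0],
                                    fisher_diag=[4.0, 9.0, 1e-14], var_q=1.0)
      print(bounds, sens)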

  3. Vibration isolation technology: Sensitivity of selected classes of space experiments to residual accelerations

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan D.; Zhang, Y. Q.; Adebiyi, Adebimpe

    1989-01-01

    Progress on each task is described. Order-of-magnitude analyses related to liquid zone sensitivity and thermo-capillary flow sensitivity are covered. Progress with numerical models of the sensitivity of isothermal liquid zones is described, as is progress towards a numerical model of coupled buoyancy-driven and thermo-capillary convection experiments. Interaction with NASA personnel is covered. Results to date are summarized and discussed in terms of the predicted space station acceleration environment. Work planned for the second year is also discussed.

  4. Exploring sensitivity of a multistate occupancy model to inform management decisions

    USGS Publications Warehouse

    Green, A.W.; Bailey, L.L.; Nichols, J.D.

    2011-01-01

    Dynamic occupancy models are often used to investigate questions regarding the processes that influence patch occupancy and are prominent in the fields of population and community ecology and conservation biology. Recently, multistate occupancy models have been developed to investigate dynamic systems involving more than one occupied state, including reproductive states, relative abundance states and joint habitat-occupancy states. Here we investigate the sensitivities of the equilibrium-state distribution of multistate occupancy models to changes in transition rates. We develop equilibrium occupancy expressions and their associated sensitivity metrics for dynamic multistate occupancy models. To illustrate our approach, we use two examples that represent common multistate occupancy systems. The first example involves a three-state dynamic model with occupied states with and without successful reproduction (California spotted owl Strix occidentalis occidentalis), and the second involves a novel way of using a multistate occupancy approach to accommodate second-order Markov processes (wood frog Lithobates sylvatica breeding and metamorphosis). Multistate sensitivity metrics behave in many ways like standard occupancy sensitivities. When equilibrium occupancy rates are low, sensitivity to parameters related to colonisation is high, while sensitivity to persistence parameters is greater when equilibrium occupancy rates are high. Sensitivities can also provide guidance for managers when estimates of transition probabilities are not available. Synthesis and applications. Multistate models provide practitioners a flexible framework to define multiple, distinct occupied states and the ability to choose which state, or combination of states, is most relevant to questions and decisions about their own systems. In addition to standard multistate occupancy models, we provide an example of how a second-order Markov process can be modified to fit a multistate framework. Assuming the system is near equilibrium, our sensitivity analyses illustrate how to investigate the sensitivity of the system-specific equilibrium state(s) to changes in transition rates. Because management will typically act on these transition rates, sensitivity analyses can provide valuable information about the potential influence of different actions and about when it may be prudent to shift the focus of management among the various transition rates. © 2011 The Authors. Journal of Applied Ecology © 2011 British Ecological Society.
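
    For intuition, the sketch below evaluates the equilibrium occupancy of the standard single-state dynamic occupancy model, psi* = gamma / (gamma + eps), and its derivatives with respect to colonisation (gamma) and local extinction (eps, i.e. 1 minus persistence); the multistate expressions developed in the paper generalize this idea. The parameter values are illustrative only.

      # Equilibrium occupancy for the standard (single-state) dynamic occupancy model.
      def equilibrium_occupancy(gamma, eps):
          psi = gamma / (gamma + eps)
          d_psi_d_gamma = eps / (gamma + eps) ** 2      # sensitivity to colonisation
          d_psi_d_eps = -gamma / (gamma + eps) ** 2     # sensitivity to local extinction
          return psi, d_psi_d_gamma, d_psi_d_eps

      # Low-occupancy system: sensitivity to colonisation dominates.
      print(equilibrium_occupancy(gamma=0.05, eps=0.4))
      # High-occupancy system: sensitivity to extinction (persistence) dominates in magnitude.
      print(equilibrium_occupancy(gamma=0.4, eps=0.05))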

  5. Study of constraints in using household NaCl salt for retrospective dosimetry

    NASA Astrophysics Data System (ADS)

    Elashmawy, M.

    2018-05-01

    Thermoluminescence (TL) characteristics of 5 different household NaCl salts and one analytical salt were determined to investigate the factors that affect the reliability of using household salt for retrospective dosimetry. The TL sensitivities of the salts were found to be particle-size dependent, approaching saturation at the largest size, whereas for salts of the same particle size the TL sensitivity depended on their origin. This dependence of TL on particle size explains the significant variations in TL response reported in the literature for the same salt batch. The first TL readout indicated that all salts have similar glow curves with one distinctive peak. A typical second TL readout at two different doses showed a dramatic decrease in TL sensitivity associated with a significant change in the glow curve structure, which then possessed two prominent peaks. Glow curve deconvolution (GCD) of the first TL readout for all salts yielded 6 individual glow peaks of first-order kinetics, whereas GCD of the second TL readouts yielded 5 individual glow peaks of second-order kinetics. Similarities in the glow curve structures of the first and second TL readouts suggest that additives such as KIO3 and MgCO3 have no effect on the TL process. The fading effect was evaluated for the salt with the highest TL sensitivity; the integral TL intensity decreased gradually, losing 40% of its initial value over 2 weeks, after which it remained constant. The results indicate that household salt cannot be used for retrospective dosimetry without considering certain constraints such as the salt's origin and particle size. Furthermore, preparedness for radiological accidents and accurate dose reconstruction require that the most commonly distributed household salt brands be calibrated in advance and stored in a repository to be recalled in case of accidents.

  6. Automating the evaluation of flood damages: methodology and potential gains

    NASA Astrophysics Data System (ADS)

    Eleutério, Julian; Martinez, Edgar Daniel

    2010-05-01

    The evaluation of flood damage potential consists of three main steps: assessing and processing data, combining data, and calculating potential damages. The first step consists of modelling hazard and assessing vulnerability; in general, it demands more time and investment than the others. The second step consists of combining spatial data on hazard with spatial data on vulnerability. A Geographic Information System (GIS) is a fundamental tool for this step, since GIS software allows the simultaneous analysis of spatial and matrix data. The third step consists of calculating potential damages by means of damage functions or contingent analysis. All steps demand time and expertise; however, the last two must be repeated several times when comparing different management scenarios. In addition, uncertainty analyses and sensitivity tests are performed during the second and third steps. The feasibility of these steps can therefore determine the extent of the evaluation: low feasibility may lead to omitting the uncertainty analysis or limiting the number of scenario comparisons. Several computer models have been developed to evaluate flood risk, and GIS software is widely used to carry out flood risk analyses: it is used to combine and process different types of data, and to visualise the risk and the evaluation results. The main advantages of using a GIS in these analyses are the possibility of "easily" re-running the analyses in order to compare different scenarios and study uncertainty; the generation of datasets that can be reused at any time to support territorial decision making; and the possibility of adding information over time to update the dataset and perform further analyses. However, these analyses require time and specialised personnel: evaluating flood risk with GIS requires a double professional specialisation, since the analyst should be proficient both in GIS software and in flood damage analysis (itself a multidisciplinary field). Great effort is necessary to evaluate flood damages correctly, and updating and improving the evaluation over time becomes a difficult task. Automating this process should therefore bring great advances in flood management studies, especially for public utilities. This study has two specific objectives: (1) to show the entire process of automating the second and third steps of flood damage evaluations; and (2) to analyse the resulting gains in terms of the time and expertise needed for the analysis. A programming language is used within GIS software to automate the combination of hazard and vulnerability data and the calculation of potential damages, and the overall process of flood damage evaluation is discussed. The main result of this study is a computational tool that allows significant operational gains in flood loss analyses; we quantify these gains by means of a hypothetical example. The tool significantly reduces the time of analysis and the need for expertise. An indirect gain is that sensitivity and cost-benefit analyses can be carried out more easily.
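
    A minimal sketch of the automated second and third steps: overlaying a water-depth raster with a land-use raster and applying per-class depth-damage functions. The rasters, classes and damage curves below are invented for illustration and do not correspond to the tool described in the study.

      import numpy as np

      # Hypothetical rasters on a common grid: water depth (m) from the hazard model and
      # a land-use class from the vulnerability data (0 = none, 1 = residential, 2 = commercial).
      depth = np.array([[0.0, 0.3, 1.2],
                        [0.5, 2.0, 0.0]])
      landuse = np.array([[0, 1, 1],
                          [2, 2, 0]])

      # Illustrative depth-damage functions: damage fraction vs. depth per land-use class,
      # plus a maximum damage value (currency units per cell).
      curves = {1: (np.array([0.0, 0.5, 1.0, 2.0]), np.array([0.0, 0.2, 0.45, 0.7]), 100_000),
                2: (np.array([0.0, 0.5, 1.0, 2.0]), np.array([0.0, 0.3, 0.55, 0.8]), 250_000)}

      total = 0.0
      for cls, (d, frac, max_damage) in curves.items():
          mask = landuse == cls
          total += np.sum(np.interp(depth[mask], d, frac) * max_damage)
      print(f"estimated direct damage: {total:,.0f}")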

  7. Clinical evaluation and validation of laboratory methods for the diagnosis of Bordetella pertussis infection: Culture, polymerase chain reaction (PCR) and anti-pertussis toxin IgG serology (IgG-PT)

    PubMed Central

    Cassiday, Pamela K.; Pawloski, Lucia C.; Tatti, Kathleen M.; Martin, Monte D.; Briere, Elizabeth C.; Tondella, M. Lucia; Martin, Stacey W.

    2018-01-01

    Introduction The appropriate use of clinically accurate diagnostic tests is essential for the detection of pertussis, a poorly controlled vaccine-preventable disease. The purpose of this study was to estimate the sensitivity and specificity of different diagnostic criteria including culture, multi-target polymerase chain reaction (PCR), anti-pertussis toxin IgG (IgG-PT) serology, and the use of a clinical case definition. An additional objective was to describe the optimal timing of specimen collection for the various tests. Methods Clinical specimens were collected from patients with cough illness at seven locations across the United States between 2007 and 2011. Nasopharyngeal and blood specimens were collected from each patient during the enrollment visit. Patients who had been coughing for ≤ 2 weeks were asked to return in 2–4 weeks for collection of a second, convalescent blood specimen. Sensitivity and specificity of each diagnostic test were estimated using three methods—pertussis culture as the “gold standard,” composite reference standard analysis (CRS), and latent class analysis (LCA). Results Overall, 868 patients were enrolled and 13.6% were B. pertussis positive by at least one diagnostic test. In a sample of 545 participants with non-missing data on all four diagnostic criteria, culture was 64.0% sensitive, PCR was 90.6% sensitive, and both were 100% specific by LCA. CRS and LCA methods increased the sensitivity estimates for convalescent serology and the clinical case definition over the culture-based estimates. Culture and PCR were most sensitive when performed during the first two weeks of cough; serology was optimally sensitive after the second week of cough. Conclusions Timing of specimen collection in relation to onset of illness should be considered when ordering diagnostic tests for pertussis. Consideration should be given to including IgG-PT serology as a confirmatory test in the Council of State and Territorial Epidemiologists (CSTE) case definition for pertussis. PMID:29652945
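
    As a reminder of the simplest of the three estimation approaches (culture treated as the gold standard), the sketch below computes the sensitivity and specificity of a binary test from paired results; it does not reproduce the composite reference standard or latent class analyses. The example data are invented.

      def sens_spec(test, gold):
          """Sensitivity and specificity of a binary test against a gold standard
          (here, culture), from paired True/False results."""
          tp = sum(t and g for t, g in zip(test, gold))
          fn = sum((not t) and g for t, g in zip(test, gold))
          tn = sum((not t) and (not g) for t, g in zip(test, gold))
          fp = sum(t and (not g) for t, g in zip(test, gold))
          return tp / (tp + fn), tn / (tn + fp)

      # Toy paired results (illustrative, not study data).
      pcr  = [True, True, False, True, False, False, True, False]
      cult = [True, True, True,  False, False, False, True, False]
      print(sens_spec(pcr, cult))   # (sensitivity, specificity) of PCR vs. culture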

  8. Sensitivity of 4-Year-Olds to Featural and Second-Order Relational Changes in Face Distinctiveness

    ERIC Educational Resources Information Center

    McKone, Elinor; Boyer, Barbara L.

    2006-01-01

    Sensitivity to adult ratings of facial distinctiveness (how much an individual stands out in a crowd) has been demonstrated previously in children age 5 years or older. Experiment 1 extended this result to 4-year-olds using a "choose the more distinctive face" task. Children's patterns of choice across item pairs also correlated well with those of…

  9. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. To address the curse of dimensionality, this work proposes variance-based adaptive strategies aimed at building a cheap meta-model (i.e. surrogate model) using the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
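
    The third level of adaptivity (stepwise regression) can be illustrated with a generic forward-selection sketch on a polynomial basis: terms are added greedily while the surrogate stays sparse. This is a simplified stand-in for the sparse PDD algorithm, with a plain monomial basis instead of the PDD basis; the toy model and all settings are hypothetical.

      import numpy as np
      from itertools import combinations_with_replacement

      def poly_features(X, degree=2):
          """Monomial basis up to 'degree' (constant, linear terms, interactions, squares)."""
          n, d = X.shape
          cols, names = [np.ones(n)], ["1"]
          for deg in range(1, degree + 1):
              for idx in combinations_with_replacement(range(d), deg):
                  cols.append(np.prod(X[:, idx], axis=1))
                  names.append("x" + "*x".join(map(str, idx)))
          return np.column_stack(cols), names

      def stepwise_surrogate(X, y, degree=2, max_terms=6):
          """Forward stepwise regression: greedily add the basis term that most reduces
          the residual sum of squares, keeping the surrogate sparse."""
          Phi, names = poly_features(X, degree)
          active = [0]                                  # always keep the constant term
          for _ in range(max_terms - 1):
              best, best_sse = None, np.inf
              for j in range(Phi.shape[1]):
                  if j in active:
                      continue
                  cols = active + [j]
                  coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
                  sse = np.sum((y - Phi[:, cols] @ coef) ** 2)
                  if sse < best_sse:
                      best, best_sse = j, sse
              active.append(best)
          coef, *_ = np.linalg.lstsq(Phi[:, active], y, rcond=None)
          return [names[j] for j in active], coef

      # Toy model: only x0, x1 and their interaction matter; x2..x4 are inert.
      rng = np.random.default_rng(1)
      X = rng.uniform(-1, 1, size=(200, 5))
      y = 2.0 * X[:, 0] + X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + 0.01 * rng.normal(size=200)
      print(stepwise_surrogate(X, y))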

  10. Demodulation Radio Frequency Interference Effects in Operational Amplifier Circuits

    NASA Astrophysics Data System (ADS)

    Sutu, Yue-Hong

    A series of investigations was carried out to determine RFI effects in analog circuits using monolithic integrated operational amplifiers (op amps) as active devices. The specific RFI effect investigated is how amplitude-modulated (AM) RF signals are demodulated in op amp circuits to produce undesired low-frequency responses at the AM modulation frequency. The undesired demodulation responses were shown to be characterized by a second-order nonlinear transfer function. Four representative op amp types were investigated: the 741 bipolar op amp, the LM10 bipolar op amp, the LF355 JFET-bipolar op amp, and the CA081 MOS-bipolar op amp. Two op amp circuits were investigated. The first circuit was a noninverting unity-voltage-gain buffer circuit. The second circuit was an inverting op amp configuration; for this circuit, the investigation includes the effects of an RFI suppression capacitor in the feedback path. Approximately 30 units of each op amp type were tested to determine the statistical variations of RFI demodulation effects in the two op amp circuits. The Nonlinear Circuit Analysis Program, NCAP, was used to simulate the demodulation RFI response. In the simulation, the op amp was replaced with its incremental macromodel; values of macromodel parameters were obtained from previous investigations and manufacturers' data sheets. Some key results of this work are: (1) The RFI demodulation effects are 10 to 20 dB lower in the CA081 and LF355 FET-bipolar op amps than in the 741 and LM10 bipolar op amps, except above 40 MHz where the LM10 RFI response begins to approach that of the CA081. (2) The experimental mean values for the 30 tested 741 op amps show that RFI demodulation responses in the inverting amplifier with a 27 pF feedback capacitor were suppressed by 10 to 35 dB over the RF frequency range 0.1 to 150 MHz, except at 0.15 MHz where only 3.5 dB of suppression was observed. (3) The NCAP program can predict RFI demodulation responses in the 741 and LF355 unity-gain buffer circuits within 6 and 7 dB, respectively, for RF frequencies of 0.1 to 400 MHz, except near the resonant frequencies of the LF355 circuit. (4) The NCAP simulations suggest that the resonances of the LF355 unity-gain buffer circuit are related to small parasitic capacitances of the order of 1 to 5 pF. (5) The NCAP sensitivity analysis indicates that variations in the second-order transfer function are sensitive to some macromodel parameters.

  11. A comparison of computer-assisted detection (CAD) programs for the identification of colorectal polyps: performance and sensitivity analysis, current limitations and practical tips for radiologists.

    PubMed

    Bell, L T O; Gandhi, S

    2018-06-01

    This study aimed to directly compare the accuracy and speed of analysis of two commercially available computer-assisted detection (CAD) programs in detecting colorectal polyps. In this retrospective single-centre study, patients who had colorectal polyps identified on computed tomography colonography (CTC) and subsequent lower gastrointestinal endoscopy were analysed using two commercially available CAD programs (CAD1 and CAD2). Results were compared against endoscopy to ascertain sensitivity and positive predictive value (PPV) for colorectal polyps. The time taken for CAD analysis was also calculated. CAD1 demonstrated a sensitivity of 89.8%, a PPV of 17.6% and a mean analysis time of 125.8 seconds. CAD2 demonstrated a sensitivity of 75.5%, a PPV of 44.0% and a mean analysis time of 84.6 seconds. The sensitivity and PPV for colorectal polyps and the CAD analysis times can vary widely between current commercially available CAD programs, and there is still room for improvement. Generally, there is a trade-off between sensitivity and PPV, so further developments should aim to optimise both. Information on these factors should be made routinely available so that an informed choice about their use can be made. This information could also potentially influence the radiologist's use of CAD results. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  12. Cell death, perfusion and electrical parameters are critical in models of hepatic radiofrequency ablation

    PubMed Central

    Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.

    2015-01-01

    Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose of this is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to obtain solutions. A comprehensive literature review was performed to obtain ranges over which the model parameters are expected to vary, crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models, but the other parameters are only of second order, providing a significant simplification. PMID:26000972
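
    A bare-bones sketch of Morris screening with one-at-a-time trajectories, returning the usual mu* (mean absolute elementary effect, a measure of importance) and sigma (spread of the elementary effects, indicating nonlinearity or interactions). The surrogate "ablation-zone" function and parameter ranges are made up; the paper's actual model, parameter ranges and Morris settings differ.

      import numpy as np

      def morris_elementary_effects(model, lo, hi, r=20, delta=0.25, seed=0):
          """Crude Morris screening: r random one-at-a-time trajectories in the unit
          hypercube; each elementary effect is the output change per step 'delta' in
          one (normalized) parameter. Returns (mu*, sigma) per parameter."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          k = lo.size
          ee = np.empty((r, k))
          for t in range(r):
              x = rng.uniform(0, 1 - delta, size=k)          # base point in the unit hypercube
              y0 = model(lo + x * (hi - lo))
              for i in rng.permutation(k):                   # perturb one parameter at a time
                  xp = x.copy()
                  xp[i] += delta
                  y1 = model(lo + xp * (hi - lo))
                  ee[t, i] = (y1 - y0) / delta
                  x, y0 = xp, y1
          return np.abs(ee).mean(axis=0), ee.std(axis=0)

      # Toy surrogate: output strongly driven by p0 (e.g. perfusion), weakly by p2.
      model = lambda p: 10.0 / (1.0 + p[0]) + 2.0 * p[1] + 0.01 * p[2]
      mu_star, sigma = morris_elementary_effects(model, lo=[0.1, 0.0, 0.0], hi=[2.0, 1.0, 1.0])
      print(np.round(mu_star, 2), np.round(sigma, 2))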

  13. Deception and false belief in paranoia: modelling theory of mind stories.

    PubMed

    Shryane, Nick M; Corcoran, Rhiannon; Rowse, Georgina; Moore, Rosanne; Cummins, Sinead; Blackwood, Nigel; Howard, Robert; Bentall, Richard P

    2008-01-01

    This study used Item Response Theory (IRT) to model the psychometric properties of a Theory of Mind (ToM) stories task. The study also aimed to determine whether the ability to understand states of false belief in others and the ability to understand another's intention to deceive are separable skills, and to establish which is more sensitive to the presence of paranoia. A large and diverse clinical and nonclinical sample differing in levels of depression and paranoid ideation performed a ToM stories task measuring false belief and deception at first and second order. A three-factor IRT model was found to best fit the data, consisting of first- and second-order deception factors and a single false-belief factor. The first-order deception and false-belief factors had good measurement properties at low trait levels, appropriate for samples with reduced ToM ability. First-order deception and false beliefs were both sensitive to paranoid ideation with IQ predicting performance on false belief items. Separable abilities were found to underlie performance on verbal ToM tasks. However, paranoia was associated with impaired performance on both false belief and deception understanding with clear impairment at the simplest level of mental state attribution.

  14. On the effects of grid ill-conditioning in three dimensional finite element vector potential magnetostatic field computations

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1990-01-01

    The effects of finite element grid geometries and associated ill-conditioning were studied in single medium and multi-media (air-iron) three dimensional magnetostatic field computation problems. The sensitivities of these 3D field computations to finite element grid geometries were investigated. It was found that in single medium applications the unconstrained magnetic vector potential curl-curl formulation in conjunction with first order finite elements produce global results which are almost totally insensitive to grid geometries. However, it was found that in multi-media (air-iron) applications first order finite element results are sensitive to grid geometries and consequent elemental shape ill-conditioning. These sensitivities were almost totally eliminated by means of the use of second order finite elements in the field computation algorithms. Practical examples are given in this paper to demonstrate these aspects mentioned above.

  15. Novel pH-Sensitive Lipid Based Exo-Endocytosis Tracers Reveal Fast Intermixing of Synaptic Vesicle Pools

    PubMed Central

    Kahms, Martin; Klingauf, Jürgen

    2018-01-01

    Styryl dyes and genetically encoded pH-sensitive fluorescent proteins like pHluorin are well-established tools for the optical analysis of synaptic vesicle (SV) recycling at presynaptic boutons. Here, we describe the development of a new class of fluorescent probes based on pH-sensitive organic dyes covalently bound to lipids, providing a promising complementary assay to genetically encoded fluorescent probes. These new optical tracers allow a pure read out of membrane turnover during synaptic activity and visualization of multiple rounds of stimulation-dependent SV recycling without genetic perturbation. Measuring the incorporation efficacy of different dye-labeled lipids into budding SVs, we did not observe an enrichment of lipids with affinity for liquid ordered membrane domains. But most importantly, we found no evidence for a static segregation of SVs into recycling and resting pools. A small but significant fraction of SVs that is reluctant to release during a first round of evoked activity can be exocytosed during a second bout of stimulation, showing fast intermixing of SV pools within seconds. Furthermore, we found that SVs recycling spontaneously have a higher chance to re-occupy release sites than SVs recycling during high-frequency evoked activity. In summary, our data provide strong evidence for a highly dynamic and use-dependent control of the fractions of releasable or resting SVs. PMID:29456492

  16. Dye adsorption mechanisms in TiO2 films, and their effects on the photodynamic and photovoltaic properties in dye-sensitized solar cells.

    PubMed

    Hwang, Kyung-Jun; Shim, Wang-Geun; Kim, Youngjin; Kim, Gunwoo; Choi, Chulmin; Kang, Sang Ook; Cho, Dae Won

    2015-09-14

    The adsorption mechanism of the N719 dye on a TiO2 electrode was examined using kinetic and diffusion models (pseudo-first-order, pseudo-second-order, and intra-particle diffusion models). Among these, the observed adsorption kinetics are best described by the pseudo-second-order model. Moreover, the film diffusion process was the main controlling step of adsorption, as shown by analysis with a diffusion-based model. The photodynamic properties in dye-sensitized solar cells (DSSCs) were investigated using time-resolved transient absorption techniques. The photodynamics of the oxidized N719 species were shown to depend on the adsorption time, and also on the adsorbed concentration of N719. The photovoltaic parameters (Jsc, Voc, FF and η) of the DSSC were determined as a function of the dye adsorption amount. The solar cell performance correlates significantly with the charge recombination and dye regeneration dynamics, which are also affected by the dye adsorption amount. Therefore, the photovoltaic performance of this DSSC can be interpreted in terms of the adsorption kinetics and the photodynamics of oxidized N719.
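
    The pseudo-second-order model has the linearized integrated form t/q(t) = 1/(k2*qe^2) + t/qe, so plotting t/q against t gives the equilibrium uptake qe from the slope and the rate constant k2 from the intercept. The sketch below fits invented uptake data this way; it is not the N719/TiO2 data set from the paper.

      import numpy as np

      # Pseudo-second-order model: dq/dt = k2 * (qe - q)**2, whose integrated form is
      # t/q(t) = 1/(k2*qe**2) + t/qe, i.e. t/q vs. t is a straight line.
      t = np.array([5, 10, 20, 40, 60, 90, 120.0])                # contact time (min), illustrative
      q = np.array([0.21, 0.35, 0.52, 0.68, 0.75, 0.81, 0.84])    # adsorbed amount (a.u.), illustrative

      slope, intercept = np.polyfit(t, t / q, 1)
      qe = 1.0 / slope                                            # equilibrium adsorbed amount
      k2 = 1.0 / (intercept * qe ** 2)                            # pseudo-second-order rate constant
      print(f"qe = {qe:.3f}, k2 = {k2:.4f}")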

  17. Analysis and Implementation of Methodologies for the Monitoring of Changes in Eye Fundus Images

    NASA Astrophysics Data System (ADS)

    Gelroth, A.; Rodríguez, D.; Salvatelli, A.; Drozdowicz, B.; Bizai, G.

    2011-12-01

    We present a support system for detecting changes in fundus images of the same patient taken at different time intervals. This process is useful for monitoring pathologies that last for long periods of time, as ophthalmologic pathologies usually do. We propose a flow of preprocessing, processing and postprocessing applied to a set of images, selected from a public database, that show pathological progression. A test interface was developed to select the images to be compared, to apply the different methods developed, and to display the results. We measured the system performance in terms of sensitivity, specificity and computation time. We obtained good results: higher than 84% for the first two metrics, with processing times lower than 3 seconds for 512x512 pixel images. For the specific case of detecting changes associated with bleeding, the system achieves sensitivity and specificity over 98%.

  18. Ultra-sensitive flow measurement in individual nanopores through pressure-driven particle translocation.

    PubMed

    Gadaleta, Alessandro; Biance, Anne-Laure; Siria, Alessandro; Bocquet, Lyderic

    2015-05-07

    A challenge for the development of nanofluidics is to devise new instrumentation tools able to probe the extremely small mass transport across individual nanochannels. Such tools are a prerequisite for the fundamental exploration of the breakdown of continuum transport under nanometric confinement. In this letter, we propose a novel method for measuring the hydrodynamic permeability of nanometric pores by diverting the classical technique of Coulter counting to characterize a pressure-driven flow across an individual nanopore. Both the analysis of the translocation rate and the detailed statistics of the dwell time of nanoparticles flowing across a single nanopore allow us to evaluate the permeability of the system. We reach a sensitivity for the water flow down to a few femtoliters per second, which is more than two orders of magnitude better than state-of-the-art alternative methods.

  19. Fast determination of three-dimensional fibril orientation of type-I collagen via macroscopic chirality

    NASA Astrophysics Data System (ADS)

    Zhuo, Guan-Yu; Chen, Mei-Yu; Yeh, Chao-Yuan; Guo, Chin-Lin; Kao, Fu-Jen

    2017-01-01

    Polarization-resolved second harmonic generation (SHG) microscopy is appealing for studying structural proteins and well-organized biophotonic nanostructures, owing to its high structural specificity. In recent years, it has been used to investigate chiroptical effects, particularly SHG circular dichroism (SHG-CD), in biological tissues. Although SHG-CD attributed to macromolecular structures has been demonstrated, the corresponding quantitative analysis and interpretation of how SHG correlates with the second-order susceptibility χ(2) under circularly polarized excitation remain unclear. In this study, we demonstrate a method based on macroscopic chirality to elucidate the correlation between SHG-CD and the orientation angle of the molecular structure. By exploiting this approach, the three-dimensional (3D) molecular orientation of type-I collagen is revealed with only two cross-polarized SHG images (i.e., interactions of left and right circular polarizations), without acquiring an image stack of varying polarization.

  20. Plasmon-Enhanced Photocleaving Dynamics in Colloidal MicroRNA-Functionalized Silver Nanoparticles Monitored with Second Harmonic Generation.

    PubMed

    Kumal, Raju R; Abu-Laban, Mohammad; Landry, Corey R; Kruger, Blake; Zhang, Zhenyu; Hayes, Daniel J; Haber, Louis H

    2016-10-11

    The photocleaving dynamics of colloidal microRNA-functionalized nanoparticles are studied using time-dependent second harmonic generation (SHG) measurements. Model drug-delivery systems composed of oligonucleotides attached to either silver nanoparticles or polystyrene nanoparticles using a nitrobenzyl photocleavable linker are prepared and characterized. The photoactivated controlled release is observed to be most efficient on resonance at 365 nm irradiation, with pseudo-first-order rate constants that are linearly proportional to irradiation powers. Additionally, silver nanoparticles show a 6-fold plasmon enhancement in photocleaving efficiency over corresponding polystyrene nanoparticle rates, while our previous measurements on gold nanoparticles show a 2-fold plasmon enhancement compared to polystyrene nanoparticles. Characterizations including extinction spectroscopy, electrophoretic mobility, and fluorimetry measurements confirm the analysis from the SHG results. The real-time SHG measurements are shown to be a highly sensitive method for investigating plasmon-enhanced photocleaving dynamics in model drug delivery systems.

  1. A spatial analysis of Phytophthora ramorum symptom spread using second-order point pattern and GIS-based analyses

    Treesearch

    Mark Spencer; Kevin O' Hara

    2006-01-01

    Phytophthora ramorum is a major source of tanoak (Lithocarpus densiflorus) mortality in the tanoak/redwood (Sequoia sempervirens) forests of central California. This study presents a spatial analysis of the spread of the disease using second-order point pattern and GIS analyses. Our data set includes four plots...

  2. First-Order or Second-Order Kinetics? A Monte Carlo Answer

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2005-01-01

    Monte Carlo computational experiments reveal that the ability to discriminate between first- and second-order kinetics from least-squares analysis of time-dependent concentration data is better than implied in earlier discussions of the problem. The problem is rendered as simple as possible by assuming that the order must be either 1 or 2 and that…
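
    A small sketch in the spirit of the abstract, assuming the approach is roughly: simulate noisy concentration data generated by a first-order decay, fit both integrated rate laws by least squares, and count how often the correct order gives the smaller residual sum of squares. All rate constants, noise levels and sample sizes below are invented.

      import numpy as np
      from scipy.optimize import curve_fit

      first  = lambda t, c0, k: c0 * np.exp(-k * t)          # integrated first-order rate law
      second = lambda t, c0, k: c0 / (1.0 + k * c0 * t)      # integrated second-order rate law

      rng = np.random.default_rng(0)
      t = np.linspace(0, 10, 15)
      c0_true, k_true, noise, n_rep = 1.0, 0.3, 0.02, 500
      correct = 0
      for _ in range(n_rep):
          c = first(t, c0_true, k_true) + rng.normal(0, noise, t.size)   # data truly 1st order
          sse = []
          for f in (first, second):
              p, _ = curve_fit(f, t, c, p0=[1.0, 0.3], maxfev=5000)
              sse.append(np.sum((c - f(t, *p)) ** 2))
          correct += sse[0] < sse[1]          # does least squares pick the right order?
      print(f"first order preferred in {correct / n_rep:.1%} of replicates")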

  3. Application of satellite data in variational analysis for global cyclonic systems

    NASA Technical Reports Server (NTRS)

    Achtemeier, G. L.

    1988-01-01

    The goal of the research is a variational data assimilation method that incorporates, as dynamical constraints, the primitive equations for a moist, convectively unstable atmosphere and the radiative transfer equation. Variables to be adjusted include the three-dimensional vector wind, height, temperature, and moisture from rawinsonde data, and cloud-wind vectors, moisture, and radiance from satellite data. In order to facilitate thorough analysis of each of the model components, four variational models that divide the problem naturally according to increasing complexity were defined. The research performed during the second year falls into the following areas: sensitivity studies involving Model 1; evaluation of Model 2; reformulation of Model 1 for greater compatibility with Model 2; development of Model 3 (radiative transfer equation); and making the model more responsive to the observations.

  4. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
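
    For orientation, the first-order reliability method reduces, for a linear limit state g = R - S with independent normal variables, to the Hasofer-Lind index beta = (mu_R - mu_S)/sqrt(sd_R^2 + sd_S^2) and a failure probability Pf = Phi(-beta). The numbers below are illustrative only and are unrelated to the composite-cylinder problem, which uses a nonlinear buckling limit state approximated by response surface models.

      import numpy as np
      from scipy.stats import norm

      # First-order reliability sketch for a linear limit state g = R - S with
      # independent normal capacity R and demand S (hypothetical values).
      mu_R, sd_R = 1200.0, 120.0     # e.g. buckling capacity
      mu_S, sd_S = 800.0, 160.0      # e.g. applied axial load
      beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)   # Hasofer-Lind reliability index
      pf = norm.cdf(-beta)                          # corresponding failure probability
      print(f"beta = {beta:.2f}, Pf = {pf:.2e}")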

  5. The effects of physical activity on impulsive choice: Influence of sensitivity to reinforcement amount and delay

    PubMed Central

    Strickland, Justin C.; Feinstein, Max A.; Lacy, Ryan T.; Smith, Mark A.

    2016-01-01

    Impulsive choice is a diagnostic feature and/or complicating factor for several psychological disorders and may be examined in the laboratory using delay-discounting procedures. Recent investigators have proposed using quantitative methods of analysis to examine the behavioral processes contributing to impulsive choice. The purpose of this study was to examine the effects of physical activity (i.e., wheel running) on impulsive choice in a single-response, discrete-trial procedure using two quantitative methods of analysis. To this end, rats were assigned to physical activity or sedentary groups and trained to respond in a delay-discounting procedure. In this procedure, one lever always produced one food pellet immediately, whereas a second lever produced three food pellets after a 0, 10, 20, 40, or 80-second delay. Estimates of sensitivity to reinforcement amount and sensitivity to reinforcement delay were determined using (1) a simple linear analysis and (2) an analysis of logarithmically transformed response ratios. Both analyses revealed that physical activity decreased sensitivity to reinforcement amount and sensitivity to reinforcement delay. These findings indicate that (1) physical activity has significant but functionally opposing effects on the behavioral processes that contribute to impulsive choice and (2) both quantitative methods of analysis are appropriate for use in single-response, discrete-trial procedures. PMID:26964905

  6. Single-ion quantum lock-in amplifier.

    PubMed

    Kotler, Shlomi; Akerman, Nitzan; Glickman, Yinnon; Keselman, Anna; Ozeri, Roee

    2011-05-05

    Quantum metrology uses tools from quantum information science to improve measurement signal-to-noise ratios. The challenge is to increase sensitivity while reducing susceptibility to noise, tasks that are often in conflict. Lock-in measurement is a detection scheme designed to overcome this difficulty by spectrally separating signal from noise. Here we report on the implementation of a quantum analogue to the classical lock-in amplifier. All the lock-in operations (modulation, detection and mixing) are performed through the application of non-commuting quantum operators to the electronic spin state of a single, trapped Sr(+) ion. We significantly increase its sensitivity to external fields while extending phase coherence by three orders of magnitude, to more than one second. Using this technique, we measure frequency shifts with a sensitivity of 0.42 Hz Hz^(-1/2) (corresponding to a magnetic field measurement sensitivity of 15 pT Hz^(-1/2)), obtaining an uncertainty of less than 10 mHz (350 fT) after 3,720 seconds of averaging. These sensitivities are limited by quantum projection noise and improve on other single-spin probe technologies by two orders of magnitude. Our reported sensitivity is sufficient for the measurement of parity non-conservation, as well as the detection of the magnetic field of a single electronic spin one micrometre from an ion detector with nanometre resolution. As a first application, we perform light shift spectroscopy of a narrow optical quadrupole transition. Finally, we emphasize that the quantum lock-in technique is generic and can potentially enhance the sensitivity of any quantum sensor. © 2011 Macmillan Publishers Limited. All rights reserved.

  7. Altered sensitization patterns to sweet food stimuli in patients recovered from anorexia and bulimia nervosa.

    PubMed

    Wagner, Angela; Simmons, Alan N; Oberndorfer, Tyson A; Frank, Guido K W; McCurdy-McKinnon, Danyale; Fudge, Julie L; Yang, Tony T; Paulus, Martin P; Kaye, Walter H

    2015-12-30

    Recent studies show that higher-order appetitive neural circuitry may contribute to restricted eating in anorexia nervosa (AN) and overeating in bulimia nervosa (BN). The purpose of this study was to determine whether sensitization effects might underlie pathologic eating behavior when a taste stimulus is administered repeatedly. Recovered AN (RAN, n=14) and BN (RBN, n=15) subjects were studied in order to avoid the confounding effects of altered nutritional state. Functional magnetic resonance imaging (fMRI) measured higher-order brain response to repeated tastes of sucrose (caloric) and sucralose (non-caloric). To test sensitization, the neuronal response to the first and second administration was compared. RAN patients demonstrated a decreased sensitization to sucrose in contrast to RBN patients who displayed the opposite pattern, increased sensitization to sucrose. However, the latter was not as pronounced as in healthy control women (n=13). While both eating disorder subgroups showed increased sensitization to sucralose, the healthy controls revealed decreased sensitization. These findings could reflect on a neuronal level the high caloric intake of RBN during binges and the low energy intake for RAN. RAN seem to distinguish between high energy and low energy sweet stimuli while RBN do not. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Pampell, Alyssa

    2011-01-01

    A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.

  9. Stirling engine design manual

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1978-01-01

    This manual is intended to serve both as an introduction to Stirling engine analysis methods and as a key to the open literature on Stirling engines. Over 800 references are listed and these are cross referenced by date of publication, author and subject. Engine analysis is treated starting from elementary principles and working through cycles analysis. Analysis methodologies are classified as first, second or third order depending upon degree of complexity and probable application; first order for preliminary engine studies, second order for performance prediction and engine optimization, and third order for detailed hardware evaluation and engine research. A few comparisons between theory and experiment are made. A second order design procedure is documented step by step with calculation sheets and a worked out example to follow. Current high power engines are briefly described and a directory of companies and individuals who are active in Stirling engine development is included. Much remains to be done. Some of the more complicated and potentially very useful design procedures are now only referred to. Future support will enable a more thorough job of comparing all available design procedures against experimental data which should soon be available.

  10. Factorial structure and psychometric properties of a brief version of the Reminiscence Functions Scale with Chinese older adults.

    PubMed

    Lou, Vivian W Q; Choy, Jacky C P

    2014-05-01

    The current study aims to examine the factorial structure and psychometric properties of a brief version of the Reminiscence Functions Scale (RFS), a 14-item assessment tool of reminiscence functions, with Chinese older adults. The scale, covering four reminiscence functions (boredom reduction, bitterness revival, problem solving, and identity) was translated from English into Chinese and administered to older adults (N=675). Confirmatory factor analysis and hierarchical confirmatory factor analysis were conducted to examine its factorial structure, and its psychometric properties and criterion validity were examined. Confirmatory factor analysis supports a second-order model comprising one second-order factor and four first-order factors of RFS. The Cronbach's alpha of the subscales ranged from 0.75 to 0.90. The brief RFS contains a second-order factorial structure. Its psychometric properties support it as a sound instrument for measuring reminiscence functions among Chinese older adults.
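
    The reported internal consistencies can in principle be reproduced with the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below applies it to an invented 4-item subscale; the RFS data themselves are not reproduced here.

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
          total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
          return k / (k - 1) * (1.0 - item_var / total_var)

      # Toy 4-item subscale scored 1-6 (illustrative, not RFS data).
      scores = np.array([[4, 5, 4, 5],
                         [2, 2, 3, 2],
                         [5, 6, 5, 6],
                         [3, 3, 2, 3],
                         [4, 4, 5, 4]])
      print(round(cronbach_alpha(scores), 2))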

  11. iTOUGH2 v7.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL

    2016-09-15

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences and reservoir engineering and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.

  12. Abstract Interpreters for Free

    NASA Astrophysics Data System (ADS)

    Might, Matthew

    In small-step abstract interpretations, the concrete and abstract semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step re-factors the concrete state-space to eliminate recursive structure; this refactoring of the state-space simultaneously determines a store-passing-style transformation on the underlying concrete semantics. The second step uses inference rules to generate an abstract state-space and a Galois connection simultaneously. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field- and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of analyses. The second consists of novel "environment-centric" abstract interpretations, none of which appear in the literature on static analysis of higher-order programs.

  13. High-order upwind schemes for the wave equation on overlapping grids: Maxwell's equations in second-order form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.

    High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. Finally, the upwind scheme is shown to be robust and provide high-order accuracy.

  14. High-order upwind schemes for the wave equation on overlapping grids: Maxwell's equations in second-order form

    DOE PAGES

    Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.

    2017-09-28

    High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. Finally, the upwind scheme is shown to be robust and provide high-order accuracy.

  15. High-order upwind schemes for the wave equation on overlapping grids: Maxwell's equations in second-order form

    NASA Astrophysics Data System (ADS)

    Angel, Jordan B.; Banks, Jeffrey W.; Henshaw, William D.

    2018-01-01

    High-order accurate upwind approximations for the wave equation in second-order form on overlapping grids are developed. Although upwind schemes are well established for first-order hyperbolic systems, it was only recently shown by Banks and Henshaw [1] how upwinding could be incorporated into the second-order form of the wave equation. This new upwind approach is extended here to solve the time-domain Maxwell's equations in second-order form; schemes of arbitrary order of accuracy are formulated for general curvilinear grids. Taylor time-stepping is used to develop single-step space-time schemes, and the upwind dissipation is incorporated by embedding the exact solution of a local Riemann problem into the discretization. Second-order and fourth-order accurate schemes are implemented for problems in two and three space dimensions, and overlapping grids are used to treat complex geometry and problems with multiple materials. Stability analysis of the upwind-scheme on overlapping grids is performed using normal mode theory. The stability analysis and computations confirm that the upwind scheme remains stable on overlapping grids, including the difficult case of thin boundary grids when the traditional non-dissipative scheme becomes unstable. The accuracy properties of the scheme are carefully evaluated on a series of classical scattering problems for both perfect conductors and dielectric materials in two and three space dimensions. The upwind scheme is shown to be robust and provide high-order accuracy.
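    The Banks-Henshaw upwind construction itself is not reproduced here; as a rough illustration of adding dissipation directly to a second-order-in-time wave-equation update, the following is a minimal 1D sketch with an artificial dissipation term acting on the time increment. The dissipation coefficient and the problem setup are illustrative assumptions, not the scheme from these papers.

```python
# Minimal 1D sketch: centered second-order time stepping for u_tt = c^2 u_xx
# with an added artificial-dissipation term, illustrating (not reproducing)
# the idea of dissipative schemes for the second-order form.
import numpy as np

c, L, N = 1.0, 1.0, 201
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt = 0.8 * dx / c                          # CFL-limited time step
lam2 = (c * dt / dx) ** 2
nu = 0.05                                  # small dissipation coefficient (tunable)

u_prev = np.exp(-200.0 * (x - 0.5) ** 2)   # initial displacement
u = u_prev.copy()                          # zero initial velocity

for _ in range(400):
    lap = np.zeros_like(u)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    # Dissipation acts on the second difference of the time increment.
    incr = u - u_prev
    diss = np.zeros_like(u)
    diss[1:-1] = incr[2:] - 2.0 * incr[1:-1] + incr[:-2]
    u_next = 2.0 * u - u_prev + lam2 * lap + nu * diss
    u_next[0] = u_next[-1] = 0.0           # homogeneous Dirichlet walls
    u_prev, u = u, u_next

print("max |u| after 400 steps:", np.abs(u).max())
```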

  16. An analysis of stream channel cross section technique as a means to determine anthropogenic change in second order streams at the Tenderfoot Creek Experimental Forest, Meagher County, Montana

    Treesearch

    Jeff Boice

    1999-01-01

    Five second order tributaries to Tenderfoot Creek were investigated: Upper Tenderfoot Creek, Sun Creek, Spring Park Creek, Bubbling Creek, and Stringer Creek. Second order reaches were initially located on 7.5 minute topographic maps using techniques first applied by Strahler (1952). Reach breaks were determined in the field through visual inspection. Vegetation type (...

  17. Applying causal mediation analysis to personality disorder research.

    PubMed

    Walters, Glenn D

    2018-01-01

    This article is designed to address fundamental issues in the application of causal mediation analysis to research on personality disorders. Causal mediation analysis is used to identify mechanisms of effect by testing variables as putative links between the independent and dependent variables. As such, it would appear to have relevance to personality disorder research. It is argued that proper implementation of causal mediation analysis requires that investigators take several factors into account. These factors are discussed under 5 headings: variable selection, model specification, significance evaluation, effect size estimation, and sensitivity testing. First, care must be taken when selecting the independent, dependent, mediator, and control variables for a mediation analysis. Some variables make better mediators than others and all variables should be based on reasonably reliable indicators. Second, the mediation model needs to be properly specified. This requires that the data for the analysis be prospectively or historically ordered and possess proper causal direction. Third, it is imperative that the significance of the identified pathways be established, preferably with a nonparametric bootstrap resampling approach. Fourth, effect size estimates should be computed or competing pathways compared. Finally, investigators employing the mediation method are advised to perform a sensitivity analysis. Additional topics covered in this article include parallel and serial multiple mediation designs, moderation, and the relationship between mediation and moderation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Models of second-order effects in metal-oxide-semiconductor field-effect transistors for computer applications

    NASA Technical Reports Server (NTRS)

    Benumof, Reuben; Zoutendyk, John; Coss, James

    1988-01-01

    Second-order effects in metal-oxide-semiconductor field-effect transistors (MOSFETs) are important for devices with dimensions of 2 microns or less. The short and narrow channel effects and drain-induced barrier lowering primarily affect threshold voltage, but formulas for drain current must also take these effects into account. In addition, the drain current is sensitive to channel length modulation due to pinch-off or velocity saturation and is diminished by electron mobility degradation due to normal and lateral electric fields in the channel. A model of a MOSFET including these considerations and emphasizing charge conservation is discussed.

  19. Cost-Effectiveness Analysis of Five Competing Strategies for the Management of Multiple Recurrent Community-Onset Clostridium difficile Infection in France

    PubMed Central

    Galperine, Tatiana; Denies, Fanette; Lannoy, Damien; Lenne, Xavier; Odou, Pascal; Guery, Benoit; Dervaux, Benoit

    2017-01-01

    Background Clostridium difficile infection (CDI) is characterized by high rates of recurrence, resulting in substantial health care costs. The aim of this study was to analyze the cost-effectiveness of treatments for the management of second recurrence of community-onset CDI in France. Methods We developed a decision-analytic simulation model to compare 5 treatments for the management of second recurrence of community-onset CDI: pulsed-tapered vancomycin, fidaxomicin, fecal microbiota transplantation (FMT) via colonoscopy, FMT via duodenal infusion, and FMT via enema. The model outcome was the incremental cost-effectiveness ratio (ICER), expressed as cost per quality-adjusted life year (QALY) among the 5 treatments. ICERs were interpreted using a willingness-to-pay threshold of €32,000/QALY. Uncertainty was evaluated through deterministic and probabilistic sensitivity analyses. Results Three strategies were on the efficiency frontier: pulsed-tapered vancomycin, FMT via enema, and FMT via colonoscopy, in order of increasing effectiveness. FMT via duodenal infusion and fidaxomicin were dominated (i.e. less effective and costlier) by FMT via colonoscopy and FMT via enema. FMT via enema compared with pulsed-tapered vancomycin had an ICER of €18,092/QALY. The ICER for FMT via colonoscopy versus FMT via enema was €73,653/QALY. Probabilistic sensitivity analysis with 10,000 Monte Carlo simulations showed that FMT via enema was the most cost-effective strategy in 58% of simulations and FMT via colonoscopy was favored in 19% at a willingness-to-pay threshold of €32,000/QALY. Conclusions FMT via enema is the most cost-effective initial strategy for the management of second recurrence of community-onset CDI at a willingness-to-pay threshold of €32,000/QALY. PMID:28103289
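    A minimal sketch of how incremental cost-effectiveness ratios and simple dominance screening are computed is shown below; the strategy names, costs, and QALY values are placeholders, not figures from this study.

```python
# Minimal sketch of ICER computation with a crude dominance check; the
# numbers are placeholders, not values from the study.
def icers(strategies):
    """strategies: iterable of (name, cost, qaly) tuples."""
    ranked = sorted(strategies, key=lambda s: s[2])      # order by effectiveness
    results = []
    for (name0, c0, q0), (name1, c1, q1) in zip(ranked, ranked[1:]):
        if c1 <= c0:
            results.append((name1, name0, "dominates (cheaper and more effective)"))
        else:
            results.append((name1, name0, round((c1 - c0) / (q1 - q0), 1)))
    return results

example = [("strategy A", 1000.0, 0.80),     # placeholder cost (EUR) and QALYs
           ("strategy B", 1500.0, 0.83),
           ("strategy C", 4000.0, 0.86)]
for newer, comparator, icer in icers(example):
    print(f"{newer} vs {comparator}: ICER = {icer}")
```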

  20. Cost-Effectiveness Analysis of Five Competing Strategies for the Management of Multiple Recurrent Community-Onset Clostridium difficile Infection in France.

    PubMed

    Baro, Emilie; Galperine, Tatiana; Denies, Fanette; Lannoy, Damien; Lenne, Xavier; Odou, Pascal; Guery, Benoit; Dervaux, Benoit

    2017-01-01

    Clostridium difficile infection (CDI) is characterized by high rates of recurrence, resulting in substantial health care costs. The aim of this study was to analyze the cost-effectiveness of treatments for the management of second recurrence of community-onset CDI in France. We developed a decision-analytic simulation model to compare 5 treatments for the management of second recurrence of community-onset CDI: pulsed-tapered vancomycin, fidaxomicin, fecal microbiota transplantation (FMT) via colonoscopy, FMT via duodenal infusion, and FMT via enema. The model outcome was the incremental cost-effectiveness ratio (ICER), expressed as cost per quality-adjusted life year (QALY) among the 5 treatments. ICERs were interpreted using a willingness-to-pay threshold of €32,000/QALY. Uncertainty was evaluated through deterministic and probabilistic sensitivity analyses. Three strategies were on the efficiency frontier: pulsed-tapered vancomycin, FMT via enema, and FMT via colonoscopy, in order of increasing effectiveness. FMT via duodenal infusion and fidaxomicin were dominated (i.e. less effective and costlier) by FMT via colonoscopy and FMT via enema. FMT via enema compared with pulsed-tapered vancomycin had an ICER of €18,092/QALY. The ICER for FMT via colonoscopy versus FMT via enema was €73,653/QALY. Probabilistic sensitivity analysis with 10,000 Monte Carlo simulations showed that FMT via enema was the most cost-effective strategy in 58% of simulations and FMT via colonoscopy was favored in 19% at a willingness-to-pay threshold of €32,000/QALY. FMT via enema is the most cost-effective initial strategy for the management of second recurrence of community-onset CDI at a willingness-to-pay threshold of €32,000/QALY.

  1. Variations in algorithm implementation among quantitative texture analysis software packages

    NASA Astrophysics Data System (ADS)

    Foy, Joseph J.; Mitta, Prerana; Nowosatka, Lauren R.; Mendel, Kayla R.; Li, Hui; Giger, Maryellen L.; Al-Hallaq, Hania; Armato, Samuel G.

    2018-02-01

    Open-source texture analysis software allows for the advancement of radiomics research. Variations in texture features, however, result from discrepancies in algorithm implementation. Anatomically matched regions of interest (ROIs) that captured normal breast parenchyma were placed in the magnetic resonance images (MRI) of 20 patients at two time points. Six first-order features and six gray-level co-occurrence matrix (GLCM) features were calculated for each ROI using four texture analysis packages. Features were extracted using package-specific default GLCM parameters and using GLCM parameters modified to yield the greatest consistency among packages. Relative change in the value of each feature between time points was calculated for each ROI. Distributions of relative feature value differences were compared across packages. Absolute agreement among feature values was quantified by the intra-class correlation coefficient. Among first-order features, significant differences were found for max, range, and mean, and only kurtosis showed poor agreement. All six second-order features showed significant differences using package-specific default GLCM parameters, and five second-order features showed poor agreement; with modified GLCM parameters, no significant differences among second-order features were found, and all second-order features showed poor agreement. While relative texture change discrepancies existed across packages, these differences were not significant when consistent parameters were used.
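    To make the package dependence concrete, the following is a minimal numpy sketch of building a gray-level co-occurrence matrix and one second-order feature (contrast) for a single offset; the offset, number of gray levels, symmetry, and normalization choices are exactly the kinds of defaults that vary between packages. The ROI here is random placeholder data, not an MRI region.

```python
# Minimal sketch of a GLCM and its contrast feature in plain numpy; the
# quantization, offset, symmetry, and normalization are illustrative choices.
import numpy as np

def glcm(image, levels=8, offset=(0, 1), symmetric=True):
    # Quantize the image into `levels` gray levels.
    span = image.max() - image.min() + 1e-12
    q = np.clip((levels * (image - image.min()) / span).astype(int), 0, levels - 1)
    dy, dx = offset
    P = np.zeros((levels, levels))
    rows, cols = q.shape
    for i in range(max(0, -dy), rows - max(0, dy)):
        for j in range(max(0, -dx), cols - max(0, dx)):
            P[q[i, j], q[i + dy, j + dx]] += 1
    if symmetric:
        P = P + P.T
    return P / P.sum()                      # normalized joint probabilities

def glcm_contrast(P):
    i, j = np.indices(P.shape)
    return float(np.sum((i - j) ** 2 * P))

roi = np.random.rand(32, 32)                # random stand-in for an MRI ROI
print("contrast:", round(glcm_contrast(glcm(roi)), 3))
```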

  2. Multi-Scale Morphological Analysis of Conductance Signals in Vertical Upward Gas-Liquid Two-Phase Flow

    NASA Astrophysics Data System (ADS)

    Lian, Enyang; Ren, Yingyu; Han, Yunfeng; Liu, Weixin; Jin, Ningde; Zhao, Junying

    2016-11-01

    The multi-scale analysis is an important method for detecting nonlinear systems. In this study, we carry out experiments and measure the fluctuation signals from a rotating electric field conductance sensor with eight electrodes. We first use a recurrence plot to recognise flow patterns in vertical upward gas-liquid two-phase pipe flow from measured signals. Then we apply a multi-scale morphological analysis based on the first-order difference scatter plot to investigate the signals captured from the vertical upward gas-liquid two-phase flow loop test. We find that the invariant scaling exponent extracted from the multi-scale first-order difference scatter plot with the bisector of the second-fourth quadrant as the reference line is sensitive to the inhomogeneous distribution characteristics of the flow structure, and the variation trend of the exponent is helpful to understand the process of breakup and coalescence of the gas phase. In addition, we explore the dynamic mechanism influencing the inhomogeneous distribution of the gas phase in terms of adaptive optimal kernel time-frequency representation. The research indicates that the system energy is a factor influencing the distribution of the gas phase and the multi-scale morphological analysis based on the first-order difference scatter plot is an effective method for indicating the inhomogeneous distribution of the gas phase in gas-liquid two-phase flow.

  3. Second harmonic generation and crystal growth of new chalcone derivatives

    NASA Astrophysics Data System (ADS)

    Patil, P. S.; Dharmaprakash, S. M.; Ramakrishna, K.; Fun, Hoong-Kun; Sai Santosh Kumar, R.; Narayana Rao, D.

    2007-05-01

    We report on the synthesis, crystal structure and optical characterization of chalcone derivatives developed for second-order nonlinear optics. The investigation of a series of five chalcone derivatives with the second harmonic generation powder test according to Kurtz and Perry revealed that these chalcones show efficient second-order nonlinear activity. Among them, high-quality single crystals of 3-Br-4'-methoxychalcone (3BMC) were grown by solvent evaporation solution growth technique. Grown crystals were characterized by X-ray powder diffraction (XRD), laser damage threshold, UV-vis-NIR and refractive index measurement studies. Infrared spectroscopy, thermogravimetric analysis and differential thermal analysis measurements were performed to study the molecular vibration and thermal behavior of 3BMC crystal. Thermal analysis does not show any structural phase transition.

  4. Monitoring by forward scatter radar techniques: an improved second-order analytical model

    NASA Astrophysics Data System (ADS)

    Falconi, Marta Tecla; Comite, Davide; Galli, Alessandro; Marzano, Frank S.; Pastina, Debora; Lombardo, Pierfrancesco

    2017-10-01

    In this work, a second-order phase approximation is introduced to provide an improved analytical model of the signal received in forward scatter radar systems. A typical configuration with a rectangular metallic object illuminated while crossing the baseline, in far- or near-field conditions, is considered. An improved second-order model is compared with a simplified one already proposed by the authors and based on a paraxial approximation. A phase error analysis is carried out to investigate benefits and limitations of the second-order modeling. The results are validated by developing full-wave numerical simulations implementing the relevant scattering problem on a commercial tool.

  5. Controlling flexible structures with second order actuator dynamics

    NASA Technical Reports Server (NTRS)

    Inman, Daniel J.; Umland, Jeffrey W.; Bellos, John

    1989-01-01

    The control of flexible structures is examined for systems whose actuators are modeled by second-order dynamics. Two modeling approaches are investigated. First, a stability and performance analysis is performed using a low-order finite-dimensional model of the structure. Second, a continuum model of the flexible structure to be controlled, coupled with lumped-parameter second-order dynamic models of the actuators performing the control, is used. This model is appropriate for modeling the control of a flexible panel by proof-mass actuators, as well as other beam-, plate-, and shell-like structural members. The model is verified with experimental measurements.

  6. Green analytical determination of emerging pollutants in environmental waters using excitation-emission photoinduced fluorescence data and multivariate calibration.

    PubMed

    Hurtado-Sánchez, María Del Carmen; Lozano, Valeria A; Rodríguez-Cáceres, María Isabel; Durán-Merás, Isabel; Escandar, Graciela M

    2015-03-01

    An eco-friendly strategy for the simultaneous quantification of three emerging pharmaceutical contaminants is presented. The proposed analytical method, which involves photochemically induced fluorescence matrix data combined with second-order chemometric analysis, was used for the determination of carbamazepine, ofloxacin and piroxicam in water samples of different complexity without the need of chromatographic separation. Excitation-emission photoinduced fluorescence matrices were obtained after UV irradiation, and processed with second-order algorithms. Only one of the tested algorithms was able to overcome the strong spectral overlapping among the studied pollutants and allowed their successful quantitation in very interferent media. The method sensitivity in superficial and underground water samples was enhanced by a simple solid-phase extraction with C18 membranes, which was successful for the extraction/preconcentration of the pollutants at trace levels. Detection limits in preconcentrated (1:125) real water samples ranged from 0.04 to 0.3 ng mL⁻¹. Relative prediction errors around 10% were achieved. The proposed strategy is significantly simpler and greener than liquid chromatography-mass spectrometry methods, without compromising the analytical quality of the results. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Study on the method of maintaining bathtub water temperature

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyan

    2017-05-01

    In order to keep the water temperature constant and the spillage to a minimum, we use the finite element method and grid transformation to establish an optimized model, in both time and space, for a person in a bathtub, based on the theories of heat convection and heat conduction and a three-dimensional second-order equation. For the first question, we derive partial differential equations for three-dimensional heat convection and construct an optimized temperature model in time and space using initial and boundary conditions. For the second question, building on the first, we simulate the shape and volume of the tub and the bather's postures. Regarding shape and volume, we conclude that a tub with a smaller surface area retains water at a higher temperature, so decreasing the surface area in bathtub design reduces heat loss. For the different postures adopted while bathing, we find that posture has no obvious influence on the variation of water temperature. Finally, we perform simulation calculations and analyze the precision and sensitivity of the model.

  8. GMR Biosensor Arrays: A System Perspective

    PubMed Central

    Hall, D. A.; Gaster, R. S.; Lin, T.; Osterfeld, S. J.; Han, S.; Murmann, B.; Wang, S. X.

    2010-01-01

    Giant magnetoresistive biosensors are becoming more prevalent for sensitive, quantifiable biomolecular detection. However, in order for magnetic biosensing to become competitive with current optical protein microarray technology, there is a need to increase the number of sensors while maintaining the high sensitivity and fast readout time characteristic of smaller arrays (1 – 8 sensors). In this paper, we present a circuit architecture scalable for larger sensor arrays (64 individually addressable sensors) while maintaining a high readout rate (scanning the entire array in less than 4 seconds). The system utilizes both time domain multiplexing and frequency domain multiplexing in order to achieve this scan rate. For the implementation, we propose a new circuit architecture that does not use a classical Wheatstone bridge to measure the small change in resistance of the sensor. Instead, an architecture designed around a transimpedance amplifier is employed. A detailed analysis of this architecture including the noise, distortion, and potential sources of errors is presented, followed by a global optimization strategy for the entire system comprising the magnetic tags, sensors, and interface electronics. To demonstrate the sensitivity, quantifiable detection of two blindly spiked samples of unknown concentrations has been performed at concentrations below the limit of detection for the enzyme-linked immunosorbent assay. Lastly, the multiplexability and reproducibility of the system were demonstrated by simultaneously monitoring sensors functionalized with three unique proteins at different concentrations in real-time. PMID:20207130

  9. Reversible second-order conditional sequences in incidental sequence learning tasks.

    PubMed

    Pasquali, Antoine; Cleeremans, Axel; Gaillard, Vinciane

    2018-06-01

    In sequence learning tasks, participants' sensitivity to the sequential structure of a series of events often overshoots their ability to express relevant knowledge intentionally, as in generation tasks that require participants to produce either the next element of a sequence (inclusion) or a different element (exclusion). Comparing generation performance under inclusion and exclusion conditions makes it possible to assess the respective influences of conscious and unconscious learning. Recently, two main concerns have been expressed concerning such tasks. First, it is often difficult to design control sequences in such a way that they enable clear comparisons with the training material. Second, it is challenging to ask participants to perform appropriately under exclusion instructions, for the requirement to exclude familiar responses often leads them to adopt degenerate strategies (e.g., pushing on the same key all the time), which then need to be specifically singled out as invalid. To overcome both concerns, we introduce reversible second-order conditional (RSOC) sequences and show (a) that they elicit particularly strong transfer effects, (b) that dissociation of implicit and explicit influences becomes possible thanks to the removal of salient transitions in RSOCs, and (c) that exclusion instructions can be greatly simplified without losing sensitivity.

  10. Bioethics in Denmark. Moving from first- to second-order analysis?

    PubMed

    Nielsen, Morten Ebbe Juul; Andersen, Martin Marchman

    2014-07-01

    This article examines two current debates in Denmark--assisted suicide and the prioritization of health resources--and proposes that such controversial bioethical issues call for distinct philosophical analyses: first-order examinations, or an applied philosophy approach, and second-order examinations, what might be called a political philosophical approach. The authors argue that although first-order examination plays an important role in teasing out different moral points of view, in contemporary democratic societies, few, if any, bioethical questions can be resolved satisfactorily by means of first-order analyses alone, and that bioethics needs to engage more closely with second-order enquiries and the question of legitimacy in general.

  11. Unique proteomic signature for radiation sensitive patients; a comparative study between normo-sensitive and radiation sensitive breast cancer patients.

    PubMed

    Skiöld, Sara; Azimzadeh, Omid; Merl-Pham, Juliane; Naslund, Ingemar; Wersall, Peter; Lidbrink, Elisabet; Tapio, Soile; Harms-Ringdahl, Mats; Haghdoost, Siamak

    2015-06-01

    Radiation therapy is a cornerstone of modern cancer treatment. Understanding the mechanisms behind normal tissue sensitivity is essential in order to minimize adverse side effects while still preventing local cancer recurrence. The aim of this study was to identify biomarkers of radiation sensitivity to enable personalized cancer treatment. To investigate the mechanisms behind radiation sensitivity, a pilot study was conducted in which eight radiation-sensitive and nine normo-sensitive patients were selected from a cohort of 2914 breast cancer patients, based on acute tissue reactions after radiation therapy. Whole blood was sampled and irradiated in vitro with 0, 1, or 150 mGy followed by 3 h incubation at 37°C. The leukocytes of the two groups were isolated, pooled and protein expression profiles were investigated using the isotope-coded protein labeling (ICPL) method. First, leukocytes from the in vitro irradiated whole blood from normo-sensitive and extremely sensitive patients were compared to the non-irradiated controls. To validate this first study, a second ICPL analysis comparing only the non-irradiated samples was conducted. Both approaches showed unique proteomic signatures separating the two groups at the basal level and after doses of 1 and 150 mGy. Pathway analyses of both proteomic approaches suggest that oxidative stress response, coagulation properties and acute phase response are hallmarks of radiation sensitivity, supporting our previous study on oxidative stress response. This investigation provides unique characteristics of radiation sensitivity essential for individualized radiation therapy. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. [Application of a continual improvement approach to selecting diagnostic markers for acute pancreatitis in an emergency department].

    PubMed

    Salinas, María; Flores, Emilio; López-Garrigós, Maite; Díaz, Elena; Esteban, Patricia; Leiva-Salinas, Carlos

    2017-01-01

    To apply a continual improvement model to develop an algorithm for ordering laboratory tests to diagnose acute pancreatitis in a hospital emergency department. Quasi-experimental study using the continual improvement model (plan, do, check, adjust cycles) in 2 consecutive phases in emergency patients: amylase and lipase results were used to diagnose acute pancreatitis in the first phase; in the second, only lipase level was first determined; amylase testing was then ordered only if the lipase level fell within a certain range. We collected demographic data, the number of amylase and lipase tests ordered and their findings, final diagnosis, and the results of a questionnaire to evaluate satisfaction with emergency care. The first phase included 517 patients, of whom 20 had acute pancreatitis. For amylase testing, sensitivity was 0.70; specificity, 0.85; positive predictive value (PPV), 17; and negative predictive value (NPV), 0.31. For lipase testing, these values were sensitivity, 0.85; specificity, 0.96; PPV, 21; and NPV, 0.16. When both tests were done, sensitivity was 0.85; specificity, 0.99; PPV, 85; and NPV, 0.15. The second phase included data for 4815 patients, 118 of whom had acute pancreatitis. The measures of diagnostic yield for the new algorithm were sensitivity, 0.92; specificity, 0.98; PPV, 46; and NPV, 0.08. This study demonstrates a process for developing a protocol to guide laboratory testing in acute pancreatitis in the hospital emergency department. The proposed sequence of testing for pancreatic enzyme levels can be effective for diagnosing acute pancreatitis in patients with abdominal pain.
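    For reference, the standard definitions of the diagnostic-yield metrics reported above can be computed from a 2x2 table as in the following sketch; the counts are placeholders rather than the study's data.

```python
# Minimal sketch of diagnostic-yield metrics from a 2x2 confusion table;
# the counts below are placeholders, not values from the study.
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                     # positive predictive value
    npv = tn / (tn + fn)                     # negative predictive value
    return sensitivity, specificity, ppv, npv

# Placeholder counts for a hypothetical test vs. final diagnosis.
sens, spec, ppv, npv = diagnostic_metrics(tp=17, fp=5, fn=3, tn=492)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```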

  13. Relationship between personality traits and vocational choice.

    PubMed

    Garcia-Sedeño, Manuel; Navarro, Jose I; Menacho, Inmaculada

    2009-10-01

    The relationship between occupational preferences and personality traits was examined. A randomly chosen sample of 735 students (age range = 17 to 23 years; 50.5% male) in their last year of high school participated in this study. Participants completed Cattell's Sixteen Personality Factor-5 Questionnaire (16PF-5 Questionnaire) and the Kuder-C Professional Tendencies Questionnaire. Initial hierarchical cluster analysis categorized the participants into two groups by Kuder-C vocational factors: one showed a predilection for scientific or technological careers and the other a bias toward the humanities and social sciences. Based on these groupings, differences in 16PF-5 personality traits were analyzed and differences associated with three first-order personality traits (warmth, dominance, and sensitivity), three second-order factors (extraversion, control, and independence), and some areas of professional interest (mechanical, arithmetical, artistic, persuasive, and welfare) were identified. The data indicated that there was congruency between personality profiles and vocational interests.

  14. Formulary evaluation of second-generation cephamycin derivatives using decision analysis.

    PubMed

    Barriere, S L

    1991-10-01

    Use of decision analysis in the formulary evaluation of the second-generation cephamycin derivatives cefoxitin, cefotetan, and cefmetazole is described. The rating system used was adapted from one used for the third-generation cephalosporins. Data on spectrum of activity, pharmacokinetics, adverse reactions, cost, and stability were taken from the published literature and the FDA-approved product labeling. The weighting scheme used for the third-generation cephalosporins was altered somewhat to reflect the more important aspects of the cephamycin derivatives and their potential role in surgical prophylaxis. Sensitivity analysis was done to assess the variability of the final scores when the assigned weights were varied within a reasonable range. Scores for cefmetazole and cefotetan were similar and did not differ significantly after sensitivity analysis. Cefoxitin scored significantly lower than the other two drugs. In the absence of data suggesting that the N-methyl thiotetrazole side chains of cefmetazole and cefotetan cause substantial toxicity, these two drugs can be considered the most cost-efficient members of the second-generation cephamycins.
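    The following is a minimal sketch of the weighted-score style of formulary decision analysis described above, with a crude sensitivity analysis that perturbs the weights and re-ranks; the drugs, criterion scores, and weights are hypothetical and are not taken from the published evaluation.

```python
# Minimal sketch of a weighted-score formulary comparison with a crude
# sensitivity analysis on the weights; all values are placeholders.
import numpy as np

criteria = ["spectrum", "pharmacokinetics", "adverse reactions", "cost", "stability"]
weights = np.array([0.30, 0.20, 0.20, 0.20, 0.10])     # hypothetical weights
scores = {                                             # hypothetical 0-10 ratings
    "drug A": np.array([8, 7, 6, 5, 7]),
    "drug B": np.array([8, 6, 7, 6, 7]),
    "drug C": np.array([6, 6, 6, 8, 7]),
}

def totals(w):
    return {drug: float(w @ s) for drug, s in scores.items()}

print("baseline scores:", totals(weights))

# Sensitivity analysis: vary each weight by +/-25%, renormalize, and re-rank.
for k, name in enumerate(criteria):
    for factor in (0.75, 1.25):
        w = weights.copy()
        w[k] *= factor
        w /= w.sum()
        t = totals(w)
        ranking = sorted(t, key=t.get, reverse=True)
        print(f"{name} x{factor}: ranking {ranking}")
```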

  15. Examining the accuracy of the infinite order sudden approximation using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Eno, Larry; Rabitz, Herschel

    1981-08-01

    A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of h0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and a rigid rotor. Results are generated within the He+H2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.

  16. Rethinking pedagogy for second-order differential equations: a simplified approach to understanding well-posed problems

    NASA Astrophysics Data System (ADS)

    Tisdell, Christopher C.

    2017-07-01

    Knowing an equation has a unique solution is important from both a modelling and theoretical point of view. For over 70 years, the approach to learning and teaching 'well posedness' of initial value problems (IVPs) for second- and higher-order ordinary differential equations has involved transforming the problem and its analysis to a first-order system of equations. We show that this excursion is unnecessary and present a direct approach regarding second- and higher-order problems that does not require an understanding of systems.

  17. Understanding Collagen Organization in Breast Tumors to Predict and Prevent Metastasis

    DTIC Science & Technology

    2010-09-01

    towards blood vessels along fibers that are visible via second harmonic generation (SHG), and SHG is exquisitely sensitive to molecular ordering... Tumor cells that are moving along SHG-producing (i.e. ordered) collagen fibers move significantly faster than those cells that are moving independently... of SHG-producing fibers, and the extent of SHG-associated tumor cell motility is correlated with metastatic ability of the tumor model. Furthermore

  18. Pseudo second order kinetics and pseudo isotherms for malachite green onto activated carbon: comparison of linear and non-linear regression methods.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2006-08-25

    Pseudo second order kinetic expressions of Ho, Sobkowski and Czerwinski, Blanchard et al. and Ritchie were fitted to the experimental kinetic data of malachite green onto activated carbon by non-linear and linear methods. The non-linear method was found to be a better way of obtaining the parameters involved in the second order rate kinetic expressions. Both linear and non-linear regression showed that the Sobkowski and Czerwinski and Ritchie pseudo second order models are the same. Non-linear regression analysis showed that both Blanchard et al. and Ho have similar ideas on the pseudo second order model but with different assumptions. The best fit of experimental data in Ho's pseudo second order expression by linear and non-linear regression methods showed that the Ho pseudo second order model was a better kinetic expression when compared to other pseudo second order kinetic expressions. The amount of dye adsorbed at equilibrium, q(e), was predicted from the Ho pseudo second order expression and fitted to the Langmuir, Freundlich and Redlich-Peterson expressions by both linear and non-linear methods to obtain the pseudo isotherms. The best fitting pseudo isotherms were found to be the Langmuir and Redlich-Peterson isotherms. Redlich-Peterson is a special case of Langmuir when the constant g equals unity.
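    A minimal sketch of fitting Ho's pseudo-second-order model by non-linear regression (and, for comparison, by its common linearized form) is given below; the synthetic uptake data are placeholders, not the malachite green measurements.

```python
# Minimal sketch of fitting Ho's pseudo-second-order kinetic model; the
# synthetic data are placeholders, not the experimental measurements.
import numpy as np
from scipy.optimize import curve_fit

def ho_pso(t, qe, k):
    # Integrated Ho pseudo-second-order model: q_t = k*qe^2*t / (1 + k*qe*t)
    return (k * qe**2 * t) / (1.0 + k * qe * t)

t = np.array([1, 2, 5, 10, 20, 40, 60, 90, 120], dtype=float)   # minutes
q_obs = ho_pso(t, qe=25.0, k=0.01) + np.random.normal(0.0, 0.3, t.size)

(qe_fit, k_fit), _ = curve_fit(ho_pso, t, q_obs, p0=[20.0, 0.005])
print(f"non-linear fit: qe = {qe_fit:.2f} mg/g, k = {k_fit:.4f} g/(mg*min)")

# Common linearized form t/q = 1/(k*qe^2) + t/qe, fitted by ordinary least squares.
slope, intercept = np.polyfit(t, t / q_obs, 1)
print(f"linearized fit: qe = {1.0/slope:.2f} mg/g, k = {slope**2/intercept:.4f} g/(mg*min)")
```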

  19. Analysis of Dual-Order Backward Pumping Schemes in Distributed Raman Amplification System

    NASA Astrophysics Data System (ADS)

    Singh, Kulwinder; Patterh, Manjeet Singh; Bhamrah, Manjit Singh

    2018-04-01

    Backward pumping in fiber Raman amplifiers is investigated in this paper in terms of on-off Raman gain, noise figure, and optical signal-to-noise ratio (OSNR). The results show that a scheme with four first-order pumps and one second-order pump can achieve an 8.2 dB noise figure in a 64-channel fiber-optic communication system. It is also reported that a 2.65 dB gain ripple, a 0.87 dB noise figure tilt, and a 2.02 dB OSNR tilt can be attained with second-order pumping in fiber Raman amplifiers. The main advantage of the scheme is that only 50 mW of second-order pump power yields an appreciable improvement in system performance, while further increases in first-order and second-order pump powers increase system noise.

  20. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
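    The bootstrap convergence idea can be illustrated with a minimal sketch: resample the available model evaluations, recompute the sensitivity indices, and monitor the width of the resulting confidence intervals. The toy model and the correlation-based index below are simplifications and not the estimators used in the study.

```python
# Minimal sketch of bootstrap-based convergence checking for sampling-based
# sensitivity indices; the model and index are deliberately simple placeholders.
import numpy as np

rng = np.random.default_rng(0)

def model(x):                           # toy 3-parameter model
    return x[:, 0] + 2.0 * x[:, 1] + 0.1 * x[:, 2] * x[:, 1]

def correlation_index(x, y):
    # Crude sensitivity proxy: squared correlation of each input with the output.
    return np.array([np.corrcoef(x[:, i], y)[0, 1] ** 2 for i in range(x.shape[1])])

n = 2000
x = rng.uniform(0.0, 1.0, size=(n, 3))
y = model(x)

boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)         # resample with replacement
    boot.append(correlation_index(x[idx], y[idx]))
boot = np.array(boot)

low, high = np.percentile(boot, [2.5, 97.5], axis=0)
print("index estimates:", correlation_index(x, y).round(3))
print("95% CI widths  :", (high - low).round(3))   # small widths suggest convergence
```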

  1. Epileptic seizure onset detection based on EEG and ECG data fusion.

    PubMed

    Qaraqe, Marwa; Ismail, Muhammad; Serpedin, Erchin; Zulfi, Haneef

    2016-05-01

    This paper presents a novel method for seizure onset detection using fused information extracted from multichannel electroencephalogram (EEG) and single-channel electrocardiogram (ECG). In existing seizure detectors, the analysis of the nonlinear and nonstationary ECG signal is limited to the time-domain or frequency-domain. In this work, heart rate variability (HRV) extracted from ECG is analyzed using a Matching-Pursuit (MP) and Wigner-Ville Distribution (WVD) algorithm in order to effectively extract meaningful HRV features representative of seizure and nonseizure states. The EEG analysis relies on a common spatial pattern (CSP) based feature enhancement stage that enables better discrimination between seizure and nonseizure features. The EEG-based detector uses logical operators to pool SVM seizure onset detections made independently across different EEG spectral bands. Two fusion systems are adopted. In the first system, EEG-based and ECG-based decisions are directly fused to obtain a final decision. The second fusion system adopts an override option that allows for the EEG-based decision to override the fusion-based decision in the event that the detector observes a string of EEG-based seizure decisions. The proposed detectors exhibit an improved performance, with respect to sensitivity and detection latency, compared with the state-of-the-art detectors. Experimental results demonstrate that the second detector achieves a sensitivity of 100%, detection latency of 2.6s, and a specificity of 99.91% for the MAJ fusion case. Copyright © 2016 Elsevier Inc. All rights reserved.
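    As a rough illustration of the two fusion options described above, the sketch below fuses per-window EEG and ECG decisions directly and then lets a run of consecutive EEG seizure decisions override the fused output; the fusion rule and run length are assumptions, not the paper's settings.

```python
# Minimal sketch of decision fusion with an EEG-run override; the AND rule
# and the run length of 3 are placeholders, not the paper's configuration.
def fuse(eeg_decisions, ecg_decisions, override_run=3):
    """Per-window decisions (1 = seizure); returns the fused decision sequence."""
    fused, run = [], 0
    for eeg, ecg in zip(eeg_decisions, ecg_decisions):
        run = run + 1 if eeg else 0
        direct = bool(eeg and ecg)               # direct fusion (placeholder AND rule)
        fused.append(direct or run >= override_run)
    return fused

eeg = [0, 1, 1, 1, 1, 0, 1, 0]
ecg = [0, 0, 0, 0, 0, 0, 0, 0]
print(fuse(eeg, ecg))   # the 3rd consecutive EEG detection triggers the override
```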

  2. Development of animal models and sandwich-ELISA tests to detect the allergenicity and antigenicity of fining agent residues in wines.

    PubMed

    Lifrani, Awatif; Dos Santos, Jacinthe; Dubarry, Michel; Rautureau, Michelle; Blachier, Francois; Tome, Daniel

    2009-01-28

    Food allergy can cause food-related anaphylaxis. Food allergen labeling is the principal means of protecting sensitized individuals. This motivated European Directive 2003/89 on the labeling of ingredients or additives that could trigger adverse reactions, which has been in effect since 2005. During this study, we developed animal models with allergy to ovalbumin, caseinate, and isinglass in order to be able to detect fining agent residues that could induce anaphylactic reactions in sensitized mice. The second aim of the study was to design sandwich ELISA tests specific to each fining agent in order to detect their residue antigenicity, both during wine processing and in commercially available bottled wines. Sensitized mice and sandwich ELISA methods were established to test a vast panel of wines. The results showed that although they were positive to our highly sensitive sandwich-ELISA tests, some commercially available wines are not allergenic in sensitized mice. Commercially available bottled wines made using standardized processes, fining, maturation, and filtration, do not therefore represent any risk of anaphylactic reactions in sensitized mice.

  3. Effects of Heterogeneity on Spatial Pattern Analysis of Wild Pistachio Trees in Zagros Woodlands, Iran

    NASA Astrophysics Data System (ADS)

    Erfanifard, Y.; Rezayan, F.

    2014-10-01

    Vegetation heterogeneity biases second-order summary statistics, e.g., Ripley's K-function, applied for spatial pattern analysis in ecology. Second-order investigation based on Ripley's K-function and related statistics (i.e., L- and pair correlation function g) is widely used in ecology to develop hypotheses on underlying processes by characterizing spatial patterns of vegetation. The aim of this study was to demonstrate effects of underlying heterogeneity of wild pistachio (Pistacia atlantica Desf.) trees on the second-order summary statistics of point pattern analysis in a part of Zagros woodlands, Iran. The spatial distribution of 431 wild pistachio trees was accurately mapped in a 40 ha stand in the Wild Pistachio & Almond Research Site, Fars province, Iran. Three commonly used second-order summary statistics (i.e., K-, L-, and g-functions) were applied to analyse their spatial pattern. The two-sample Kolmogorov-Smirnov goodness-of-fit test showed that the observed pattern significantly followed an inhomogeneous Poisson process null model in the study region. The results also showed that the heterogeneous pattern of wild pistachio trees biased the homogeneous forms of the K-, L-, and g-functions, indicating stronger aggregation of the trees at scales of 0-50 m than actually existed and suggesting aggregation at scales of 150-200 m where the trees were in fact regularly distributed. Consequently, we showed that heterogeneity of point patterns may bias the results of homogeneous second-order summary statistics, and we suggested applying inhomogeneous summary statistics with related null models for spatial pattern analysis of heterogeneous vegetation.
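    For readers unfamiliar with the summary statistics involved, the following is a minimal sketch of a homogeneous Ripley's K (and centered L) estimate without edge correction; the random point pattern stands in for the mapped tree coordinates, and a proper analysis would add edge correction and the inhomogeneous forms discussed above.

```python
# Minimal sketch of homogeneous Ripley's K and L(r) - r without edge
# correction; the random pattern is a placeholder for the mapped trees.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 200.0, size=(300, 2))     # points in a 200 m x 200 m plot
area = 200.0 * 200.0
n = len(pts)

d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
np.fill_diagonal(d, np.inf)                      # exclude self-pairs

radii = np.linspace(1.0, 50.0, 50)
K = np.array([(d <= r).sum() for r in radii]) * area / (n * (n - 1))
L = np.sqrt(K / np.pi) - radii                   # L(r) - r; near 0 for complete spatial randomness

print("max |L(r) - r| over 1-50 m:", np.abs(L).max().round(2))
```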

  4. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show a significant computational advantage over those obtained by DD for some cases.
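    The contrast between exact AD derivatives and divided differences can be illustrated with a toy forward-mode implementation based on dual numbers, as sketched below; this is unrelated to the flow-solver code and is only meant to show why AD avoids step-size error.

```python
# Minimal sketch of forward-mode automatic differentiation with dual numbers,
# compared against a central divided difference on a toy function.
import math

class Dual:
    """Value-and-derivative pair for forward-mode AD."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(float(o))
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def d_sin(x):
    # sin that works on both floats and Dual numbers.
    if isinstance(x, Dual):
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)
    return math.sin(x)

def f(x):
    # Toy "solver output" as a function of one design parameter.
    return x * x * x + 2.0 * d_sin(x)

x0, h = 0.7, 1e-6
ad = f(Dual(x0, 1.0)).dot                         # exact derivative via AD
dd = (f(x0 + h) - f(x0 - h)) / (2.0 * h)          # central divided difference
exact = 3.0 * x0**2 + 2.0 * math.cos(x0)
print(f"AD: {ad:.12f}  divided diff: {dd:.12f}  exact: {exact:.12f}")
```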

  5. Continuous and Discrete Structured Population Models with Applications to Epidemiology and Marine Mammals

    NASA Astrophysics Data System (ADS)

    Tang, Tingting

    In this dissertation, we develop structured population models to examine how changes in the environment affect population processes. In Chapter 2, we develop a general continuous-time size-structured model describing a susceptible-infected (SI) population coupled with the environment. This model applies to problems arising in ecology, epidemiology, and cell biology. The model consists of a system of quasilinear hyperbolic partial differential equations coupled with a system of nonlinear ordinary differential equations that represent the environment. We develop a second-order high-resolution finite difference scheme to numerically solve the model. Convergence of this scheme to a weak solution with bounded total variation is proved. We numerically compare the second-order high-resolution scheme with a first-order finite difference scheme; a higher order of convergence and the high-resolution property are observed in the second-order scheme. In addition, we apply our model to a multi-host wildlife disease problem, and questions regarding the impact of the initial population structure and the transition rate within each host are numerically explored. In Chapter 3, we use a stage-structured matrix model for a wildlife population to study the recovery process of the population after an environmental disturbance. We focus on the time it takes for the population to recover to its pre-event level and develop general formulas to calculate the sensitivity or elasticity of the recovery time to changes in the initial population distribution, vital rates, and event severity. Our results suggest that the recovery time is independent of the initial population size but is sensitive to the initial population structure. Moreover, it is more sensitive to the proportional reduction in the population's vital rates caused by the catastrophic event than to the duration of the event's impact. We present potential applications of our model to amphibian population dynamics and the recovery of a certain plant population. In addition, we explore in detail the application of the model to the sperm whale population in the Gulf of Mexico after the Deepwater Horizon oil spill. In Chapter 4, we summarize the results from Chapters 2 and 3 and explore some further avenues of our research.

  6. Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.

    PubMed

    Shelley, M J; Tao, L

    2001-01-01

    To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified Runge-Kutta fourth-order scheme needs a time-step of Δt = 0.5 × 10⁻³ seconds, whereas to achieve comparable accuracy using a recalibrated second-order or a first-order algorithm requires time-steps of 10⁻⁵ seconds or 10⁻⁹ seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
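    A minimal sketch of the recalibrated-reset idea for a single integrate-and-fire neuron is shown below: second-order Runge-Kutta stepping, a linear interpolant for the spike time within the step, and integration of the remainder of the step from the reset value. The parameter values are placeholders, and the scheme here is a simplification of the methods discussed above.

```python
# Minimal sketch: RK2 integrate-and-fire stepping with a linearly interpolated
# spike time and a recalibrated reset; parameters are illustrative only.
tau, v_th, v_reset, I = 20e-3, 1.0, 0.0, 60.0   # time constant (s), threshold, reset, drive

def dvdt(v):
    return -v / tau + I                          # leaky integrate-and-fire dynamics

def rk2_step(v, dt):
    k1 = dvdt(v)
    k2 = dvdt(v + dt * k1)
    return v + 0.5 * dt * (k1 + k2)

dt, t, v, spikes = 1e-4, 0.0, 0.0, []
while t < 0.1:                                   # simulate 100 ms
    v_new = rk2_step(v, dt)
    if v_new >= v_th:
        frac = (v_th - v) / (v_new - v)          # linear interpolant for the crossing
        spikes.append(t + frac * dt)
        v = rk2_step(v_reset, (1.0 - frac) * dt) # recalibrate: reset, finish the step
    else:
        v = v_new
    t += dt

print(f"{len(spikes)} spikes; first at t = {spikes[0] * 1e3:.3f} ms")
```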

  7. Fast smooth second-order sliding mode control for systems with additive colored noises.

    PubMed

    Yang, Pengfei; Fang, Yangwang; Wu, Youli; Liu, Yunxia; Zhang, Danxu

    2017-01-01

    In this paper, a fast smooth second-order sliding mode control is presented for a class of stochastic systems with enumerable Ornstein-Uhlenbeck colored noises. The finite-time mean-square practical stability and finite-time mean-square practical reachability are first introduced. Instead of treating the noise as bounded disturbance, the stochastic control techniques are incorporated into the design of the controller. The finite-time convergence of the prescribed sliding variable dynamics system is proved by using stochastic Lyapunov-like techniques. Then the proposed sliding mode controller is applied to a second-order nonlinear stochastic system. Simulation results are presented comparing with smooth second-order sliding mode control to validate the analysis.
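    The paper's fast smooth second-order controller is not reproduced here; as a baseline for comparison, the sketch below simulates the classical super-twisting second-order sliding mode law on a toy perturbed integrator with an Euler discretization. The gains and the disturbance are illustrative assumptions.

```python
# Minimal sketch of the classical super-twisting algorithm on s_dot = u + d(t);
# gains, step size, and disturbance are placeholders, not the paper's design.
import math

k1, k2, dt = 1.5, 1.1, 1e-3          # illustrative gains and Euler step
s, v = 1.0, 0.0                      # sliding variable and integral control term

for step in range(5000):             # 5 seconds of simulated time
    tme = step * dt
    d = 0.3 * math.sin(2.0 * math.pi * 0.2 * tme)   # bounded, smooth disturbance
    sign_s = (s > 0) - (s < 0)
    u = -k1 * math.sqrt(abs(s)) * sign_s + v        # continuous part
    v += -k2 * sign_s * dt                          # integral of the discontinuous part
    s += (u + d) * dt                               # sliding dynamics s_dot = u + d

print("final |s| after 5 s:", abs(s))
```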

  8. New robust bilinear least squares method for the analysis of spectral-pH matrix data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C

    2005-07-01

    A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model achieving the second-order advantage and handling multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the combination multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate an improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results in regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using the pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.

  9. Time domain reflectometry waveform analysis with second order bounded mean oscillation

    USDA-ARS?s Scientific Manuscript database

    Tangent-line methods and adaptive waveform interpretation with Gaussian filtering (AWIGF) have been proposed for determining reflection positions of time domain reflectometry (TDR) waveforms. However, the accuracy of those methods is limited for short probe TDR sensors. Second order bounded mean osc...

  10. A gustatory second-order neuron that connects sucrose-sensitive primary neurons and a distinct region of the gnathal ganglion in the Drosophila brain

    PubMed Central

    Miyazaki, Takaaki; Lin, Tzu-Yang; Ito, Kei; Lee, Chi-Hon; Stopfer, Mark

    2016-01-01

    Although the gustatory system provides animals with sensory cues important for food choice and other critical behaviors, little is known about neural circuitry immediately following gustatory sensory neurons (GSNs). Here, we identify and characterize a bilateral pair of gustatory second-order neurons in Drosophila. Previous studies identified GSNs that relay taste information to distinct subregions of the primary gustatory center (PGC) in the gnathal ganglia (GNG). To identify candidate gustatory second-order neurons (G2Ns) we screened ~5,000 GAL4 driver strains for lines that label neural fibers innervating the PGC. We then combined GRASP (GFP reconstitution across synaptic partners) with presynaptic labeling to visualize potential synaptic contacts between the dendrites of the candidate G2Ns and the axonal terminals of Gr5a-expressing GSNs, which are known to respond to sucrose. Results of the GRASP analysis, followed by a single cell analysis by FLPout recombination, revealed a pair of neurons that contact Gr5a axon terminals in both brain hemispheres, and send axonal arborizations to a distinct region outside the PGC but within the GNG. To characterize the input and output branches, respectively, we expressed fluorescence-tagged acetylcholine receptor subunit (Dα7) and active-zone marker (Brp) in the G2Ns. We found that G2N input sites overlaid GRASP-labeled synaptic contacts to Gr5a neurons, while presynaptic sites were broadly distributed throughout the neurons’ arborizations. GRASP analysis and further tests with the Syb-GRASP method suggested that the identified G2Ns receive synaptic inputs from Gr5a-expressing GSNs, but not Gr66a-expressing GSNs, which respond to caffeine. The identified G2Ns relay information from Gr5a-expressing GSNs to distinct regions in the GNG, and are distinct from other, recently identified gustatory projection neurons, which relay information about sugars to a brain region called the antennal mechanosensory and motor center (AMMC). Our findings suggest unexpected complexity for taste information processing in the first relay of the gustatory system. PMID:26004543

  11. Time-Accurate Unsteady Pressure Loads Simulated for the Space Launch System at Wind Tunnel Conditions

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.

    2015-01-01

    A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement for frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed. These included an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date has been established for FUN3D simulations for SLS launch vehicle analysis. To the authors' knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.

  12. Frequency dependence of sensitivities in second-order RC active filters

    NASA Astrophysics Data System (ADS)

    Kunieda, T.; Hiramatsu, Y.; Fukui, A.

    1980-02-01

    This paper shows that the gain and phase sensitivities with respect to an element of a biquadratic filter approximately trace a circle in the complex sensitivity plane, provided that the quality factor Q of the circuit is appreciably larger than unity. Moreover, the group delay sensitivity is represented by the imaginary part of a cardioid. Using these results, bounds on the maximum values of gain, phase, and group delay sensitivities are obtained. Further, it is proved that the maximum values of these sensitivities can be simultaneously minimized by minimizing the absolute value of the transfer function sensitivity at the center frequency, provided that the ω0-sensitivities are constant and do not contain design parameters. Next, a statistical variability measure for optimal-filter design is proposed. Finally, the relation among several variability measures proposed to date is clarified.

  13. Digitized Spiral Drawing: A Possible Biomarker for Early Parkinson's Disease.

    PubMed

    San Luciano, Marta; Wang, Cuiling; Ortega, Roberto A; Yu, Qiping; Boschung, Sarah; Soto-Valencia, Jeannie; Bressman, Susan B; Lipton, Richard B; Pullman, Seth; Saunders-Pullman, Rachel

    2016-01-01

    Pre-clinical markers of Parkinson's Disease (PD) are needed, and to be relevant in pre-clinical disease, they should be quantifiably abnormal in early disease as well. Handwriting is impaired early in PD and can be evaluated using computerized analysis of drawn spirals, capturing kinematic, dynamic, and spatial abnormalities and calculating indices that quantify motor performance and disability. Digitized spiral drawing correlates with motor scores and may be more sensitive in detecting early changes than subjective ratings. However, whether changes in spiral drawing are abnormal compared with controls and whether changes are detected in early PD are unknown. 138 PD subjects (50 with early PD) and 150 controls drew spirals on a digitizing tablet, generating x, y, z (pressure) data-coordinates and time. Derived indices corresponded to overall spiral execution (severity), shape and kinematic irregularity (second order smoothness, first order zero-crossing), tightness, mean speed and variability of spiral width. Linear mixed effect adjusted models comparing these indices and cross-validation were performed. Receiver operating characteristic analysis was applied to examine discriminative validity of combined indices. All indices were significantly different between PD cases and controls, except for zero-crossing. A model using all indices had high discriminative validity (sensitivity = 0.86, specificity = 0.81). Discriminative validity was maintained in patients with early PD. Spiral analysis accurately discriminates subjects with PD and early PD from controls supporting a role as a promising quantitative biomarker. Further assessment is needed to determine whether spiral changes are PD specific compared with other disorders and if present in pre-clinical PD.
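
    The discriminative-validity analysis described above can be sketched generically: combine several indices into one classifier score and read sensitivity and specificity off an ROC curve. The simulated data, index names, and effect sizes below are assumptions, not values or models from the study.

      # Minimal sketch: combine spiral indices with logistic regression, then report
      # cross-validated AUC, sensitivity, and specificity from the ROC curve.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_predict
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(1)
      n_pd, n_ctrl = 138, 150
      # five illustrative indices (e.g., severity, smoothness, tightness, speed, width variability)
      X = np.vstack([rng.normal(0.6, 1.0, size=(n_pd, 5)),
                     rng.normal(0.0, 1.0, size=(n_ctrl, 5))])
      y = np.r_[np.ones(n_pd), np.zeros(n_ctrl)]

      clf = LogisticRegression(max_iter=1000)
      proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
      fpr, tpr, thr = roc_curve(y, proba)
      j = np.argmax(tpr - fpr)                  # Youden's J as one reasonable operating point
      print(f"AUC={roc_auc_score(y, proba):.2f}  sens={tpr[j]:.2f}  spec={1 - fpr[j]:.2f}")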

  14. Digitized Spiral Drawing: A Possible Biomarker for Early Parkinson’s Disease

    PubMed Central

    San Luciano, Marta; Wang, Cuiling; Ortega, Roberto A.; Yu, Qiping; Boschung, Sarah; Soto-Valencia, Jeannie; Bressman, Susan B.; Lipton, Richard B.; Pullman, Seth; Saunders-Pullman, Rachel

    2016-01-01

    Introduction Pre-clinical markers of Parkinson’s Disease (PD) are needed, and to be relevant in pre-clinical disease, they should be quantifiably abnormal in early disease as well. Handwriting is impaired early in PD and can be evaluated using computerized analysis of drawn spirals, capturing kinematic, dynamic, and spatial abnormalities and calculating indices that quantify motor performance and disability. Digitized spiral drawing correlates with motor scores and may be more sensitive in detecting early changes than subjective ratings. However, whether changes in spiral drawing are abnormal compared with controls and whether changes are detected in early PD are unknown. Methods 138 PD subjects (50 with early PD) and 150 controls drew spirals on a digitizing tablet, generating x, y, z (pressure) data-coordinates and time. Derived indices corresponded to overall spiral execution (severity), shape and kinematic irregularity (second order smoothness, first order zero-crossing), tightness, mean speed and variability of spiral width. Linear mixed effect adjusted models comparing these indices and cross-validation were performed. Receiver operating characteristic analysis was applied to examine discriminative validity of combined indices. Results All indices were significantly different between PD cases and controls, except for zero-crossing. A model using all indices had high discriminative validity (sensitivity = 0.86, specificity = 0.81). Discriminative validity was maintained in patients with early PD. Conclusion Spiral analysis accurately discriminates subjects with PD and early PD from controls supporting a role as a promising quantitative biomarker. Further assessment is needed to determine whether spiral changes are PD specific compared with other disorders and if present in pre-clinical PD. PMID:27732597

  15. Socio-climatic Exposure of an Afghan Poppy Farmer

    NASA Astrophysics Data System (ADS)

    Mankin, J. S.; Diffenbaugh, N. S.

    2011-12-01

    Many posit that climate impacts from anthropogenic greenhouse gas emissions will have consequences for the natural and agricultural systems on which humans rely for food, energy, and livelihoods, and therefore for stability and human security. However, many of the potential mechanisms of action in climate impacts and human systems response, as well as the differential vulnerabilities of such systems, remain underexplored and unquantified. Here I present two initial steps necessary to characterize and quantify the consequences of climate change for farmer livelihood in Afghanistan, given both climate impacts and farmer vulnerabilities. The first is a conceptual model mapping the potential relationships between Afghanistan's climate, the winter agricultural season, and the country's political economy of violence and instability. The second is a utility-based decision model for assessing farmer response sensitivity to various climate impacts based on crop sensitivities. A farmer's winter planting decision can be modeled roughly as a tradeoff between cultivating the two crops that dominate the winter growing season: opium poppy (a climate-tolerant cash crop) and wheat (a climatically vulnerable crop grown for household consumption). Early sensitivity analysis results suggest that wheat yield dominates the variability in farmer decision making; however, such initial results may depend on the relative parameter ranges of wheat and poppy yields. Importantly, though, the variance in Afghanistan's winter harvest yields of poppy and wheat is tightly linked to household livelihood and is thus indirectly connected to the wider instability and insecurity within the country. This initial analysis motivates my focused research on the sensitivity of these crops to climate variability in order to project farmer well-being and decision sensitivity in a warmer world.

  16. Extending High-Order Flux Operators on Spherical Icosahedral Grids and Their Applications in the Framework of a Shallow Water Model

    NASA Astrophysics Data System (ADS)

    Zhang, Yi

    2018-01-01

    This study extends a set of unstructured third/fourth-order flux operators on spherical icosahedral grids from two perspectives. First, the fifth-order and sixth-order flux operators of this kind are further extended, and the nominally second-order to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the standard fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. Even-order operators show higher limiter sensitivity than the odd-order operators. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than second-order and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, high-order flux operators with higher damping coefficients, which essentially behave like the Anticipated Potential Vorticity Method, present better results.

  17. Second-harmonic generation microscopy of tooth

    NASA Astrophysics Data System (ADS)

    Kao, Fu-Jen; Wang, Yung-Shun; Huang, Mao-Kuo; Huang, Sheng-Lung; Cheng, Ping C.

    2000-07-01

    In this study, we have developed a high-performance microscopic system to perform second-harmonic (SH) imaging of a tooth. The high sensitivity of the system allows an acquisition rate of 300 seconds/frame at a resolution of 512x512 pixels. The surface SH signal generated from the tooth is also carefully verified through micro-spectroscopy, polarization rotation, and wavelength tuning. In this way, we can ensure the authenticity of the signal. The enamel that encapsulates the dentine is known to possess highly ordered structures. The anisotropy of the structure is revealed in the microscopic SH images of the tooth sample.

  18. iTOUGH2 V6.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, Stefan A.

    2010-11-01

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in the geosciences, reservoir engineering, and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a nonlinear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulation for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and the publications cited there. Hardware requirements: multi-platform. Related/auxiliary software: PVM (if running in parallel).
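
    The core of the inverse problem described above, minimizing a nonlinear objective of weighted differences between model output and observations with a gradient-based algorithm, can be sketched generically as follows. The toy forward model, weights, and data are assumptions for illustration only; this is not iTOUGH2's interface or its PEST link.

      # Minimal sketch: weighted nonlinear least-squares parameter estimation with a
      # toy two-parameter forward model standing in for a reservoir simulation run.
      import numpy as np
      from scipy.optimize import least_squares

      def forward(params, t):
          a, k = params
          return a * np.exp(-k * t)               # stand-in forward model

      t_obs = np.linspace(0.0, 10.0, 25)
      true = (2.0, 0.35)
      rng = np.random.default_rng(2)
      sigma = 0.05                                 # observation standard error (weight = 1/sigma)
      obs = forward(true, t_obs) + sigma * rng.standard_normal(t_obs.size)

      def weighted_residuals(params):
          return (forward(params, t_obs) - obs) / sigma

      fit = least_squares(weighted_residuals, x0=[1.0, 0.1])   # gradient-based minimizer
      cov = np.linalg.inv(fit.jac.T @ fit.jac)                 # linearized parameter covariance
      print("estimates:", fit.x, "std errors:", np.sqrt(np.diag(cov)))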

  19. Developmental Changes in Face Processing Skills.

    ERIC Educational Resources Information Center

    Mondloch, Catherine J.; Geldart, Sybil; Maurer, Daphne; Le Grand, Richard

    2003-01-01

    Two experiments examined the slow development of the ability to process differences among faces in the spacing of facial features (second-order relations). Computerized tasks involving various face-processing skills were used. Results of an experiment with 6-, 8-, and 10-year-olds and with adults indicated that slow development of sensitivity to…

  20. Alteration of sensitivity of intratumor quiescent and total cells to gamma-rays following thermal neutron irradiation with or without 10B-compound.

    PubMed

    Masunaga, S; Ono, K; Suzuki, M; Sakurai, Y; Kobayashi, T; Takagaki, M; Kinashi, Y; Akaboshi, M

    2000-02-01

    Changes in the sensitivity of intratumor quiescent (Q) and total cells to gamma-rays following thermal neutron irradiation with or without 10B-compound were examined. 5-Bromo-2'-deoxyuridine (BrdU) was injected to SCC VII tumor-bearing mice intraperitoneally 10 times to label all the proliferating (P) tumor cells. As priming irradiation, thermal neutrons alone or thermal neutrons with 10B-labeled sodium borocaptate (BSH) or dl-p-boronophenylalanine (BPA) were administered. The tumor-bearing mice then received a series of gamma-ray radiation doses, 0 through 24 h after the priming irradiation. During this period, no BrdU was administered. Immediately after the second irradiation, the tumors were excised, minced, and trypsinized. Following incubation of tumor cells with cytokinesis blocker, the micronucleus (MN) frequency in cells without BrdU labeling (= Q cells at the time of priming irradiation) was determined using immunofluorescence staining for BrdU. The MN frequency in the total (P + Q) tumor cells was determined from the tumors that were not pretreated with BrdU before the priming irradiation. To determine the BrdU-labeled cell ratios in the tumors at the time of the second irradiation, each group also included mice that were continuously administered BrdU until just before the second irradiation using mini-osmotic pumps which had been implanted subcutaneously 5 days before the priming irradiation. In total cells, during the interval between the two irradiations, the tumor sensitivity to gamma-rays relative to that immediately after priming irradiation decreased with the priming irradiation ranking in the following order: thermal neutrons only > thermal neutrons with BSH > thermal neutrons with BPA. In contrast, in Q cells, during that time the sensitivity increased in the following order: thermal neutrons only < thermal neutrons with BSH < thermal neutrons with BPA. The longer the interval between the two irradiations, the higher was the BrdU-labeled cell ratio at the second irradiation. The labeled cell ratio at the same time point after each priming irradiation increased in the following order: thermal neutrons only < thermal neutrons with BSH < thermal neutrons with BPA. These findings indicated that the use of 10B-compound, especially BPA, in thermal neutron irradiation causes the recruitment from the Q to P population.

  1. The structure of paranoia in the general population.

    PubMed

    Bebbington, Paul E; McBride, Orla; Steel, Craig; Kuipers, Elizabeth; Radovanovic, Mirjana; Brugha, Traolach; Jenkins, Rachel; Meltzer, Howard I; Freeman, Daniel

    2013-06-01

    Psychotic phenomena appear to form a continuum with normal experience and beliefs, and may build on common emotional interpersonal concerns. We tested predictions that paranoid ideation is exponentially distributed and hierarchically arranged in the general population, and that persecutory ideas build on more common cognitions of mistrust, interpersonal sensitivity and ideas of reference. Items were chosen from the Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II) questionnaire and the Psychosis Screening Questionnaire in the second British National Survey of Psychiatric Morbidity (n = 8580), to test a putative hierarchy of paranoid development using confirmatory factor analysis, latent class analysis and factor mixture modelling analysis. Different types of paranoid ideation ranged in frequency from less than 2% to nearly 30%. Total scores on these items followed an almost perfect exponential distribution (r = 0.99). Our four a priori first-order factors were corroborated (interpersonal sensitivity; mistrust; ideas of reference; ideas of persecution). These mapped onto four classes of individual respondents: a rare, severe, persecutory class with high endorsement of all item factors, including persecutory ideation; a quasi-normal class with infrequent endorsement of interpersonal sensitivity, mistrust and ideas of reference, and no ideas of persecution; and two intermediate classes, characterised respectively by relatively high endorsement of items relating to mistrust and to ideas of reference. The paranoia continuum has implications for the aetiology, mechanisms and treatment of psychotic disorders, while confirming the lack of a clear distinction from normal experiences and processes.

  2. Examining the accuracy of the infinite order sudden approximation using sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eno, L.; Rabitz, H.

    1981-08-15

    A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method, and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h_0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h_0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory, a finite result is obtained for the effect of h_0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and rigid rotor. Results are generated within the He+H_2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.

  3. Effects of Second-Order Hydrodynamics on a Semisubmersible Floating Offshore Wind Turbine: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayati, I.; Jonkman, J.; Robertson, A.

    2014-07-01

    The objective of this paper is to assess the second-order hydrodynamic effects on a semisubmersible floating offshore wind turbine. Second-order hydrodynamics induce loads and motions at the sum- and difference-frequencies of the incident waves. These effects have often been ignored in offshore wind analysis, under the assumption that they are significantly smaller than first-order effects. The sum- and difference-frequency loads can, however, excite eigenfrequencies of the system, leading to large oscillations that strain the mooring system or vibrations that cause fatigue damage to the structure. Observations of supposed second-order responses in wave-tank tests performed by the DeepCwind consortium at the MARIN offshore basin suggest that these effects might be more important than originally expected. These observations inspired interest in investigating how second-order excitation affects floating offshore wind turbines and whether second-order hydrodynamics should be included in offshore wind simulation tools like FAST in the future. In this work, the effects of second-order hydrodynamics on a floating semisubmersible offshore wind turbine are investigated. Because FAST is currently unable to account for second-order effects, a method to assess these effects was applied in which linearized properties of the floating wind system derived from FAST (including the 6x6 mass and stiffness matrices) are used by WAMIT to solve the first- and second-order hydrodynamics problems in the frequency domain. The method has been applied to the OC4-DeepCwind semisubmersible platform, supporting the NREL 5-MW baseline wind turbine. The loads and response of the system due to the second-order hydrodynamics are analysed and compared to first-order hydrodynamic loads and induced motions in the frequency domain. Further, the second-order loads and induced response data are compared to the loads and motions induced by aerodynamic loading as solved by FAST.

  4. 2D Decision-Making for Multi-Criteria Design Optimization

    DTIC Science & Technology

    2006-05-01

    participating in the same subproblem, information on the tradeoffs between different subproblems is obtained from a sensitivity analysis and used for...accomplished by some other mechanism. For the coordination between subproblems, we use the lexicographical ordering approach for multicriteria ...Sensitivity analysis Our approach uses sensitivity results from nonlinear programming (Fiacco, 1983; Luenberger, 2003), for which we first

  5. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
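
    The verification step mentioned above, checking a derived sensitivity against a finite-difference estimate, is generic enough to sketch with a toy objective. The function and design point below are arbitrary illustrations, not the report's aerodynamic or structural responses.

      # Minimal sketch: compare an analytic design sensitivity with a central
      # finite-difference estimate for a toy objective of two design variables.
      import numpy as np

      def objective(x):
          return x[0] ** 2 * np.sin(x[1]) + 3.0 * x[1]

      def grad_analytic(x):
          return np.array([2.0 * x[0] * np.sin(x[1]),
                           x[0] ** 2 * np.cos(x[1]) + 3.0])

      def grad_fd(x, h=1e-5):
          g = np.zeros_like(x)
          for i in range(x.size):
              e = np.zeros_like(x); e[i] = h
              g[i] = (objective(x + e) - objective(x - e)) / (2.0 * h)   # central difference
          return g

      x = np.array([1.3, 0.7])
      print("analytic         :", grad_analytic(x))
      print("finite difference:", grad_fd(x))
      print("max abs error    :", np.max(np.abs(grad_analytic(x) - grad_fd(x))))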

  6. The Higgs vacuum uplifted: revisiting the electroweak phase transition with a second Higgs doublet

    NASA Astrophysics Data System (ADS)

    Dorsch, G. C.; Huber, S. J.; Mimasu, K.; No, J. M.

    2017-12-01

    The existence of a second Higgs doublet in Nature could lead to a cosmological first order electroweak phase transition and explain the origin of the matter-antimatter asymmetry in the Universe. We explore the parameter space of such a two-Higgs-doublet model and show that a first order electroweak phase transition strongly correlates with a significant uplifting of the Higgs vacuum w.r.t. its Standard Model value. We then obtain the spectrum and properties of the new scalars H0, A0 and H± that signal such a phase transition, showing that the decay A0 → H0 Z at the LHC and a sizable deviation in the Higgs self-coupling λ_hhh from its SM value are sensitive indicators of a strongly first order electroweak phase transition in the 2HDM.

  7. Low-sensitivity, frequency-selective amplifier circuits for hybrid and bipolar fabrication.

    NASA Technical Reports Server (NTRS)

    Pi, C.; Dunn, W. R., Jr.

    1972-01-01

    A network is described which is suitable for realizing a low-sensitivity high-Q second-order frequency-selective amplifier for high-frequency operation. Circuits are obtained from this network which are well suited for realizing monolithic integrated circuits and which do not require any process steps more critical than those used for conventional monolithic operational and video amplifiers. A single chip version using compatible thin-film techniques for the frequency determination elements is then feasible. Center frequency and bandwidth can be set independently by trimming two resistors. The frequency selective circuits have a low sensitivity to the process variables, and the sensitivity of the center frequency and bandwidth to changes in temperature is very low.

  8. Fast smooth second-order sliding mode control for stochastic systems with enumerable coloured noises

    NASA Astrophysics Data System (ADS)

    Yang, Peng-fei; Fang, Yang-wang; Wu, You-li; Zhang, Dan-xu; Xu, Yang

    2018-01-01

    A fast smooth second-order sliding mode control is presented for a class of stochastic systems driven by enumerable Ornstein-Uhlenbeck coloured noises with time-varying coefficients. Instead of treating the noise as a bounded disturbance, stochastic control techniques are incorporated into the design of the control. Finite-time mean-square practical stability and finite-time mean-square practical reachability are first introduced. Then the prescribed sliding variable dynamic is presented. The sufficient condition guaranteeing its finite-time convergence is given and proved using stochastic Lyapunov-like techniques. The proposed sliding mode controller is applied to a second-order nonlinear stochastic system. Simulation results comparing the proposed controller with a smooth second-order sliding mode control are given to validate the analysis.
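
    For orientation, a generic second-order sliding mode (super-twisting) controller on a deterministic double integrator with a bounded matched disturbance is sketched below. It illustrates the control class only, not the paper's fast smooth controller for coloured-noise-driven stochastic systems; the gains, disturbance, and sliding surface are assumptions.

      # Minimal sketch: super-twisting (second-order sliding mode) control of a double
      # integrator. The sliding variable sigma has relative degree one w.r.t. the input.
      import numpy as np

      dt, T = 1e-3, 10.0
      c, k1, k2 = 2.0, 1.5, 1.1           # surface slope and super-twisting gains (k2 > |d_dot|)
      x, xdot, v = 1.0, 0.0, 0.0          # initial state and integral control term

      hist = []
      for k in range(int(T / dt)):
          t = k * dt
          d = 0.3 * np.sin(2.0 * t)                        # bounded disturbance
          sigma = xdot + c * x                             # sliding variable
          u = -c * xdot - k1 * np.sqrt(abs(sigma)) * np.sign(sigma) + v
          v += -k2 * np.sign(sigma) * dt                   # integral (second-order) term
          xddot = u + d
          xdot += xddot * dt                               # semi-implicit Euler step
          x += xdot * dt
          hist.append(sigma)

      print("max |sigma| over the final second:", np.max(np.abs(hist[-1000:])))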

  9. Nonintravenous rescue medications for pediatric status epilepticus: A cost-effectiveness analysis.

    PubMed

    Sánchez Fernández, Iván; Gaínza-Lein, Marina; Loddenkemper, Tobias

    2017-08-01

    To quantify the cost-effectiveness of rescue medications for pediatric status epilepticus: rectal diazepam, nasal midazolam, buccal midazolam, intramuscular midazolam, and nasal lorazepam. Decision analysis model populated with effectiveness data from the literature and cost data from publicly available market prices. The primary outcome was cost per seizure stopped ($/SS). One-way sensitivity analyses and second-order Monte Carlo simulations evaluated the robustness of the results across wide variations of the input parameters. The most cost-effective rescue medication was buccal midazolam (incremental cost-effectiveness ratio [ICER]: $13.16/SS), followed by nasal midazolam (ICER: $38.19/SS). Nasal lorazepam (ICER: -$3.8/SS), intramuscular midazolam (ICER: -$64/SS), and rectal diazepam (ICER: -$2,246.21/SS) are never more cost-effective than the other options at any willingness to pay. One-way sensitivity analysis showed the following: (1) at its current effectiveness, rectal diazepam would become the most cost-effective option only if its cost was $6 or less, and (2) at its current cost, rectal diazepam would become the most cost-effective option only if effectiveness was higher than 0.89 (and only with very high willingness to pay of $2,859/SS to $31,447/SS). Second-order Monte Carlo simulations showed the following: (1) nasal midazolam and intramuscular midazolam were the more effective options; (2) the more cost-effective option was buccal midazolam for a willingness to pay from $14/SS to $41/SS and nasal midazolam for a willingness to pay above $41/SS; (3) cost-effectiveness overlapped for buccal midazolam, nasal lorazepam, intramuscular midazolam, and nasal midazolam; and (4) rectal diazepam was not cost-effective at any willingness to pay, and this conclusion remained extremely robust to wide variations of the input parameters. For pediatric status epilepticus, buccal midazolam and nasal midazolam are the most cost-effective nonintravenous rescue medications in the United States. Rectal diazepam is not a cost-effective alternative, and this conclusion remains extremely robust to wide variations of the input parameters. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
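
    The ICER construction itself is simple to sketch: order the strategies by cost, drop dominated ones, and take incremental cost/effect ratios along the frontier. The costs and effectiveness values below are placeholders, not the study's inputs, and the sketch uses only simple dominance (the study's analysis also involved extended dominance and second-order Monte Carlo simulation, which are not reproduced here).

      # Minimal sketch: incremental cost-effectiveness ratios ($ per seizure stopped)
      # for a set of (name, cost, effectiveness) strategies, after removing strategies
      # that are both more costly and no more effective than another option.
      def nondominated(strategies):
          keep = []
          for a in strategies:
              dominated = any(b[1] <= a[1] and b[2] >= a[2] and b != a for b in strategies)
              if not dominated:
                  keep.append(a)
          return sorted(keep, key=lambda r: r[1])        # ascending cost

      def icers(strategies):
          front = nondominated(strategies)
          return [(cur[0], (cur[1] - prev[1]) / (cur[2] - prev[2]))
                  for prev, cur in zip(front, front[1:])]

      options = [                     # placeholders: (name, cost in $, P(seizure stopped))
          ("rectal drug A", 120.0, 0.70),
          ("buccal drug B",  25.0, 0.75),
          ("nasal drug C",   40.0, 0.80),
          ("IM drug D",      30.0, 0.73),
      ]
      print([(name, round(r, 2)) for name, r in icers(options)])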

  10. Accuracy of perturbative master equations.

    PubMed

    Fleming, C H; Cummings, N I

    2011-03-01

    We consider open quantum systems with dynamics described by master equations that have perturbative expansions in the system-environment interaction. We show that, contrary to intuition, full-time solutions of order-2n accuracy require an order-(2n+2) master equation. We give two examples of such inaccuracies in the solutions to an order-2n master equation: order-2n inaccuracies in the steady state of the system and order-2n positivity violations. We show how these arise in a specific example for which exact solutions are available. This result has a wide-ranging impact on the validity of coupling (or friction) sensitive results derived from second-order convolutionless, Nakajima-Zwanzig, Redfield, and Born-Markov master equations.

  11. Physical analysis and second-order modelling of an unsteady turbulent flow - The oscillating boundary layer on a flat plate

    NASA Technical Reports Server (NTRS)

    Ha Minh, H.; Viegas, J. R.; Rubesin, M. W.; Spalart, P.; Vandromme, D. D.

    1989-01-01

    The turbulent boundary layer under a freestream whose velocity varies sinusoidally in time around a zero mean is computed using two second-order turbulence closure models. The time- or phase-dependent behavior of the Reynolds stresses is analyzed and the results are compared to those of a previous direct simulation by Spalart and Baldwin. Comparisons show that the second-order modeling is quite satisfactory for almost all phase angles, except in the relaminarization period, where the computations lead to a relatively high wall shear stress.

  12. A novel second-order standard addition analytical method based on data processing with multidimensional partial least-squares and residual bilinearization.

    PubMed

    Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C

    2009-10-05

    In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.

  13. Studies of the mass spectrometer of the PALOMA instrument dedicated to Mars atmosphere analysis from a landed platform

    NASA Astrophysics Data System (ADS)

    Goulpeau, G.; Berthelier, J.-J.; Covinhes, J.; Chassefière, E.; Jambon, A.; Agrinier, P.; Sarda, Ph.

    2003-04-01

    An instrument to analyze the molecular, elemental and isotopic composition of the Mars atmosphere from a landed platform is being developed under CNES funding. This instrument, called PALOMA (PAyload for Local Observation of Mars Atmosphere), will be proposed in response to the AO for the instrumentation of the NASA Mars Smart Lander mission, planned to be launched in 2009. It might also be part of the EXOMARS mission presently studied at ESA in the frame of the Aurora program. Noble gases (He, Ne, Ar, Kr, Xe), stable isotopes (C, H, O, N) and trace constituents of astrobiological interest, like CH4, H2CO, N2O, H2S, will be analyzed by using a system of gas purification and separation coupled with a mass spectrometer. Isotopic ratios have to be measured with an accuracy of about 1‰, or better, in order to provide a clear diagnostic of possible life signatures, to allow a detailed comparison of Earth and Mars atmospheric fractionation patterns, and finally to accurately disentangle escape, climatic, geochemical and hypothesized biological effects. In order to reach these high sensitivity levels, two spectrometers of completely different conceptions have been developed. The first one consists of consecutive electrostatic and magnetic sectors, an application of E. G. Johnson and A. O. Nier's previous work in that domain. Their parameters have been calculated so that the angular and energetic optical aberrations of the two fields compensate each other to second order. Simulated ion flights through the resulting electromagnetic optics foreshadow the effectiveness of the instrument. The second spectrometer is of the time-of-flight type. Its development, as a possible alternative to the magnetic system, shows the TOF spectrometer to be an instrument combining high sensitivity with reduced weight and dimensions.

  14. Non-local Second Order Closure Scheme for Boundary Layer Turbulence and Convection

    NASA Astrophysics Data System (ADS)

    Meyer, Bettina; Schneider, Tapio

    2017-04-01

    There has been scientific consensus that the uncertainty in the cloud feedback remains the largest source of uncertainty in the prediction of climate parameters like climate sensitivity. To narrow down this uncertainty, not only a better physical understanding of cloud and boundary layer processes is required, but specifically the representation of boundary layer processes in models has to be improved. General climate models use separate parameterisation schemes to model the different boundary layer processes like small-scale turbulence, shallow and deep convection. Small scale turbulence is usually modelled by local diffusive parameterisation schemes, which truncate the hierarchy of moment equations at first order and use second-order equations only to estimate closure parameters. In contrast, the representation of convection requires higher order statistical moments to capture their more complex structure, such as narrow updrafts in a quasi-steady environment. Truncations of moment equations at second order may lead to more accurate parameterizations. At the same time, they offer an opportunity to take spatially correlated structures (e.g., plumes) into account, which are known to be important for convective dynamics. In this project, we study the potential and limits of local and non-local second order closure schemes. A truncation of the momentum equations at second order represents the same dynamics as a quasi-linear version of the equations of motion. We study the three-dimensional quasi-linear dynamics in dry and moist convection by implementing it in a LES model (PyCLES) and compare it to a fully non-linear LES. In the quasi-linear LES, interactions among turbulent eddies are suppressed but nonlinear eddy—mean flow interactions are retained, as they are in the second order closure. In physical terms, suppressing eddy—eddy interactions amounts to suppressing, e.g., interactions among convective plumes, while retaining interactions between plumes and the environment (e.g., entrainment and detrainment). In a second part, we employ the possibility to include non-local statistical correlations in a second-order closure scheme. Such non-local correlations allow to directly incorporate the spatially coherent structures that occur in the form of convective updrafts penetrating the boundary layer. This allows us to extend the work that has been done using assumed-PDF schemes for parameterising boundary layer turbulence and shallow convection in a non-local sense.

  15. Split Node and Stress Glut Methods for Dynamic Rupture Simulations in Finite Elements.

    NASA Astrophysics Data System (ADS)

    Ramirez-Guzman, L.; Bielak, J.

    2008-12-01

    I present two numerical techniques to solve the dynamic rupture problem. I revisit and modify the Split Node approach and introduce a Stress Glut type method. Both algorithms are implemented using an iso/sub-parametric FEM solver. In the first case, I discuss the formulation and perform a convergence analysis for different orders of approximation for the acoustic case. I describe the algorithm of the second methodology as well as the assumptions made. The key to the new technique is an accurate representation of the traction; thus, part of the discussion is devoted to analyzing the tractions for a simple example. The sensitivity of the method is tested by comparing against Split Node solutions.

  16. A social cybernetic analysis of simulation-based, remotely delivered medical skills training in an austere environment: developing a test bed for spaceflight medicine.

    PubMed

    Musson, David M; Doyle, Thomas E

    2012-01-01

    This paper describes analysis of medical skills training exercises that were conducted at an arctic research station. These were conducted as part of an ongoing effort to establish high-fidelity medical simulation test bed capabilities in remote and extreme "space analogue" environments for the purpose of studying medical care in spaceflight. The methodological orientation followed by the authors is that of "second order cybernetics," or the science of studying human systems where the observer is involved within the system in question. Analyses presented include the identification of three distinct phases of the training activity, and two distinct levels of work groups, termed "first-order teams" and "second-order teams." Depending on the phase of activity, first-order and second-order teams are identified, each having its own unique structure, composition, communications, goals, and challenges. Several specific teams are highlighted as case examples. Limitations of this approach are discussed, as are potential benefits to ongoing and planned research activity in this area.

  17. Time-dependent Second Order Scattering Theory for Weather Radar with a Finite Beam Width

    NASA Technical Reports Server (NTRS)

    Kobayashi, Satoru; Tanelli, Simone; Im, Eastwood; Ito, Shigeo; Oguchi, Tomohiro

    2006-01-01

    Multiple scattering effects from spherical water particles of uniform diameter are studied for a W-band pulsed radar. The Gaussian transverse beam profile and the rectangular pulse duration are used for the calculation. A second-order analytical solution is derived for a single-layer structure, based on a time-dependent radiative transfer theory as described in the authors' companion paper. When the range resolution is fixed, an increase in footprint radius leads to an increase in the second-order reflectivity, defined as the ratio of the second-order return to the first-order return. This feature becomes more serious as the range increases. Since a spaceborne millimeter-wavelength radar has a large footprint radius that is comparable to the mean free path, the multiple scattering effect must be taken into account for analysis.

  18. Control order and visuomotor strategy development for joystick-steered underground shuttle cars.

    PubMed

    Cloete, Steven; Zupanc, Christine; Burgess-Limerick, Robin; Wallis, Guy

    2014-09-01

    In this simulator-based study, we aimed to quantify performance differences between joystick steering systems using first-order and second-order control, which are used in underground coal mining shuttle cars. In addition, we conducted an exploratory analysis of how users of the more difficult, second-order system changed their behavior over time. Evidence from the visuomotor control literature suggests that higher-order control devices are not intuitive, which could pose a significant risk to underground mine personnel, equipment, and infrastructure. Thirty-six naive participants were randomly assigned to first- and second-order conditions and completed three experimental trials comprising sequences of 90-degree turns in a virtual underground mine environment, with velocity held constant at 9 km/h. Performance measures were lateral deviation, steering angle variability, high-frequency steering content, joystick activity, and cumulative time in collision with the virtual mine wall. The second-order control group exhibited significantly poorer performance for all outcome measures. In addition, a series of correlation analyses revealed that changes in strategy were evident in the second-order group but not the first-order group. Results were consistent with previous literature indicating poorer performance with higher-order control devices and caution against the adoption of the second-order joystick system for underground shuttle cars. Low-cost, portable simulation platforms may provide an effective basis for operator training and recruitment.
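
    The difficulty gap between the two control orders can be illustrated with a toy tracking task: with the same simple proportional "steering strategy", a first-order (rate-command) plant settles on the target while a second-order (acceleration-command) plant oscillates unless the operator adds anticipation (a lead or derivative term). The dynamics and gains below are illustrative assumptions, not the study's simulator.

      # Minimal sketch: proportional steering of a first-order vs. second-order plant.
      import numpy as np

      dt, T, target = 0.01, 20.0, 1.0
      n = int(T / dt)

      def simulate(order, kp=1.5, kd=0.0):
          pos, vel = 0.0, 0.0
          err_hist = []
          for _ in range(n):
              err = target - pos
              u = kp * err - kd * vel          # joystick deflection
              if order == 1:
                  vel = u                      # first order: deflection commands rate
              else:
                  vel += u * dt                # second order: deflection commands acceleration
              pos += vel * dt
              err_hist.append(abs(err))
          return np.mean(err_hist[n // 2:])    # mean |error| over the second half of the run

      print("first order,  P only  :", round(simulate(1), 3))
      print("second order, P only  :", round(simulate(2), 3))
      print("second order, P + lead:", round(simulate(2, kd=1.8), 3))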

  19. Developing SNMS for Full-Spectrum High-Sensitivity In-Situ Isotopic Analysis of Individual Comet Grains Collected by Stardust?

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Yen; Shen, Jason Jiun-San; Lee, Typhoon; Calaway, Wallis; Veryovkin, Igor; Moore, Jerry; Pellin, Michael

    2005-01-01

    In anticipation of the return of comet (and ISM?) dust grains by the Stardust mission [1] in mid-January next year, Academia Sinica (AS) and Argonne National Laboratory (ANL) have entered into a collaboration to develop an instrument and method for the isotopic analysis of these samples. We need to achieve the highest possible sensitivity so that we can analyze individual grains one at a time, down to the smallest possible size. Only by doing so can we hope to reach one of the main science goals of the mission, namely the recognition of those isotopically distinct grains each carrying the characteristic signature of a particular nucleosynthetic stage of its parent star. In order to facilitate the interpretation of these grains, the second requirement of our method is that the measurements must be made over the widest possible mass range before sample exhaustion. For instance, the thermonuclear fusion reactions that produced the isotopes of major elements over a wide mass range required drastically different temperatures. Therefore their abundances could constrain the conditions at greatly varying depths inside the source star and hence its structure and evolution.

  20. Hydrogen analysis depth calibration by CORTEO Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Moser, M.; Reichart, P.; Bergmaier, A.; Greubel, C.; Schiettekatte, F.; Dollinger, G.

    2016-03-01

    Hydrogen imaging with sub-μm lateral resolution and sub-ppm sensitivity has become possible with coincident proton-proton (pp) scattering analysis (Reichart et al., 2004). Depth information is evaluated from the energy sum signal with respect to the energy loss of both protons on their path through the sample. To first order, there is no angular dependence due to elastic scattering. To second order, a path length effect due to different energy loss along the paths of the two protons causes an angular dependence of the energy sum. Therefore, the energy sum signal has to be deconvoluted depending on the matrix composition, i.e. mainly the atomic number Z, in order to get a depth-calibrated hydrogen profile. Although the path effect can be calculated analytically to first order, multiple scattering effects lead to significant deviations in the depth profile. Hence, in our new approach, we use the CORTEO Monte Carlo code (Schiettekatte, 2008) to calculate the depth of a coincidence event depending on the scattering angle. The code takes the individual detector geometry into account. In this paper we show that the code correctly reproduces measured pp-scattering energy spectra with roughness effects considered. With Mylar-sandwich targets (Si, Fe, Ge) more than 100 μm thick, we demonstrate the deconvolution of the energy spectra on our current multistrip detector at the microprobe SNAKE at the Munich tandem accelerator lab. As a result, hydrogen profiles can be evaluated with an accuracy in depth of about 1% of the sample thickness.

  1. Validation and Sensitivity Analysis of a New Atmosphere-Soil-Vegetation Model.

    NASA Astrophysics Data System (ADS)

    Nagai, Haruyasu

    2002-02-01

    This paper describes the details, validation, and sensitivity analysis of a new atmosphere-soil-vegetation model. The model consists of one-dimensional multilayer submodels for the atmosphere, soil, and vegetation, and radiation schemes for the transmission of solar and longwave radiation in the canopy. The atmosphere submodel solves prognostic equations for the horizontal wind components, potential temperature, specific humidity, fog water, and turbulence statistics by using a second-order closure model. The soil submodel calculates the transport of heat, liquid water, and water vapor. The vegetation submodel evaluates the heat and water budget on the leaf surface and the downward liquid water flux. The model performance was tested using measured data from the Cooperative Atmosphere-Surface Exchange Study (CASES). Calculated ground surface fluxes were mainly compared with observations at a winter wheat field, focusing on the diurnal variation and the change over the 32 days of the first CASES field program in 1997, CASES-97. The measured surface fluxes did not satisfy the energy balance, so the sensible and latent heat fluxes obtained by the eddy correlation method were corrected. Using the option of the solar radiation scheme that accounts for the direct solar radiation component, the calculated albedo agreed well with the observations. Some sensitivity analyses were also performed for the model settings. Model calculations of surface fluxes and surface temperature were in good agreement with the measurements as a whole.
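
    The abstract notes that the eddy-correlation fluxes were corrected for the energy balance closure gap without stating the method. One widely used correction is the Bowen-ratio closure adjustment sketched below, in which the residual Rn - G - H - LE is redistributed while preserving the measured H/LE ratio; treat this as an assumed, generic example with illustrative numbers, not the paper's procedure.

      # Minimal sketch: Bowen-ratio energy-balance closure correction of surface fluxes.
      def bowen_ratio_closure(rn, g, h, le):
          """rn: net radiation, g: ground heat flux, h: sensible, le: latent (all W/m^2)."""
          available = rn - g                  # energy available for the turbulent fluxes
          bowen = h / le                      # measured Bowen ratio, preserved by the correction
          le_corr = available / (1.0 + bowen)
          h_corr = available - le_corr
          return h_corr, le_corr

      h_corr, le_corr = bowen_ratio_closure(rn=450.0, g=60.0, h=120.0, le=200.0)
      print(f"H: 120 -> {h_corr:.0f} W/m^2, LE: 200 -> {le_corr:.0f} W/m^2")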

  2. Beta systems error analysis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  3. Investigation of air transportation technology at Ohio University, 1980. [general aviation aircraft and navigation aids

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. H.

    1981-01-01

    Specific configurations of first- and second-order all-digital phase-locked loops were analyzed for both ideal and additive Gaussian noise inputs. In addition, a design for a hardware digital phase-locked loop capable of either first- or second-order operation was evaluated, along with appropriate experimental data obtained from testing of the hardware loop. All parameters chosen for the analysis and the design of the digital phase-locked loop were consistent with an application to an Omega navigation receiver, although neither the analysis nor the design is limited to this application. For all cases tested, the experimental data showed close agreement with the analytical results, indicating that the Markov chain models for first- and second-order digital phase-locked loops are valid.
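
    A first-order digital phase-locked loop of the general kind discussed above can be sketched in a few lines: a phase detector output drives a proportional update of the local oscillator phase each sample. The detector, loop gain, noise level, and frequency offset below are generic assumptions, not the report's hardware design; a first-order loop tracks a frequency offset with a small steady-state phase error, which the sketch makes visible.

      # Minimal sketch: first-order all-digital PLL tracking a noisy phase ramp.
      import numpy as np

      rng = np.random.default_rng(3)
      n, k = 2000, 0.2                 # number of samples and first-order loop gain
      dphi = 0.01                      # input frequency offset (rad/sample)
      phase_in, phase_nco = 0.0, 0.0
      err_hist = []

      for i in range(n):
          phase_in += dphi                                   # input phase ramp
          noisy = phase_in + 0.1 * rng.standard_normal()     # additive phase noise
          err = np.sin(noisy - phase_nco)                    # sinusoidal phase detector
          phase_nco += k * err                               # first-order (proportional) update
          err_hist.append(noisy - phase_nco)

      print("mean steady-state phase error (rad):", np.mean(err_hist[-500:]))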

  4. Fuzzy interval Finite Element/Statistical Energy Analysis for mid-frequency analysis of built-up systems with mixed fuzzy and interval parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2016-10-01

    This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems, yielding an uncertain ensemble that combines non-parametric uncertainty with mixed fuzzy and interval parametric uncertainties. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) and a second-order fuzzy interval perturbation FE/SEA method (SFIPFE/SEA) are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by a first-order Taylor series, while SFIPFE/SEA improves the accuracy by considering the second-order terms of the Taylor series, in which all the mixed second-order terms are neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
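
    The first- versus second-order perturbation idea can be sketched on a scalar toy response of two interval parameters: bound the response with a first-order Taylor expansion, then refine the bounds with the diagonal second-order terms only (mixed terms neglected, as in the SFIPFE/SEA description), and compare with a brute-force reference. The response function and intervals are illustrative assumptions, not an FE/SEA model.

      # Minimal sketch: first- and second-order (diagonal-only) Taylor interval bounds.
      import numpy as np

      def response(x):
          return x[0] ** 2 + 3.0 * x[0] * np.sin(x[1]) + x[1]

      x_mid = np.array([1.0, 0.5])      # interval midpoints of the uncertain parameters
      dx = np.array([0.2, 0.1])         # interval radii

      # finite-difference gradient and diagonal Hessian at the midpoint
      h = 1e-5
      eye = np.eye(2)
      grad = np.array([(response(x_mid + h * e) - response(x_mid - h * e)) / (2 * h) for e in eye])
      diag2 = np.array([(response(x_mid + h * e) - 2 * response(x_mid) + response(x_mid - h * e)) / h ** 2
                        for e in eye])

      f0 = response(x_mid)
      lo1, hi1 = f0 - np.sum(np.abs(grad) * dx), f0 + np.sum(np.abs(grad) * dx)   # first order

      lo2, hi2 = f0, f0                  # second order, per variable, mixed terms neglected
      for g, hii, d in zip(grad, diag2, dx):
          cand = [-d, d] + ([-g / hii] if hii != 0 and abs(g / hii) < d else [])
          vals = [g * c + 0.5 * hii * c ** 2 for c in cand]
          lo2 += min(vals); hi2 += max(vals)

      # brute-force reference over the parameter box
      g1 = np.linspace(x_mid[0] - dx[0], x_mid[0] + dx[0], 201)
      g2 = np.linspace(x_mid[1] - dx[1], x_mid[1] + dx[1], 201)
      ref = response(np.array(np.meshgrid(g1, g2)))
      print(f"first order : [{lo1:.3f}, {hi1:.3f}]")
      print(f"second order: [{lo2:.3f}, {hi2:.3f}]")
      print(f"reference   : [{ref.min():.3f}, {ref.max():.3f}]")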

  5. Second-order shaped pulses for solid-state quantum computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Pinaki

    2008-01-01

    We present the construction and detailed analysis of highly optimized self-refocusing pulse shapes for several rotation angles. We characterize the constructed pulses by the coefficients appearing in the Magnus expansion up to second order. This allows a semianalytical analysis of the performance of the constructed shapes in sequences and composite pulses by computing the corresponding leading-order error operators. Higher orders can be analyzed with the numerical technique suggested by us previously. We illustrate the technique by analyzing several composite pulses designed to protect against pulse amplitude errors, and on decoupling sequences for potentially long chains of qubits with on-site and nearest-neighbor couplings.

  6. Constituent order and semantic parallelism in online comprehension: eye-tracking evidence from German.

    PubMed

    Knoeferle, Pia; Crocker, Matthew W

    2009-12-01

    Reading times for the second conjunct of and-coordinated clauses are faster when the second conjunct parallels the first conjunct in its syntactic or semantic (animacy) structure than when its structure differs (Frazier, Munn, & Clifton, 2000; Frazier, Taft, Roeper, & Clifton, 1984). What remains unclear, however, is the time course of parallelism effects, their scope, and the kinds of linguistic information to which they are sensitive. Findings from the first two eye-tracking experiments revealed incremental constituent order parallelism across the board, both during structural disambiguation (Experiment 1) and in sentences with unambiguously case-marked constituent order (Experiment 2), as well as for both marked and unmarked constituent orders (Experiments 1 and 2). Findings from Experiment 3 revealed effects of both constituent order and subtle semantic (noun phrase similarity) parallelism. Together our findings provide evidence for an across-the-board account of parallelism for processing and-coordinated clauses, in which both constituent order and semantic aspects of representations contribute towards incremental parallelism effects. We discuss our findings in the context of existing findings on parallelism and priming, as well as mechanisms of sentence processing.

  7. A new order splitting model with stochastic lead times for deterioration items

    NASA Astrophysics Data System (ADS)

    Sazvar, Zeinab; Akbari Jokar, Mohammad Reza; Baboli, Armand

    2014-09-01

    In unreliable supply environments, the strategy of pooling lead time risks by splitting replenishment orders among multiple suppliers simultaneously is an attractive sourcing policy that has captured the attention of academic researchers and corporate managers alike. While various assumptions are considered in the models developed, researchers tend to overlook an important inventory category in order splitting models: deteriorating items. In this paper, we study an order splitting policy for a retailer that sells a deteriorating product. The inventory system is modelled as a continuous review system (s, Q) under stochastic lead time. Demand rate per unit time is assumed to be constant over an infinite planning horizon and shortages are backordered completely. We develop two inventory models. In the first model, it is assumed that all the requirements are supplied by only one source, whereas in the second, two suppliers are available. We use sensitivity analysis to determine the situations in which each sourcing policy is the most economic. We then study a real case from the European pharmaceutical industry to demonstrate the applicability and effectiveness of the proposed models. Finally, more promising directions are suggested for future research.
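
    The lead-time risk-pooling argument behind order splitting can be sketched with a small Monte Carlo comparison: order the full quantity Q from one supplier, or split Q/2 + Q/2 between two suppliers with independent stochastic lead times, and compare the stockout risk during the replenishment cycle. The demand rate, reorder point, and lead-time distribution below are illustrative assumptions, not the paper's pharmaceutical case or its deterioration model.

      # Minimal sketch: peak backorder level under single sourcing vs. order splitting.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 100_000
      d, s, Q = 10.0, 55.0, 100.0           # demand/day, reorder point, order quantity
      mean_lt = 5.0                          # mean lead time in days, gamma distributed

      def lead_times(size):
          return rng.gamma(shape=2.0, scale=mean_lt / 2.0, size=size)

      # single sourcing: one order of Q with one lead time L
      L = lead_times(n)
      peak_single = np.maximum(d * L - s, 0.0)

      # order splitting: Q/2 from each of two suppliers with independent lead times
      L1, L2 = np.sort(np.column_stack([lead_times(n), lead_times(n)]), axis=1).T
      peak_split = np.maximum.reduce([np.maximum(d * L1 - s, 0.0),
                                      np.maximum(d * L2 - s - Q / 2.0, 0.0)])

      for name, peak in [("single", peak_single), ("split ", peak_split)]:
          print(f"{name}: P(stockout)={np.mean(peak > 0):.3f}  E[peak backorder]={peak.mean():.2f}")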

  8. Transient analysis of an adaptive system for optimization of design parameters

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.

    1992-01-01

    Averaging methods are applied to analyzing and optimizing the transient response associated with the direct adaptive control of an oscillatory second-order minimum-phase system. The analytical design methods developed for a second-order plant can be applied with some approximation to a MIMO flexible structure having a single dominant mode.

  9. Generalized Second-Order Partial Derivatives of 1/r

    ERIC Educational Resources Information Center

    Hnizdo, V.

    2011-01-01

    The generalized second-order partial derivatives of 1/r, where r is the radial distance in three dimensions (3D), are obtained using a result of the potential theory of classical analysis. Some non-spherical-regularization alternatives to the standard spherical-regularization expression for the derivatives are derived. The utility of a…
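
    For reference, the standard spherical-regularization expression that analyses of this kind take as their starting point is the well-known distributional identity below (quoted as a standard result; the paper's non-spherical-regularization alternatives are not reproduced here).

      \[
        \partial_i \partial_j \frac{1}{r}
          = \frac{3\,x_i x_j - r^{2}\,\delta_{ij}}{r^{5}}
          - \frac{4\pi}{3}\,\delta_{ij}\,\delta^{3}(\mathbf{r}),
        \qquad r = |\mathbf{r}| .
      \]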

  10. The New Epistemology and the Milan Approach: Feminist and Sociopolitical Considerations.

    ERIC Educational Resources Information Center

    MacKinnon, Laurie Katherine; Miller, Dusty

    1987-01-01

    Explores the sociopolitical implications of the new epistemology and the Milan approach, concluding that, while second order cybernetics has greater potential to incorporate a radical social analysis, it has, nevertheless, failed to do so. The application of second order cybernetics in family therapy appears to be constrained by the sociopolitical…

  11. Thin film atomic hydrogen detectors

    NASA Technical Reports Server (NTRS)

    Gruber, C. L.

    1977-01-01

    Thin film and bead thermistor atomic surface recombination hydrogen detectors were investigated both experimentally and theoretically. Devices were constructed on a thin Mylar film substrate. Using suitable Wheatstone bridge techniques, sensitivities of 80 microvolts per 2x10^13 atoms/sec are attainable, with response time constants on the order of 5 seconds.

  12. Arab Students in the U.S.: Learning Language, Teaching Friendship.

    ERIC Educational Resources Information Center

    Kwilinski, Paul

    1998-01-01

    Discusses the many issues faced by Arab students studying English in U.S. English-as-a-Second-Language classrooms, explaining hurdles they face due to cultural differences, describing the cultural sensitivity that school staff must develop in order to best serve Arab students, and investigating challenges that may occur in classrooms that include…

  13. Standing Vs Supine; Does it Matter in Cough Stress Testing?

    PubMed

    Patnam, Radhika; Edenfield, Autumn L; Swift, Steven E

    The aim of this study was to compare the sensitivity of the cough stress test in the standing versus supine position in the evaluation of incontinent females. We performed a prospective observational study of women with the chief complaint of urinary incontinence (UI) undergoing a provocative cough stress test (CST). Subjects underwent both a standing and a supine CST. Testing order was randomized via block randomization. The CST was performed in a standard fashion via backfill of 200 mL or until the subject described a strong urge. The subjects were asked to cough, and the physician documented urine leakage by direct observation. The gold standard for stress UI diagnosis was a positive CST in either position. Sixty subjects were enrolled; 38 (63%) tested positive on any CST, with 38 (63%) positive on standing compared with 29 (48%) positive on supine testing. Nine women (15%) had positive standing and negative supine testing. No subjects had negative standing with positive supine testing. There were no significant differences in positive tests between the 2 randomized groups (standing first and supine second vs. supine first and standing second). When compared with the gold standard of any positive provocative stress test, the supine CST has a sensitivity of 76%, whereas the standing CST has a sensitivity of 100%. The standing CST is more sensitive than the supine CST and should be performed in any patient with a complaint of UI and a negative supine CST. The order of testing, either supine or standing first, does not affect the results.
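
    The reported sensitivities follow directly from the counts, using a positive result on either test as the reference standard; the short check below reproduces the arithmetic.

      # Sensitivity = positives detected by a test / positives on the reference standard.
      positive_any = 38          # subjects positive on either test (reference standard)
      positive_supine = 29       # of those, positive on the supine test
      positive_standing = 38     # of those, positive on the standing test

      print(f"supine sensitivity  : {positive_supine / positive_any:.2f}")    # 0.76
      print(f"standing sensitivity: {positive_standing / positive_any:.2f}")  # 1.00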

  14. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    NASA Astrophysics Data System (ADS)

    Esposito, Gaetano

    Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty due to the limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal models of 37–38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and identifying sources of uncertainty affecting relevant reaction pathways are usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling – High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected mostly by first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the reaction rates involving the vinyl radical, C2H3, can drastically reduce the uncertainty in predicting targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate within its uncertainty bounds. Therefore this study demonstrates that the uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
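
    As a hedged illustration of the variance decomposition described above (not the author's RS-HDMR code), the sketch below estimates first- and second-order sensitivity indices of a standard test function by Monte Carlo sampling and conditional-mean binning; the Ishigami function, the bin count, and the sample size are placeholder choices.

    ```python
    # Minimal sketch: variance-based decomposition of a model output into first- and
    # second-order contributions, estimated by conditioning on binned inputs.
    # The Ishigami function stands in for a kinetic-model response.
    import numpy as np

    rng = np.random.default_rng(0)
    N, nbins = 200_000, 20

    def model(x):  # Ishigami test function, a common GSA benchmark
        return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2 + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

    X = rng.uniform(-np.pi, np.pi, size=(N, 3))
    Y = model(X)
    varY = Y.var()

    def var_of_conditional_mean(cols):
        """Var[E(Y | X_cols)] estimated by averaging Y within hyper-rectangular bins."""
        idx = np.zeros(N, dtype=int)
        for c in cols:
            b = np.minimum((X[:, c] + np.pi) / (2 * np.pi) * nbins, nbins - 1).astype(int)
            idx = idx * nbins + b
        sums = np.bincount(idx, weights=Y)
        counts = np.bincount(idx)
        means = sums[counts > 0] / counts[counts > 0]
        weights = counts[counts > 0] / N
        grand = np.sum(weights * means)
        return np.sum(weights * (means - grand)**2)

    S1 = [var_of_conditional_mean([i]) / varY for i in range(3)]
    print("first-order indices:", np.round(S1, 3))
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        Sij = var_of_conditional_mean([i, j]) / varY - S1[i] - S1[j]
        print(f"second-order index S{i}{j}:", round(Sij, 3))
    ```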

  15. Leadership: validation of a self-report scale.

    PubMed

    Dussault, Marc; Frenette, Eric; Fernet, Claude

    2013-04-01

    The aim of this paper was to propose and test the factor structure of a new self-report questionnaire on leadership. A sample of 373 school principals in the Province of Quebec, Canada, completed the initial 46-item version of the questionnaire. In order to obtain a questionnaire of minimal length, a four-step procedure was retained. First, item analysis was performed using Classical Test Theory. Second, Rasch analysis was used to identify non-fitting or overlapping items. Third, a confirmatory factor analysis (CFA) using structural equation modelling was performed on the 21 remaining items to verify the factor structure of the scale. Results show that the model with a single third-order dimension (leadership), two second-order dimensions (transactional and transformational leadership), and one first-order dimension (laissez-faire leadership) provides a good fit to the data. Finally, invariance of the factor structure was assessed with a second sample of 222 vice-principals in the Province of Quebec, Canada. This model is in agreement with the theoretical model developed by Bass (1985), upon which the questionnaire is based.

  16. A single-loop optimization method for reliability analysis with second order uncertainty

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2015-08-01

    Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
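
    The following sketch shows only the reliability-analysis building block referred to above, a first-order reliability method (FORM) iteration for a toy limit state with independent normal variables; the interval variables and the KKT-based single-loop coupling of the article are not reproduced, and all parameter values are hypothetical.

    ```python
    # Minimal FORM sketch (Hasofer-Lind-Rackwitz-Fiessler iteration) for a toy limit state
    # g(X) = X1*X2 - d with independent normal X1, X2. The single-loop KKT treatment of
    # interval parameters from the article is not reproduced here.
    import numpy as np
    from scipy.stats import norm

    mu = np.array([40.0, 50.0])      # means (hypothetical)
    sigma = np.array([5.0, 2.5])     # standard deviations (hypothetical)
    d = 1000.0                       # demand

    def g(x):                        # limit-state function in physical space
        return x[0] * x[1] - d

    def grad_g(x):                   # analytic gradient of g
        return np.array([x[1], x[0]])

    u = np.zeros(2)                  # start at the mean in standard normal space
    for _ in range(50):
        x = mu + sigma * u           # map u -> x for independent normals
        grad_u = grad_g(x) * sigma   # chain rule: dg/du = dg/dx * dx/du
        # HL-RF update: project onto the linearized limit state
        u_new = (grad_u @ u - g(x)) / (grad_u @ grad_u) * grad_u
        if np.linalg.norm(u_new - u) < 1e-8:
            u = u_new
            break
        u = u_new

    beta = np.linalg.norm(u)         # reliability index
    print("beta =", round(beta, 4), " Pf ~", norm.cdf(-beta))
    ```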

  17. DHA-fluorescent probe is sensitive to membrane order and reveals molecular adaptation of DHA in ordered lipid microdomains

    PubMed Central

    Teague, Heather; Ross, Ron; Harris, Mitchel; Mitchell, Drake C.; Shaikh, Saame Raza

    2012-01-01

    Docosahexaenoic acid (DHA) disrupts the size and order of plasma membrane lipid microdomains in vitro and in vivo. However, it is unknown how the highly disordered structure of DHA mechanistically adapts to increase the order of tightly packed lipid microdomains. Therefore, we studied a novel DHA-Bodipy fluorescent probe to address this issue. We first determined if the DHA-Bodipy probe localized to the plasma membrane of primary B and immortal EL4 cells. Image analysis revealed that DHA-Bodipy localized into the plasma membrane of primary B cells more efficiently than EL4 cells. We then determined if the probe detected changes in plasma membrane order. Quantitative analysis of time-lapse movies established that DHA-Bodipy was sensitive to membrane molecular order. This allowed us to investigate how DHA-Bodipy physically adapted to ordered lipid microdomains. To accomplish this, we employed steady-state and time-resolved fluorescence anisotropy measurements in lipid vesicles of varying composition. Similar to cell culture studies, the probe was highly sensitive to membrane order in lipid vesicles. Moreover, these experiments revealed, relative to controls, that upon incorporation into highly ordered microdomains, DHA-Bodipy underwent an increase in its fluorescence lifetime and molecular order. In addition, the probe displayed a significant reduction in its rotational diffusion compared to controls. Altogether, DHA-Bodipy was highly sensitive to membrane order and revealed for the first time that DHA, despite its flexibility, could become ordered with less rotational motion inside ordered lipid microdomains. Mechanistically, this explains how DHA acyl chains can increase order upon formation of lipid microdomains in vivo. PMID:22841541

  18. Eigenvalue and eigenvector sensitivity and approximate analysis for repeated eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Hou, Gene J. W.; Kenny, Sean P.

    1991-01-01

    A set of computationally efficient equations for eigenvalue and eigenvector sensitivity analysis is derived, and a method for eigenvalue and eigenvector approximate analysis in the presence of repeated eigenvalues is presented. The method developed for approximate analysis involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations of changes in both the eigenvalues and eigenvectors associated with the repeated eigenvalue problem. Examples are given to demonstrate the application of such equations for sensitivity and approximate analysis.
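
    For the non-repeated case, the standard first-order eigenvalue sensitivity formula dλ/dp = φᵀ(∂K/∂p − λ ∂M/∂p)φ, with φ normalized so that φᵀMφ = 1, can be checked in a few lines. The sketch below is a generic illustration of that identity, not the report's repeated-eigenvalue reparameterization, and the matrices are made up.

    ```python
    # Sketch of first-order eigenvalue sensitivity for a distinct eigenvalue of
    # K(p) phi = lambda M phi, checked against a finite difference. The repeated-eigenvalue
    # treatment of the report is not reproduced here.
    import numpy as np
    from scipy.linalg import eigh

    def K(p):   # hypothetical stiffness matrix depending on a parameter p
        return np.array([[2.0 + p, -1.0, 0.0],
                         [-1.0, 2.0, -1.0],
                         [0.0, -1.0, 2.0 + 0.5 * p]])

    M = np.diag([1.0, 2.0, 1.0])        # mass matrix (constant here, so dM/dp = 0)
    p0, dp = 0.3, 1e-6

    lam, Phi = eigh(K(p0), M)           # eigenvectors are M-orthonormal: Phi.T @ M @ Phi = I
    dK = (K(p0 + dp) - K(p0 - dp)) / (2 * dp)   # dK/dp by central difference

    i = 0                                # lowest mode
    phi = Phi[:, i]
    dlam_analytic = phi @ dK @ phi       # phi^T (dK/dp - lam dM/dp) phi, with dM/dp = 0

    lam_plus = eigh(K(p0 + dp), M, eigvals_only=True)
    lam_minus = eigh(K(p0 - dp), M, eigvals_only=True)
    dlam_fd = (lam_plus[i] - lam_minus[i]) / (2 * dp)
    print(dlam_analytic, dlam_fd)        # the two estimates should agree closely
    ```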

  19. Ultrasensitive microfluidic solid-phase ELISA using an actuatable microwell-patterned PDMS chip.

    PubMed

    Wang, Tanyu; Zhang, Mohan; Dreher, Dakota D; Zeng, Yong

    2013-11-07

    Quantitative detection of low abundance proteins is of significant interest for biological and clinical applications. Here we report an integrated microfluidic solid-phase ELISA platform for rapid and ultrasensitive detection of proteins with a wide dynamic range. Compared to the existing microfluidic devices that perform affinity capture and enzyme-based optical detection in a constant channel volume, the key novelty of our design is two-fold. First, our system integrates a microwell-patterned assay chamber that can be pneumatically actuated to significantly reduce the volume of chemifluorescent reaction, markedly improving the sensitivity and speed of ELISA. Second, monolithic integration of on-chip pumps and the actuatable assay chamber allow programmable fluid delivery and effective mixing for rapid and sensitive immunoassays. Ultrasensitive microfluidic ELISA was demonstrated for insulin-like growth factor 1 receptor (IGF-1R) across at least five orders of magnitude with an extremely low detection limit of 21.8 aM. The microwell-based solid-phase ELISA strategy provides an expandable platform for developing the next-generation microfluidic immunoassay systems that integrate and automate digital and analog measurements to further improve the sensitivity, dynamic ranges, and reproducibility of proteomic analysis.

  20. Gas chromatography coupled to tunable pulsed glow discharge time-of-flight mass spectrometry for environmental analysis.

    PubMed

    Solà-Vázquez, Auristela; Lara-Gonzalo, Azucena; Costa-Fernández, José M; Pereiro, Rosario; Sanz-Medel, Alfredo

    2010-05-01

    A tuneable microsecond-pulsed direct current glow discharge (GD) time-of-flight mass spectrometer, MS(TOF), developed in our laboratory was coupled to a gas chromatograph (GC) to obtain sequential collection of the mass spectra, at the different temporal regimes occurring in the GD pulses, during elution of the analytes. The capabilities of this set-up were explored using a mixture of volatile organic compounds of environmental concern: BrClCH, Cl3CH, Cl4C, BrCl2CH, Br2ClCH and Br3CH. The experimental parameters of the GC-pulsed GD-MS(TOF) prototype were optimized in order to appropriately separate and analyze the six selected organic compounds, and two GC carrier gases, helium and nitrogen, were evaluated. Mass spectra for all analytes were obtained in the prepeak, plateau and afterpeak temporal regimes of the pulsed GD. Results showed that helium offered the best elemental sensitivity, while nitrogen provided higher signal intensities for fragment and molecular peaks. The analytical performance characteristics were also worked out for each analyte. Absolute detection limits obtained were on the order of nanograms. In a second step, headspace solid-phase microextraction (HS-SPME), as a sample preparation and preconcentration technique, was evaluated for the quantification of the compounds under study, in order to achieve the analytical sensitivity required by European Union (EU) environmental legislation on trihalomethanes. The analytical figures of merit obtained using the proposed methodology showed rather good detection limits (between 2 and 13 μg/L depending on the analyte). In fact, the developed methodology met the EU legislation requirements (the maximum level permitted in tap water for "total trihalomethanes" is set at 100 μg/L). Analyses of real drinking water and river water samples were successfully carried out. To our knowledge this is the first application of GC-pulsed GD-MS(TOF) to the analysis of real samples. Its ability to provide elemental, fragment and molecular information on organic compounds is demonstrated.

  1. Glucose dispersion measurement using white-light LCI

    NASA Astrophysics Data System (ADS)

    Liu, Juan; Bagherzadeh, Morteza; Hitzenberger, Christoph K.; Pircher, Michael; Zawadzki, Robert; Fercher, Adolf F.

    2003-07-01

    We measured the second-order dispersion of glucose solutions using a Michelson low-coherence interferometer (LCI). Three different glucose concentrations, 20 mg/dl (hypoglycemia), 100 mg/dl (normal level), and 500 mg/dl (hyperglycemia), were investigated over the wavelength range 0.5 μm to 0.85 μm, and the investigation shows that different concentrations are associated with different second-order dispersions. The second-order dispersions for wavelengths from 0.55 μm to 0.8 μm were determined by Fourier analysis of the interferogram. This approach can be applied to measure the second-order dispersion for distinguishing different glucose concentrations. It can be considered a potentially noninvasive method to determine the glucose concentration in the human eye. A brief discussion is presented in this poster as well.
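
    A hedged sketch of the general idea, recovering a known second-order dispersion from a simulated white-light interferogram by Fourier analysis of the fringe phase, is given below; the source spectrum, delay range and dispersion value are illustrative placeholders, not the glucose measurements of the poster.

    ```python
    # Sketch: recover second-order dispersion (GDD) from a simulated white-light
    # interferogram via the phase of its Fourier transform. Parameters are illustrative.
    import numpy as np

    c = 3.0e8
    N, dt = 2**13, 0.1e-15
    tau = (np.arange(N) - N // 2) * dt                # symmetric delay axis (s)
    w0 = 2 * np.pi * c / 0.65e-6                      # centre frequency (rad/s), ~650 nm
    bw = 2.0e14                                       # source spectral width (rad/s)

    w = np.linspace(w0 - 4 * bw, w0 + 4 * bw, 400)
    S = np.exp(-((w - w0) / bw) ** 2)                 # smooth source spectrum
    gdd_true = 50e-30                                 # 50 fs^2 of second-order dispersion
    phi = 0.5 * gdd_true * (w - w0) ** 2              # sample-induced spectral phase

    # interferogram: sum of cosine fringes, one per spectral component
    I = (S[None, :] * np.cos(np.outer(tau, w) + phi[None, :])).sum(axis=1)

    # Fourier back to the frequency domain; ifftshift puts tau = 0 at sample 0
    F = np.fft.rfft(np.fft.ifftshift(I * np.hanning(N)))
    freqs = 2 * np.pi * np.fft.rfftfreq(N, d=dt)
    band = (freqs > w0 - 1.5 * bw) & (freqs < w0 + 1.5 * bw)
    phase = np.unwrap(np.angle(F[band]))

    # quadratic fit of the spectral phase: the (w - w0)^2 coefficient is gdd/2
    coef = np.polyfit(freqs[band] - w0, phase, 2)
    print("recovered GDD ~", 2 * coef[0], "s^2  (true:", gdd_true, ")")
    ```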

  2. Multistability of second-order competitive neural networks with nondecreasing saturated activation functions.

    PubMed

    Nie, Xiaobing; Cao, Jinde

    2011-11-01

    In this paper, second-order interactions are introduced into competitive neural networks (NNs) and the multistability is discussed for second-order competitive NNs (SOCNNs) with nondecreasing saturated activation functions. Firstly, based on decomposition of the state space, the Cauchy convergence principle, and an inequality technique, some sufficient conditions ensuring the local exponential stability of 2^N equilibrium points are derived. Secondly, some conditions are obtained for ascertaining equilibrium points to be locally exponentially stable and to be located in any designated region. Thirdly, the theory is extended to more general saturated activation functions with 2r corner points and a sufficient criterion is given under which the SOCNNs can have (r+1)^N locally exponentially stable equilibrium points. Even if there are no second-order interactions, the obtained results are less restrictive than those in some recent works. Finally, three examples with their simulations are presented to verify the theoretical analysis.

  3. The effect of carbon concentration and plastic deformation on ultrasonic higher order elastic properties of steel

    NASA Technical Reports Server (NTRS)

    Heyman, J. S.; Allison, S. G.; Salama, K.

    1985-01-01

    The behavior of higher order elastic properties, which are much more sensitive to material state than are second order properties, has been studied for steel alloys AISI 1016, 1045, 1095, and 8620 by measuring the stress derivative of the acoustic natural velocity to determine the stress acoustic constants (SAC's). Results of these tests show a 20 percent linear variation of SAC's with carbon content as well as even larger variations with prestrain (plastic deformation). The use of higher order elastic characterization permits quantitative evaluation of solids and may prove useful in studies of fatigue and fracture.

  4. Application of ultrasonic signature analysis for fatigue detection in complex structures

    NASA Technical Reports Server (NTRS)

    Zuckerwar, A. J.

    1974-01-01

    Ultrasonic signature analysis shows promise of being a singularly well-suited method for detecting fatigue in structures as complex as aircraft. The method employs instrumentation centered about a Fourier analyzer system, which features analog-to-digital conversion, digital data processing, and digital display of cross-correlation functions and cross-spectra. These features are essential to the analysis of ultrasonic signatures according to the procedure described here. In order to establish the feasibility of the method, the initial experiments were confined to simple plates with simulated and fatigue-induced defects respectively. In the first test the signature proved sensitive to the size of a small hole drilled into the plate. In the second test, performed on a series of fatigue-loaded plates, the signature proved capable of indicating both the initial appearance and subsequent growth of a fatigue crack. In view of these encouraging results it is concluded that the method has reached a sufficiently advanced stage of development to warrant application to small-scale structures or even actual aircraft.

  5. Quantitative methods to direct exploration based on hydrogeologic information

    USGS Publications Warehouse

    Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.

    2006-01-01

    Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
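
    The FOSM step itself is compact: the output covariance is the input covariance propagated through the model sensitivities, C_y = J C_x Jᵀ. The sketch below illustrates this with a made-up Jacobian and parameter covariance, not actual MODFLOW-2000 sensitivities.

    ```python
    # Minimal FOSM sketch: propagate an input covariance through model sensitivities,
    # C_y = J C_x J^T. The Jacobian here is a made-up stand-in for MODFLOW sensitivities.
    import numpy as np

    # sensitivities of 2 predicted heads with respect to 3 log-conductivity parameters
    J = np.array([[0.8, 0.3, 0.1],
                  [0.2, 0.5, 0.6]])

    # input covariance of the parameters (e.g. from a variogram / conditional estimate)
    C_x = np.array([[0.25, 0.05, 0.00],
                    [0.05, 0.16, 0.04],
                    [0.00, 0.04, 0.36]])

    C_y = J @ C_x @ J.T               # first-order second-moment output covariance
    print("head variances:", np.diag(C_y))

    # a simple QDE-style ranking: which parameter contributes most to total head variance?
    contrib = [(J[:, k] ** 2 * C_x[k, k]).sum() for k in range(J.shape[1])]
    print("per-parameter variance contribution (ignoring cross terms):", contrib)
    ```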

  6. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, and in identifying the relative influence of parameters when estimating errors in model behavior due to uncertainty in input data, is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.
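
    As a minimal illustration of parameter sensitivity analysis of a dynamic model (not the report's thermoregulatory model), the sketch below computes normalized sensitivities of a simple logistic population model by central finite differences; the model, parameter values and perturbation size are placeholders.

    ```python
    # Sketch: normalized parameter sensitivities of a logistic population model,
    # estimated by central finite differences (illustrative only).
    import numpy as np

    def simulate(r, K, x0=10.0, T=50.0, dt=0.01):
        """Forward-Euler integration of dx/dt = r x (1 - x/K); returns final population."""
        x = x0
        for _ in range(int(T / dt)):
            x += dt * r * x * (1.0 - x / K)
        return x

    params = {"r": 0.15, "K": 500.0}
    base = simulate(**params)

    for name, value in params.items():
        h = 1e-4 * value
        up = dict(params, **{name: value + h})
        dn = dict(params, **{name: value - h})
        dydp = (simulate(**up) - simulate(**dn)) / (2 * h)
        print(f"normalized sensitivity d(lnY)/d(ln {name}) = {dydp * value / base:.3f}")
    ```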

  7. Millimeter wave micro-CPW integrated antenna

    NASA Astrophysics Data System (ADS)

    Tzuang, Ching-Kuang C.; Lin, Ching-Chyuan

    1996-12-01

    This paper presents the latest result of applying the microstrip's leaky mode to a millimeter-wave active integrated antenna design. In contrast to the use of the first higher-order leaky mode, the second higher-order leaky mode of even symmetry is employed in the new approach, which allows larger dimensions for the leaky-wave antenna design and thereby reduces its performance sensitivity to the photolithographic tolerance. The new active integrated antenna, operating at a frequency of about 34 GHz, comprises a microstrip and a coplanar waveguide stacked on top of each other, and is named the millimeter wave micro-CPW integrated antenna. The feed is through the CPW, which would be connected to the active uniplanar millimeter-wave (M)MICs. Our experimental and theoretical investigations of the new integrated antenna show good input matching characteristics for such a highly directive leaky-wave antenna, with first-pass success.

  8. A Very Much Faster and More Sensitive In Situ Stable Isotope Analysis Instrument

    NASA Astrophysics Data System (ADS)

    Coleman, M.; Christensen, L. E.; Kriesel, J. M.; Kelly, J. F.; Moran, J. J.; Vance, S.

    2016-10-01

    We are developing Capillary Absorption Spectrometry (CAS) for H and O stable isotope analyses, giving more than 4 orders of magnitude improved sensitivity and allowing analysis of 5 nanomoles of water, coupled to laser sampling to free water from hydrated minerals and ice.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henning, C.

    This report contains papers on the following topics: conceptual design; radiation damage of ITER magnet systems; insulation system of the magnets; critical current density and strain sensitivity; toroidal field coil structural analysis; stress analysis for the ITER central solenoid; and volt-second capabilities and PF magnet configurations.

  10. Sensitivity Analysis for Multidisciplinary Systems (SAMS)

    DTIC Science & Technology

    2016-12-01

    The toolkit supports both mode-based structural representations and time-dependent, nonlinear finite element structural dynamics. This interim report describes the …Adaptation, & Sensitivity Toolkit, whose listed capabilities include elasticity, heat transfer, and compressible flow; an adjoint solver for sensitivity analysis; and high-order finite elements. (Author: Richard D. Snyder.)

  11. Ultrafast studies of the excited-state dynamics of copper and nickel phthalocyanine tetrasulfonates: potential sensitizers for the two-photon photodynamic therapy of tumors.

    PubMed

    Fournier, Michel; Pépin, Claude; Houde, Daniel; Ouellet, René; van Lier, Johan E

    2004-01-01

    In order to evaluate the potential of copper and nickel phthalocyanine tetrasulfonates as sensitizers for two-photon photodynamic therapy, we conducted kinetic femtosecond measurements of transient absorption and bleaching of their excited state dynamics in aqueous solution. Samples were pumped with 620 nm and 310 nm laser light, which allowed us to study relaxation processes from both the first and second singlet (or doublet for the copper phthalocyanine) excited states. A second excitation from the first excited triplet state, approximately 685 and 105 ps after the first excitation for copper and nickel phthalocyanine tetrasulfonate respectively, was the most efficient way to bring the molecules to an upper triplet state. Presumably this highest triplet state can inflict molecular damage on adjacent biomolecules in the absence of oxygen, resulting in the desired cytotoxic cellular response. Transient absorption spectra at different fixed delays indicate that optimum efficiency would require that the second photon has a wavelength of approximately 750 nm.

  12. Optimized Seizure Detection Algorithm: A Fast Approach for Onset of Epileptic in EEG Signals Using GT Discriminant Analysis and K-NN Classifier

    PubMed Central

    Rezaee, Kh.; Azizi, E.; Haddadnia, J.

    2016-01-01

    Background: Epilepsy is a severe disorder of the central nervous system that predisposes the person to recurrent seizures. Fifty million people worldwide suffer from epilepsy; after Alzheimer's and stroke, it is the third most widespread nervous disorder. Objective: In this paper, an algorithm to detect the onset of epileptic seizures based on the analysis of brain electrical signals (EEG) has been proposed. 844 hours of EEG were recorded consecutively from 23 pediatric patients, with 163 occurrences of seizures. Signals had been collected from Children's Hospital Boston with a sampling frequency of 256 Hz through 18 channels in order to assess epilepsy surgery. By selecting effective features from seizure and non-seizure signals of each individual and putting them into two categories, the proposed algorithm detects the onset of seizures quickly and with high sensitivity. Method: In this algorithm, L-sec epochs of signals are displayed in the form of a third-order tensor in spatial, spectral and temporal spaces by applying the wavelet transform. Then, after applying general tensor discriminant analysis (GTDA) on the tensors and calculating the mapping matrix, feature vectors are extracted. GTDA increases the sensitivity of the algorithm by storing data without deleting them. Finally, K-nearest neighbors (KNN) is used to classify the selected features. Results: The results of simulating the algorithm on a standard dataset show that the algorithm is capable of detecting 98 percent of seizures with an average delay of 4.7 seconds and an average error rate of three errors in 24 hours. Conclusion: Today, the lack of an automated system to detect or predict seizure onset is strongly felt. PMID:27672628
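
    The sketch below illustrates only the final classification step on synthetic signals, simple band-power features followed by a k-nearest-neighbours classifier; the wavelet tensor representation and GTDA stage of the paper are omitted, and all signal parameters are invented.

    ```python
    # Sketch of the last step only: band-power features + k-NN on synthetic
    # "seizure-like" vs "background" epochs. The wavelet/GTDA stage is not reproduced.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    fs, n_epochs, n_samp = 256, 200, 256 * 2          # 2-second epochs at 256 Hz

    def band_power(x, lo, hi):
        f = np.fft.rfftfreq(x.shape[-1], 1 / fs)
        P = np.abs(np.fft.rfft(x, axis=-1)) ** 2
        return P[..., (f >= lo) & (f < hi)].sum(axis=-1)

    t = np.arange(n_samp) / fs
    background = rng.normal(0, 1, (n_epochs, n_samp))
    seizure = rng.normal(0, 1, (n_epochs, n_samp)) \
        + 3 * np.sin(2 * np.pi * 5 * t) * rng.uniform(0.5, 1.5, (n_epochs, 1))

    X = np.vstack([background, seizure])
    X = np.column_stack([band_power(X, lo, hi) for lo, hi in [(1, 4), (4, 8), (8, 13), (13, 30)]])
    y = np.r_[np.zeros(n_epochs), np.ones(n_epochs)]

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
    print("held-out accuracy:", clf.score(Xte, yte))
    ```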

  13. Website Analysis as a Tool for Task-Based Language Learning and Higher Order Thinking in an EFL Context

    ERIC Educational Resources Information Center

    Roy, Debopriyo

    2014-01-01

    Besides focusing on grammar, writing skills, and web-based language learning, researchers in "CALL" and second language acquisition have also argued for the importance of promoting higher-order thinking skills in ESL (English as Second Language) and EFL (English as Foreign Language) classrooms. There is solid evidence supporting the…

  14. Reductions of topologically massive gravity I: Hamiltonian analysis of second order degenerate Lagrangians

    NASA Astrophysics Data System (ADS)

    Ćaǧatay Uçgun, Filiz; Esen, Oǧul; Gümral, Hasan

    2018-01-01

    We present Skinner-Rusk and Hamiltonian formalisms of second order degenerate Clément and Sarıoğlu-Tekin Lagrangians. The Dirac-Bergmann constraint algorithm is employed to obtain Hamiltonian realizations of Lagrangian theories. The Gotay-Nester-Hinds algorithm is used to investigate Skinner-Rusk formalisms of these systems.

  15. Stability and stabilization of the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Brownlee, R. A.; Gorban, A. N.; Levesley, J.

    2007-03-01

    We revisit the classical stability versus accuracy dilemma for the lattice Boltzmann methods (LBM). Our goal is a stable method of second-order accuracy for fluid dynamics based on the lattice Bhatnagar-Gross-Krook method (LBGK). The LBGK scheme can be recognized as a discrete dynamical system generated by free flight and entropic involution. In this framework the stability and accuracy analysis are more natural. We find the necessary and sufficient conditions for second-order accurate fluid dynamics modeling. In particular, it is proven that in order to guarantee second-order accuracy the distribution should belong to a distinguished surface, the invariant film (up to second order in the time step). This surface is the trajectory of the (quasi)equilibrium distribution surface under free flight. The main instability mechanisms are identified. The simplest recipes for stabilization add no artificial dissipation (up to second order) and provide second-order accuracy of the method. Two other prescriptions add some artificial dissipation locally and prevent the system from loss of positivity and local blowup. Demonstrations of the proposed stable LBGK schemes are provided by the numerical simulation of a one-dimensional (1D) shock tube and the unsteady 2D flow around a square cylinder up to Reynolds number Re ≈ 20,000.

  16. A nonlinear macromodel of the bipolar integrated circuit operational amplifier for electromagnetic interference analysis

    NASA Astrophysics Data System (ADS)

    Chen, G. K. C.

    1981-06-01

    A nonlinear macromodel for the bipolar transistor integrated circuit operational amplifier is derived from the macromodel proposed by Boyle. The nonlinear macromodel contains only two nonlinear transistors in the input stage in a differential amplifier configuration. Parasitic capacitance effects are represented by capacitors placed at the collectors and emitters of the input transistors. The nonlinear macromodel is effective in predicting the second order intermodulation effect of operational amplifiers in a unity gain buffer amplifier configuration. The nonlinear analysis computer program NCAP is used for the analysis. Accurate prediction of demodulation of amplitude modulated RF signals with RF carrier frequencies in the 0.05 to 100 MHz range is achieved. The macromodel predicted results, presented in the form of second order nonlinear transfer function, come to within 6 dB of the full model predictions for the 741 type of operational amplifiers for values of the second order transfer function greater than -40 dB.

  17. The need for higher-order averaging in the stability analysis of hovering, flapping-wing flight.

    PubMed

    Taha, Haithem E; Tahmasian, Sevak; Woolsey, Craig A; Nayfeh, Ali H; Hajj, Muhammad R

    2015-01-05

    Because of the relatively high flapping frequency associated with hovering insects and flapping wing micro-air vehicles (FWMAVs), dynamic stability analysis typically involves direct averaging of the time-periodic dynamics over a flapping cycle. However, direct application of the averaging theorem may lead to false conclusions about the dynamics and stability of hovering insects and FWMAVs. Higher-order averaging techniques may be needed to understand the dynamics of flapping wing flight and to analyze its stability. We use second-order averaging to analyze the hovering dynamics of five insects in response to high-amplitude, high-frequency, periodic wing motion. We discuss the applicability of direct averaging versus second-order averaging for these insects.

  18. Spectral Sensitivity of the ctenid spider Cupiennius salei Keys

    PubMed Central

    Zopf, Lydia M.; Schmid, Axel; Fredman, David; Eriksson, Bo Joakim

    2014-01-01

    The spectral sensitivity of adult male Cupiennius salei Keys, a nocturnal hunting spider, was studied in a behavioural test. As known from earlier behavioural tests, C. salei walks towards a black target presented in front of a white background. In this study a black target (size 42 × 70 cm) was presented in a white arena illuminated by monochromatic light in the range of 365 to 695 nm using 19 monochromatic filters (HW in the range of 6-10 nm). In the first trial, the transmission of the optical filters was between 40% and 80%. In a second trial the transmission was reduced to 5%, using a neutral density filter. At the high intensity the spiders showed a spectral sensitivity in the range from 380 to 670 nm. In the second trial the animals only showed directed walks if the illumination was in the range of 449 to 599 nm, indicating a lower sensitivity at the margins of the spectral range. In previous intracellular recordings, the measured spectral sensitivity was between 320 and 620 nm. Interestingly, these results do not completely match the behaviourally tested spectral sensitivity, which is shifted to longer wavelengths relative to that of the photoreceptors. In order to investigate the molecular background of spectral sensitivity, we searched for opsin genes in C. salei. We found three visual opsins that correspond to UV and middle to long wavelength sensitive opsins as described for jumping spiders. PMID:23948480

  19. Texture analysis of apparent diffusion coefficient maps for treatment response assessment in prostate cancer bone metastases-A pilot study.

    PubMed

    Reischauer, Carolin; Patzwahl, René; Koh, Dow-Mu; Froehlich, Johannes M; Gutzeit, Andreas

    2018-04-01

    To evaluate whole-lesion volumetric texture analysis of apparent diffusion coefficient (ADC) maps for assessing treatment response in prostate cancer bone metastases. Texture analysis is performed in 12 treatment-naïve patients with 34 metastases before treatment and at one, two, and three months after the initiation of androgen deprivation therapy. Four first-order and 19 second-order statistical texture features are computed on the ADC maps in each lesion at every time point. Repeatability, inter-patient variability, and changes in the feature values under therapy are investigated. Spearman's rank correlation coefficients are calculated across time to demonstrate the relationship between the texture features and the serum prostate-specific antigen (PSA) levels. With few exceptions, the texture features exhibited moderate to high precision. At the same time, Friedman's tests revealed that all first-order and second-order statistical texture features changed significantly in response to therapy, with the majority of texture features showing significant changes in their values at all post-treatment time points relative to baseline. Bivariate analysis detected significant correlations between the great majority of texture features and the serum PSA levels; three first-order and six second-order statistical features showed strong correlations with the serum PSA levels across time. The findings in the present work indicate that whole-tumor volumetric texture analysis may be utilized for response assessment in prostate cancer bone metastases. The approach may be used as a complementary measure for treatment monitoring in conjunction with averaged ADC values. Copyright © 2018 Elsevier B.V. All rights reserved.
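
    As a hedged sketch of what first- and second-order statistical texture features look like in code (not the study's 4 + 19 feature pipeline), the example below computes a few histogram statistics and two grey-level co-occurrence features on a synthetic ADC patch; the lesion values and grey-level quantization are placeholders.

    ```python
    # Sketch: first-order histogram statistics and one second-order (GLCM-based) texture
    # feature pair from a toy "ADC map" lesion, in plain numpy.
    import numpy as np

    rng = np.random.default_rng(0)
    lesion = rng.normal(1.1e-3, 0.2e-3, size=(16, 16))   # fake ADC values, mm^2/s

    # first-order statistics of the voxel histogram
    first_order = {
        "mean": lesion.mean(),
        "std": lesion.std(),
        "skewness": ((lesion - lesion.mean()) ** 3).mean() / lesion.std() ** 3,
        "kurtosis": ((lesion - lesion.mean()) ** 4).mean() / lesion.std() ** 4,
    }

    # second-order statistics: grey-level co-occurrence matrix for horizontal neighbours
    levels = 8
    q = np.digitize(lesion, np.quantile(lesion, np.linspace(0, 1, levels + 1)[1:-1]))
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm = (glcm + glcm.T) / (2 * glcm.sum())            # symmetric, normalized

    i, j = np.indices(glcm.shape)
    contrast = (glcm * (i - j) ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    print(first_order, {"contrast": contrast, "homogeneity": homogeneity})
    ```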

  20. Sensitivity towards fear of electric shock in passive threat situations.

    PubMed

    Ring, Patrick; Kaernbach, Christian

    2015-01-01

    Human judgment and decision-making (JDM) requires an assessment of different choice options. While traditional theories of choice argue that cognitive processes are the main driver to reach a decision, growing evidence highlights the importance of emotion in decision-making. Following these findings, it appears relevant to understand how individuals assess the attractiveness or riskiness of a situation in terms of emotional processes. The following study aims at a better understanding of the psychophysiological mechanisms underlying threat sensitivity by measuring skin conductance responses (SCRs) in passive threat situations. While previous studies demonstrate the role of magnitude on emotional body reactions preceding an outcome, this study focuses on probability. In order to analyze emotional body reactions preceding negative events with varying probability of occurrence, we had our participants play a two-stage card game. The first stage of the card game reveals the probability of receiving an unpleasant electric shock. The second stage applies the electric shock with the previously announced probability. For the analysis, we focus on the time interval between the first and second stage. We observe a linear relation between SCRs in anticipation of receiving an electric shock and shock probability. This finding indicates that SCRs are able to code the likelihood of negative events. We outline how this coding function of SCRs during the anticipation of negative events might add to an understanding of human JDM.

  1. Quantitative analysis of Earth's field NMR spectra of strongly-coupled heteronuclear systems.

    PubMed

    Halse, Meghan E; Callaghan, Paul T; Feland, Brett C; Wasylishen, Roderick E

    2009-09-01

    In the Earth's magnetic field, it is possible to observe spin systems consisting of unlike spins that exhibit strongly coupled second-order NMR spectra. Such spectra result when the J-coupling between two unlike spins is of the same order of magnitude as the difference in their Larmor precession frequencies. Although the analysis of second-order spectra involving only spin-1/2 nuclei has been discussed since the early days of NMR spectroscopy, NMR spectra involving spin-1/2 nuclei and quadrupolar (I > 1/2) nuclei have rarely been treated. Two examples are presented here, the tetrahydroborate anion, BH4-, and the ammonium cation, NH4+. For the tetrahydroborate anion, 1J(11B,1H) = 80.9 Hz, and in an Earth's field of 53.3 μT, ν(1H) = 2269 Hz and ν(11B) = 728 Hz. The 1H NMR spectra exhibit features that both first- and second-order perturbation theory are unable to reproduce. On the other hand, second-order perturbation theory adequately describes 1H NMR spectra of the ammonium cation, 14NH4+, where 1J(14N,1H) = 52.75 Hz when ν(1H) = 2269 Hz and ν(14N) = 164 Hz. Contrary to an early report, we find that the 1H NMR spectra are independent of the sign of 1J(14N,1H). Exact analysis of two-spin systems consisting of quadrupolar nuclei and spin-1/2 nuclei is also discussed.
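
    The strong-coupling regime can be illustrated by exact diagonalization of a two-spin Hamiltonian. The sketch below does this for two spin-1/2 nuclei using the Earth's-field frequencies quoted above for the tetrahydroborate case; the actual 11B (I = 3/2) and 14N (I = 1) problems require larger spin matrices, so this is a simplified stand-in rather than the paper's analysis.

    ```python
    # Sketch: exact two-spin NMR line positions in the strong-coupling regime by
    # diagonalizing H = -nu1*I1z - nu2*I2z + J*(I1.I2) for two spin-1/2 nuclei.
    import numpy as np

    Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
    Iy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
    Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
    E = np.eye(2, dtype=complex)

    nu1, nu2, J = 2269.0, 728.0, 80.9      # Hz; Earth's-field values quoted in the abstract
    H = (-nu1 * np.kron(Iz, E) - nu2 * np.kron(E, Iz)
         + J * (np.kron(Ix, Ix) + np.kron(Iy, Iy) + np.kron(Iz, Iz)))

    evals, evecs = np.linalg.eigh(H)
    Ix_total = np.kron(Ix, E) + np.kron(E, Ix)       # transverse detection operator

    lines = set()
    for a in range(4):
        for b in range(a + 1, 4):
            amp = abs(evecs[:, a].conj() @ Ix_total @ evecs[:, b]) ** 2
            if amp > 1e-6:
                lines.add((round(abs(evals[a] - evals[b]), 2), round(amp, 3)))
    for freq, amp in sorted(lines):
        print(f"line at {freq:8.2f} Hz, relative intensity {amp}")
    ```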

  2. Cost-effectiveness analysis of fecal microbiota transplantation for recurrent Clostridium difficile infection.

    PubMed

    Varier, Raghu U; Biltaji, Eman; Smith, Kenneth J; Roberts, Mark S; Kyle Jensen, M; LaFleur, Joanne; Nelson, Richard E

    2015-04-01

    Clostridium difficile infection (CDI) places a high burden on the US healthcare system. Recurrent CDI (RCDI) occurs frequently. Recently proposed guidelines from the American College of Gastroenterology (ACG) and the American Gastroenterology Association (AGA) include fecal microbiota transplantation (FMT) as a therapeutic option for RCDI. The purpose of this study was to estimate the cost-effectiveness of FMT compared with vancomycin for the treatment of RCDI in adults, specifically following guidelines proposed by the ACG and AGA. We constructed a decision-analytic computer simulation using inputs from the published literature to compare the standard approach using tapered vancomycin to FMT for RCDI from the third-party payer perspective. Our effectiveness measure was quality-adjusted life years (QALYs). Because simulated patients were followed for 90 days, discounting was not necessary. One-way and probabilistic sensitivity analyses were performed. Base-case analysis showed that FMT was less costly ($1,669 vs $3,788) and more effective (0.242 QALYs vs 0.235 QALYs) than vancomycin for RCDI. One-way sensitivity analyses showed that FMT was the dominant strategy (both less expensive and more effective) if cure rates for FMT and vancomycin were ≥70% and <91%, respectively, and if the cost of FMT was <$3,206. Probabilistic sensitivity analysis, varying all parameters simultaneously, showed that FMT was the dominant strategy over 10,000 second-order Monte Carlo simulations. Our results suggest that FMT may be a cost-saving intervention in managing RCDI. Implementation of FMT for RCDI may help decrease the economic burden to the healthcare system.
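
    A probabilistic (second-order Monte Carlo) sensitivity analysis of a two-strategy comparison can be sketched in a few lines; the distributions, costs and utilities below are placeholders chosen for illustration, not the article's model inputs.

    ```python
    # Sketch of a probabilistic sensitivity analysis for a two-strategy cost-effectiveness
    # comparison. All parameter distributions are placeholders.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000

    # parameter uncertainty: cure probabilities as Beta, costs as Gamma (placeholders)
    p_cure_fmt = rng.beta(81, 19, n)                        # ~81% cure
    p_cure_vanco = rng.beta(60, 40, n)                      # ~60% cure
    cost_fmt = rng.gamma(shape=100, scale=17, size=n)       # ~$1,700
    cost_vanco = rng.gamma(shape=100, scale=38, size=n)     # ~$3,800
    cost_failure = rng.gamma(shape=100, scale=50, size=n)   # extra cost if recurrence persists

    qaly_cured, qaly_failed = 0.242, 0.215                  # 90-day utilities (placeholders)

    def outcomes(p_cure, base_cost):
        cost = base_cost + (1 - p_cure) * cost_failure
        qaly = p_cure * qaly_cured + (1 - p_cure) * qaly_failed
        return cost, qaly

    c_f, q_f = outcomes(p_cure_fmt, cost_fmt)
    c_v, q_v = outcomes(p_cure_vanco, cost_vanco)

    dominant = np.mean((c_f < c_v) & (q_f > q_v))
    print(f"strategy A dominant (cheaper and more effective) in {dominant:.1%} of simulations")
    ```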

  3. Optimizing radiologist e-prescribing of CT oral contrast agent using a protocoling portal.

    PubMed

    Wasser, Elliot J; Galante, Nicholas J; Andriole, Katherine P; Farkas, Cameron; Khorasani, Ramin

    2013-12-01

    The purpose of this study is to quantify the time expenditure associated with radiologist ordering of CT oral contrast media when using an integrated protocoling portal and to determine radiologists' perceptions of the ordering process. This prospective study was performed at a large academic tertiary care facility. Detailed timing information for CT inpatient oral contrast orders placed via the computerized physician order entry (CPOE) system was gathered over a 14-day period. Analyses evaluated the amount of physician time required for each component of the ordering process. Radiologists' perceptions of the ordering process were assessed by survey. Descriptive statistics and chi-square analysis were performed. A total of 96 oral contrast agent orders were placed by 13 radiologists during the study period. The average time necessary to create a protocol for each case was 40.4 seconds (average range by subject, 20.0-130.0 seconds; SD, 37.1 seconds), and the average total time to create and sign each contrast agent order was 27.2 seconds (range, 10.0-50.0 seconds; SD, 22.4 seconds). Overall, 52.5% (21/40) of survey respondents indicated that radiologist entry of oral contrast agent orders improved patient safety. A minority of respondents (15% [6/40]) indicated that contrast agent order entry was either very or extremely disruptive to workflow. Radiologist e-prescribing of CT oral contrast agents using CPOE can be embedded in a protocol workflow. Integration of health IT tools can help to optimize user acceptance and adoption.

  4. Application of Hermitian time-dependent coupled-cluster response Ansätze of second order to excitation energies and frequency-dependent dipole polarizabilities

    NASA Astrophysics Data System (ADS)

    Wälz, Gero; Kats, Daniel; Usvyat, Denis; Korona, Tatiana; Schütz, Martin

    2012-11-01

    Linear-response methods, based on the time-dependent variational coupled-cluster or the unitary coupled-cluster model, and truncated at the second order according to the Møller-Plesset partitioning, i.e., the TD-VCC[2] and TD-UCC[2] linear-response methods, are presented and compared. For both of these methods a Hermitian eigenvalue problem has to be solved to obtain excitation energies and state eigenvectors. The excitation energies thus are guaranteed always to be real valued, and the eigenvectors are mutually orthogonal, in contrast to response theories based on “traditional” coupled-cluster models. It turned out that the TD-UCC[2] working equations for excitation energies and polarizabilities are equivalent to those of the second-order algebraic diagrammatic construction scheme ADC(2). Numerical tests are carried out by calculating TD-VCC[2] and TD-UCC[2] excitation energies and frequency-dependent dipole polarizabilities for several test systems and by comparing them to the corresponding values obtained from other second- and higher-order methods. It turns out that the TD-VCC[2] polarizabilities in the frequency regions away from the poles are of a similar accuracy as for other second-order methods, as expected from the perturbative analysis of the TD-VCC[2] polarizability expression. On the other hand, the TD-VCC[2] excitation energies are systematically too low relative to other second-order methods (including TD-UCC[2]). On the basis of these results and an analysis presented in this work, we conjecture that the perturbative expansion of the Jacobian converges more slowly for the TD-VCC formalism than for TD-UCC or for response theories based on traditional coupled-cluster models.

  5. Efficient second harmonic generation by para-nitroaniline embedded in electro-spun polymeric nanofibres

    NASA Astrophysics Data System (ADS)

    Gonçalves, Hugo; Saavedra, Inês; Ferreira, Rute AS; Lopes, PE; de Matos Gomes, Etelvina; Belsley, Michael

    2018-03-01

    Intense, well-polarized second harmonic light was generated by poly(methyl methacrylate) nanofibres with embedded para-nitroaniline nanocrystals. Subwavelength-diameter fibres were electro-spun using a 1:2 weight ratio of chromophore to polymer. Analysis of the generated second harmonic light indicates that the para-nitroaniline molecules, which nominally crystallize in a centrosymmetric space group, were organized into noncentrosymmetric structures, leading to a second-order susceptibility dominated by a single tensor element. Under the best deposition conditions, the nanofibres display an effective nonlinear optical susceptibility approximately two orders of magnitude greater than that of potassium dihydrogen phosphate. Generalizing this approach to a broad range of organic molecules with strong individual molecular second-order nonlinear responses, but which nominally form centrosymmetric organic crystals, could open a new pathway for the fabrication of efficient sub-micron sized second harmonic light generators.

  6. Conventional Energy and Macronutrient Variables Distort the Accuracy of Children’s Dietary Reports: Illustrative Data from a Validation Study of Effect of Order Prompts

    PubMed Central

    Baxter, Suzanne Domel; Smith, Albert F.; Hardin, James W.; Nichols, Michele D.

    2008-01-01

    Objective: Validation-study data are used to illustrate that conventional energy and macronutrient (protein, carbohydrate, fat) variables, which disregard accuracy of reported items and amounts, misrepresent reporting accuracy. Reporting-error-sensitive variables are proposed which classify reported items as matches or intrusions, and reported amounts as corresponding or overreported. Methods: 58 girls and 63 boys were each observed eating school meals on 2 days separated by ≥4 weeks, and interviewed the morning after each observation day. One interview per child had forward-order (morning-to-evening) prompts; one had reverse-order prompts. Original food-item-level analyses found a sex × order-prompt interaction for omission rates. Current analyses compared reference (observed) and reported information transformed to energy and macronutrients. Results: Using conventional variables, reported amounts were less than reference amounts (ps<0.001; paired t-tests); report rates were higher for the first than the second interview for energy, protein, and carbohydrate (ps≤0.049; mixed models). Using reporting-error-sensitive variables, correspondence rates were higher for girls with forward- but boys with reverse-order prompts (ps≤0.041; mixed models); inflation ratios were lower with reverse- than forward-order prompts for energy, carbohydrate, and fat (ps≤0.045; mixed models). Conclusions: Conventional variables overestimated reporting accuracy and masked order-prompt and sex effects. Reporting-error-sensitive variables are recommended when assessing accuracy for energy and macronutrients in validation studies. PMID:16959308

  7. A second-order closure analysis of turbulent diffusion flames. [combustion physics

    NASA Technical Reports Server (NTRS)

    Varma, A. K.; Fishburne, E. S.; Beddini, R. A.

    1977-01-01

    A complete second-order closure computer program for the investigation of compressible, turbulent, reacting shear layers was developed. The equations for the means and the second order correlations were derived from the time-averaged Navier-Stokes equations and contain third order and higher order correlations, which have to be modeled in terms of the lower-order correlations to close the system of equations. In addition to fluid mechanical turbulence models and parameters used in previous studies of a variety of incompressible and compressible shear flows, a number of additional scalar correlations were modeled for chemically reacting flows, and a typical eddy model developed for the joint probability density function for all the scalars. The program which is capable of handling multi-species, multistep chemical reactions, was used to calculate nonreacting and reacting flows in a hydrogen-air diffusion flame.

  8. A Comparison of Weights Matrices on Computation of Dengue Spatial Autocorrelation

    NASA Astrophysics Data System (ADS)

    Suryowati, K.; Bekti, R. D.; Faradila, A.

    2018-04-01

    Spatial autocorrelation is a spatial analysis method used to identify patterns of relationship or correlation between locations. This method is important for obtaining information on the dispersal pattern characteristics of a region and the linkages between locations. In this study, it is applied to the incidence of Dengue Hemorrhagic Fever (DHF) in 17 sub-districts in Sleman, Daerah Istimewa Yogyakarta Province. The links among locations are indicated by a spatial weight matrix, which describes the neighbourhood structure and reflects spatial influence. Depending on the spatial data, weighting matrices can be divided into two types: point-based (distance) and area-based neighbourhood (contiguity). The choice of weighting function is one determinant of the results of the spatial analysis. This study uses queen contiguity weights based on first-order neighbours, queen contiguity weights based on second-order neighbours, and inverse distance weights. Queen contiguity first-order and inverse distance weights show significant spatial autocorrelation in DHF, whereas queen contiguity second-order does not. Queen contiguity first and second order produce neighbour lists of 68 and 86 links, respectively.
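
    A common way to quantify the spatial autocorrelation discussed above is global Moran's I computed with a contiguity weight matrix. The sketch below builds a queen-contiguity matrix on a toy regular grid of regions and evaluates Moran's I for synthetic counts; it is illustrative only and uses none of the DHF data.

    ```python
    # Sketch: global Moran's I with a queen-contiguity weight matrix on a toy 4x4 grid
    # of regions (stand-ins for sub-districts); the counts are synthetic.
    import numpy as np

    side = 4
    n = side * side
    rng = np.random.default_rng(3)
    x = rng.poisson(20, n).astype(float)        # synthetic incidence counts

    # queen contiguity on a regular grid: cells sharing an edge or a corner are neighbours
    W = np.zeros((n, n))
    for i in range(n):
        ri, ci = divmod(i, side)
        for j in range(n):
            rj, cj = divmod(j, side)
            if i != j and max(abs(ri - rj), abs(ci - cj)) == 1:
                W[i, j] = 1.0

    z = x - x.mean()
    S0 = W.sum()
    I = (n / S0) * (z @ W @ z) / (z @ z)        # global Moran's I
    E_I = -1.0 / (n - 1)                        # expected value under no autocorrelation
    print(f"Moran's I = {I:.3f} (expectation under independence {E_I:.3f})")
    ```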

  9. SEP thrust subsystem performance sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.

    1973-01-01

    This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.

  10. Application of a sensitivity analysis technique to high-order digital flight control systems

    NASA Technical Reports Server (NTRS)

    Paduano, James D.; Downing, David R.

    1987-01-01

    A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show linear models of real systems can be analyzed by this sensitivity technique, if it is applied with care. A computer program called SVA was written to accomplish the singular-value sensitivity analysis techniques. Thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. The SVA is a fully public domain program, running on the NASA/Dryden Elxsi computer.
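
    The core identity behind such singular-value gradients, dσᵢ/dp = uᵢᵀ(∂M/∂p)vᵢ for a distinct singular value of a real matrix M(p), can be verified directly; the sketch below checks it against a finite difference on a small made-up matrix rather than an actual return difference matrix.

    ```python
    # Sketch of the singular-value gradient used in this kind of analysis: for a distinct
    # singular value sigma_i of M(p), d(sigma_i)/dp = u_i^T (dM/dp) v_i.
    import numpy as np

    def M(p):                                    # hypothetical parameter-dependent matrix
        return np.array([[1.0 + p, 0.4, 0.0],
                         [0.2, 2.0, 0.3 * p],
                         [0.0, 0.1, 0.5]])

    p0, dp = 0.7, 1e-6
    U, s, Vt = np.linalg.svd(M(p0))
    dM = (M(p0 + dp) - M(p0 - dp)) / (2 * dp)    # dM/dp by central difference

    i = 0                                         # largest singular value
    grad_analytic = U[:, i] @ dM @ Vt[i, :]

    s_plus = np.linalg.svd(M(p0 + dp), compute_uv=False)
    s_minus = np.linalg.svd(M(p0 - dp), compute_uv=False)
    grad_fd = (s_plus[i] - s_minus[i]) / (2 * dp)
    print(grad_analytic, grad_fd)                 # the two estimates should agree closely
    ```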

  11. Nanoantenna harmonic sensor: theoretical analysis of contactless detection of molecules with light

    NASA Astrophysics Data System (ADS)

    Farhat, Mohamed; Cheng, Mark M. C.; Le, Khai Q.; Chen, Pai-Yen

    2015-10-01

    The nonlinear harmonic sensor is a popular wireless sensor and radiofrequency identification (RFID) technique, which allows high-performance sensing in a severe interference/clutter background by transmitting a radio wave and detecting its modulated higher-order harmonics. Here we introduce the concept and design of optical harmonic tags based on nonlinear nanoantennas that can contactlessly detect electronic (e.g. electron affinity) and optical (e.g. relative permittivity) characteristics of molecules. By using a dual-resonance gold-molecule-silver nanodipole antenna within the quantum mechanical realm, the spectral form of the second-harmonic scattering can sensitively reveal the physical properties of molecules, paving a new route towards optical molecular sensors and optical identification (OPID) of biological, genetic, and medical events for the ‘Internet of Nano-Things’.

  12. Nanoantenna harmonic sensor: theoretical analysis of contactless detection of molecules with light.

    PubMed

    Farhat, Mohamed; Cheng, Mark M C; Le, Khai Q; Chen, Pai-Yen

    2015-10-16

    The nonlinear harmonic sensor is a popular wireless sensor and radiofrequency identification (RFID) technique, which allows high-performance sensing in a severe interference/clutter background by transmitting a radio wave and detecting its modulated higher-order harmonics. Here we introduce the concept and design of optical harmonic tags based on nonlinear nanoantennas that can contactlessly detect electronic (e.g. electron affinity) and optical (e.g. relative permittivity) characteristics of molecules. By using a dual-resonance gold-molecule-silver nanodipole antenna within the quantum mechanical realm, the spectral form of the second-harmonic scattering can sensitively reveal the physical properties of molecules, paving a new route towards optical molecular sensors and optical identification (OPID) of biological, genetic, and medical events for the 'Internet of Nano-Things'.

  13. Complex-valued time-series correlation increases sensitivity in FMRI analysis.

    PubMed

    Kociuba, Mary C; Rowe, Daniel B

    2016-07-01

    To develop a linear matrix representation of correlation between complex-valued (CV) time-series in the temporal Fourier frequency domain, and demonstrate its increased sensitivity over correlation between magnitude-only (MO) time-series in functional MRI (fMRI) analysis. The standard in fMRI is to discard the phase before the statistical analysis of the data, despite evidence of task related change in the phase time-series. With a real-valued isomorphism representation of Fourier reconstruction, correlation is computed in the temporal frequency domain with CV time-series data, rather than with the standard of MO data. A MATLAB simulation compares the Fisher-z transform of MO and CV correlations for varying degrees of task related magnitude and phase amplitude change in the time-series. The increased sensitivity of the complex-valued Fourier representation of correlation is also demonstrated with experimental human data. Since the correlation description in the temporal frequency domain is represented as a summation of second order temporal frequencies, the correlation is easily divided into experimentally relevant frequency bands for each voxel's temporal frequency spectrum. The MO and CV correlations for the experimental human data are analyzed for four voxels of interest (VOIs) to show the framework with high and low contrast-to-noise ratios in the motor cortex and the supplementary motor cortex. The simulation demonstrates the increased strength of CV correlations over MO correlations for low magnitude contrast-to-noise time-series. In the experimental human data, the MO correlation maps are noisier than the CV maps, and it is more difficult to distinguish the motor cortex in the MO correlation maps after spatial processing. Including both magnitude and phase in the spatial correlation computations more accurately defines the correlated left and right motor cortices. Sensitivity in correlation analysis is important to preserve the signal of interest in fMRI data sets with high noise variance, and avoid excessive processing induced correlation. Copyright © 2016 Elsevier Inc. All rights reserved.
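
    As a hedged illustration of why retaining the phase can increase sensitivity (this is not the paper's temporal-frequency-domain framework), the sketch below compares magnitude-only and complex-valued correlation for two simulated voxel time-series whose task response lives mostly in the phase; all signal parameters are invented.

    ```python
    # Sketch contrasting magnitude-only (MO) and complex-valued (CV) correlation for two
    # simulated fMRI voxel time-series with a weak magnitude but clear phase response.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 200
    task = np.sign(np.sin(2 * np.pi * np.arange(n) / 20))      # block-design regressor

    mag = 100 + 0.1 * task + rng.normal(0, 1.0, (2, n))        # weak magnitude response
    ph = 0.05 * task + rng.normal(0, 0.01, (2, n))             # clearer phase response
    y = mag * np.exp(1j * ph)                                   # complex-valued time-series

    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return np.abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

    print("MO correlation:", corr(np.abs(y[0]), np.abs(y[1])))
    print("CV correlation:", corr(y[0], y[1]))
    ```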

  14. Rethinking Pedagogy for Second-Order Differential Equations: A Simplified Approach to Understanding Well-Posed Problems

    ERIC Educational Resources Information Center

    Tisdell, Christopher C.

    2017-01-01

    Knowing an equation has a unique solution is important from both a modelling and theoretical point of view. For over 70 years, the approach to learning and teaching "well posedness" of initial value problems (IVPs) for second- and higher-order ordinary differential equations has involved transforming the problem and its analysis to a…

  15. A (201)Hg+ Comagnetometer for (199)Hg+ Trapped Ion Space Atomic Clocks

    NASA Technical Reports Server (NTRS)

    Burt, Eric A.; Taghavi, Shervin; Tjoelker, Robert L.

    2011-01-01

    A method has been developed for unambiguously measuring the exact magnetic field experienced by trapped mercury ions contained within an atomic clock intended for space applications. In general, atomic clocks are insensitive to external perturbations that would change the frequency at which the clocks operate. On a space platform, these perturbative effects can be much larger than they would be on the ground, especially in dealing with the magnetic field environment. The solution is to use a different isotope of mercury held within the same trap as the clock isotope. The magnetic field can be very accurately measured with a magnetic-field-sensitive atomic transition in the added isotope. Further, this measurement can be made simultaneously with normal clock operation, thereby not degrading clock performance. Instead of using a conventional magnetometer to measure ambient fields, which would necessarily be placed some distance away from the clock atoms, first order field-sensitive atomic transition frequency changes in the atoms themselves determine the variations in the magnetic field. As a result, all ambiguity over the exact field value experienced by the atoms is removed. Atoms used in atomic clocks always have an atomic transition (often referred to as the clock transition) that is sensitive to magnetic fields only in second order, and usually have one or more transitions that are first-order field sensitive. For operating parameters used in the (199)Hg(+) clock, the latter can be five orders of magnitude or more sensitive to field fluctuations than the clock transition, thereby providing an unambiguous probe of the magnetic field strength.

  16. Measurements of the CMB Polarization with POLARBEAR and the Optical Performance of the Simons Array

    NASA Astrophysics Data System (ADS)

    Takayuki Matsuda, Frederick; POLARBEAR Collaboration

    2017-06-01

    POLARBEAR is a ground-based polarization sensitive Cosmic Microwave Background (CMB) experiment installed on the 2.5 m aperture Gregorian-Dragone type Huan Tran Telescope located in the Atacama desert in Chile. POLARBEAR is designed to conduct broad surveys at 150 GHz to measure the CMB B-mode polarization signal from inflationary gravitational waves at large angular scales and from gravitational lensing at small angular scales. POLARBEAR started observations in 2012. First season results on gravitational lensing B-mode measurements were published in 2014, and the data analysis of further seasons is in progress. In order to further increase measurement sensitivity, in 2018 the experiment will be upgraded to the Simons Array, comprising three telescopes, each with improved receiver optics using alumina lenses. To expand the observational range, the second and third receiver optics designs were modified for improved optical performance across the frequencies of 95, 150, 220, and 280 GHz. The diffraction-limited field of view was increased, especially for the higher frequencies, to span the full 4.5-degree-diameter field of view of the telescope. The Simons Array will have a total of 22,764 detectors within this field of view. The Simons Array is projected to put strong constraints on both the tensor-to-scalar ratio for inflationary cosmology and the sum of the neutrino masses. I will report on the status of current observations and analysis of the first two observation seasons of POLARBEAR as well as the optics design development of the Simons Array receivers.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Kunkun, E-mail: ktg@illinois.edu; Congedo, Pietro M.

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation retains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
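
    A hand-rolled sketch of the third level of adaptivity (stepwise regression retaining only influential basis terms) is given below; the candidate polynomial basis, toy model, term cap and stopping tolerance are assumptions and not the PDD basis or thresholds used in the paper.

```python
# Illustrative stepwise (greedy) regression for a sparse surrogate: keep only
# the candidate polynomial terms that most reduce the residual.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_vars = 200, 5
X = rng.uniform(-1, 1, (n_samples, n_vars))
y = 2 * X[:, 0] + X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2] + rng.normal(0, 0.05, n_samples)

def basis_columns(X):
    # Candidate basis: constant, linear, quadratic and pairwise interaction terms.
    cols, names = [np.ones(len(X))], ["1"]
    for i in range(X.shape[1]):
        cols.append(X[:, i]); names.append(f"x{i}")
        cols.append(X[:, i] ** 2); names.append(f"x{i}^2")
    for i, j in itertools.combinations(range(X.shape[1]), 2):
        cols.append(X[:, i] * X[:, j]); names.append(f"x{i}*x{j}")
    return np.column_stack(cols), names

B, names = basis_columns(X)
selected, residual = [], y.copy()
for _ in range(6):                                  # retain at most 6 terms (illustrative cap)
    cand = [k for k in range(B.shape[1]) if k not in selected]
    scores = [abs(B[:, k] @ residual) / np.linalg.norm(B[:, k]) for k in cand]
    selected.append(cand[int(np.argmax(scores))])
    coef, *_ = np.linalg.lstsq(B[:, selected], y, rcond=None)
    residual = y - B[:, selected] @ coef
    if np.linalg.norm(residual) / np.linalg.norm(y) < 0.05:   # stopping tolerance (assumed)
        break

print("retained terms:", [names[k] for k in selected])
```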

  18. Frequency-following and connectivity of different visual areas in response to contrast-reversal stimulation.

    PubMed

    Stephen, Julia M; Ranken, Doug F; Aine, Cheryl J

    2006-01-01

    The sensitivity of visual areas to different temporal frequencies, as well as the functional connections between these areas, was examined using magnetoencephalography (MEG). Alternating circular sinusoids (0, 3.1, 8.7 and 14 Hz) were presented to foveal and peripheral locations in the visual field to target ventral and dorsal stream structures, respectively. It was hypothesized that higher temporal frequencies would preferentially activate dorsal stream structures. To determine the effect of frequency on the cortical response we analyzed the late time interval (220-770 ms) using a multi-dipole spatio-temporal analysis approach to provide source locations and timecourses for each condition. As an exploratory aspect, we performed cross-correlation analysis on the source timecourses to determine which sources responded similarly within conditions. Contrary to predictions, dorsal stream areas were not activated more frequently during high temporal frequency stimulation. However, across cortical sources the frequency-following response showed a difference, with significantly higher power at the second harmonic for the 3.1 and 8.7 Hz stimulation and at the first and second harmonics for the 14 Hz stimulation with this pattern seen robustly in area V1. Cross-correlations of the source timecourses showed that both low- and high-order visual areas, including dorsal and ventral stream areas, were significantly correlated in the late time interval. The results imply that frequency information is transferred to higher-order visual areas without translation. Despite the less complex waveforms seen in the late interval of time, the cross-correlation results show that visual, temporal and parietal cortical areas are intricately involved in late-interval visual processing.

  19. Robust controller designs for second-order dynamic system: A virtual passive approach

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh

    1990-01-01

    A robust controller design is presented for second-order dynamic systems. The controller is model-independent and itself is a virtual second-order dynamic system. Conditions on actuator and sensor placements are identified for controller designs that guarantee overall closed-loop stability. The dynamic controller can be viewed as a virtual passive damping system that serves to stabilize the actual dynamic system. The control gains are interpreted as virtual mass, spring, and dashpot elements that play the same roles as actual physical elements in stability analysis. Position, velocity, and acceleration feedback are considered. Simple examples are provided to illustrate the physical meaning of this controller design.
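
    A minimal simulation of the idea, assuming a single-degree-of-freedom plant and collocated position/velocity feedback whose gains play the role of a virtual spring and dashpot (all numerical values are made up), is sketched below.

```python
# Minimal sketch: position/velocity feedback acting as a virtual spring-dashpot
# on a second-order plant  m*x'' + c*x' + k*x = u.  Parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 1.0, 0.02, 4.0          # lightly damped plant
kv, cv = 2.0, 1.0                 # virtual spring and dashpot gains

def closed_loop(t, z):
    x, v = z
    u = -kv * x - cv * v          # collocated position + velocity feedback
    return [v, (u - c * v - k * x) / m]

sol = solve_ivp(closed_loop, (0, 20), [1.0, 0.0], max_step=0.01)
print("final displacement:", float(sol.y[0, -1]))   # decays toward zero
```

    With collocated feedback the closed-loop system is simply m*x'' + (c + cv)*x' + (k + kv)*x = 0, which is why the gains can be read directly as added (virtual) damping and stiffness.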

  20. Impact of second-order piezoelectricity on electronic and optical properties of c-plane InxGa1-xN quantum dots: Consequences for long wavelength emitters

    NASA Astrophysics Data System (ADS)

    Patra, Saroj Kanta; Schulz, Stefan

    2017-09-01

    In this work, we present a detailed analysis of the second-order piezoelectric effect in c-plane InxGa1-xN/GaN quantum dots and its consequences for electronic and optical properties of these systems. Special attention is paid to the impact of increasing In content x on the results. We find that in general the second-order piezoelectric effect leads to an increase in the electrostatic built-in field. Furthermore, our results show that for an In content ≥30%, this increase in the built-in field has a significant effect on the emission wavelength and the radiative lifetimes. For instance, at 40% In, the radiative lifetime is more than doubled when taking second-order piezoelectricity into account. Overall, our calculations reveal that when designing and describing the electronic and optical properties of c-plane InxGa1-xN/GaN quantum dot based light emitters with high In contents, second-order piezoelectric effects cannot be neglected.

  1. Applying an intelligent model and sensitivity analysis to inspect mass transfer kinetics, shrinkage and crust color changes of deep-fat fried ostrich meat cubes.

    PubMed

    Amiryousefi, Mohammad Reza; Mohebbi, Mohebbat; Khodaiyan, Faramarz

    2014-01-01

    The objectives of this study were to use image analysis and an artificial neural network (ANN) to predict mass transfer kinetics as well as color changes and shrinkage of deep-fat fried ostrich meat cubes. Two generalized feedforward networks were separately developed using the operating conditions as inputs. The high correlation coefficients between experimental and predicted values indicated a proper fit. Sensitivity analysis of the selected ANNs showed that, among the input variables, frying temperature had the greatest influence on both moisture content (MC) and fat content (FC). Similarly, for the second ANN architecture, microwave power density was the most influential variable, having the maximum effect on both shrinkage percentage and color changes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Optical solitons, explicit solutions and modulation instability analysis with second-order spatio-temporal dispersion

    NASA Astrophysics Data System (ADS)

    Inc, Mustafa; Isa Aliyu, Aliyu; Yusuf, Abdullahi; Baleanu, Dumitru

    2017-12-01

    This paper obtains the dark, bright, dark-bright or combined optical and singular solitons to the nonlinear Schrödinger equation (NLSE) with group velocity dispersion coefficient and second-order spatio-temporal dispersion coefficient, which arises in photonics and waveguide optics and in optical fibers. The integration algorithm is the sine-Gordon equation method (SGEM). Furthermore, the explicit solutions of the equation are derived by considering the power series solutions (PSS) theory and the convergence of the solutions is guaranteed. Lastly, the modulation instability analysis (MI) is studied based on the standard linear-stability analysis and the MI gain spectrum is obtained.

  3. Relationships between Social Cognition and Sibling Constellations.

    ERIC Educational Resources Information Center

    Goebel, Barbara L.

    1985-01-01

    First and second born college students (N=178) responded to measures of four social cognition factors. Multivariate analysis of variance identified relationships of social cognition factors with five sibling constellation components: subject's sex, subject's birth order (first or second), adjacent first or second born sibling's sex, spacing…

  4. RELAP-7 Development Updates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hongbin; Zhao, Haihua; Gleicher, Frederick Nathan

    RELAP-7 is a nuclear systems safety analysis code being developed at the Idaho National Laboratory, and is the next generation tool in the RELAP reactor safety/systems analysis application series. RELAP-7 development began in 2011 to support the Risk Informed Safety Margins Characterization (RISMC) Pathway of the Light Water Reactor Sustainability (LWRS) program. The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical methods, and physical models in order to provide capabilities needed for the RISMC methodology and to support nuclear power safety analysis. The code is being developed based on Idaho National Laboratory’s modern scientific software development framework, MOOSE (the Multi-Physics Object-Oriented Simulation Environment). The initial development goal of the RELAP-7 approach focused primarily on the development of an implicit algorithm capable of strong (nonlinear) coupling of the dependent hydrodynamic variables contained in the 1-D/2-D flow models with the various 0-D system reactor components that compose boiling water reactor (BWR) and pressurized water reactor (PWR) nuclear power plants (NPPs). During Fiscal Year (FY) 2015, the RELAP-7 code was further improved with expanded capability to support BWR and PWR NPP analysis. The accumulator model has been developed. The code has also been coupled with other MOOSE-based applications, such as the neutronics code RattleSnake and the fuel performance code BISON, to perform multiphysics analysis. A major design requirement for the implicit algorithm in RELAP-7 is that it be capable of second-order discretization accuracy in both space and time, which eliminates the traditional first-order approximation errors. Second-order temporal accuracy is achieved by a second-order backward temporal difference, and second-order accurate one-dimensional spatial discretization is achieved with the Galerkin approximation of Lagrange finite elements. During FY-2015, numerical verification work confirmed that the RELAP-7 code indeed achieves second-order accuracy in both time and space for single-phase models at the system level.

  5. Michelson interferometer with diffractively-coupled arm resonators in second-order Littrow configuration.

    PubMed

    Britzger, Michael; Wimmer, Maximilian H; Khalaidovski, Alexander; Friedrich, Daniel; Kroker, Stefanie; Brückner, Frank; Kley, Ernst-Bernhard; Tünnermann, Andreas; Danzmann, Karsten; Schnabel, Roman

    2012-11-05

    Michelson-type laser-interferometric gravitational-wave (GW) observatories employ very high light powers as well as transmissively-coupled Fabry-Perot arm resonators in order to realize high measurement sensitivities. Due to the absorption in the transmissive optics, high powers lead to thermal lensing and hence to thermal distortions of the laser beam profile, which sets a limit on the maximal light power employable in GW observatories. Here, we propose and realize a Michelson-type laser interferometer with arm resonators whose coupling components are all-reflective second-order Littrow gratings. In principle such gratings allow high finesse values of the resonators but avoid bulk transmission of the laser light and thus the corresponding thermal beam distortion. The gratings used have three diffraction orders, which leads to the creation of a second signal port. We theoretically analyze the signal response of the proposed topology and show that it is equivalent to a conventional Michelson-type interferometer. In our proof-of-principle experiment we generated phase-modulation signals inside the arm resonators and detected them simultaneously at the two signal ports. The sum signal was shown to be equivalent to a single-output-port Michelson interferometer with transmissively-coupled arm cavities, taking into account optical loss. The proposed and demonstrated topology is a possible approach for future all-reflective GW observatory designs.

  6. A perceptual space of local image statistics.

    PubMed

    Victor, Jonathan D; Thengone, Daniel J; Rizvi, Syed M; Conte, Mary M

    2015-12-01

    Local image statistics are important for visual analysis of textures, surfaces, and form. There are many kinds of local statistics, including those that capture luminance distributions, spatial contrast, oriented segments, and corners. While sensitivity to each of these kinds of statistics has been well studied, much less is known about visual processing when multiple kinds of statistics are relevant, in large part because the dimensionality of the problem is high and different kinds of statistics interact. To approach this problem, we focused on binary images on a square lattice - a reduced set of stimuli which nevertheless taps many kinds of local statistics. In this 10-parameter space, we determined psychophysical thresholds to each kind of statistic (16 observers) and all of their pairwise combinations (4 observers). Sensitivities and isodiscrimination contours were consistent across observers. Isodiscrimination contours were elliptical, implying a quadratic interaction rule, which in turn determined ellipsoidal isodiscrimination surfaces in the full 10-dimensional space, and made predictions for sensitivities to complex combinations of statistics. These predictions, including the prediction of a combination of statistics that was metameric to random, were verified experimentally. Finally, check size had only a mild effect on sensitivities over the range from 2.8 to 14 min, but sensitivities to second- and higher-order statistics were substantially lower at 1.4 min. In sum, local image statistics form a perceptual space that is highly stereotyped across observers, in which different kinds of statistics interact according to simple rules. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. A perceptual space of local image statistics

    PubMed Central

    Victor, Jonathan D.; Thengone, Daniel J.; Rizvi, Syed M.; Conte, Mary M.

    2015-01-01

    Local image statistics are important for visual analysis of textures, surfaces, and form. There are many kinds of local statistics, including those that capture luminance distributions, spatial contrast, oriented segments, and corners. While sensitivity to each of these kinds of statistics has been well studied, much less is known about visual processing when multiple kinds of statistics are relevant, in large part because the dimensionality of the problem is high and different kinds of statistics interact. To approach this problem, we focused on binary images on a square lattice – a reduced set of stimuli which nevertheless taps many kinds of local statistics. In this 10-parameter space, we determined psychophysical thresholds to each kind of statistic (16 observers) and all of their pairwise combinations (4 observers). Sensitivities and isodiscrimination contours were consistent across observers. Isodiscrimination contours were elliptical, implying a quadratic interaction rule, which in turn determined ellipsoidal isodiscrimination surfaces in the full 10-dimensional space, and made predictions for sensitivities to complex combinations of statistics. These predictions, including the prediction of a combination of statistics that was metameric to random, were verified experimentally. Finally, check size had only a mild effect on sensitivities over the range from 2.8 to 14 min, but sensitivities to second- and higher-order statistics were substantially lower at 1.4 min. In sum, local image statistics form a perceptual space that is highly stereotyped across observers, in which different kinds of statistics interact according to simple rules. PMID:26130606

  8. Characterizing performance of ultra-sensitive accelerometers

    NASA Technical Reports Server (NTRS)

    Sebesta, Henry

    1990-01-01

    An overview is given of methodology and test results pertaining to the characterization of ultra sensitive accelerometers. Two issues are of primary concern. The terminology ultra sensitive accelerometer is used to imply instruments whose noise floors and resolution are at the state of the art. Hence, the typical approach of verifying an instrument's performance by measuring it with a yet higher quality instrument (or standard) is not practical. Secondly, it is difficult to find or create an environment with sufficiently low background acceleration. The typical laboratory acceleration levels will be at several orders of magnitude above the noise floor of the most sensitive accelerometers. Furthermore, this background must be treated as unknown since the best instrument available is the one to be tested. A test methodology was developed in which two or more like instruments are subjected to the same but unknown background acceleration. Appropriately selected spectral analysis techniques were used to separate the sensors' output spectra into coherent components and incoherent components. The coherent part corresponds to the background acceleration being measured by the sensors being tested. The incoherent part is attributed to sensor noise and data acquisition and processing noise. The method works well for estimating noise floors that are 40 to 50 dB below the motion applied to the test accelerometers. The accelerometers being tested are intended for use as feedback sensors in a system to actively stabilize an inertial guidance component test platform.
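
    The coherent/incoherent separation can be sketched with two simulated co-located sensors as below; the sampling rate, noise levels and segment length are assumptions, and the printed ratio is only the estimated coherent-to-incoherent power, not a calibrated noise floor.

```python
# Sketch of the two-instrument idea: the part of one sensor's spectrum that is
# coherent with the other sensor is attributed to the shared background motion,
# and the incoherent remainder to sensor/readout noise. Levels are made up.
import numpy as np
from scipy.signal import welch, coherence

rng = np.random.default_rng(2)
fs, n = 1000.0, 2 ** 16
ground = rng.normal(0, 1.0, n)                 # common background acceleration
sens_a = ground + rng.normal(0, 0.05, n)       # instrument A with its own noise
sens_b = ground + rng.normal(0, 0.05, n)       # instrument B with its own noise

f, Paa = welch(sens_a, fs=fs, nperseg=1024)
_, gamma2 = coherence(sens_a, sens_b, fs=fs, nperseg=1024)

coherent_psd = gamma2 * Paa                    # power explained by the common input
incoherent_psd = (1.0 - gamma2) * Paa          # rough estimate of sensor A's noise

print("median coherent-to-incoherent power ratio [dB]:",
      10 * np.log10(np.median(coherent_psd / incoherent_psd)))
```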

  9. Input-variable sensitivity assessment for sediment transport relations

    NASA Astrophysics Data System (ADS)

    Fernández, Roberto; Garcia, Marcelo H.

    2017-09-01

    A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
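
    A minimal MVFOSM sketch is given below for a generic power-law transport relation (a stand-in, not either of the relations analysed in the paper): the output variance is approximated by the sum of squared first-order partial derivatives times the input variances, which directly yields the ranking of inputs.

```python
# MVFOSM sketch on a generic power-law relation  qs = a * S^b * Q^c * d50^e
# (purely illustrative). Var(qs) ~ sum_i (dqs/dx_i)^2 * var(x_i), evaluated at
# the means, so each input can be ranked by its variance contribution.
import numpy as np

a, b, c, e = 1e-3, 1.5, 1.2, -0.8            # assumed coefficients/exponents
mean = {"S": 0.01, "Q": 50.0, "d50": 0.02}   # assumed means (slope, discharge, grain size)
cv = {"S": 0.10, "Q": 0.15, "d50": 0.30}     # assumed coefficients of variation

def qs(S, Q, d50):
    return a * S ** b * Q ** c * d50 ** e

def partial(var, h=1e-6):
    # Forward-difference partial derivative at the mean point.
    x = dict(mean)
    x[var] = mean[var] * (1 + h)
    return (qs(**x) - qs(**mean)) / (mean[var] * h)

contrib = {v: (partial(v) * cv[v] * mean[v]) ** 2 for v in mean}
total = sum(contrib.values())
for v, s in sorted(contrib.items(), key=lambda kv: -kv[1]):
    print(f"{v}: {100 * s / total:.1f}% of predicted variance")
```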

  10. Non-robust numerical simulations of analogue extension experiments

    NASA Astrophysics Data System (ADS)

    Naliboff, John; Buiter, Susanne

    2016-04-01

    Numerical and analogue models of lithospheric deformation provide significant insight into the tectonic processes that lead to specific structural and geophysical observations. As these two types of models contain distinct assumptions and tradeoffs, investigations drawing conclusions from both can reveal robust links between first-order processes and observations. Recent studies have focused on detailed comparisons between numerical and analogue experiments in both compressional and extensional tectonics, sometimes involving multiple lithospheric deformation codes and analogue setups. While such comparisons often show good agreement on first-order deformation styles, results frequently diverge on second-order structures, such as shear zone dip angles or spacing, and in certain cases even on first-order structures. Here, we present finite-element experiments that are designed to directly reproduce analogue "sandbox" extension experiments at the cm-scale. We use material properties and boundary conditions that are directly taken from analogue experiments and use a Drucker-Prager failure model to simulate shear zone formation in sand. We find that our numerical experiments are highly sensitive to numerous numerical parameters. For example, changes to the numerical resolution, velocity convergence parameters and elemental viscosity averaging commonly produce significant changes in first- and second-order structures accommodating deformation. The sensitivity of the numerical simulations to small parameter changes likely reflects a number of factors, including, but not limited to, high angles of internal friction assigned to sand, complex, unknown interactions between the brittle sand (used as an upper crust equivalent) and viscous silicone (lower crust), highly non-linear strain weakening processes and poor constraints on the cohesion of sand. Our numerical-analogue comparison is hampered by (a) an incomplete knowledge of the fine details of sand failure and sand properties, and (b) likely limitations to the use of a continuum Drucker-Prager model for representing shear zone formation in sand. In some cases our numerical experiments provide reasonable fits to first-order structures observed in the analogue experiments, but the numerical sensitivity to small parameter variations leads us to conclude that the numerical experiments are not robust.

  11. Stability analysis for acoustic wave propagation in tilted TI media by finite differences

    NASA Astrophysics Data System (ADS)

    Bakker, Peter M.; Duveneck, Eric

    2011-05-01

    Several papers in recent years have reported instabilities in P-wave modelling, based on an acoustic approximation, for inhomogeneous transversely isotropic media with tilted symmetry axis (TTI media). In particular, instabilities tend to occur if the axis of symmetry varies rapidly in combination with strong contrasts of medium parameters, which is typically the case at the foot of a steeply dipping salt flank. In a recent paper, we have proposed and demonstrated a P-wave modelling approach for TTI media, based on rotated stress and strain tensors, in which the wave equations reduce to a coupled set of two second-order partial differential equations for two scalar stress components: a normal component along the variable axis of symmetry and a lateral component of stress in the plane perpendicular to that axis. Spatially constant density is assumed in this approach. A numerical discretization scheme was proposed which uses discrete second-derivative operators for the non-mixed second-order derivatives in the wave equations, and combined first-derivative operators for the mixed second-order derivatives. This paper provides a complete and rigorous stability analysis, assuming a uniformly sampled grid. Although the spatial discretization operator for the TTI acoustic wave equation is not self-adjoint, this operator still defines a complete basis of eigenfunctions of the solution space, provided that the solution space is somewhat restricted at locations where the medium is elliptically anisotropic. First, a stability analysis is given for a discretization scheme, which is purely based on first-derivative operators. It is shown that the coefficients of the central difference operators should satisfy certain conditions. In view of numerical artefacts, such a discretization scheme is not attractive, and the non-mixed second-order derivatives of the wave equation are discretized directly by second-derivative operators. It is shown that this modification preserves stability, provided that the central difference operators of the second-order derivatives dominate over the twice applied operators of the first-order derivatives. In practice, it turns out that this is almost the case. Stability of the desired discretization scheme is enforced by slightly weighting down the mixed second-order derivatives in the wave equation. This has a minor, practically negligible, effect on the kinematics of wave propagation. Finally, it is shown that non-reflecting boundary conditions, enforced by applying a taper at the boundaries of the grid, do not harm the stability of the discretization scheme.

  12. Second-order processing of four-stroke apparent motion.

    PubMed

    Mather, G; Murdoch, L

    1999-05-01

    In four-stroke apparent motion displays, pattern elements oscillate between two adjacent positions and synchronously reverse in contrast, but appear to move unidirectionally. For example, if rightward shifts preserve contrast but leftward shifts reverse contrast, consistent rightward motion is seen. In conventional first-order displays, elements reverse in luminance contrast (e.g. light elements become dark, and vice versa). The resulting perception can be explained by responses in elementary motion detectors tuned to spatio-temporal orientation. Second-order motion displays contain texture-defined elements, and there is some evidence that they excite second-order motion detectors that extract spatio-temporal orientation following the application of a non-linear 'texture-grabbing' transform by the visual system. We generated a variety of second-order four-stroke displays, containing texture-contrast reversals instead of luminance contrast reversals, and used their effectiveness as a diagnostic test for the presence of various forms of non-linear transform in the second-order motion system. Displays containing only forward or only reversed phi motion sequences were also tested. Displays defined by variation in luminance, contrast, orientation, and size were effective. Displays defined by variation in motion, dynamism, and stereo were partially or wholly ineffective. Results obtained with contrast-reversing and four-stroke displays indicate that only relatively simple non-linear transforms (involving spatial filtering and rectification) are available during second-order energy-based motion analysis.

  13. Research on fiber-optic cantilever-enhanced photoacoustic spectroscopy for trace gas detection

    NASA Astrophysics Data System (ADS)

    Chen, Ke; Zhou, Xinlei; Gong, Zhenfeng; Yu, Shaochen; Qu, Chao; Guo, Min; Yu, Qingxu

    2018-01-01

    We demonstrate a new scheme of cantilever-enhanced photoacoustic spectroscopy, combining a sensitivity-improved fiber-optic cantilever acoustic sensor with a tunable high-power fiber laser, for trace gas detection. The Fabry-Perot interferometer based cantilever acoustic sensor has advantages such as high sensitivity, small size, ease of installation, and immunity to electromagnetic interference. A tunable erbium-doped fiber ring laser with an erbium-doped fiber amplifier is used as the light source for acoustic excitation. In order to improve the sensitivity of photoacoustic signal detection, a first-order longitudinal resonant photoacoustic cell with a resonant frequency of 1624 Hz and a large-size cantilever with a first resonant frequency of 1687 Hz are designed. The size of the cantilever is 2.1 mm×1 mm, and the thickness is 10 μm. With wavelength modulation spectroscopy and second-harmonic detection, trace ammonia (NH3) has been measured. The gas detection limit (signal-to-noise ratio = 1) near the wavelength of 1522.5 nm is 3 ppb.

  14. Sampling factors influencing accuracy of sperm kinematic analysis.

    PubMed

    Owen, D H; Katz, D F

    1993-01-01

    Sampling conditions that influence the accuracy of experimental measurement of sperm head kinematics were studied by computer simulation methods. Several archetypal sperm trajectories were studied. First, mathematical models of typical flagellar beats were input to hydrodynamic equations of sperm motion. The instantaneous swimming velocities of such sperm were computed over sequences of flagellar beat cycles, from which the resulting trajectories were determined. In a second, idealized approach, direct mathematical models of trajectories were utilized, based upon similarities to the previous hydrodynamic constructs. In general, it was found that analyses of sampling factors produced similar results for the hydrodynamic and idealized trajectories. A number of experimental sampling factors were studied, including the number of sperm head positions measured per flagellar beat, and the time interval over which these measurements are taken. It was found that when one flagellar beat is sampled, values of amplitude of lateral head displacement (ALH) and linearity (LIN) approached their actual values when five or more sample points per beat were taken. Mean angular displacement (MAD) values, however, remained sensitive to sampling rate even when large sampling rates were used. Values of MAD were also much more sensitive to the initial starting point of the sampling procedure than were ALH or LIN. On the basis of these analyses of measurement accuracy for individual sperm, simulations were then performed of cumulative effects when studying entire populations of motile cells. It was found that substantial (double digit) errors occurred in the mean values of curvilinear velocity (VCL), LIN, and MAD under the conditions of 30 video frames per second and 0.5 seconds of analysis time. Increasing the analysis interval to 1 second did not appreciably improve the results. However, increasing the analysis rate to 60 frames per second significantly reduced the errors. These findings thus suggest that computer-aided sperm analysis (CASA) application at 60 frames per second will significantly improve the accuracy of kinematic analysis in most applications to human and other mammalian sperm.

  15. Coherence Motion Perception in Developmental Dyslexia: A Meta-Analysis of Behavioral Studies

    ERIC Educational Resources Information Center

    Benassi, Mariagrazia; Simonelli, Letizia; Giovagnoli, Sara; Bolzani, Roberto

    2010-01-01

    The magnitude of the association between developmental dyslexia (DD) and motion sensitivity is evaluated in 35 studies, which investigated coherence motion perception in DD. A first analysis is conducted on the differences between DD groups and age-matched control (C) groups. In a second analysis, the relationship between motion coherence…

  16. Advanced Applications of Adifor 3.0 for Efficient Calculation of First-and Second-Order CFD Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III

    2004-01-01

    This final report documents the accomplishments of this project. 1) The incremental-iterative (II) form of the reverse-mode (adjoint) method for computing first-order (FO) aerodynamic sensitivity derivatives (SDs) has been successfully implemented and tested in a 2D CFD code (called ANSERS) using the reverse-mode capability of ADIFOR 3.0. These results compared very well with similar SDs computed via a black-box (BB) application of the reverse-mode capability of ADIFOR 3.0, and also with similar SDs calculated via the method of finite differences. 2) Second-order (SO) SDs have been implemented in the 2D ANSERS code using the very efficient strategy that was originally proposed (but not previously tested) in Reference 3, Appendix A. Furthermore, these SO SDs have been validated for accuracy and computational efficiency. 3) Studies were conducted in Quasi-1D and 2D concerning the smoothness (or lack of smoothness) of the FO and SO SDs for flows with shock waves. The phenomenon is documented in the publications of this study (listed subsequently); however, the specific numerical mechanism responsible for this unsmoothness was not identified. 4) The FO and SO derivatives for Quasi-1D and 2D flows were applied to predict aerodynamic design uncertainties, and were also applied in robust design optimization studies.
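
    For reference, the kind of central-difference cross-check used to validate such derivatives can be sketched as below on a toy scalar function; the function and step size are assumptions, not the ANSERS/ADIFOR 3.0 setup.

```python
# Central-difference check of first- and second-order sensitivities of a scalar
# output f with respect to a design variable x (toy function, assumed step size).
import numpy as np

def f(x):
    return np.sin(x) * np.exp(0.5 * x)

x0, h = 1.2, 1e-4
d1 = (f(x0 + h) - f(x0 - h)) / (2 * h)              # O(h^2) first derivative
d2 = (f(x0 + h) - 2 * f(x0) + f(x0 - h)) / h ** 2   # O(h^2) second derivative

d1_exact = np.cos(x0) * np.exp(0.5 * x0) + 0.5 * f(x0)
print(f"FD first derivative : {d1:.8f} (exact {d1_exact:.8f})")
print(f"FD second derivative: {d2:.8f}")
```

    As the abstract notes for the benchmark cases, choosing the perturbation h is itself nontrivial: too large and truncation error dominates, too small and round-off error does, which is one motivation for the adjoint/AD approach.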

  17. A cost simulation for mammography examinations taking into account equipment failures and resource utilization characteristics.

    PubMed

    Coelli, Fernando C; Almeida, Renan M V R; Pereira, Wagner C A

    2010-12-01

    This work develops a cost analysis estimation for a mammography clinic, taking into account resource utilization and equipment failure rates. Two standard clinic models were simulated, the first with one mammography unit, two technicians and one doctor, and the second (based on an actually functioning clinic) with two units, three technicians and one doctor. Cost data and model parameters were obtained by direct measurements, literature reviews and other hospital data. A discrete-event simulation model was developed in order to estimate the unit cost (total costs/number of examinations in a defined period) of mammography examinations at those clinics. The cost analysis considered simulated changes in resource utilization rates and in examination failure probabilities (failures in the image acquisition system). In addition, a sensitivity analysis was performed, taking into account changes in the probabilities of equipment failure types. For the two clinic configurations, the estimated mammography unit costs were, respectively, US$ 41.31 and US$ 53.46 in the absence of examination failures. As the examination failures increased up to 10% of total examinations, unit costs approached US$ 54.53 and US$ 53.95, respectively. The sensitivity analysis showed that increases in type 3 (the most serious) failures had a very large impact on patient attendance, up to the point of actually making attendance unfeasible. Discrete-event simulation allowed for the definition of the more efficient clinic, contingent on the expected prevalence of resource utilization and equipment failures. © 2010 Blackwell Publishing Ltd.
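
    The dependence of unit cost on the examination failure rate can be illustrated with the simplified Monte Carlo sketch below; it is not the authors' discrete-event model, and all cost figures and throughput numbers are made up.

```python
# Simplified Monte Carlo sketch: unit cost = total cost / completed exams,
# where failed examinations consume resources but yield no usable image.
# All cost figures and rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
fixed_cost_per_day = 900.0          # equipment + staff (assumed, US$)
cost_per_exam = 8.0                 # consumables per attempted exam (assumed)
exams_per_day, n_days = 40, 250

def unit_cost(p_fail):
    attempted = exams_per_day * n_days
    failures = rng.binomial(attempted, p_fail)
    completed = attempted - failures
    total = fixed_cost_per_day * n_days + cost_per_exam * attempted
    return total / completed

for p in (0.0, 0.05, 0.10):
    print(f"failure rate {p:.0%}: unit cost ≈ US$ {unit_cost(p):.2f}")
```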

  18. DETERMINING THE INFLUENCE OF LANDSCAPE AND RESEARCH-SPECIFIC HABITAT VARIABLES ON VARIATION OF THERMAL CHARACTERISTICS OF STREAMS WITHIN WESTERN LAKE SUPERIOR WATERSHEDS

    EPA Science Inventory

    As part of a study to develop and test a framework for predicting sensitivity of watersheds to land-use activities, temperatures were monitored in 48 second- and third-order streams on the north and south shores of western Lake Superior. Maximum 21-day average temperatures, whic...

  19. Some improvements in DNA interaction calculations

    NASA Technical Reports Server (NTRS)

    Egan, J. T.; Swissler, T. J.; Rein, R.

    1974-01-01

    Calculations are made on specific DNA-type complexes using refined expressions for electrostatic and polarization energies. Dispersion and repulsive terms are included in the evaluation of the total interaction energy. It is shown that the expansion of the electrostatic potential to include multipole moments up to octopole is necessary to achieve convergence of first-order energies. Polarization energies are not as sensitive to this expansion. The calculations also support the usefulness of the hard sphere model for DNA hydrogen bonds and indicate how stacking interactions are influenced by second-order energies.

  20. Detection of ground motions using high-rate GPS time-series

    NASA Astrophysics Data System (ADS)

    Psimoulis, Panos A.; Houlié, Nicolas; Habboub, Mohammed; Michel, Clotaire; Rothacher, Markus

    2018-05-01

    Monitoring surface deformation in real time helps in planning and protecting infrastructure and populations, managing sensitive production (i.e. SEVESO-type facilities) and mitigating the long-term consequences of the modifications implemented. We present RT-SHAKE, an algorithm developed to detect ground motions associated with landslides, sub-surface collapses, subsidence, earthquakes or rock falls. RT-SHAKE first detects transient changes in individual GPS time series and then searches for spatial correlation(s) of observations made at neighbouring GPS sites before eventually issuing a motion warning. In order to assess our algorithm on fast (seconds to minutes), large (from 1 cm to meters) and spatially consistent surface motions, we use the 1 Hz GEONET GNSS network data of the 2011 MW 9.0 Tohoku-Oki earthquake as a test scenario. We show that the delay in detection of the seismic wave arrival with GPS records is ~10 seconds with respect to an identical analysis based on strong-motion data, and that this time delay depends on the level of the time-variable noise. Nevertheless, based on the analysis of the GPS network noise level and a ground motion stochastic model, we show that RT-SHAKE can narrow the range of detectable earthquake magnitudes, setting a lower threshold of detected earthquakes at MW 6.5-7, if associated with a real-time automatic earthquake location system.

  1. Asymptotic analysis of discrete schemes for non-equilibrium radiation diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Xia, E-mail: cui_xia@iapcm.ac.cn; Yuan, Guang-wei; Shen, Zhi-jun

    Motivated by providing well-behaved fully discrete schemes in practice, this paper extends the asymptotic analysis on time integration methods for non-equilibrium radiation diffusion in [2] to space discretizations. Therein, studies were carried out on a two-temperature model with Larsen's flux-limited diffusion operator, and both the implicitly balanced (IB) and linearly implicit (LI) methods were shown to be asymptotic-preserving. In this paper, we focus on asymptotic analysis for space discrete schemes in dimensions one and two. First, in the construction of the schemes, in contrast to traditional first-order approximations, asymmetric second-order accurate spatial approximations are devised for flux-limiters on the boundary, and discrete schemes with second-order accuracy on the global spatial domain are acquired consequently. Then, by employing formal asymptotic analysis, the first-order asymptotic-preserving property for these schemes, and furthermore for the fully discrete schemes, is shown. Finally, with the help of manufactured solutions, numerical tests are performed, which demonstrate quantitatively that the fully discrete schemes with IB time evolution indeed have the accuracy and asymptotic convergence that theory predicts, and hence are well qualified for both non-equilibrium and equilibrium radiation diffusion. - Highlights: • Provide AP fully discrete schemes for non-equilibrium radiation diffusion. • Propose second order accurate schemes by asymmetric approach for boundary flux-limiter. • Show first order AP property of spatially and fully discrete schemes with IB evolution. • Devise subtle artificial solutions; verify accuracy and AP property quantitatively. • Ideas can be generalized to 3-dimensional problems and higher order implicit schemes.

  2. Linear and non-linear regression analysis for the sorption kinetics of methylene blue onto activated carbon.

    PubMed

    Kumar, K Vasanth

    2006-10-11

    Batch kinetic experiments were carried out for the sorption of methylene blue onto activated carbon. The experimental kinetics were fitted to pseudo-first-order and pseudo-second-order models by linear and non-linear methods. Five different types of Ho's pseudo-second-order expression are discussed. A comparison of the linear least-squares method and a trial-and-error non-linear method for estimating the pseudo-second-order rate parameters was made. The sorption process was found to follow both pseudo-first-order and pseudo-second-order kinetics. The present investigation showed that it is inappropriate to use the type 1 expression and the pseudo-second-order expression as proposed by Ho and by Blanchard et al., respectively, for predicting the kinetic rate constants and the initial sorption rate for the studied system. Three possible alternative linear expressions (type 2 to type 4) that better predict the initial sorption rate and kinetic rate constants for the studied system (methylene blue/activated carbon) were proposed. The linear method was found only to check the hypothesis rather than verify the kinetic model. Non-linear regression was found to be the more appropriate method for determining the rate parameters.
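
    The contrast between a linearized fit and direct non-linear regression of the pseudo-second-order model q_t = k·qe²·t/(1 + k·qe·t) can be sketched on synthetic data as below; the parameter values and noise level are assumptions.

```python
# Sketch: fit pseudo-second-order kinetics q_t = k*qe^2*t / (1 + k*qe*t)
# to synthetic data by (a) the type-1 linearization t/q_t = 1/(k*qe^2) + t/qe
# and (b) direct nonlinear least squares. Parameter values are made up.
import numpy as np
from scipy.optimize import curve_fit

k_true, qe_true = 0.02, 120.0
t = np.linspace(1, 180, 25)
q = k_true * qe_true ** 2 * t / (1 + k_true * qe_true * t)
q += np.random.default_rng(4).normal(0, 2.0, t.size)      # measurement noise

# (a) type-1 linear form: regress t/q on t (slope = 1/qe, intercept = 1/(k*qe^2)).
slope, intercept = np.polyfit(t, t / q, 1)
qe_lin, k_lin = 1 / slope, slope ** 2 / intercept

# (b) nonlinear regression on the original form.
def pso(t, k, qe):
    return k * qe ** 2 * t / (1 + k * qe * t)

(k_nl, qe_nl), _ = curve_fit(pso, t, q, p0=[0.01, 100.0])

print(f"linear fit   : k = {k_lin:.4f}, qe = {qe_lin:.1f}")
print(f"nonlinear fit: k = {k_nl:.4f}, qe = {qe_nl:.1f}")
```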

  3. True covariance simulation of the EUVE update filter

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, R. R.

    1989-01-01

    A covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultra Violet Explorer (EUVE) spacecraft is presented. The linearized dynamics and measurement equations of the error states are derived which constitute the truth model describing the real behavior of the systems involved. The design model used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced order model is not the correct covariance of the EKF estimation error. A true covariance analysis has to be carried out in order to evaluate the correct accuracy of the OBC generated estimates. The results of such analysis are presented which indicate both the performance and the sensitivity of the OBC EKF.

  4. Spectrophotometric Determination of Iron(II) and Cobalt(II) by Direct, Derivative, and Simultaneous Methods Using 2-Hydroxy-1-Naphthaldehyde-p-Hydroxybenzoichydrazone

    PubMed Central

    Devi, V. S. Anusuya; Reddy, V. Krishna

    2012-01-01

    Optimized and validated spectrophotometric methods have been proposed for the determination of iron and cobalt individually and simultaneously. 2-hydroxy-1-naphthaldehyde-p-hydroxybenzoichydrazone (HNAHBH) reacts with iron(II) and cobalt(II) to form reddish-brown and yellow-coloured [Fe(II)-HNAHBH] and [Co(II)-HNAHBH] complexes, respectively. The maximum absorbance of these complexes was found at 405 nm and 425 nm, respectively. For [Fe(II)-HNAHBH], Beer's law is obeyed over the concentration range of 0.055–1.373 μg mL⁻¹ with a detection limit of 0.095 μg mL⁻¹ and molar absorptivity ɛ of 5.6 × 10⁴ L mol⁻¹ cm⁻¹. The [Co(II)-HNAHBH] complex obeys Beer's law in the 0.118–3.534 μg mL⁻¹ range with a detection limit of 0.04 μg mL⁻¹ and molar absorptivity ɛ of 2.3 × 10⁴ L mol⁻¹ cm⁻¹. Highly sensitive and selective first-, second- and third-order derivative methods are described for the determination of iron and cobalt. A simultaneous second-order derivative spectrophotometric method is proposed for the determination of these metals. All the proposed methods are successfully employed in the analysis of various biological, water, and alloy samples for the determination of iron and cobalt content. PMID:22505925

  5. Optimal placement of trailing-edge flaps for helicopter vibration reduction using response surface methods

    NASA Astrophysics Data System (ADS)

    Viswamurthy, S. R.; Ganguli, Ranjan

    2007-03-01

    This study aims to determine optimal locations of dual trailing-edge flaps to achieve minimum hub vibration levels in a helicopter, while incurring low penalty in terms of required trailing-edge flap control power. An aeroelastic analysis based on finite elements in space and time is used in conjunction with an optimal control algorithm to determine the flap time history for vibration minimization. The reduced hub vibration levels and required flap control power (due to flap motion) are the two objectives considered in this study and the flap locations along the blade are the design variables. It is found that second order polynomial response surfaces based on the central composite design of the theory of design of experiments describe both objectives adequately. Numerical studies for a four-bladed hingeless rotor show that both objectives are more sensitive to outboard flap location compared to the inboard flap location by an order of magnitude. Optimization results show a disjoint Pareto surface between the two objectives. Two interesting design points are obtained. The first design gives 77 percent vibration reduction from baseline conditions (no flap motion) with a 7 percent increase in flap power compared to the initial design. The second design yields 70 percent reduction in hub vibration with a 27 percent reduction in flap power from the initial design.
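
    Fitting a second-order polynomial response surface in two design variables by least squares can be sketched as below; the toy objective stands in for the expensive aeroelastic analysis and all numbers are illustrative.

```python
# Sketch: full quadratic response surface in two design variables
# (e.g. inboard/outboard flap locations), fitted by ordinary least squares
# to a toy objective standing in for the aeroelastic vibration analysis.
import numpy as np

rng = np.random.default_rng(5)
x1 = rng.uniform(0.5, 0.9, 30)          # normalized spanwise locations (assumed)
x2 = rng.uniform(0.5, 0.9, 30)
y = (1.0 - 3.0 * (x2 - 0.8) ** 2 - 0.5 * (x1 - 0.6) ** 2 + 0.2 * x1 * x2
     + rng.normal(0, 0.01, 30))         # toy "hub vibration" objective

# Design matrix for the quadratic model: 1, x1, x2, x1*x2, x1^2, x2^2.
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("response-surface coefficients:", np.round(coef, 3))
```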

  6. Approximate analysis for repeated eigenvalue problems with applications to controls-structure integrated design

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Hou, Gene J. W.

    1994-01-01

    A method for eigenvalue and eigenvector approximate analysis for the case of repeated eigenvalues with distinct first derivatives is presented. The approximate analysis method developed involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations to changes in the eigenvalues and the eigenvectors associated with the repeated eigenvalue problem. This work also presents a numerical technique that facilitates the definition of an eigenvector derivative for the case of repeated eigenvalues with repeated eigenvalue derivatives (of all orders). Examples are given which demonstrate the application of such equations for sensitivity and approximate analysis. Emphasis is placed on the application of sensitivity analysis to large-scale structural and controls-structures optimization problems.
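
    For context, the familiar sensitivity formula for a distinct eigenvalue (the standard result that the repeated-eigenvalue treatment generalizes) is recalled below; it is quoted as background, not taken from the report.

```latex
% First-order derivative of a distinct eigenvalue \lambda of
% K(p)\,\phi = \lambda\,M(p)\,\phi with a mass-normalized eigenvector
% (\phi^{T} M \phi = 1); the repeated-eigenvalue case treated in the paper
% instead requires solving a reduced eigenvalue problem in the degenerate subspace.
\frac{\mathrm{d}\lambda}{\mathrm{d}p}
  = \phi^{T}\!\left(\frac{\partial K}{\partial p}
      - \lambda\,\frac{\partial M}{\partial p}\right)\!\phi
```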

  7. Basic research for the geodynamics program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The mathematical models of space very long base interferometry (VLBI) observables suitable for least squares covariance analysis were derived and estimatability problems inherent in the space VLBI system were explored, including a detailed rank defect analysis and sensitivity analysis. An important aim is to carry out a comparative analysis of the mathematical models of the ground-based VLBI and space VLBI observables in order to describe the background in detail. Computer programs were developed in order to check the relations, assess errors, and analyze sensitivity. In order to investigate the estimatability of different geodetic and geodynamic parameters from the space VLBI observables, the mathematical models for time delay and time delay rate observables of space VLBI were analytically derived along with the partial derivatives with respect to the parameters. Rank defect analysis was carried out both by analytical and numerical testing of linear dependencies between the columns of the normal matrix thus formed. Definite conclusions were formed about the rank defects in the system.

  8. Explaining the "Natural Order of L2 Morpheme Acquisition" in English: A Meta-Analysis of Multiple Determinants

    ERIC Educational Resources Information Center

    Goldschneider, Jennifer M.; DeKeyser, Robert M.

    2005-01-01

    This meta-analysis pools data from 25 years of research on the order of acquisition of English grammatical morphemes by students of English as a second language (ESL). Some researchers have posited a "natural" order of acquisition common to all ESL learners, but no single cause has been shown for this phenomenon. Our study investigated…

  9. Technical needs assessment: UWMC's sensitivity analysis guides decision-making.

    PubMed

    Alotis, Michael

    2003-01-01

    In today's healthcare market, it is critical for provider institutions to offer the latest and best technological services while remaining fiscally sound. In academic practices, like the University of Washington Medical Center (UWMC), there are the added responsibilities of teaching and research that require a high-tech environment to thrive. These conditions and needs require extensive analysis of not only what equipment to buy, but also when and how it should be acquired. In an organization like the UWMC, which has strategically positioned itself for growth, it is useful to build a sensitivity analysis based on the strategic plan. A common forecasting tool, the sensitivity analysis lays out existing and projected business operations with volume assumptions displayed in layers. Each layer of current and projected activity is plotted over time and placed against a background depicting the capacity of the key modality. Key elements of a sensitivity analysis include necessity, economic assessment, performance, compatibility, reliability, service and training. There are two major triggers that cause us to consider the purchase of new imaging equipment and that determine how to evaluate the equipment we buy. One trigger revolves around our ability to serve patients by seeing them on a timely basis. If we find a significant gap between demand and our capacity to meet it, or anticipate a greater increased demand based upon trends, we begin to consider enhancing that capacity. A second trigger is the release of a breakthrough or substantially improved technology that will clearly have a positive impact on clinical efficacy and efficiency, thereby benefiting the patient. Especially in radiology departments, where many technologies require large expenditures, it is no longer acceptable simply to spend on new and improved technologies. It is necessary to justify them as a strong investment in clinical management and efficacy. There is pressure to provide "proof" at the department level and beyond. By applying sensitivity analysis and other forecasting methods, we are able to spend our resources judiciously in order to get the equipment we need when we need it. This helps ensure that we have efficacious, efficient systems--and enough of them--so that our patients are examined on a timely basis and our clinics run smoothly. It also goes a long way toward making certain that the best equipment is available to our clinicians, researchers, students and patients alike.

  10. Two-step sensitivity testing of parametrized and regionalized life cycle assessments: methodology and case study.

    PubMed

    Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie

    2013-06-04

    Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
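
    The screening step can be sketched with a hand-rolled radial one-at-a-time variant of the method of elementary effects on a toy four-parameter model, as below; the model, step size and number of trajectories are assumptions, not the regionalized LCA of the case study.

```python
# Radial one-at-a-time sketch of elementary-effects screening on a toy model:
# large mean |effect| flags influential parameters, large sigma flags
# nonlinearity/interactions. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(6)
n_params, n_traj, delta = 4, 50, 0.1

def model(x):                         # toy model: x1 matters most, x2 not at all
    return x[0] + 5 * x[1] ** 2 + 0.1 * np.sin(2 * np.pi * x[3])

effects = [[] for _ in range(n_params)]
for _ in range(n_traj):
    x = rng.uniform(0, 1 - delta, n_params)   # random base point
    y0 = model(x)
    for i in range(n_params):                 # perturb one parameter at a time
        x_step = x.copy()
        x_step[i] += delta
        effects[i].append((model(x_step) - y0) / delta)

for i, e in enumerate(effects):
    e = np.array(e)
    print(f"x{i}: mu* = {np.mean(np.abs(e)):.3f}, sigma = {np.std(e):.3f}")
```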

  11. High order statistical signatures from source-driven measurements of subcritical fissile systems

    NASA Astrophysics Data System (ADS)

    Mattingly, John Kelly

    1998-11-01

    This research focuses on the development and application of high order statistical analyses applied to measurements performed with subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from counting statistics of the introduced source and radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability combined with the enhanced sensitivity of higher order signatures indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation signature identification of weapons components for nuclear disarmament and safeguards applications and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements.

  12. Efficiency of perfectly matched layers for seismic wave modeling in second-order viscoelastic equations

    NASA Astrophysics Data System (ADS)

    Ping, Ping; Zhang, Yu; Xu, Yixian; Chu, Risheng

    2016-12-01

    In order to improve the perfectly matched layer (PML) efficiency in viscoelastic media, we first propose a split multi-axial PML (M-PML) and an unsplit convolutional PML (C-PML) in the second-order viscoelastic wave equations with the displacement as the only unknown. The advantage of these formulations is that existing second-order spectral element method (SEM) or finite-element method (FEM) codes can be revised easily and efficiently to include absorbing boundaries in a uniform equation, and they are more economical than the auxiliary differential equation PML. Three models that are prone to late-time instabilities are considered to validate our approaches. By comparing the absorption efficiency and long-time stability of the M-PML and the C-PML, it can be concluded that: (1) for an isotropic viscoelastic medium with high Poisson's ratio, the C-PML will be a sufficient choice for long-time simulation because of its weak reflections and superior stability; (2) unlike the M-PML with high-order damping profile, the M-PML with second-order damping profile loses its stability in long-time simulation for an isotropic viscoelastic medium; (3) in an anisotropic viscoelastic medium, the C-PML suffers from instabilities, while the M-PML with second-order damping profile can be a better choice for its superior stability and more acceptable weak reflections than the M-PML with high-order damping profile. The comparative analysis of the developed methods provides useful guidance for long-time seismic wave modeling in second-order viscoelastic wave equations.
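
    As a point of reference for the damping profiles compared above, here is a minimal sketch of a polynomial PML damping profile d(x) = d0 (x/L)^n; the layer thickness L, maximum damping d0 and profile orders are illustrative values, not parameters from the authors' SEM/FEM implementations.

        import numpy as np

        def pml_damping(x, L, d0, order=2):
            """Polynomial PML damping profile d(x) = d0 * (x / L)**order,
            where x is the depth inside the absorbing layer of thickness L."""
            return d0 * (np.clip(x, 0.0, L) / L) ** order

        x = np.linspace(0.0, 1.0, 5)                         # depth into a layer with L = 1
        print(pml_damping(x, L=1.0, d0=300.0, order=2))      # second-order profile
        print(pml_damping(x, L=1.0, d0=300.0, order=4))      # higher-order profile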

  13. Computations of the chirality-sensitive effect induced by an antisymmetric indirect spin–spin coupling

    NASA Astrophysics Data System (ADS)

    Garbacz, Piotr

    2018-05-01

    Results of quantum mechanical computations of the antisymmetric part of the indirect spin-spin coupling tensor, J⋆, performed using the coupled-cluster method, the second-order polarisation propagator approximation, and density functional theory for 25 molecules and nearly 100 spin-spin couplings are reported. These results are used for an estimation of the magnitude of the recently proposed liquid-state nuclear magnetic resonance chirality-sensitive effect, which allows the molecular chirality to be determined directly, i.e. without the application of any chiral agent. The following were found: (i) the antisymmetry J⋆ is usually larger for the coupling between spins separated by two chemical bonds than for the coupling through one bond, (ii) promising samples are those which contain fluorine, and (iii) the antisymmetry of the spin-spin coupling tensor is of the order of a few hertz for commercially available chemical compounds. Therefore, for these compounds the experimentally relevant pseudoscalar, Jc, is of the order of 1 nHz m/V.

  14. Identifying key sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment.

    PubMed

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2013-09-01

    This study investigates sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment, through the use of local and global sensitivity analysis tools, and contributes to an in-depth understanding of wastewater treatment modelling by revealing critical parameters and parameter interactions. One-factor-at-a-time sensitivity analysis is used to screen model parameters and identify those with significant individual effects on three performance indicators: total greenhouse gas emissions, effluent quality and operational cost. Sobol's method enables identification of parameters with significant higher order effects and of particular parameter pairs to which model outputs are sensitive. Use of a variance-based global sensitivity analysis tool to investigate parameter interactions enables identification of important parameters not revealed in one-factor-at-a-time sensitivity analysis. These interaction effects have not been considered in previous studies and thus provide a better understanding of wastewater treatment plant model characterisation. It was found that uncertainty in modelled nitrous oxide emissions is the primary contributor to uncertainty in total greenhouse gas emissions, due largely to the interaction effects of three nitrogen conversion modelling parameters. The higher order effects of these parameters are also shown to be a key source of uncertainty in effluent quality. Copyright © 2013 Elsevier Ltd. All rights reserved.
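
    For readers unfamiliar with variance-based indices, the sketch below estimates first- and total-order Sobol indices with a pick-and-freeze scheme, assuming independent inputs uniform on [0, 1]; the toy function is a stand-in and is unrelated to the wastewater treatment plant model used in the study.

        import numpy as np

        def sobol_indices(model, k, n=4096, seed=0):
            """Pick-and-freeze estimates of first-order (S) and total-order (ST)
            Sobol indices, assuming independent inputs uniform on [0, 1]."""
            rng = np.random.default_rng(seed)
            A, B = rng.random((n, k)), rng.random((n, k))
            fA, fB = model(A), model(B)
            var = np.var(np.concatenate([fA, fB]))
            S, ST = np.empty(k), np.empty(k)
            for i in range(k):
                ABi = A.copy()
                ABi[:, i] = B[:, i]                            # resample only factor i
                fABi = model(ABi)
                S[i] = np.mean(fB * (fABi - fA)) / var         # individual effect
                ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # includes interactions
            return S, ST

        # toy stand-in for a plant-model output such as total greenhouse gas emissions
        toy = lambda X: X[:, 0] + 2.0 * X[:, 1] + 5.0 * X[:, 0] * X[:, 2]
        print(sobol_indices(toy, k=3))

    A large gap between ST and S for a parameter signals exactly the kind of interaction effect highlighted in the abstract.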

  15. Maxwell's second- and third-order equations of transfer for non-Maxwellian gases

    NASA Technical Reports Server (NTRS)

    Baganoff, D.

    1992-01-01

    Condensed algebraic forms for Maxwell's second- and third-order equations of transfer are developed for the case of molecules described by either elastic hard spheres, inverse-power potentials, or by Bird's variable hard-sphere model. These highly reduced, yet exact, equations provide a new point of origin, when using the moment method, in seeking approximate solutions in the kinetic theory of gases for molecular models that are physically more realistic than that provided by the Maxwell model. An important by-product of the analysis when using these second- and third-order relations is that a clear mathematical connection develops between Bird's variable hard-sphere model and that for the inverse-power potential.

  16. Transabdominal ultrasonography as a screening test for second-trimester placenta previa.

    PubMed

    Quant, Hayley S; Friedman, Alexander M; Wang, Eileen; Parry, Samuel; Schwartz, Nadav

    2014-03-01

    To determine the test characteristics of transabdominal ultrasonography as a screening test for second-trimester placenta previa. This secondary analysis of a prospective cohort study evaluated the distance from the placental edge to the internal os (placenta-cervix distance) through both transabdominal and transvaginal ultrasonography during the anatomic survey. Patients were recruited in the Maternal-Fetal Medicine Ultrasound Unit at the Hospital of the University of Pennsylvania, an urban tertiary care center. Transabdominal placenta-cervix distance cutoffs with high sensitivity for detection of previa and low-lying placenta were identified, and test characteristics were calculated. Follow-up ultrasound data, pregnancy, and delivery outcomes for those with second-trimester previa or low-lying placenta were obtained. One thousand two hundred fourteen women were included in the analysis. A transabdominal placenta-cervix distance cutoff of 4.2 cm was 93.3% sensitive and 76.7% specific for detection of previa with a 99.8% negative predictive value at a screen-positive rate of 25.0%. A cutoff of 2.8 cm was 86.7% sensitive and 90.5% specific with a 99.6% negative predictive value at a screen-positive rate of 11.4%. Only 9.8% (four of 41) of previas and low-lying placentas persisted through delivery. Transabdominal ultrasonography is an effective screening test for second-trimester placenta previa. At centers not performing universal transvaginal ultrasonography at the time of the anatomic survey, evidence-based transabdominal placenta-cervix distance cutoffs can optimize the identification of patients who require further surveillance for previa.
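
    The test characteristics quoted above follow from a simple 2x2 tabulation at a chosen distance cutoff. The sketch below shows that calculation on a synthetic cohort; the generated numbers are hypothetical and are not the study data.

        import numpy as np

        def screening_metrics(distance_cm, has_previa, cutoff_cm):
            """Test characteristics when 'screen positive' means a transabdominal
            placenta-cervix distance at or below the cutoff."""
            pos = distance_cm <= cutoff_cm
            tp, fp = np.sum(pos & has_previa), np.sum(pos & ~has_previa)
            fn, tn = np.sum(~pos & has_previa), np.sum(~pos & ~has_previa)
            return {"sensitivity": tp / (tp + fn),
                    "specificity": tn / (tn + fp),
                    "npv": tn / (tn + fn),
                    "screen_positive_rate": pos.mean()}

        rng = np.random.default_rng(0)                        # synthetic cohort, illustration only
        has_previa = rng.random(1000) < 0.03
        distance_cm = np.where(has_previa,
                               rng.uniform(0.0, 3.0, 1000),
                               rng.uniform(1.0, 10.0, 1000))
        print(screening_metrics(distance_cm, has_previa, cutoff_cm=4.2))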

  17. Lithium or Valproate Adjunctive Therapy to Second-generation Antipsychotics and Metabolic Variables in Patients With Schizophrenia or Schizoaffective Disorder

    PubMed Central

    VINCENZI, BRENDA; GREENE, CLAIRE M.; ULLOA, MELISSA; PARNAROUSKIS, LINDSEY; JACKSON, JOHN W.; HENDERSON, DAVID C.

    2017-01-01

    Objective People with schizophrenia are at greater risk for cardiovascular disease and their overall mortality rate is elevated compared to the general population. The metabolic side effects of antipsychotic medications have been widely studied; however, the effect of adding conventional mood stabilizers, such as lithium and valproate, to antipsychotic medication has not been assessed in terms of metabolic risk. The primary purpose of this secondary analysis was to examine whether treatment with lithium or valproate in addition to a second-generation antipsychotic is associated with poorer metabolic outcomes than treatment with a second-generation antipsychotic without lithium or valproate. Methods Baseline data from 3 studies, which included measurement of body mass index, waist circumference, fasting glucose, insulin, homeostatic model assessment of insulin resistance, insulin sensitivity index, glucose utilization, and acute insulin response to glucose, were included in the analysis. Results No differences were found between those taking lithium or valproate and those who were not in terms of fasting glucose, fasting insulin, and homeostatic model assessment of insulin resistance. Insulin sensitivity was lower among participants taking lithium or valproate. Participants taking lithium or valproate had a higher body mass index than those not taking conventional mood stabilizers, although the difference did not reach statistical significance. Conclusions These cross-sectional findings suggest it may be beneficial to monitor insulin sensitivity and body mass index in patients taking lithium or valproate in combination with a second-generation antipsychotic. PMID:27123797

  18. A dual-input nonlinear system analysis of autonomic modulation of heart rate

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Mullen, T. J.; Cohen, R. J.

    1996-01-01

    Linear analyses of fluctuations in heart rate and other hemodynamic variables have been used to elucidate cardiovascular regulatory mechanisms. The role of nonlinear contributions to fluctuations in hemodynamic variables has not been fully explored. This paper presents a nonlinear system analysis of the effect of fluctuations in instantaneous lung volume (ILV) and arterial blood pressure (ABP) on heart rate (HR) fluctuations. To successfully employ a nonlinear analysis based on the Laguerre expansion technique (LET), we introduce an efficient procedure for broadening the spectral content of the ILV and ABP inputs to the model by adding white noise. Results from computer simulations demonstrate the effectiveness of broadening the spectral band of input signals to obtain consistent and stable kernel estimates with the use of the LET. Without broadening the band of the ILV and ABP inputs, the LET did not provide stable kernel estimates. Moreover, we extend the LET to the case of multiple inputs in order to accommodate the analysis of the combined effect of ILV and ABP on heart rate. Analyses of data based on the second-order Volterra-Wiener model reveal an important contribution of the second-order kernels to the description of the effect of lung volume and arterial blood pressure on heart rate. Furthermore, physiological effects of the autonomic blocking agents propranolol and atropine on changes in the first- and second-order kernels are also discussed.
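
    A minimal sketch of the Laguerre expansion idea is given below: the inputs are convolved with discrete Laguerre functions and a second-order Volterra model is fitted by least squares. The basis order, decay parameter alpha and the synthetic ILV/ABP/HR signals are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy.signal import lfilter

        def laguerre_filter_bank(x, n_basis=4, alpha=0.6):
            """Convolve the input with discrete-time Laguerre functions,
            built recursively from a low-pass stage and all-pass stages."""
            v = [lfilter([np.sqrt(1.0 - alpha**2)], [1.0, -alpha], x)]
            for _ in range(1, n_basis):
                v.append(lfilter([-alpha, 1.0], [1.0, -alpha], v[-1]))
            return np.column_stack(v)

        def fit_second_order_let(u1, u2, y, n_basis=4, alpha=0.6):
            """Least-squares fit of a two-input, second-order Volterra model on
            Laguerre-expanded inputs (e.g. ILV and ABP driving HR)."""
            V = np.hstack([laguerre_filter_bank(u1, n_basis, alpha),
                           laguerre_filter_bank(u2, n_basis, alpha)])
            m = V.shape[1]
            quad = np.column_stack([V[:, i] * V[:, j]      # second-order regressors,
                                    for i in range(m)      # including cross-kernel terms
                                    for j in range(i, m)])
            X = np.hstack([np.ones((len(y), 1)), V, quad])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coef

        rng = np.random.default_rng(0)
        ilv, abp = rng.standard_normal(2000), rng.standard_normal(2000)   # broadened inputs
        hr = (0.8 * lfilter([0.5], [1.0, -0.7], ilv) + 0.1 * abp**2
              + 0.05 * rng.standard_normal(2000))
        print(fit_second_order_let(ilv, abp, hr)[:5])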

  19. Methodology for environmental assessments of oil and hazardous substance spills

    NASA Astrophysics Data System (ADS)

    Davis, W. P.; Scott, G. I.; Getter, C. D.; Hayes, M. O.; Gundlach, E. R.

    1980-03-01

    Scientific assessment of the complex environmental consequences of large spills of oil or other hazardous substances has stimulated development of improved strategies for rapid and valid collection and processing of ecological data. The combination of coastal processes and geological measurements developed by Hayes & Gundlach (1978), together with selected field biological and chemical observations/measurements, provides an ecosystem impact assessment approach which is termed “integrated zonal method of ecological impact assessment.” Ecological assessment of oil and hazardous material spills has been divided into three distinct phases: (1) first-order response studies — conducted at the time of the initial spill event, which gather data to document acute impacts and assist decision-makers in prioritization of cleanup efforts and protection of ecologically sensitive habitats, (2) second-order response studies — conducted two months to one year post-spill, which document any delayed mortality and attempt to identify potential sublethal impacts in sensitive species, and (3) third-order response studies — conducted one to three years post-spill, to document chronic impacts (both lethal and sublethal) to specific indicator species. Data collected during first-order response studies are gathered in a quantitative manner so that the initial assessment may become a baseline for later, more detailed, post-spill scientific efforts. First- and second-order response studies of the “Peck Slip” oil spill in Puerto Rico illustrate the usefulness of this method. The need for contingency planning before a spill has been discussed along with the use of the Vulnerability Index, a method in which coastal environments are classified on a scale of 1 to 10, based upon their potential susceptibility to oiling. A study of the lower Cook Inlet section of the Alaskan coast illustrates the practical application of this method.

  20. Improved Diffuse Fluorescence Flow Cytometer Prototype for High Sensitivity Detection of Rare Circulating Cells In Vivo

    NASA Astrophysics Data System (ADS)

    Pestana, Noah Benjamin

    Accurate quantification of circulating cell populations is important in many areas of pre-clinical and clinical biomedical research, for example, in the study of cancer metastasis or the immune response following tissue and organ transplants. Normally this is done "ex vivo" by drawing and purifying a small volume of blood and then analyzing it with flow cytometry, hemocytometry or microfluidic devices, but the sensitivity of these techniques is poor and the process of handling samples has been shown to affect cell viability and behavior. More recently "in vivo flow cytometry" (IVFC) techniques have been developed where fluorescently-labeled cells flowing in a small blood vessel in the ear or retina are analyzed, but the sensitivity is generally poor due to the small sampling volume. To address this, our group recently developed a method known as "Diffuse Fluorescence Flow Cytometry" (DFFC) that allows detection and counting of rare circulating cells with diffuse photons, offering extremely high single cell counting sensitivity. In this thesis, an improved DFFC prototype was designed and validated. The chief improvements were threefold: i) improved optical collection efficiency, ii) improved detection electronics, and iii) development of a method to mitigate motion artifacts during in vivo measurements. In combination, these improvements yielded an overall instrument detection sensitivity better than 1 cell/mL in vivo, which is the most sensitive IVFC system reported to date. Second, development and validation of a low-cost microfluidic device reader for analysis of ocular fluids is described. We demonstrate that this device has equivalent or better sensitivity and accuracy compared to a fluorescence microscope, but at an order-of-magnitude reduced cost with simplified operation. Future improvements to both instruments are also discussed.

  1. A critical analysis of some popular methods for the discretisation of the gradient operator in finite volume methods

    NASA Astrophysics Data System (ADS)

    Syrakos, Alexandros; Varchanis, Stylianos; Dimakopoulos, Yannis; Goulas, Apostolos; Tsamopoulos, John

    2017-12-01

    Finite volume methods (FVMs) constitute a popular class of methods for the numerical simulation of fluid flows. Among the various components of these methods, the discretisation of the gradient operator has received less attention despite its fundamental importance with regards to the accuracy of the FVM. The most popular gradient schemes are the divergence theorem (DT) (or Green-Gauss) scheme and the least-squares (LS) scheme. Both are widely believed to be second-order accurate, but the present study shows that in fact the common variant of the DT gradient is second-order accurate only on structured meshes whereas it is zeroth-order accurate on general unstructured meshes, and the LS gradient is second-order and first-order accurate, respectively. This is explained through a theoretical analysis and is confirmed by numerical tests. The schemes are then used within a FVM to solve a simple diffusion equation on unstructured grids generated by several methods; the results reveal that the zeroth-order accuracy of the DT gradient is inherited by the FVM as a whole, and the discretisation error does not decrease with grid refinement. On the other hand, use of the LS gradient leads to second-order accurate results, as does the use of alternative, consistent, DT gradient schemes, including a new iterative scheme that makes the common DT gradient consistent at almost no extra cost. The numerical tests are performed using both an in-house code and the popular public domain partial differential equation solver OpenFOAM.
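
    For concreteness, the following is a minimal sketch of the (unweighted) least-squares gradient at a cell centre, which recovers linear fields exactly on arbitrary neighbour arrangements; the cell geometry and field used are illustrative, not taken from the paper's test cases.

        import numpy as np

        def least_squares_gradient(xc, x_nb, phi_c, phi_nb):
            """Unweighted least-squares gradient at cell centre xc from
            neighbour centres x_nb (m x 2) and their values phi_nb (m,)."""
            d = x_nb - xc                      # displacement vectors to neighbours
            b = phi_nb - phi_c                 # value differences
            grad, *_ = np.linalg.lstsq(d, b, rcond=None)
            return grad                        # approximates (dphi/dx, dphi/dy)

        xc = np.array([0.0, 0.0])
        x_nb = np.array([[1.0, 0.2], [-0.8, 0.9], [0.1, -1.1], [0.9, 1.0]])
        phi = lambda p: 2.0 * p[..., 0] - 3.0 * p[..., 1]          # linear test field
        print(least_squares_gradient(xc, x_nb, phi(xc), phi(x_nb)))  # ~ [2, -3]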

  2. Small Launch Vehicle Concept Development for Affordable Multi-Stage Inline Configurations

    NASA Technical Reports Server (NTRS)

    Beers, Benjamin R.; Waters, Eric D.; Philips, Alan D.; Threet, Grady E., Jr.

    2014-01-01

    The Advanced Concepts Office at NASA's George C. Marshall Space Flight Center conducted a study of two configurations of a three-stage, inline, liquid propellant small launch vehicle concept developed on the premise of maximizing affordability by targeting a specific payload capability range based on current industry demand. The initial configuration, NESC-1, employed liquid oxygen as the oxidizer and rocket propellant grade kerosene as the fuel in all three stages. The second and more heavily studied configuration, NESC-4, employed liquid oxygen and rocket propellant grade kerosene on the first and second stages and liquid oxygen and liquid methane fuel on the third stage. On both vehicles, sensitivity studies were first conducted on specific impulse and stage propellant mass fraction in order to baseline gear ratios and drive the focus of concept development. Subsequent sensitivity and trade studies on the NESC-4 configuration investigated potential impacts to affordability due to changes in gross liftoff weight and/or vehicle complexity. Results are discussed at a high level to understand the severity of certain sensitivities and how the trade studies conducted can affect cost, performance, or both.

  3. Small Launch Vehicle Concept Development for Affordable Multi-Stage Inline Configurations

    NASA Technical Reports Server (NTRS)

    Beers, Benjamin R.; Waters, Eric D.; Philips, Alan D.; Threet, Grady E. Jr.

    2013-01-01

    The Advanced Concepts Office at NASA's George C. Marshall Space Flight Center conducted a study of two configurations of a three-stage, inline, liquid propellant small launch vehicle concept developed on the premise of maximizing affordability by targeting a specific payload capability range based on current industry demand. The initial configuration, NESC-1, employed liquid oxygen as the oxidizer and rocket propellant grade kerosene as the fuel in all three stages. The second and more heavily studied configuration, NESC-4, employed liquid oxygen and RP-1 on the first and second stages and liquid oxygen and liquid methane fuel on the third stage. On both vehicles, sensitivity studies were first conducted on specific impulse and stage propellant mass fraction in order to baseline gear ratios and drive the focus of concept development. Subsequent sensitivity and trade studies on the NESC-4 concept investigated potential impacts to affordability due to changes in gross liftoff weight and/or vehicle complexity. Results are discussed at a high level to understand the impact severity of certain sensitivities and how the trade studies conducted can affect cost, performance, or both.

  4. MaxEnt, second variation, and generalized statistics

    NASA Astrophysics Data System (ADS)

    Plastino, A.; Rocca, M. C.

    2015-10-01

    There are two kinds of Tsallis-probability distributions: heavy-tail ones and compact support distributions. We show here, by appealing to tools from functional analysis, that for Hamiltonians bounded from below, analysis of the second variation of the entropic functional guarantees that the heavy-tail q-distribution constitutes a maximum of Tsallis' entropy. On the other hand, in the compact support instance, a case by case analysis is necessary in order to tackle the issue.

  5. Phase transitions and thermodynamic properties of antiferromagnetic Ising model with next-nearest-neighbor interactions on the Kagomé lattice

    NASA Astrophysics Data System (ADS)

    Ramazanov, M. K.; Murtazaev, A. K.; Magomedov, M. A.; Badiev, M. K.

    2018-06-01

    We study phase transitions and thermodynamic properties in the two-dimensional antiferromagnetic Ising model with next-nearest-neighbor interaction on a Kagomé lattice by Monte Carlo simulations. A histogram data analysis shows that a second-order transition occurs in the model. From the analysis of the obtained data, we can assume that next-nearest-neighbor ferromagnetic interactions in the two-dimensional antiferromagnetic Ising model on a Kagomé lattice induce a second-order transition and unusual temperature dependence of the thermodynamic properties.

  6. Supersonic second order analysis and optimization program user's manual

    NASA Technical Reports Server (NTRS)

    Clever, W. C.

    1984-01-01

    Approximate nonlinear inviscid theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at supersonic and moderate hypersonic speeds were developed. Emphasis was placed on approaches that would be responsive to conceptual configuration design level of effort. Second order small disturbance theory was utilized to meet this objective. Numerical codes were developed for analysis and design of relatively general three dimensional geometries. Results from the computations indicate good agreement with experimental results for a variety of wing, body, and wing-body shapes. Computational times of about one minute per case on a CDC 176 are typical for practical aircraft arrangements.

  7. Fiber-enhanced Raman multigas spectroscopy: a versatile tool for environmental gas sensing and breath analysis.

    PubMed

    Hanf, Stefan; Keiner, Robert; Yan, Di; Popp, Jürgen; Frosch, Torsten

    2014-06-03

    Versatile multigas analysis bears high potential for environmental sensing of climate relevant gases and noninvasive early stage diagnosis of disease states in human breath. In this contribution, a fiber-enhanced Raman spectroscopic (FERS) analysis of a suite of climate relevant atmospheric gases is presented, which allowed for reliable quantification of CH4, CO2, and N2O alongside N2 and O2 with just one single measurement. A highly improved analytical sensitivity was achieved, down to a sub-parts per million limit of detection with a high dynamic range of 6 orders of magnitude and within a second measurement time. The high potential of FERS for the detection of disease markers was demonstrated with the analysis of 27 nL of exhaled human breath. The natural isotopes (13)CO2 and (14)N(15)N were quantified at low levels, simultaneously with the major breath components N2, O2, and (12)CO2. The natural abundances of (13)CO2 and (14)N(15)N were experimentally quantified in very good agreement to theoretical values. A fiber adapter assembly and gas filling setup was designed for rapid and automated analysis of multigas compositions and their fluctuations within seconds and without the need for optical readjustment of the sensor arrangement. On the basis of the abilities of such miniaturized FERS system, we expect high potential for the diagnosis of clinically administered (13)C-labeled CO2 in human breath and also foresee high impact for disease detection via biologically vital nitrogen compounds.

  8. SRB combustion dynamics analysis computer program (CDA-1)

    NASA Technical Reports Server (NTRS)

    Chung, T. J.; Park, O. Y.

    1988-01-01

    A two-dimensional numerical model is developed for the unsteady oscillatory combustion of the solid propellant flame zone. Variations of pressure with low and high frequency responses across the long flame, such as in the double-base propellants, are accommodated. The formulation is based on a premixed, laminar flame with a one-step overall chemical reaction and the Arrhenius law of decomposition for the gaseous phase with no condensed phase reaction. Numerical calculations are carried out using the Galerkin finite elements, with perturbations expanded to the zeroth, first, and second orders. The numerical results indicate that amplification of oscillatory motions does indeed prevail in high frequency regions. For the second order system, the trend is similar to the first order system for low frequencies, but instabilities may appear at frequencies lower than those of the first order system. The most significant effect of the second order system is that the admittance is extremely oscillatory between moderately high frequency ranges.

  9. Second-moment budgets in cloud topped boundary layers: A large-eddy simulation study

    NASA Astrophysics Data System (ADS)

    Heinze, Rieke; Mironov, Dmitrii; Raasch, Siegfried

    2015-06-01

    A detailed analysis of second-order moment budgets for cloud topped boundary layers (CTBLs) is performed using high-resolution large-eddy simulation (LES). Two CTBLs are simulated—one with trade wind shallow cumuli, and the other with nocturnal marine stratocumuli. Approximations to the ensemble-mean budgets of the Reynolds-stress components, of the fluxes of two quasi-conservative scalars, and of the scalar variances and covariance are computed by averaging the LES data over horizontal planes and over several hundred time steps. Importantly, the subgrid scale contributions to the budget terms are accounted for. Analysis of the LES-based second-moment budgets reveals, among other things, a paramount importance of the pressure scrambling terms in the Reynolds-stress and scalar-flux budgets. The pressure-strain correlation tends to evenly redistribute kinetic energy between the components, leading to the growth of horizontal-velocity variances at the expense of the vertical-velocity variance which is produced by buoyancy over most of both CTBLs. The pressure gradient-scalar covariances are the major sink terms in the budgets of scalar fluxes. The third-order transport proves to be of secondary importance in the scalar-flux budgets. However, it plays a key role in maintaining budgets of TKE and of the scalar variances and covariance. Results from the second-moment budget analysis suggest that the accuracy of description of the CTBL structure within the second-order closure framework strongly depends on the fidelity of parameterizations of the pressure scrambling terms in the flux budgets and of the third-order transport terms in the variance budgets. This article was corrected on 26 JUN 2015. See the end of the full text for details.

  10. Diagonal and off-diagonal susceptibilities of conserved quantities in relativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Chatterjee, Arghya; Chatterjee, Sandeep; Nayak, Tapan K.; Ranjan Sahoo, Nihar

    2016-12-01

    Susceptibilities of conserved quantities, such as baryon number, strangeness and electric charge, are sensitive to the onset of the quantum chromodynamics phase transition, and are expected to provide information on the matter produced in heavy-ion collision experiments. A comprehensive study of the second order diagonal susceptibilities and cross correlations has been made within a thermal model approach of the hadron resonance gas model as well as with a hadronic transport model, ultra-relativistic quantum molecular dynamics. We perform a detailed analysis of the effect of detector acceptances and choice of particle species in the experimental measurements of the susceptibilities for heavy-ion collisions corresponding to √s_NN = 4 GeV to 200 GeV. Suitably normalised susceptibilities, studied as functions of the transverse momentum cutoff, are proposed as useful observables to probe the properties of the medium at freezeout.
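
    In an event-by-event analysis, the second-order diagonal and off-diagonal susceptibilities correspond (up to temperature and volume factors) to variances and covariances of the net conserved charges. A minimal sketch, using deliberately simplistic synthetic events rather than any model output from the paper:

        import numpy as np

        def second_order_susceptibility_matrix(net_B, net_Q, net_S):
            """Event-by-event (co)variances of net baryon number, electric charge and
            strangeness; diagonal entries play the role of chi_2 and off-diagonal
            entries the role of the cross correlations chi_11 (up to T, V factors)."""
            return np.cov(np.vstack([net_B, net_Q, net_S]))

        rng = np.random.default_rng(0)
        n = 100000                                            # toy, partly correlated events
        net_B = rng.poisson(4, n) - rng.poisson(4, n)
        net_S = rng.poisson(2, n) - rng.poisson(2, n)
        net_Q = net_B + net_S + rng.poisson(3, n) - rng.poisson(3, n)
        print(second_order_susceptibility_matrix(net_B, net_Q, net_S))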

  11. Using the MCNP Taylor series perturbation feature (efficiently) for shielding problems

    NASA Astrophysics Data System (ADS)

    Favorite, Jeffrey

    2017-09-01

    The Taylor series or differential operator perturbation method, implemented in MCNP and invoked using the PERT card, can be used for efficient parameter studies in shielding problems. This paper shows how only two PERT cards are needed to generate an entire parameter study, including statistical uncertainty estimates (an additional three PERT cards can be used to give exact statistical uncertainties). One realistic example problem involves a detailed helium-3 neutron detector model and its efficiency as a function of the density of its high-density polyethylene moderator. The MCNP differential operator perturbation capability is extremely accurate for this problem. A second problem involves the density of the polyethylene reflector of the BeRP ball and is an example of first-order sensitivity analysis using the PERT capability. A third problem is an analytic verification of the PERT capability.
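
    Once the first- and second-order derivative coefficients are available (for example from two perturbation calculations), an entire parameter study reduces to evaluating a second-order Taylor expansion. The sketch below uses hypothetical coefficients purely for illustration; it is not MCNP input or output.

        import numpy as np

        def taylor_parameter_study(r0, dr_dx, d2r_dx2, x0, x_values):
            """Second-order Taylor expansion of a detector response R about x0
            (e.g. x = moderator density), built from first- and second-derivative
            coefficients such as those a differential-operator perturbation
            calculation provides."""
            dx = np.asarray(x_values) - x0
            return r0 + dr_dx * dx + 0.5 * d2r_dx2 * dx**2

        # hypothetical numbers for illustration only
        print(taylor_parameter_study(r0=1.00, dr_dx=0.40, d2r_dx2=-0.25,
                                     x0=0.95, x_values=[0.80, 0.90, 0.95, 1.00]))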

  12. Measurement of the quadratic hyperpolarizability of the collagen triple helix and application to second harmonic imaging of natural and biomimetic collagenous tissues

    NASA Astrophysics Data System (ADS)

    Deniset-Besseau, A.; Strupler, M.; Duboisset, J.; De Sa Peixoto, P.; Benichou, E.; Fligny, C.; Tharaux, P.-L.; Mosser, G.; Brevet, P.-F.; Schanne-Klein, M.-C.

    2009-09-01

    Collagen is a major protein of the extracellular matrix that is characterized by triple helical domains. It plays a central role in the formation of fibrillar and microfibrillar networks, basement membranes, as well as other structures of the connective tissue. Remarkably, fibrillar collagen exhibits efficient Second Harmonic Generation (SHG) so that SHG microscopy proved to be a sensitive tool to probe the three-dimensional architecture of fibrillar collagen and to assess the progression of fibrotic pathologies. We obtained sensitive and reproducible measurements of the fibrosis extent, but we needed quantitative data at the molecular level to further process SHG images. We therefore performed Hyper-Rayleigh Scattering (HRS) experiments and measured a second order hyperpolarisability of 1.25 × 10⁻²⁷ esu for rat-tail type I collagen. This value is surprisingly large considering that collagen presents no strong harmonophore in its aminoacid sequence. In order to get insight into the physical origin of this nonlinear process, we performed HRS measurements after denaturation of the collagen triple helix and for a collagen-like short model peptide [(Pro-Pro-Gly)10]3. It showed that the collagen large nonlinear response originates in the tight alignment of a large number of weakly efficient harmonophores, presumably the peptide bonds, resulting in a coherent amplification of the nonlinear signal along the triple helix. To illustrate this mechanism, we successfully recorded SHG images in collagenous biomimetic matrices.

  13. Cost-Effectiveness Analysis of Second-Line Chemotherapy Agents for Advanced Gastric Cancer.

    PubMed

    Lam, Simon W; Wai, Maya; Lau, Jessica E; McNamara, Michael; Earl, Marc; Udeh, Belinda

    2017-01-01

    Gastric cancer is the fifth most common malignancy and second leading cause of cancer-related mortality. Chemotherapy options for patients who fail first-line treatment are limited. Thus the objective of this study was to assess the cost-effectiveness of second-line treatment options for patients with advanced or metastatic gastric cancer. Cost-effectiveness analysis using a Markov model to compare the cost-effectiveness of six possible second-line treatment options for patients with advanced gastric cancer who have failed previous chemotherapy: irinotecan, docetaxel, paclitaxel, ramucirumab, paclitaxel plus ramucirumab, and palliative care. The model was performed from a third-party payer's perspective to compare lifetime costs and health benefits associated with studied second-line therapies. Costs included only relevant direct medical costs. The model assumed chemotherapy cycle lengths of 30 days and a maximum number of 24 cycles. Systematic review of literature was performed to identify clinical data sources and utility and cost data. Quality-adjusted life years (QALYs) and incremental cost-effectiveness ratios (ICERs) were calculated. The primary outcome measure for this analysis was the ICER between different therapies, where the incremental cost was divided by the number of QALYs saved. The ICER was compared with a willingness-to-pay (WTP) threshold that was set at $50,000/QALY gained, and an exploratory analysis using $160,000/QALY gained was also used. The model's robustness was tested by using 1-way sensitivity analyses and a 10,000 Monte Carlo simulation probabilistic sensitivity analysis (PSA). Irinotecan had the lowest lifetime cost and was associated with a QALY gain of 0.35 year. Docetaxel, ramucirumab alone, and palliative care were dominated strategies. Paclitaxel and the combination of paclitaxel plus ramucirumab led to higher QALYs gained, at an incremental cost of $86,815 and $1,056,125 per QALY gained, respectively. Based on our prespecified WTP threshold, our base case analysis demonstrated that irinotecan alone is the most cost-effective regimen, and both paclitaxel alone and the combination of paclitaxel and ramucirumab were not cost-effective (ICER more than $50,000). Both 1-way sensitivity analyses and PSA demonstrated the model's robustness. PSA illustrated that paclitaxel plus ramucirumab was extremely unlikely to be cost-effective at a WTP threshold less than $400,000/QALY gained. Irinotecan alone appears to be the most cost-effective second-line regimen for patients with gastric cancer. Paclitaxel may be cost-effective if the WTP threshold was set at $160,000/QALY gained. © 2016 Pharmacotherapy Publications, Inc.
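
    The ICER arithmetic underlying such comparisons is simple: incremental cost divided by incremental QALYs, judged against the willingness-to-pay threshold. A sketch with hypothetical numbers, not the study's results:

        def icer(cost_ref, qaly_ref, cost_new, qaly_new):
            """Incremental cost-effectiveness ratio of a new strategy vs. a reference."""
            return (cost_new - cost_ref) / (qaly_new - qaly_ref)

        # hypothetical illustration: a costlier but more effective regimen
        ratio = icer(cost_ref=20_000, qaly_ref=0.35, cost_new=46_000, qaly_new=0.65)
        print(ratio, ratio <= 50_000)   # compare against the WTP threshold in $/QALY gained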

  14. Measurement of the n-p elastic scattering angular distribution at E{sub n}=14.9 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boukharouba, N.; Bateman, F. B.; Carlson, A. D.

    2010-07-15

    The relative differential cross section for the elastic scattering of neutrons by protons was measured at an incident neutron energy E_n = 14.9 MeV and for center-of-mass scattering angles ranging from about 60 deg. to 180 deg. Angular distribution values were obtained from the normalization of the integrated data to the n-p total elastic scattering cross section. Comparisons of the normalized data to the predictions of the Arndt et al. phase-shift analysis, those of the Nijmegen group, and with the ENDF/B-VII.0 evaluation are sensitive to the value of the total elastic scattering cross section used to normalize the data. The results of a fit to a first-order Legendre polynomial expansion are in good agreement in the backward scattering hemisphere with the predictions of the Arndt et al. phase-shift analysis, those of the Nijmegen group, and to a lesser extent, with the ENDF/B-VII.0 evaluation. A fit to a second-order expansion is in better agreement with the ENDF/B-VII.0 evaluation than with the other predictions, in particular when the total elastic scattering cross section given by Arndt et al. and the Nijmegen group is used to normalize the data. A Legendre polynomial fit to the existing n-p scattering data in the 14 MeV energy region, excluding the present measurement, showed that a best fit is obtained for a second-order expansion. Furthermore, the Kolmogorov-Smirnov test confirms the general agreement in the backward scattering hemisphere and shows that significant differences between the database and the predictions occur in the angular range between 60 deg. and 120 deg. and below 20 deg. Although there is good overall agreement in the backward scattering hemisphere, more precision small-angle scattering data and a better definition of the total elastic cross section are needed for an accurate determination of the shape and magnitude of the angular distribution.
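
    The fitting procedure referred to above amounts to a least-squares Legendre expansion of the relative cross section in cos(theta_cm). A minimal sketch with synthetic data points, not the measured distribution:

        import numpy as np
        from numpy.polynomial import legendre

        # hypothetical (mu = cos(theta_cm), relative cross section) points
        mu = np.linspace(-1.0, 0.5, 13)
        sigma = 1.0 + 0.03 * mu + 0.05 * 0.5 * (3 * mu**2 - 1)   # toy P0 + P1 + P2 shape

        coef1 = legendre.legfit(mu, sigma, deg=1)   # first-order expansion
        coef2 = legendre.legfit(mu, sigma, deg=2)   # second-order expansion
        print(coef1, coef2)
        print(legendre.legval(mu, coef2))           # fitted angular distribution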

  15. A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na

    2013-01-01

    We propose a new image encryption algorithm on the basis of the fractional-order hyperchaotic Lorenz system. In the process of generating the key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance security. The security of the algorithm is analysed in detail, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of large key space and high security for practical image encryption.

  16. Analysis and design of a second-order digital phase-locked loop

    NASA Technical Reports Server (NTRS)

    Blasche, P. R.

    1979-01-01

    A specific second-order digital phase-locked loop (DPLL) was modeled as a first-order Markov chain with alternatives. From the matrix of transition probabilities of the Markov chain, the steady-state phase error of the DPLL was determined. In a similar manner the loop's response was calculated for a fading input. Additionally, a hardware DPLL was constructed and tested to provide a comparison to the results obtained from the Markov chain model. In all cases tested, good agreement was found between the theoretical predictions and the experimental data.
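
    The steady-state behaviour of such a Markov chain model follows from the eigenvector of the transition matrix associated with eigenvalue one. A minimal sketch, with a hypothetical four-state phase-error chain standing in for the actual DPLL transition probabilities:

        import numpy as np

        def stationary_distribution(P):
            """Stationary distribution of a Markov chain with row-stochastic
            transition matrix P, via the eigenvector of P.T for eigenvalue 1."""
            w, v = np.linalg.eig(P.T)
            pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
            return pi / pi.sum()

        # hypothetical 4-state phase-error chain, illustration only
        P = np.array([[0.80, 0.15, 0.05, 0.00],
                      [0.10, 0.75, 0.10, 0.05],
                      [0.05, 0.10, 0.75, 0.10],
                      [0.00, 0.05, 0.15, 0.80]])
        pi = stationary_distribution(P)
        phase_error_levels = np.array([0.0, 0.1, 0.2, 0.3])   # hypothetical per-state errors
        print(pi, pi @ phase_error_levels)                    # steady-state mean phase error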

  17. Testing charm quark equilibration in ultrahigh-energy heavy ion collisions with fluctuations

    NASA Astrophysics Data System (ADS)

    Graf, Thorben; Steinheimer, Jan; Bleicher, Marcus; Herold, Christoph

    2018-03-01

    Recent lattice QCD data on higher order susceptibilities of charm quarks provide the opportunity to explore charm quark equilibration in the early quark gluon plasma (QGP) phase. Here, we propose to use the lattice data on second- and fourth-order net charm susceptibilities to infer the charm quark equilibration temperature and the corresponding volume, in the early QGP stage, via a combined analysis of experimentally measured multiplicity fluctuations. Furthermore, the first perturbative results for the second- and fourth-order charm quark susceptibilities and their ratio are presented.

  18. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    NASA Technical Reports Server (NTRS)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
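
    One way such inexpensive derivatives reduce sampling variance is as a control variate: the first-order Taylor term has a known zero-mean expectation and can be subtracted from the sampled output. A minimal sketch under Gaussian input assumptions, with a toy response standing in for an expensive analysis code:

        import numpy as np

        rng = np.random.default_rng(0)

        def f(x):                      # stand-in for an expensive analysis-code output
            return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

        def grad_f(mu):                # sensitivity derivatives at the input mean
            return np.array([np.cos(mu[0]), mu[1]])

        mu, sigma, n = np.array([0.3, 1.0]), np.array([0.1, 0.2]), 2000
        X = mu + sigma * rng.standard_normal((n, 2))

        plain = f(X).mean()
        # control variate: the linear Taylor term has zero mean because E[X] = mu
        cv = (f(X) - (X - mu) @ grad_f(mu)).mean()
        print(plain, cv)   # same expectation, but the cv estimator has lower variance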

  19. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques are used and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.) the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.

  20. Inferring Instantaneous, Multivariate and Nonlinear Sensitivities for the Analysis of Feedback Processes in a Dynamical System: Lorenz Model Case Study

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system based on a neural network modeling of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model where the correct sensitivities can be evaluated analytically.
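
    For the Lorenz system the instantaneous, state-dependent sensitivities are simply the entries of the Jacobian of the governing equations, which is what the statistical estimates can be checked against. A minimal sketch (analytic Jacobian only, no neural network):

        import numpy as np

        def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = state
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        def lorenz_jacobian(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            """Instantaneous sensitivities d(dx_i/dt)/dx_j at a given state."""
            x, y, z = state
            return np.array([[-sigma,  sigma,  0.0],
                             [rho - z, -1.0,   -x],
                             [y,        x,     -beta]])

        state = np.array([1.0, 2.0, 20.0])
        print(lorenz_jacobian(state))   # state-dependent, i.e. multivariate and nonlinear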

  1. Cortical geometry as a determinant of brain activity eigenmodes: Neural field analysis

    NASA Astrophysics Data System (ADS)

    Gabay, Natasha C.; Robinson, P. A.

    2017-09-01

    Perturbation analysis of neural field theory is used to derive eigenmodes of neural activity on a cortical hemisphere, which have previously been calculated numerically and found to be close analogs of spherical harmonics, despite heavy cortical folding. The present perturbation method treats cortical folding as a first-order perturbation from a spherical geometry. The first nine spatial eigenmodes on a population-averaged cortical hemisphere are derived and compared with previous numerical solutions. These eigenmodes contribute most to brain activity patterns such as those seen in electroencephalography and functional magnetic resonance imaging. The eigenvalues of these eigenmodes are found to agree with the previous numerical solutions to within their uncertainties. Also in agreement with the previous numerics, all eigenmodes are found to closely resemble spherical harmonics. The first seven eigenmodes exhibit a one-to-one correspondence with their numerical counterparts, with overlaps that are close to unity. The next two eigenmodes overlap the corresponding pair of numerical eigenmodes, having been rotated within the subspace spanned by that pair, likely due to second-order effects. The spatial orientations of the eigenmodes are found to be fixed by gross cortical shape rather than finer-scale cortical properties, which is consistent with the observed intersubject consistency of functional connectivity patterns. However, the eigenvalues depend more sensitively on finer-scale cortical structure, implying that the eigenfrequencies and consequent dynamical properties of functional connectivity depend more strongly on details of individual cortical folding. Overall, these results imply that well-established tools from perturbation theory and spherical harmonic analysis can be used to calculate the main properties and dynamics of low-order brain eigenmodes.

  2. Electron spin resonance as a high sensitivity technique for environmental magnetism: determination of contamination in carbonate sediments

    NASA Astrophysics Data System (ADS)

    Crook, Nigel P.; Hoon, Stephen R.; Taylor, Kevin G.; Perry, Chris T.

    2002-05-01

    This study investigates the application of high sensitivity electron spin resonance (ESR) to environmental magnetism in conjunction with the more conventional techniques of magnetic susceptibility, vibrating sample magnetometry (VSM) and chemical compositional analysis. Using these techniques we have studied carbonate sediment samples from Discovery Bay, Jamaica, which has been impacted to varying degrees by a bauxite loading facility. The carbonate sediment samples contain magnetic minerals ranging from moderate to low concentrations. The ESR spectra for all sites essentially contain three components. First, a six-line spectrum centred around g = 2 resulting from Mn2+ ions within a carbonate matrix; second, a g = 4.3 signal from isolated Fe3+ ions incorporated as impurities within minerals such as gibbsite, kaolinite or quartz; third, a ferrimagnetic resonance with a maximum at 230 mT resulting from the ferrimagnetic minerals present within the bauxite contamination. Depending upon the location of the sites within the embayment these signals vary in their relative amplitude in a systematic manner related to the degree of bauxite input. Analysis of the ESR spectral components reveals linear relationships between the amplitude of the Mn2+ and ferrimagnetic signals and total Mn and Fe concentrations. To assist in determining the origin of the ESR signals, coral and bauxite reference samples were employed. Coral representative of the matrix of the sediment was taken remote from the bauxite loading facility whilst pure bauxite was collected from nearby mining facilities. We find ESR to be a very sensitive technique particularly appropriate to magnetic analysis of ferri- and para-magnetic components within environmental samples otherwise dominated by diamagnetic (carbonate) minerals. When employing typical sample masses of 200 mg the practical detection limit of ESR to ferri- and para-magnetic minerals within a diamagnetic carbonate matrix is of the order of 1 ppm and 1 ppb respectively, approximately 10² and 10⁵ times the sensitivity achievable employing the VSM in our laboratory.

  3. Synchronization from Second Order Network Connectivity Statistics

    PubMed Central

    Zhao, Liqiong; Beverlin, Bryce; Netoff, Theoden; Nykamp, Duane Q.

    2011-01-01

    We investigate how network structure can influence the tendency for a neuronal network to synchronize, or its synchronizability, independent of the dynamical model for each neuron. The synchrony analysis takes advantage of the framework of second order networks, which defines four second order connectivity statistics based on the relative frequency of two-connection network motifs. The analysis identifies two of these statistics, convergent connections, and chain connections, as highly influencing the synchrony. Simulations verify that synchrony decreases with the frequency of convergent connections and increases with the frequency of chain connections. These trends persist with simulations of multiple models for the neuron dynamics and for different types of networks. Surprisingly, divergent connections, which determine the fraction of shared inputs, do not strongly influence the synchrony. The critical role of chains, rather than divergent connections, in influencing synchrony can be explained by their increasing the effective coupling strength. The decrease of synchrony with convergent connections is primarily due to the resulting heterogeneity in firing rates. PMID:21779239
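
    The second-order connectivity statistics are counts of two-edge motifs and can be read directly off an adjacency matrix. A minimal sketch for reciprocal, convergent, divergent and chain connections, using a random directed graph rather than any network from the study:

        import numpy as np
        from math import comb

        def second_order_stats(A):
            """Counts of two-edge motifs in a directed graph.
            Convention: A[i, j] = 1 means a connection from neuron j to neuron i."""
            d_in, d_out = A.sum(axis=1), A.sum(axis=0)
            reciprocal = int(np.sum(A * A.T)) // 2                  # i <-> j
            convergent = sum(comb(int(d), 2) for d in d_in)         # j -> i <- k
            divergent = sum(comb(int(d), 2) for d in d_out)         # j <- i -> k
            chain = int(np.sum(d_in * d_out)) - 2 * reciprocal      # j -> i -> k, j != k
            return {"reciprocal": reciprocal, "convergent": convergent,
                    "divergent": divergent, "chain": chain}

        rng = np.random.default_rng(0)
        A = (rng.random((100, 100)) < 0.05).astype(int)
        np.fill_diagonal(A, 0)
        print(second_order_stats(A))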

  4. Ethical Sensitivity in Nursing Ethical Leadership: A Content Analysis of Iranian Nurses Experiences

    PubMed Central

    Esmaelzadeh, Fatemeh; Abbaszadeh, Abbas; Borhani, Fariba; Peyrovi, Hamid

    2017-01-01

    Background: Considering that many nursing actions affect other people's health and life, sensitivity to ethics in nursing practice is highly important for ethical leaders as role models. Objective: The study aims to explore ethical sensitivity in ethical nursing leaders in Iran. Method: This was a qualitative study based on conventional content analysis in 2015. Data were collected using in-depth, semi-structured interviews with 20 Iranian nurses. The participants were chosen using purposive sampling. Data were analyzed using conventional content analysis. In order to increase the accuracy and integrity of the data, Lincoln and Guba's criteria were considered. Results: Fourteen sub-categories and five main categories emerged. Main categories consisted of sensitivity to care, sensitivity to errors, sensitivity to communication, sensitivity in decision making and sensitivity to ethical practice. Conclusion: Ethical sensitivity appears to be a valuable attribute for ethical nurse leaders, having an important effect on various aspects of professional practice and helping the development of ethics in nursing practice. PMID:28584564

  5. Structural and molecular conformation of myosin in intact muscle fibers by second harmonic generation

    NASA Astrophysics Data System (ADS)

    Nucciotti, V.; Stringari, C.; Sacconi, L.; Vanzi, F.; Linari, M.; Piazzesi, G.; Lombardi, V.; Pavone, F. S.

    2009-02-01

    Recently, the use of Second Harmonic Generation (SHG) for imaging biological samples has been explored with regard to intrinsic SHG in highly ordered biological samples. As shown by fractional extraction of proteins, myosin is the source of the SHG signal in skeletal muscle. SHG is highly dependent on symmetries and provides selective information on the structural order and orientation of the emitting proteins and the dynamics of the myosin molecules responsible for mechano-chemical transduction during contraction. We characterise the polarization dependence of the SHG intensity in three different physiological states: resting, rigor and isometric tetanic contraction, in a sarcomere length range between 2.0 μm and 4.0 μm. The orientation of the motor domains of the myosin molecules depends on their physiological state and modulates the SHG signal. We can discriminate the orientation of the emitting dipoles in four different molecular conformations of myosin heads in intact fibers during isometric contraction, in resting and rigor. We estimate the contribution of the myosin motor domain to the total second order bulk susceptibility from its molecular structure and its functional conformation. We demonstrate that SHG is sensitive to the fraction of ordered myosin heads by disrupting the order of myosin heads in rigor with an ATP analog. We estimate the fraction of myosin motors generating the isometric force in the active muscle fiber from the dependence of the SHG modulation on the degree of overlap between actin and myosin filaments during an isometric contraction.

  6. Validation of a multi-criteria evaluation model for animal welfare.

    PubMed

    Martín, P; Czycholl, I; Buxadé, C; Krieter, J

    2017-04-01

    The aim of this paper was to validate an alternative multi-criteria evaluation system to assess animal welfare on farms based on the Welfare Quality® (WQ) project, using an example of welfare assessment of growing pigs. This alternative methodology aimed to be more transparent for stakeholders and more flexible than the methodology proposed by WQ. The WQ assessment protocol for growing pigs was implemented to collect data in different farms in Schleswig-Holstein, Germany. In total, 44 observations were carried out. The aggregation system proposed in the WQ protocol follows a three-step aggregation process. Measures are aggregated into criteria, criteria into principles and principles into an overall assessment. This study focussed on the first two steps of the aggregation. Multi-attribute utility theory (MAUT) was used to produce a value of welfare for each criterion and principle. The utility functions and the aggregation function were constructed in two separate steps. The MACBETH (Measuring Attractiveness by a Categorical-Based Evaluation Technique) method was used for utility function determination and the Choquet integral (CI) was used as an aggregation operator. The WQ decision-makers' preferences were fitted in order to construct the utility functions and to determine the CI parameters. The validation of the MAUT model was divided into two steps: first, the results of the model were compared with the results of the WQ project at criteria and principle level, and second, a sensitivity analysis of our model was carried out to demonstrate the relative importance of welfare measures in the different steps of the multi-criteria aggregation process. Using the MAUT, similar results were obtained to those obtained when applying the WQ protocol aggregation methods, both at criteria and principle level. Thus, this model could be implemented to produce an overall assessment of animal welfare in the context of the WQ protocol for growing pigs. Furthermore, this methodology could also be used as a framework in order to produce an overall assessment of welfare for other livestock species. Two main findings were obtained from the sensitivity analysis: first, a limited number of measures had a strong influence on improving or worsening the level of welfare at criteria level; second, the MAUT model was not very sensitive to an improvement in or a worsening of single welfare measures at principle level. The use of weighted sums and the conversion of disease measures into ordinal scores should be reconsidered.
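
    To illustrate the aggregation step, here is a minimal sketch of a discrete Choquet integral for two criteria with a non-additive capacity; the criterion names, scores and capacity values are hypothetical and are not the fitted WQ parameters.

        def choquet_integral(scores, capacity):
            """Discrete Choquet integral of criterion scores with respect to a
            capacity given as {frozenset_of_criteria: weight}, where the capacity
            of the full set is 1 and of the empty set is 0."""
            order = sorted(scores, key=scores.get)           # criteria, ascending by score
            total, prev = 0.0, 0.0
            for idx, c in enumerate(order):
                coalition = frozenset(order[idx:])           # criteria scoring >= scores[c]
                total += (scores[c] - prev) * capacity[coalition]
                prev = scores[c]
            return total

        # hypothetical two-criteria example with interaction (superadditive capacity)
        scores = {"criterion_a": 60.0, "criterion_b": 40.0}
        capacity = {frozenset(scores): 1.0,
                    frozenset({"criterion_a"}): 0.45,
                    frozenset({"criterion_b"}): 0.35}
        print(choquet_integral(scores, capacity))            # 40*1.0 + 20*0.45 = 49.0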

  7. High-Order Hyperbolic Residual-Distribution Schemes on Arbitrary Triangular Grids

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza; Nishikawa, Hiroaki

    2015-01-01

    In this paper, we construct high-order hyperbolic residual-distribution schemes for general advection-diffusion problems on arbitrary triangular grids. We demonstrate that the second-order accuracy of the hyperbolic schemes can be greatly improved by requiring the scheme to preserve exact quadratic solutions. We also show that the improved second-order scheme can be easily extended to third-order by further requiring the exactness for cubic solutions. We construct these schemes based on the LDA and the SUPG methodology formulated in the framework of the residual-distribution method. For both second- and third-order schemes, we construct a fully implicit solver using the exact residual Jacobian of the second-order scheme, and demonstrate rapid convergence of 10-15 iterations to reduce the residuals by 10 orders of magnitude. We demonstrate also that these schemes can be constructed based on a separate treatment of the advective and diffusive terms, which paves the way for the construction of hyperbolic residual-distribution schemes for the compressible Navier-Stokes equations. Numerical results show that these schemes produce exceptionally accurate and smooth solution gradients on highly skewed and anisotropic triangular grids, including curved boundary problems, using linear elements. We also present Fourier analysis performed on the constructed linear system and show that an under-relaxation parameter is needed for stabilization of Gauss-Seidel relaxation.

  8. Latent dimensions of social anxiety disorder: A re-evaluation of the Social Phobia Inventory (SPIN).

    PubMed

    Campbell-Sills, Laura; Espejo, Emmanuel; Ayers, Catherine R; Roy-Byrne, Peter; Stein, Murray B

    2015-12-01

    The Social Phobia Inventory (SPIN; Connor et al., 2000) is a well-validated instrument for assessing severity of social anxiety disorder (SAD). However, evaluations of its factor structure have produced inconsistent results and this aspect of the scale requires further study. Primary care patients with SAD (N=397) completed the SPIN as part of baseline assessment for the Coordinated Anxiety Learning and Management study (Roy-Byrne et al., 2010). These data were used for exploratory and confirmatory factor analysis of the SPIN. A 3-factor model provided the best fit for the data and factors were interpreted as Fear of Negative Evaluation, Fear of Physical Symptoms, and Fear of Uncertainty in Social Situations. Tests of a second-order model showed that the three factors loaded strongly on a single higher-order factor that was labeled Social Anxiety. Findings are consistent with theories identifying Fear of Negative Evaluation as the core feature of SAD, and with evidence that anxiety sensitivity and intolerance of uncertainty further contribute to SAD severity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Recent activities within the Aeroservoelasticity Branch at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Noll, Thomas E.; Perry, Boyd, III; Gilbert, Michael G.

    1989-01-01

    The objective of research in aeroservoelasticity at the NASA Langley Research Center is to enhance the modeling, analysis, and multidisciplinary design methodologies for obtaining multifunction digital control systems for application to flexible flight vehicles. Recent accomplishments are discussed, and a status report on current activities within the Aeroservoelasticity Branch is presented. In the area of modeling, improvements to the Minimum-State Method of approximating unsteady aerodynamics are shown to provide precise, low-order aeroservoelastic models for design and simulation activities. Analytical methods based on Matched Filter Theory and Random Process Theory to provide efficient and direct predictions of the critical gust profile and the time-correlated gust loads for linear structural design considerations are also discussed. Two research projects leading towards improved design methodology are summarized. The first program is developing an integrated structure/control design capability based on hierarchical problem decomposition, multilevel optimization and analytical sensitivities. The second program provides procedures for obtaining low-order, robust digital control laws for aeroelastic applications. In terms of methodology validation and application the current activities associated with the Active Flexible Wing project are reviewed.

  10. Suboptimal and optimal order policies for fixed and varying replenishment interval with declining market

    NASA Astrophysics Data System (ADS)

    Yu, Jonas C. P.; Wee, H. M.; Yang, P. C.; Wu, Simon

    2016-06-01

    One of the supply chain risks for hi-tech products is the result of rapid technological innovation; it results in a significant decline in the selling price and demand after the initial launch period. Hi-tech products include computers and consumer communication products. From a practical standpoint, a more realistic replenishment policy is needed to consider the impact of risks, especially when some portion of shortages is lost. In this paper, suboptimal and optimal order policies with partial backordering are developed for a buyer when the component cost, the selling price, and the demand rate decline at a continuous rate. Two mathematical models are derived and discussed: one model has the suboptimal solution with a fixed replenishment interval and a simpler computational process; the other has the optimal solution with a varying replenishment interval and a more complicated computational process. The second model results in more profit. Numerical examples are provided to illustrate the two replenishment models. Sensitivity analysis is carried out to investigate the relationship between the parameters and the net profit.

  11. Thermoluminescence glow curve deconvolution and trapping parameters determination of dysprosium doped magnesium borate glass

    NASA Astrophysics Data System (ADS)

    Salama, E.; Soliman, H. A.

    2018-07-01

    In this paper, thermoluminescence glow curves of gamma-irradiated magnesium borate glass doped with dysprosium were studied. The number of interfering peaks, and in turn the number of electron trap levels, was determined using the Repeated Initial Rise (RIR) method. At different heating rates (β), the glow curves were deconvoluted into two interfering peaks based on the results of the RIR method. Kinetic parameters such as the trap depth, kinetic order (b) and frequency factor (s) for each electron trap level were determined using the Peak Shape (PS) method. The results indicated that the dysprosium-doped magnesium borate glass has two electron trap levels with average depth energies of 0.63 and 0.79 eV, respectively. These two traps follow second-order kinetics and are formed in the low-temperature region. The results of the glow curve analysis could be used to explain some observed properties of this thermoluminescence material, such as high thermal fading and light sensitivity. In this work, systematic procedures to determine the kinetic parameters of any thermoluminescence material are successfully introduced.

  12. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE PAGES

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    2016-11-08

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.

  13. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.

  14. Optical analysis of the star-tracker telescope for Gravity Probe

    NASA Technical Reports Server (NTRS)

    Zissa, D. E.

    1984-01-01

    A ray-tracing model of the star-tracker telescope for Gravity Probe was used to predict the character of the output signal and its sensitivity to fabrication errors. In particular, the impact of the optical subsystem on the requirement of 1 milliarc-second signal linearity over a ±50 milliarc-second range was examined. Photomultiplier and solid state detector options were considered. Recommendations are made.

  15. Solving Upwind-Biased Discretizations: Defect-Correction Iterations

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    1999-01-01

    This paper considers defect-correction solvers for a second-order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in the defect-correction iterations have different approximation orders, then the initial convergence rates may be very slow. The number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second-order target operator and a first-order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both operators have second approximation order, the defect-correction solver demonstrates the asymptotic convergence rate after three iterations at most. The same three iterations are required to converge the algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which can also take into account the influence of discretized outflow boundary conditions) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.
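
    A sketch of the defect-correction iteration itself, under simplifying assumptions: a 1D convection problem u_x = f with u(0) = 0 stands in for the paper's 2D discretization, a first-order upwind matrix is the driver operator and a second-order upwind-biased matrix is the target operator.

    import numpy as np

    n, h = 100, 1.0 / 100
    x = np.arange(1, n + 1) * h
    f = np.cos(2 * np.pi * x)                 # manufactured right-hand side

    A1 = np.zeros((n, n))                     # first-order upwind d/dx (driver)
    A2 = np.zeros((n, n))                     # second-order upwind-biased d/dx (target)
    for i in range(n):
        A1[i, i] = 1.0 / h
        if i >= 1:
            A1[i, i - 1] = -1.0 / h
        if i >= 2:
            A2[i, i] = 1.5 / h
            A2[i, i - 1] = -2.0 / h
            A2[i, i - 2] = 0.5 / h
        else:                                  # fall back to first order near the inflow
            A2[i, :] = A1[i, :]

    u = np.zeros(n)
    for it in range(10):
        defect = f - A2 @ u                    # residual of the target discretization
        u += np.linalg.solve(A1, defect)       # driver operator supplies the correction
        print(f"iter {it}: ||defect|| = {np.linalg.norm(defect):.3e}")

    In this toy pairing the iteration matrix has spectral radius 0.5, consistent with the asymptotic rate of about 0.5 per iteration quoted in the abstract; the half-space Fourier analysis and the h^(-1/3) initial-phase estimate concern the full 2D problem and are not reproduced here.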

  16. Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    A new computational analysis tool, the downscaling test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using the DS tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to any order scheme.

  17. Accurate Projection Methods for the Incompressible Navier–Stokes Equations

    DOE PAGES

    Brown, David L.; Cortez, Ricardo; Minion, Michael L.

    2001-04-10

    This paper considers the accuracy of projection method approximations to the initial–boundary-value problem for the incompressible Navier–Stokes equations. The issue of how to correctly specify numerical boundary conditions for these methods has been outstanding since the birth of the second-order methodology a decade and a half ago. It has been observed that while the velocity can be reliably computed to second-order accuracy in time and space, the pressure is typically only first-order accurate in the L ∞-norm. Here, we identify the source of this problem in the interplay of the global pressure-update formula with the numerical boundary conditions and present an improved projection algorithm which is fully second-order accurate, as demonstrated by a normal mode analysis and numerical experiments. In addition, a numerical method based on a gauge variable formulation of the incompressible Navier–Stokes equations, which provides another option for obtaining fully second-order convergence in both velocity and pressure, is discussed. The connection between the boundary conditions for projection methods and the gauge method is explained in detail.

  18. An improved finite-difference analysis of uncoupled vibrations of tapered cantilever beams

    NASA Technical Reports Server (NTRS)

    Subrahmanyam, K. B.; Kaza, K. R. V.

    1983-01-01

    An improved finite difference procedure for determining the natural frequencies and mode shapes of tapered cantilever beams undergoing uncoupled vibrations is presented. Boundary conditions are derived in the form of simple recursive relations involving the second-order central differences. Results obtained by using the conventional first-order central differences and the present second-order central differences are compared, and it is observed that the present second-order scheme is more efficient than the conventional approach. An important advantage offered by the present approach is that the results converge to exact values rapidly, and thus extrapolation of the results is not necessary. Consequently, the basic handicap of the classical finite difference method of solution, which requires Richardson's extrapolation procedure, is eliminated. Furthermore, for the cases considered herein, the present approach produces consistent lower bound solutions.

  19. Research on the sonic boom problem. Part 1: Second-order solutions for the flow field around slender bodies in supersonic flow for sonic boom analysis

    NASA Technical Reports Server (NTRS)

    Landahl, M.; Loefgren, P.

    1973-01-01

    A second-order theory for supersonic flow past slender bodies is presented. Through the introduction of characteristic coordinates as independent variables and the expansion procedure proposed by Lin and Oswatitsch, a uniformly valid solution is obtained for the whole flow field in the axisymmetric case and for far field in the general three-dimensional case. For distances far from the body the theory is an extension of Whitham's first-order solution and for the domain close to the body it is a modification of Van Dyke's second-order solution in the axisymmetric case. From the theory useful formulas relating flow deflections to the Whitham F-function are derived, which permits one to determine the sonic boom strength from wind tunnel measurements fairly close to the body.

  20. A second-order all-digital phase-locked loop

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.; Tegnelia, C. R.

    1974-01-01

    A simple second-order digital phase-locked loop has been designed to synchronize itself to a square-wave subcarrier. Analysis and experimental performance are given for both acquisition behavior and steady-state phase error performance. In addition, the damping factor and the noise bandwidth are derived analytically. Although all the data are given for the square-wave subcarrier case, the results are applicable to arbitrary subcarriers that are odd symmetric about their transition region.
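
    A minimal sketch of the second-order idea, under stated assumptions: a proportional-plus-integral loop filter drives the local phase estimate of a square-wave subcarrier, with the product of the noisy input and a quadrature square-wave reference acting as a crude phase detector. The gains, noise level, and detector are illustrative choices, not the reference design.

    import numpy as np

    fs = 10_000.0            # sample rate, Hz
    f_sub = 100.0            # nominal subcarrier frequency, Hz
    n = 5000
    t = np.arange(n) / fs

    rng = np.random.default_rng(0)
    true_phase = 0.7         # unknown subcarrier phase offset to acquire, rad
    subcarrier = np.sign(np.sin(2 * np.pi * f_sub * t + true_phase))
    subcarrier = subcarrier + 0.3 * rng.standard_normal(n)   # additive noise

    kp, ki = 0.02, 5e-4      # proportional and integral gains; the integral path makes the loop second order
    phase_est, integ = 0.0, 0.0
    history = np.zeros(n)

    for k in range(n):
        # quadrature square-wave reference; its product with the input averages
        # to a value proportional to the phase error near lock
        ref_q = np.sign(np.cos(2 * np.pi * f_sub * t[k] + phase_est))
        err = subcarrier[k] * ref_q
        integ += ki * err                     # integral branch (absorbs frequency offsets)
        phase_est += kp * err + integ         # second-order phase update
        history[k] = phase_est

    print("true phase: 0.700  estimated phase:", round(history[-1] % (2 * np.pi), 3))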

  1. A quarantine protocol for analysis of returned extraterrestrial samples

    NASA Technical Reports Server (NTRS)

    Bagby, J. R.; Sweet, H. C.; Devincenzi, D. L.

    1983-01-01

    A protocol is presented for the analysis, at an earth-orbiting quarantine facility, of returned samples of extraterrestrial material that might contain (nonterrestrial) life forms. The protocol consists of a series of tests designed to determine whether the sample, conceptualized as a 1-kg sample of Martian soil, is free from nonterrestrial biologically active agents and so may safely be sent to a terrestrial containment facility, or whether it exhibits biological activity requiring further (second-order) testing outside the biosphere. The first-order testing procedure seeks to detect the presence of any replicating organisms or toxic substances through a series of experiments including gas sampling, analysis of radioactivity, stereomicroscopic inspection, chemical analysis, microscopic examination, the search for metabolic products under growth conditions, microbiological assays, and the challenge of cultured cells with any agents found or with the extraterrestrial material as is. Detailed plans for the second-order testing would be developed in response to the actual data received from primary testing.

  2. Two-step Raman spectroscopy method for tumor diagnosis

    NASA Astrophysics Data System (ADS)

    Zakharov, V. P.; Bratchenko, I. A.; Kozlov, S. V.; Moryatov, A. A.; Myakinin, O. O.; Artemyev, D. N.

    2014-05-01

    A two-step Raman spectroscopy phase method was proposed for differential diagnosis of malignant tumors in skin and lung tissue. It includes detection of a malignant tumor in healthy tissue in the first step and identification of the specific cancer type in the second step. The proposed phase method analyzes spectral intensity alterations in the 1300-1340 and 1640-1680 cm-1 Raman bands relative to the intensity of the 1450 cm-1 band in the first step, and the relative differences between RS intensities for the tumor area and healthy skin closely adjacent to the lesion in the second step. More than 40 ex vivo samples of lung tissue and more than 50 in vivo skin tumors were tested. Linear Discriminant Analysis, Quadratic Discriminant Analysis and Support Vector Machines were used for tumor type classification on the phase planes. It is shown that the two-step phase method achieves 88.9% sensitivity and 87.8% specificity for malignant melanoma diagnosis (skin cancer); 100% sensitivity and 81.5% specificity for adenocarcinoma diagnosis (lung cancer); and 90.9% sensitivity and 77.8% specificity for squamous cell carcinoma diagnosis (lung cancer).
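
    A sketch of the two-step classification idea with scikit-learn, assuming synthetic band-ratio features as placeholders for measured spectra; the class centers, sample counts, and classifier choices are illustrative, not the study's data or tuned models.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)

    def band_ratios(n, center):
        # two features: I(1300-1340)/I(1450) and I(1640-1680)/I(1450)
        return rng.normal(loc=center, scale=0.05, size=(n, 2))

    healthy = band_ratios(60, center=[0.80, 1.00])
    tumor = band_ratios(60, center=[0.95, 1.25])

    # Step 1: detect malignant tissue vs. healthy tissue
    step1 = LinearDiscriminantAnalysis().fit(
        np.vstack([healthy, tumor]),
        np.array([0] * 60 + [1] * 60))

    # Step 2: among detected tumors, identify the concrete type
    adeno = band_ratios(30, center=[0.92, 1.35])
    squamous = band_ratios(30, center=[1.00, 1.18])
    step2 = SVC(kernel="linear").fit(
        np.vstack([adeno, squamous]),
        np.array(["adenocarcinoma"] * 30 + ["squamous cell carcinoma"] * 30))

    sample = band_ratios(1, center=[0.94, 1.30])
    if step1.predict(sample)[0] == 1:
        print("tumor detected, predicted type:", step2.predict(sample)[0])
    else:
        print("no tumor detected")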

  3. On the sensitivity of complex, internally coupled systems

    NASA Technical Reports Server (NTRS)

    Sobieszczanskisobieski, Jaroslaw

    1988-01-01

    A method is presented for computing sensitivity derivatives with respect to independent (input) variables for complex, internally coupled systems, while avoiding the cost and inaccuracy of finite differencing performed on the entire system analysis. The method entails two alternative algorithms: the first is based on the classical implicit function theorem formulated on residuals of governing equations, and the second develops the system sensitivity equations in a new form using the partial (local) sensitivity derivatives of the output with respect to the input of each part of the system. A few application examples are presented to illustrate the discussion.
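
    For two internally coupled subsystems with outputs Y_1, Y_2 depending on the independent input X and on each other, one common way to write the system sensitivity equations of the second algorithm (notation here is generic, not taken verbatim from the paper) is

    \begin{bmatrix} I & -\dfrac{\partial Y_1}{\partial Y_2} \\ -\dfrac{\partial Y_2}{\partial Y_1} & I \end{bmatrix}
    \begin{bmatrix} \dfrac{dY_1}{dX} \\ \dfrac{dY_2}{dX} \end{bmatrix}
    =
    \begin{bmatrix} \dfrac{\partial Y_1}{\partial X} \\ \dfrac{\partial Y_2}{\partial X} \end{bmatrix},

    so the total derivatives of the coupled system follow from a single linear solve once the partial (local) sensitivities of each subsystem are available, avoiding finite differencing of the entire coupled analysis.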

  4. Partial pressure analysis in space testing

    NASA Technical Reports Server (NTRS)

    Tilford, Charles R.

    1994-01-01

    For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas or Partial Pressure Analyzers (PPA's) are commonly used for this purpose. These are mass spectrometer-type instruments, most commonly employing quadrupole filters. These instruments can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error can include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. However, for most instruments, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics for different instruments and gases, operating parameters are recommended to minimize some errors, and calibration procedures are described that can detect and/or correct other errors.

  5. Testing life history predictions in a long-lived seabird: A population matrix approach with improved parameter estimation

    USGS Publications Warehouse

    Doherty, P.F.; Schreiber, E.A.; Nichols, J.D.; Hines, J.E.; Link, W.A.; Schenk, G.A.; Schreiber, R.W.

    2004-01-01

    Life history theory and associated empirical generalizations predict that population growth rate (λ) in long-lived animals should be most sensitive to adult survival; the rates to which λ is most sensitive should be those with the smallest temporal variances; and stochastic environmental events should most affect the rates to which λ is least sensitive. To date, most analyses attempting to examine these predictions have been inadequate, their validity being called into question by problems in estimating parameters, problems in estimating the variability of parameters, and problems in measuring population sensitivities to parameters. We use improved methodologies in these three areas and test these life-history predictions in a population of red-tailed tropicbirds (Phaethon rubricauda). We support our first prediction that λ is most sensitive to survival rates. However, the support for the second prediction that these rates have the smallest temporal variance was equivocal. Previous support for the second prediction may be an artifact of a high survival estimate near the upper boundary of 1 and not a result of natural selection canalizing variances alone. We did not support our third prediction that effects of environmental stochasticity (El Niño) would most likely be detected in vital rates to which λ was least sensitive and which are thought to have high temporal variances. Comparative data-sets on other seabirds, within and among orders, and in other locations, are needed to understand these environmental effects.
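
    A brief sketch of the standard matrix-based sensitivity calculation referred to above: λ is the dominant eigenvalue of a stage-structured projection matrix, and its sensitivity to each vital rate follows from the left and right eigenvectors (Caswell's formula). The 3-stage matrix below is a hypothetical seabird-like life cycle, not the tropicbird estimates.

    import numpy as np

    A = np.array([[0.00, 0.20, 0.45],   # stage-specific fecundities
                  [0.60, 0.00, 0.00],   # juvenile survival
                  [0.00, 0.80, 0.92]])  # subadult and adult survival

    evals, evecs = np.linalg.eig(A)
    k = np.argmax(evals.real)
    lam = evals.real[k]
    w = np.abs(evecs[:, k].real)                  # right eigenvector: stable stage structure

    evalsT, evecsT = np.linalg.eig(A.T)
    kT = np.argmax(evalsT.real)
    v = np.abs(evecsT[:, kT].real)                # left eigenvector: reproductive values

    sensitivity = np.outer(v, w) / (v @ w)        # d(lambda)/d(a_ij)
    elasticity = sensitivity * A / lam            # proportional sensitivities

    print("lambda =", round(lam, 3))
    print("sensitivity to adult survival a_33 =", round(sensitivity[2, 2], 3))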

  6. Stability boundaries of a rotating cantilever beam with end mass under a transverse follower excitation

    NASA Astrophysics Data System (ADS)

    Kar, R. C.; Sujata, T.

    1992-04-01

    Simple and combination resonances of a rotating cantilever beam with an end mass subjected to a transverse follower parametric excitation have been studied. The method of multiple scales is used to obtain the resonance zones of the first and second order for various values of the system parameters. It is concluded that first order combination resonances of sum- and difference-type are predominant. Higher tip mass and inertia parameters may either stabilize or destabilize the system. The increase of rotational speed, hub radius, and warping rigidity makes the beam less sensitive to periodic forces.

  7. New infrastructure for studies of transmutation and fast systems concepts

    NASA Astrophysics Data System (ADS)

    Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria

    2017-09-01

    In this work we report initial studies on a low power Accelerator-Driven System as a possible experimental facility for the measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.

  8. A low power ADS for transmutation studies in fast systems

    NASA Astrophysics Data System (ADS)

    Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria

    2017-12-01

    In this work, we report studies on a fast low power accelerator driven system model as a possible experimental facility, focusing on its capabilities in terms of measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.

  9. Topological signatures of interstellar magnetic fields - I. Betti numbers and persistence diagrams

    NASA Astrophysics Data System (ADS)

    Makarenko, Irina; Shukurov, Anvar; Henderson, Robin; Rodrigues, Luiz F. S.; Bushby, Paul; Fletcher, Andrew

    2018-04-01

    The interstellar medium (ISM) is a magnetized system in which transonic or supersonic turbulence is driven by supernova explosions. This leads to the production of intermittent, filamentary structures in the ISM gas density, whilst the associated dynamo action also produces intermittent magnetic fields. The traditional theory of random functions, restricted to second-order statistical moments (or power spectra), does not adequately describe such systems. We apply topological data analysis (TDA), sensitive to all statistical moments and independent of the assumption of Gaussian statistics, to the gas density fluctuations in a magnetohydrodynamic simulation of the multiphase ISM. This simulation admits dynamo action, so produces physically realistic magnetic fields. The topology of the gas distribution, with and without magnetic fields, is quantified in terms of Betti numbers and persistence diagrams. Like the more standard correlation analysis, TDA shows that the ISM gas density is sensitive to the presence of magnetic fields. However, TDA gives us important additional information that cannot be obtained from correlation functions. In particular, the Betti numbers per correlation cell are shown to be physically informative. Magnetic fields make the ISM more homogeneous, reducing the abundance of both isolated gas clouds and cavities, with a stronger effect on the cavities. Remarkably, the modification of the gas distribution by magnetic fields is captured by the Betti numbers even in regions more than 300 pc from the mid-plane, where the magnetic field is weaker and correlation analysis fails to detect any signatures of magnetic effects.
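
    A minimal sketch of the counting idea behind the Betti numbers, assuming a synthetic smoothed random field in place of the simulated ISM density: connected over-densities above a threshold approximate the isolated gas clouds (Betti number b0 of the super-level set), and connected under-densities serve here as a simple proxy for the cavities.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(2)
    density = ndimage.gaussian_filter(rng.standard_normal((256, 256)), sigma=4)

    def cloud_and_cavity_counts(field, threshold):
        clouds, n_clouds = ndimage.label(field > threshold)      # connected over-densities
        cavities, n_cavities = ndimage.label(field < threshold)  # connected under-densities
        return n_clouds, n_cavities

    for thr in (-0.1, 0.0, 0.1):
        n_c, n_v = cloud_and_cavity_counts(density, thr)
        print(f"threshold {thr:+.1f}: {n_c} clouds, {n_v} cavities")

    Sweeping the threshold and recording where components appear and disappear is what the persistence diagrams in the paper formalize; the sketch above only counts components at fixed thresholds.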

  10. Image encryption based on a delayed fractional-order chaotic logistic system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na

    2012-05-01

    A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.
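
    A minimal sketch of the key-stream idea: a chaotic logistic map is quantized to bytes and XORed with the image. The plain logistic map below is a simplified stand-in for the delayed fractional-order system of the paper, and the seed and parameter values are arbitrary.

    import numpy as np

    def logistic_keystream(length, x0=0.3141592, r=3.99, burn_in=1000):
        x, out = x0, np.empty(length, dtype=np.uint8)
        for _ in range(burn_in):          # discard the transient before emitting key bytes
            x = r * x * (1.0 - x)
        for i in range(length):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) % 256   # quantize the chaotic state to one byte
        return out

    image = np.random.default_rng(3).integers(0, 256, size=(64, 64), dtype=np.uint8)
    key = logistic_keystream(image.size).reshape(image.shape)

    cipher = image ^ key                  # encryption
    recovered = cipher ^ key              # decryption with the same key stream
    assert np.array_equal(recovered, image)

    # key sensitivity: a tiny change in the seed yields an almost entirely different stream
    key2 = logistic_keystream(image.size, x0=0.3141592 + 1e-10).reshape(image.shape)
    print("fraction of differing key bytes:", np.mean(key != key2))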

  11. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through the use of both first- and second-order sensitivity derivatives. For each robust optimization, the effects of increasing both the input standard deviations and the target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
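
    The first-order moment propagation described above can be summarized as follows (a standard approximation; the notation is generic rather than taken from the paper):

    \mu_f \approx f(\bar{x}), \qquad
    \sigma_f^2 \approx \sum_i \left( \frac{\partial f}{\partial x_i} \right)^2 \sigma_{x_i}^2,

    and a probabilistic constraint "g \le 0 with probability at least P" is imposed approximately as

    \mu_g + k_P \, \sigma_g \le 0,

    where k_P is the standard-normal quantile associated with the target probability P (for example, k_P \approx 1.645 for P = 0.95). Differentiating these moment expressions with respect to the design variables is what brings the second-order sensitivity derivatives into the gradient-based optimization.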

  12. Order statistics applied to the most massive and most distant galaxy clusters

    NASA Astrophysics Data System (ADS)

    Waizmann, J.-C.; Ettori, S.; Bartelmann, M.

    2013-06-01

    In this work, we present an analytic framework for calculating the individual and joint distributions of the nth most massive or nth highest redshift galaxy cluster for a given survey characteristic, allowing us to formulate Λ cold dark matter (ΛCDM) exclusion criteria. We show that the cumulative distribution functions steepen with increasing order, giving them a higher constraining power with respect to the extreme value statistics. Additionally, we find that the order statistics in mass (being dominated by clusters at lower redshifts) is sensitive to the matter density and the normalization of the matter fluctuations, whereas the order statistics in redshift is particularly sensitive to the geometric evolution of the Universe. For a fixed cosmology, both order statistics are efficient probes of the functional shape of the mass function at the high-mass end. To allow a quick assessment of both order statistics, we provide fits as a function of the survey area that allow percentile estimation with an accuracy better than 2 per cent. Furthermore, we discuss the joint distributions in the two-dimensional case and find that, for the combination of the largest and the second largest observation, it is most likely that they are realized with similar values, with a broadly peaked distribution. When combining the largest observation with higher orders, it is more likely to find a larger gap between the observations, and when combining higher orders in general, the joint probability density function peaks more strongly. Having introduced the theory, we apply the order statistical analysis to the South Pole Telescope (SPT) massive cluster sample and the meta-catalogue of X-ray detected clusters of galaxies, and find that the 10 most massive clusters in the sample are consistent with ΛCDM and the Tinker mass function. For the order statistics in redshift, we find a discrepancy between the data and the theoretical distributions, which could in principle indicate a deviation from the standard cosmology. However, we attribute this deviation to the uncertainty in the modelling of the SPT survey selection function. In turn, by assuming the ΛCDM reference cosmology, order statistics can also be utilized for consistency checks of the completeness of the observed sample and of the modelling of the survey selection function.
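
    The distribution underlying this kind of analysis is the standard order-statistics result (notation here is generic): for N independent clusters whose masses have cumulative distribution F(m), the nth most massive observation exceeds m only if at least n of the N draws exceed m, so

    P\left(M_{(n)} \le m\right) = \sum_{j=0}^{n-1} \binom{N}{j}\,\bigl[1 - F(m)\bigr]^{j}\,F(m)^{\,N-j},

    which reduces to the extreme-value case F(m)^N for n = 1 and steepens as n increases, consistent with the higher constraining power of higher orders noted in the abstract.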

  13. Cost and sensitivity of restricted active-space calculations of metal L-edge X-ray absorption spectra.

    PubMed

    Pinjari, Rahul V; Delcey, Mickaël G; Guo, Meiyuan; Odelius, Michael; Lundberg, Marcus

    2016-02-15

    The restricted active-space (RAS) approach can accurately simulate metal L-edge X-ray absorption spectra of first-row transition metal complexes without the use of any fitting parameters. These characteristics provide a unique capability to identify unknown chemical species and to analyze their electronic structure. To find the best balance between cost and accuracy, the sensitivity of the simulated spectra with respect to the method variables has been tested for two models, [FeCl6](3-) and [Fe(CN)6](3-). For these systems, the reference calculations give deviations, when compared with experiment, of ≤1 eV in peak positions, ≤30% for the relative intensity of major peaks, and ≤50% for minor peaks. When compared with these deviations, the simulated spectra are sensitive to the number of final states, the inclusion of dynamical correlation, and the ionization potential electron affinity shift, in addition to the selection of the active space. The spectra are less sensitive to the quality of the basis set and even a double-ζ basis gives reasonable results. The inclusion of dynamical correlation through second-order perturbation theory can be done efficiently using the state-specific formalism without correlating the core orbitals. Although these observations are not directly transferable to other systems, they can, together with a cost analysis, aid in the design of RAS models and help to extend the use of this powerful approach to a wider range of transition metal systems. © 2015 Wiley Periodicals, Inc.

  14. Absorption measurements of the second overtone band of NO in ambient and combustion gases with a 1.8-μm room-temperature diode laser.

    PubMed

    Sonnenfroh, D M; Allen, M G

    1997-10-20

    We describe the development of a room-temperature diode sensor for in situ monitoring of combustion-generated NO. The sensor is based on a near-IR diode laser operating near 1.8 μm, which probes isolated transitions in the second overtone (3, 0) absorption band of NO. Based on absorption cell data, the sensitivity for ambient atmospheric pressure conditions is of the order of 30 parts in 10^6 by volume for a meter path (ppmv-m), assuming a minimum measurable absorbance of 10^-5. Initial H2-air flame measurements are complicated by strong water vapor absorption features that constrain the available gain and dynamic range of the present detection system. Preliminary results suggest that detection limits in this environment of the order of 140 ppmv-m could be achieved with optimum baseline correction.

  15. Absorption measurements of the second overtone band of NO in ambient and combustion gases with a 1.8-μm room-temperature diode laser

    NASA Astrophysics Data System (ADS)

    Sonnenfroh, David M.; Allen, Mark G.

    1997-10-01

    We describe the development of a room-temperature diode sensor for in situ monitoring of combustion-generated NO. The sensor is based on a near-IR diode laser operating near 1.8 μm, which probes isolated transitions in the second overtone (3,0) absorption band of NO. Based on absorption cell data, the sensitivity for ambient atmospheric pressure conditions is of the order of 30 parts in 10^6 by volume for a meter path (ppmv-m), assuming a minimum measurable absorbance of 10^-5. Initial H2-air flame measurements are complicated by strong water vapor absorption features that constrain the available gain and dynamic range of the present detection system. Preliminary results suggest that detection limits in this environment of the order of 140 ppmv-m could be achieved with optimum baseline correction.

  16. Hidden Markov model tracking of continuous gravitational waves from young supernova remnants

    NASA Astrophysics Data System (ADS)

    Sun, L.; Melatos, A.; Suvorova, S.; Moran, W.; Evans, R. J.

    2018-02-01

    Searches for persistent gravitational radiation from nonpulsating neutron stars in young supernova remnants are computationally challenging because of rapid stellar braking. We describe a practical, efficient, semicoherent search based on a hidden Markov model tracking scheme, solved by the Viterbi algorithm, combined with a maximum likelihood matched filter, the F statistic. The scheme is well suited to analyzing data from advanced detectors like the Advanced Laser Interferometer Gravitational Wave Observatory (Advanced LIGO). It can track rapid phase evolution from secular stellar braking and stochastic timing noise torques simultaneously without searching second- and higher-order derivatives of the signal frequency, providing an economical alternative to stack-slide-based semicoherent algorithms. One implementation tracks the signal frequency alone. A second implementation tracks the signal frequency and its first time derivative. It improves the sensitivity by a factor of a few upon the first implementation, but the cost increases by 2 to 3 orders of magnitude.
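
    A brief sketch of the tracking idea, under simplifying assumptions: frequency bins are the hidden states, a synthetic detection statistic stands in for the F-statistic, and the transition model allows the frequency to move by at most one bin per step; the Viterbi recursion then recovers the most likely frequency path.

    import numpy as np

    rng = np.random.default_rng(4)
    n_steps, n_bins = 40, 50

    # synthetic detection statistic: a slowly wandering signal bin plus noise
    true_path = np.clip(25 + np.cumsum(rng.integers(-1, 2, n_steps)), 0, n_bins - 1)
    loglike = rng.standard_normal((n_steps, n_bins))
    loglike[np.arange(n_steps), true_path] += 4.0

    def viterbi_track(loglike):
        n_steps, n_bins = loglike.shape
        score = loglike[0].copy()
        back = np.zeros((n_steps, n_bins), dtype=int)
        for t in range(1, n_steps):
            new_score = np.full(n_bins, -np.inf)
            for q in range(n_bins):
                for dq in (-1, 0, 1):              # the frequency moves one bin at most
                    p = q + dq
                    if 0 <= p < n_bins and score[p] > new_score[q]:
                        new_score[q], back[t, q] = score[p], p
            score = new_score + loglike[t]
        path = np.zeros(n_steps, dtype=int)        # backtrack the most likely path
        path[-1] = int(np.argmax(score))
        for t in range(n_steps - 1, 0, -1):
            path[t - 1] = back[t, path[t]]
        return path

    est = viterbi_track(loglike)
    print("fraction of correctly recovered bins:", np.mean(est == true_path))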

  17. Chemical bond imaging using higher eigenmodes of tuning fork sensors in atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Ebeling, Daniel; Zhong, Qigang; Ahles, Sebastian; Chi, Lifeng; Wegner, Hermann A.; Schirmeisen, André

    2017-05-01

    We demonstrate the ability of resolving the chemical structure of single organic molecules using non-contact atomic force microscopy with higher normal eigenmodes of quartz tuning fork sensors. In order to achieve submolecular resolution, CO-functionalized tips at low temperatures are used. The tuning fork sensors are operated in ultrahigh vacuum in the frequency modulation mode by exciting either their first or second eigenmode. Despite the high effective spring constant of the second eigenmode (on the order of several tens of kN/m), the force sensitivity is sufficiently high to achieve atomic resolution above the organic molecules. This is observed for two different tuning fork sensors with different tip geometries (small tip vs. large tip). These results represent an important step towards resolving the chemical structure of single molecules with multifrequency atomic force microscopy techniques where two or more eigenmodes are driven simultaneously.

  18. Novel techniques for optical sensor using single core multi-layer structures for electric field detection

    NASA Astrophysics Data System (ADS)

    Ali, Amir R.; Kamel, Mohamed A.

    2017-05-01

    This paper studies the effect of the electrostriction force on a single optical dielectric core coated with multiple layers, based on whispering gallery modes (WGM). The sensing element is a dielectric core made of polymeric material coated with multiple layers having different dielectric and mechanical properties. The external electric field deforms the sensing element, causing shifts in its WGM spectrum. The multi-layer structures enhance the body and pressure forces acting on the core of the sensing element. Due to the gradient in the dielectric permittivity, pressure forces are created at the interface between every two layers. Also, the gradient in Young's modulus affects the overall stiffness of the optical sensor. In turn, the sensitivity of the optical sensor to the electric field is increased when the material of each layer is selected properly. A mathematical model is used to test the effect of these multi-layer structures. Two layering techniques are considered to increase the sensor's sensitivity: (i) a pressure force enhancement technique; and (ii) a Young's modulus reduction technique. In the first technique, Young's modulus is kept constant for all layers, while the dielectric permittivity varies. In this technique, the results are affected by the dielectric permittivity of the outer medium surrounding the cavity: if the medium's dielectric permittivity is greater than that of the cavity, then ordering the layers with ascending permittivity yields the highest sensitivity to the applied electric field (the core has the smallest dielectric permittivity), and vice versa. In the second technique, Young's modulus varies along the layers, while the dielectric permittivity has a constant value per layer; here, the descending order enhances the sensitivity. Overall, the results show that a multi-layer cavity based on these techniques enhances the sensitivity compared with a typical polymeric optical sensor.

  19. Nonlinear histogram binning for quantitative analysis of lung tissue fibrosis in high-resolution CT data

    NASA Astrophysics Data System (ADS)

    Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.

    2007-03-01

    Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer aided techniques are necessary, particularly texture analysis techniques which classify various lung tissue types. Second and higher order statistics which relate the spatial variation of the intensity values are good discriminatory features for various textures. The intensity values in lung CT scans range between [-1024, 1024]. Calculation of second order statistics on this range is too computationally intensive, so the data are typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second and higher order statistics for more accurate quantification of diffuse lung disease.
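
    A sketch of how a dynamic program can choose nonlinear histogram bins, under an assumed objective: here the bins minimize the weighted within-bin variance of the gray levels (an illustrative criterion, since the paper's exact cost function is not given in the abstract), and the toy histogram stands in for real CT intensity data.

    import numpy as np

    def optimal_gray_bins(levels, counts, n_bins):
        """Dynamic-programming search for bin edges minimizing within-bin variance."""
        levels = np.asarray(levels, dtype=float)
        counts = np.asarray(counts, dtype=float)
        w = np.concatenate(([0.0], np.cumsum(counts)))
        s = np.concatenate(([0.0], np.cumsum(counts * levels)))
        q = np.concatenate(([0.0], np.cumsum(counts * levels ** 2)))

        def bin_cost(i, j):           # weighted sum of squares of levels[i:j] about their mean
            wt = w[j] - w[i]
            if wt == 0.0:
                return 0.0
            mean = (s[j] - s[i]) / wt
            return (q[j] - q[i]) - wt * mean * mean

        n = len(levels)
        cost = np.full((n_bins + 1, n + 1), np.inf)
        split = np.zeros((n_bins + 1, n + 1), dtype=int)
        cost[0, 0] = 0.0
        for k in range(1, n_bins + 1):
            for j in range(k, n + 1):
                for i in range(k - 1, j):
                    c = cost[k - 1, i] + bin_cost(i, j)
                    if c < cost[k, j]:
                        cost[k, j], split[k, j] = c, i
        edges, j = [], n              # backtrack the optimal bin boundaries
        for k in range(n_bins, 0, -1):
            i = split[k, j]
            edges.append((levels[i], levels[j - 1]))
            j = i
        return edges[::-1]

    # toy histogram over a handful of CT intensity values in [-1024, 1024]
    levels = np.arange(-1024, 1025, 128)
    counts = np.exp(-((levels + 600) / 200.0) ** 2) + 0.2
    print(optimal_gray_bins(levels, counts, n_bins=4))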

  20. Quality assurance in RT-PCR-based BCR/ABL diagnostics--results of an interlaboratory test and a standardization approach.

    PubMed

    Burmeister, T; Maurer, J; Aivado, M; Elmaagacli, A H; Grünebach, F; Held, K R; Hess, G; Hochhaus, A; Höppner, W; Lentes, K U; Lübbert, M; Schäfer, K L; Schafhausen, P; Schmidt, C A; Schüler, F; Seeger, K; Seelig, R; Thiede, C; Viehmann, S; Weber, C; Wilhelm, S; Christmann, A; Clement, J H; Ebener, U; Enczmann, J; Leo, R; Schleuning, M; Schoch, R; Thiel, E

    2000-10-01

    Here we describe the results of an interlaboratory test for RT-PCR-based BCR/ABL analysis. The test was organized in two parts. The number of participating laboratories in the first and second part was 27 and 20, respectively. In the first part samples containing various concentrations of plasmids with the ela2, b2a2 or b3a2 BCR/ABL transcripts were analyzed by PCR. In the second part of the test, cell samples containing various concentrations of BCR/ABL-positive cells were analyzed by RT-PCR. Overall PCR sensitivity was sufficient in approximately 90% of the tests, but a significant number of false positive results were obtained. There were significant differences in sensitivity in the cell-based analysis between the various participants. The results are discussed, and proposals are made regarding the choice of primers, controls, conditions for RNA extraction and reverse transcription.

  1. Enhanced bimolecular exchange reaction through programmed coordination of a five-coordinate oxovanadium complex for efficient redox mediation in dye-sensitized solar cells.

    PubMed

    Oyaizu, Kenichi; Hayo, Noriko; Sasada, Yoshito; Kato, Fumiaki; Nishide, Hiroyuki

    2013-12-07

    Electrochemical reversibility and fast bimolecular exchange reaction found for VO(salen) gave rise to a highly efficient redox mediation to enhance the photocurrent of a dye-sensitized solar cell, leading to an excellent photovoltaic performance with a conversion efficiency of 5.4%. A heterogeneous electron-transfer rate constant at an electrode (k0) and a second-order rate constant for an electron self-exchange reaction (k(ex)) were proposed as key parameters that dominate the charge transport property, which afforded a novel design concept for the mediators based on their kinetic aspects.

  2. Epidemiology of Chronic Wasting Disease: PrPres Detection, Shedding, and Environmental Contamination

    DTIC Science & Technology

    2008-08-01

    encephalopathies (TSEs) in that it occurs in free-ranging as well as captive wild ruminants and environmental contamination appears to play a ... sensitivity and high specificity and second, dogma suggests that current assays for the detection of PrPres utilize protease digestion. Proving a highly ... to achieve specificity as samples require protease digestion, protein precipitation, or extensive processing in order to distinguish PrPres from the

  3. Rate and reaction probability of the surface reaction between ozone and dihydromyrcenol measured in a bench scale reactor and a room-sized chamber

    NASA Astrophysics Data System (ADS)

    Shu, Shi; Morrison, Glenn C.

    2012-02-01

    Low volatility terpenoids emitted from consumer products can react with ozone on surfaces and may significantly alter concentrations of ozone, terpenoids and reaction products in indoor air. We measured the reaction probability and a second-order surface-specific reaction rate for the ozonation of dihydromyrcenol, a representative indoor terpenoid, adsorbed onto polyvinylchloride (PVC), glass, and latex-paint-coated spheres. The reaction probability ranged from (0.06-8.97) × 10^-5 and was very sensitive to humidity, substrate and mass adsorbed. The average surface reaction probability is about 10 times greater than that for the gas-phase reaction. The second-order surface-specific rate coefficient ranged from (0.32-7.05) × 10^-15 cm^4 s^-1 molecule^-1 and was much less sensitive to humidity, substrate, or mass adsorbed. We also measured the ozone deposition velocity due to adsorbed dihydromyrcenol on painted drywall in a room-sized chamber. Based on that, we calculated the rate coefficient ((0.42-1.6) × 10^-15 cm^4 molecule^-1 s^-1), which was consistent with that derived from bench-scale experiments for the latex paint under similar conditions. We predict that more than 95% of dihydromyrcenol oxidation takes place on indoor surfaces, rather than in building air.

  4. Pion single and double charge exchange in the resonance region: Dynamical corrections

    NASA Astrophysics Data System (ADS)

    Johnson, Mikkel B.; Siciliano, E. R.

    1983-04-01

    We consider pion-nucleus elastic scattering and single- and double-charge-exchange scattering to isobaric analog states near the (3,3) resonance within an isospin invariant framework. We extend previous theories by introducing terms into the optical potential U that are quadratic in density and consistent with isospin invariance of the strong interaction. We study the sensitivity of single and double charge exchange angular distributions to parameters of the second-order potential both numerically, by integrating the Klein-Gordon equation, and analytically, by using semiclassical approximations that explicate the dependence of the exact numerical results to the parameters of U. The magnitude and shape of double charge exchange angular distributions are more sensitive to the isotensor term in U than has been hitherto appreciated. An examination of recent experimental data shows that puzzles in the shape of the 18O(π+, π-)18Ne angular distribution at 164 MeV and in the A dependence of the forward double charge exchange scattering on 18O, 26Mg, 42Ca, and 48Ca at the same energy may be resolved by adding an isotensor term in U. NUCLEAR REACTIONS Scattering theory for elastic, single-, and double-charge-exchange scattering to IAS in the region of the P33 resonance. Second-order effects on charge-exchange calculations of σ(A, θ).

  5. Newborn screening for citrin deficiency and carnitine uptake defect using second-tier molecular tests.

    PubMed

    Wang, Li-Yun; Chen, Nien-I; Chen, Pin-Wen; Chiang, Shu-Chuan; Hwu, Wuh-Liang; Lee, Ni-Chung; Chien, Yin-Hsiu

    2013-02-10

    Tandem mass spectrometry (MS/MS) analysis is a powerful tool for newborn screening, and many rare inborn errors of metabolism are currently screened using MS/MS. However, the sensitivity of MS/MS screening for several inborn errors, including citrin deficiency (screened by citrulline level) and carnitine uptake defect (CUD, screened by free carnitine level), is not satisfactory. This study was conducted to determine whether a second-tier molecular test could improve the sensitivity of citrin deficiency and CUD detection without increasing the false-positive rate. Three mutations in the SLC25A13 gene (for citrin deficiency) and one mutation in the SLC22A5 gene (for CUD) were analyzed in newborns who demonstrated an inconclusive primary screening result (with levels between the screening and diagnostic cutoffs). The results revealed that 314 of 46 699 newborns received a second-tier test for citrin deficiency, and two patients were identified; 206 of 30 237 newborns received a second-tier testing for CUD, and one patient was identified. No patients were identified using the diagnostic cutoffs. Although the incidences for citrin deficiency (1:23 350) and CUD (1:30 000) detected by screening are still lower than the incidences calculated from the mutation carrier rates, the second-tier molecular test increases the sensitivity of newborn screening for citrin deficiency and CUD without increasing the false-positive rate. Utilizing a molecular second-tier test for citrin deficiency and carnitine transporter deficiency is feasible.

  6. Micro-/nanoscale multi-field coupling in nonlinear photonic devices

    NASA Astrophysics Data System (ADS)

    Yang, Qing; Wang, Yubo; Tang, Mingwei; Xu, Pengfei; Xu, Yingke; Liu, Xu

    2017-08-01

    The coupling of mechanics/electronics/photonics may improve the performance of nanophotonic devices not only in the linear region but also in the nonlinear region. This review letter mainly presents the recent advances on multi-field coupling in nonlinear photonic devices. The nonlinear piezoelectric effect and piezo-phototronic effects in quantum wells and fibers show that large second-order nonlinear susceptibilities can be achieved, and second harmonic generation and electro-optic modulation can be enhanced and modulated. Strain engineering can tune the lattice structures and induce second order susceptibilities in central symmetry semiconductors. By combining the absorption-based photoacoustic effect and intensity-dependent photobleaching effect, subdiffraction imaging can be achieved. This review will also discuss possible future applications of these novel effects and the perspective of their research. The review can help us develop a deeper knowledge of the substance of photon-electron-phonon interaction in a micro-/nano- system. Moreover, it can benefit the design of nonlinear optical sensors and imaging devices with a faster response rate, higher efficiency, more sensitivity and higher spatial resolution which could be applied in environmental detection, bio-sensors, medical imaging and so on.

  7. Central sensitization in chronic low back pain: A narrative review.

    PubMed

    Sanzarello, Ilaria; Merlini, Luciano; Rosa, Michele Attilio; Perrone, Mariada; Frugiuele, Jacopo; Borghi, Raffaele; Faldini, Cesare

    2016-11-21

    Low back pain is one of the four most common disorders in all regions, and the greatest contributor to disability worldwide, adding 10.7% of total years lost due to this health state. The etiology of chronic low back pain is, in most of the cases (up to 85%), unknown or nonspecific, while the specific causes (specific spinal pathology and neuropathic/radicular disorders) are uncommon. Central sensitization has been recently recognized as a potential pathophysiological mechanism underlying a group of chronic pain conditions, and may be a contributory factor for a sub-group of patients with chronic low back pain. The purposes of this narrative review are twofold. First, to describe central sensitization and its symptoms and signs in patients with chronic pain disorders in order to allow its recognition in patients with nonspecific low back pain. Second, to provide general treatment principles of chronic low back pain with particular emphasis on pharmacotherapy targeting central sensitization.

  8. Computation of the stability derivatives via CFD and the sensitivity equations

    NASA Astrophysics Data System (ADS)

    Lei, Guo-Dong; Ren, Yu-Xin

    2011-04-01

    The method to calculate the aerodynamic stability derivatives of aircraft by using the sensitivity equations is extended to flows with shock waves in this paper. Using the newly developed second-order cell-centered finite volume scheme on the unstructured grid, the unsteady Euler equations and sensitivity equations are solved simultaneously in a non-inertial frame of reference, so that the aerodynamic stability derivatives can be calculated for aircraft with complex geometries. Based on the numerical results, the behavior of the aerodynamic sensitivity parameters near the shock wave is discussed. Furthermore, the stability derivatives are analyzed for supersonic and hypersonic flows. The numerical results for the stability derivatives are found to be in good agreement with theoretical results for supersonic flows, and variations of the aerodynamic force and moment predicted by the stability derivatives are very close to those obtained by CFD simulation for both supersonic and hypersonic flows.

  9. Creativity as a Key Driver for Designing Context Sensitive Health Informatics.

    PubMed

    Zhou, Chunfang; Nøhr, Christian

    2017-01-01

    In order to face the increasing challenges of complexity and uncertainty in health care practice, this paper aims to discuss how creativity can contribute to the design of new technologies in health informatics systems. It first introduces the background, highlighting creativity as a missing element in recent studies on context-sensitive health informatics. Second, the concept of creativity and its relationship with technology design activities is discussed from a socio-cultural perspective. Third, the roles of creativity in designing new health informatics technologies that meet needs of high context sensitivity are examined. Finally, a series of potential strategies is suggested to improve creativity among technology designers working in healthcare industries. Briefly, this paper bridges two areas of study, creativity and context-sensitive health informatics, through issues of technology design, which also indicates its significance for future research.

  10. Study on the ternary mixed ligand complex of palladium(II)-aminophylline-fluorescein sodium by resonance Rayleigh scattering, second-order scattering and frequency doubling scattering spectrum and its analytical application.

    PubMed

    Chen, Peili; Liu, Shaopu; Liu, Zhongfang; Hu, Xiaoli

    2011-01-01

    The interaction between palladium(II)-aminophylline and fluorescein sodium was investigated by resonance Rayleigh scattering, second-order scattering and frequency doubling scattering spectra. In pH 4.4 Britton-Robinson (BR) buffer medium, aminophylline (Ami) reacted with palladium(II) to form the chelate cation ([Pd(Ami)]2+), which further reacted with fluorescein sodium (FS) to form the ternary mixed ligand complex [Pd(Ami)(FS)2]. As a result, resonance Rayleigh scattering (RRS), second-order scattering (SOS) and frequency doubling scattering (FDS) were enhanced. The maximum scattering wavelengths of [Pd(Ami)(FS)2] were located at 300 nm (RRS), 650 nm (SOS) and 304 nm (FDS). The scattering intensities were proportional to the Ami concentration in a certain range and the detection limits were 7.3 ng mL(-1) (RRS), 32.9 ng mL(-1) (SOS) and 79.1 ng mL(-1) (FDS), respectively. Based on this, new simple, rapid, and sensitive scattering methods have been proposed to determine Ami in urine and serum samples. Moreover, the formation mechanism of [Pd(Ami)(FS)2] and the reasons for the enhancement of RRS were fully discussed. Crown Copyright © 2010. Published by Elsevier B.V. All rights reserved.

  11. Determination of acidity and nucleophilicity in thiols by reaction with monobromobimane and fluorescence detection.

    PubMed

    Sardi, Florencia; Manta, Bruno; Portillo-Ledesma, Stephanie; Knoops, Bernard; Comini, Marcelo A; Ferrer-Sueta, Gerardo

    2013-04-01

    A method based on the differential reactivity of thiol and thiolate with monobromobimane (mBBr) has been developed to measure nucleophilicity and acidity of protein and low-molecular-weight thiols. Nucleophilicity of the thiolate is measured as the pH-independent second-order rate constant of its reaction with mBBr. The ionization constants of the thiols are obtained through the pH dependence of either second-order rate constant or initial rate of reaction. For readily available thiols, the apparent second-order rate constant is measured at different pHs and then plotted and fitted to an appropriate pH function describing the observed number of ionization equilibria. For less available thiols, such as protein thiols, the initial rate of reaction is determined in a wide range of pHs and fitted to the appropriate pH function. The method presented here shows excellent sensitivity, allowing the use of nanomolar concentrations of reagents. The method is suitable for scaling and high-throughput screening. Example determinations of nucleophilicity and pK(a) are presented for captopril and cysteine as low-molecular-weight thiols and for human peroxiredoxin 5 and Trypanosoma brucei monothiol glutaredoxin 1 as protein thiols. Copyright © 2013 Elsevier Inc. All rights reserved.
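
    For a thiol with a single ionization, the pH dependence described above reduces to the standard relation (symbols here are generic, not the paper's notation)

    k_{\mathrm{app}}(\mathrm{pH}) = k_{\mathrm{RS^-}}\,\frac{K_a}{K_a + [\mathrm{H^+}]} = \frac{k_{\mathrm{RS^-}}}{1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}}},

    so fitting the measured apparent rate constant (or initial rate) against pH yields both the pH-independent thiolate nucleophilicity k_RS- and the pK_a in a single experiment.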

  12. Simultaneous determination of umbelliferone and scopoletin in Tibetan medicine Saussurea laniceps and traditional Chinese medicine Radix angelicae pubescentis using excitation-emission matrix fluorescence coupled with second-order calibration method

    NASA Astrophysics Data System (ADS)

    Wang, Li; Wu, Hai-Long; Yin, Xiao-Li; Hu, Yong; Gu, Hui-Wen; Yu, Ru-Qin

    2017-01-01

    A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method is presented for simultaneous determination of umbelliferone and scopoletin in Tibetan medicine Saussurea laniceps (SL) and traditional Chinese medicine Radix angelicae pubescentis (RAP). Using the strategy of combining EEM fluorescence data with a second-order calibration method based on the alternating trilinear decomposition (ATLD) algorithm, the simultaneous quantification of umbelliferone and scopoletin in the two different complex systems was achieved successfully, even in the presence of potential interferents. The pretreatment is simple due to the "second-order advantage" and the use of "mathematical separation" instead of awkward "physical or chemical separation". Satisfactory results have been achieved, with the limits of detection (LODs) of umbelliferone and scopoletin being 0.06 ng mL⁻¹ and 0.16 ng mL⁻¹, respectively. The average spike recoveries of umbelliferone and scopoletin are 98.8 ± 4.3% and 102.5 ± 3.3%, respectively. In addition, the HPLC-DAD method was used to further validate the presented strategy, and a t-test indicates that the prediction results of the two methods have no significant differences. Satisfactory experimental results imply that our method is fast, low-cost and sensitive when compared with the HPLC-DAD method.
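
    ATLD itself is not widely available in open-source form, but the same trilinear (PARAFAC/CP) model can be fitted with standard alternating least squares, for example via the tensorly package, as a stand-in. The sketch below builds a synthetic trilinear EEM stack and checks that the decomposed sample scores track the underlying concentrations; all sizes, profiles and the use of tensorly are assumptions for illustration.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)

# Synthetic EEM stack standing in for measured data: two analytes plus one
# unknown interferent, trilinear by construction, with a little noise.
n_samples, n_ex, n_em = 12, 30, 40
conc = rng.uniform(0.1, 1.0, (n_samples, 3))    # true relative concentrations
ex = np.abs(rng.standard_normal((n_ex, 3)))     # excitation profiles
em = np.abs(rng.standard_normal((n_em, 3)))     # emission profiles
X = np.einsum('ir,jr,kr->ijk', conc, ex, em) \
    + 0.01 * rng.standard_normal((n_samples, n_ex, n_em))

# Trilinear (CP) decomposition; the paper uses ATLD, here ALS PARAFAC from
# tensorly is used as a stand-in that exploits the same trilinear model.
weights, (A, B, C) = parafac(tl.tensor(X), rank=3, n_iter_max=500, tol=1e-10)
# A: sample scores, B/C: recovered excitation/emission profiles.

# Each recovered score column should be proportional to one analyte's
# concentration; in a real calibration, the scores of the standards would be
# regressed against known concentrations to predict the unknowns.
scores = np.asarray(A)
for r in range(3):
    corr = [abs(np.corrcoef(scores[:, r], conc[:, a])[0, 1]) for a in range(3)]
    print(f"component {r}: best-matching analyte = {int(np.argmax(corr))}, |r| = {max(corr):.3f}")
```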

  13. MASSIM, the Milli-Arc-Second Structure Imager

    NASA Technical Reports Server (NTRS)

    Skinner, Gerry

    2008-01-01

    The MASSIM (Milli-Arc-Second Structure Imager) mission will use a set of achromatic diffractive-refractive Fresnel lenses to achieve imaging in the X-ray band with unprecedented angular resolution. It has been proposed for study within the context of NASA's "Astrophysics Strategic Mission Concept Studies" program. Lenses on an optics spacecraft will focus 5-11 keV X-rays onto detectors on a second spacecraft flying in formation 1000 km away. It will have a point-source sensitivity comparable with that of the current generation of major X-ray observatories (Chandra, XMM-Newton) but an angular resolution some three orders of magnitude better. MASSIM is optimized for the study of jets and other phenomena that occur in the immediate vicinity of black holes and neutron stars. It can also be used for studying other phenomena on the milli-arc-second scale, such as those involving proto-stars, the surfaces and surroundings of nearby active stars and interacting winds.

  14. Numerical Analysis of Orbital Perturbation Effects on Inclined Geosynchronous SAR

    PubMed Central

    Dong, Xichao; Hu, Cheng; Long, Teng; Li, Yuanhao

    2016-01-01

    The geosynchronous synthetic aperture radar (GEO SAR) is susceptible to orbit perturbations, leading to orbit drifts and variations. The influences behave very differently from those in low Earth orbit (LEO) SAR. In this paper, the impacts of perturbations on GEO SAR orbital elements are modelled based on the perturbed dynamic equations, and then, the focusing is analyzed theoretically and numerically by using the Systems Tool Kit (STK) software. The accurate GEO SAR slant range histories can be calculated according to the perturbed orbit positions in STK. The perturbed slant range errors are mainly the first and second derivatives, leading to image drifts and defocusing. Simulations of the point target imaging are performed to validate the aforementioned analysis. In the GEO SAR with an inclination of 53° and an argument of perigee of 90°, the Doppler parameters and the integration time are different and dependent on the geometry configurations. Thus, the influences are varying at different orbit positions: at the equator, the first-order phase errors should be mainly considered; at the perigee and apogee, the second-order phase errors should be mainly considered; at other positions, first-order and second-order exist simultaneously. PMID:27598168
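
    A common way to quantify the statement about first- and second-order slant-range errors is to convert them to phase errors over the synthetic aperture: the linear term shifts the Doppler centroid (image drift), while a quadratic phase error larger than about π/4 at the aperture edge defocuses the image. The sketch below applies that standard rule of thumb to an invented slant-range error history; the wavelength, integration time and error coefficients are illustrative, not values from the paper.

```python
import numpy as np

wavelength = 0.24        # L-band carrier wavelength (m), illustrative
T_int = 200.0            # synthetic-aperture integration time (s), illustrative

# Hypothetical slant-range error history over the integration time (m),
# e.g. obtained by differencing perturbed and unperturbed orbit geometries.
t = np.linspace(-T_int / 2, T_int / 2, 2001)
delta_r = 0.004 * t + 2.0e-6 * t**2          # linear drift + quadratic term

# Separate the linear and quadratic range-error coefficients with a polynomial fit.
a2, a1, _ = np.polyfit(t, delta_r, 2)

# Two-way phase error: phi(t) = (4*pi/lambda) * delta_r(t).
# The linear term shifts the Doppler centroid (image drift); the quadratic
# term defocuses the image once it exceeds ~pi/4 at the aperture edge.
phase_ramp_edge = 4 * np.pi / wavelength * a1 * (T_int / 2)
qpe_edge = 4 * np.pi / wavelength * a2 * (T_int / 2) ** 2

print(f"linear phase ramp at aperture edge:     {phase_ramp_edge:.1f} rad")
print(f"quadratic phase error at aperture edge: {qpe_edge:.2f} rad")
print("defocusing expected" if abs(qpe_edge) > np.pi / 4 else "within the pi/4 criterion")
```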

  15. UNCERTAINTY ANALYSIS IN WATER QUALITY MODELING USING QUAL2E

    EPA Science Inventory

    A strategy for incorporating uncertainty analysis techniques (sensitivity analysis, first order error analysis, and Monte Carlo simulation) into the mathematical water quality model QUAL2E is described. The model, named QUAL2E-UNCAS, automatically selects the input variables or p...

  16. Noninvasive Dry Eye Assessment Using High-Technology Ophthalmic Examination Devices.

    PubMed

    Yamaguchi, Masahiko; Sakane, Yuri; Kamao, Tomoyuki; Zheng, Xiaodong; Goto, Tomoko; Shiraishi, Atsushi; Ohashi, Yuichi

    2016-11-01

    Recently, the number of dry eye cases has dramatically increased. Thus, it is important that easy screening, exact diagnoses, and suitable treatments be available. We developed 3 original and noninvasive assessments for this disorder. First, a DR-1 dry eye monitor was used to determine the tear meniscus height quantitatively by capturing a tear meniscus digital image that was analyzed by Meniscus Processor software. The DR-1 meniscus height value significantly correlated with the fluorescein meniscus height (r = 0.06, Bland-Altman analysis). At a cutoff value of 0.22 mm, sensitivity of the dry eye diagnosis was 84.1% with 90.9% specificity. Second, the Tear Stability Analysis System was used to quantitatively measure tear film stability using a topographic modeling system corneal shape analysis device. Tear film stability was objectively and quantitatively evaluated every second during sustained eye openings. The Tear Stability Analysis System is currently installed in an RT-7000 autorefractometer and topographer to automate the diagnosis of dry eye. Third, the Ocular Surface Thermographer uses ophthalmic thermography for diagnosis. The decrease in ocular surface temperature in dry eyes was significantly greater than that in normal eyes (P < 0.001) at 10 seconds after eye opening. Decreased corneal temperature correlated significantly with the tear film breakup time (r = 0.572; P < 0.001). When changes in the ocular surface temperature of the cornea were used as indicators for dry eye, sensitivity was 0.83 and specificity was 0.80 after 10 seconds. This article describes the details and potential of these 3 noninvasive dry eye assessment systems.

  17. The weaker effects of First-order mean motion resonances in intermediate inclinations

    NASA Astrophysics Data System (ADS)

    Chen, YuanYuan; Quillen, Alice C.; Ma, Yuehua; Chinese Scholar Council, the National Natural Science Foundation of China, the Natural Science Foundation of Jiangsu Province, the Minor Planet Foundation of the Purple Mountain Observatory

    2017-10-01

    During planetary migration, a planet or planetesimal can be captured into a low-order mean motion resonance with another planet. Using a second-order expansion of the disturbing function in eccentricity and inclination, we explore the sensitivity of the capture probability of first-order mean motion resonances to orbital inclination. We find that second-order inclination contributions affect the resonance strengths, reducing them at intermediate inclinations of around 10-40° for major first-order resonances. We also integrated the Hamilton's equations with arbitrary initial arguments, and provided the varying tendencies of resonance capture probabilities versus orbital inclinations for different resonances and different particle or planetary eccentricities. Resonance-weaker ranges in inclinations generally appear at the places where resonance strengths are low, around 10-40° in general. The weaker ranges disappear with a higher particle eccentricity (≳0.05) or planetary eccentricity (≳0.05). These resonance-weaker ranges in inclinations implies that intermediate-inclination objects are less likely to be disturbed or captured into the first-order resonances, which would make them entering into the chaotic area around Neptune with a larger fraction than those with low inclinations, during the epoch of Neptune's outward migration. The privilege of high-inclination particles leave them to be more likely captured into Neptune Trojans, which might be responsible for the unexpected high fraction of high-inclination Neptune Trojans.

  18. Dynamic Modeling of the Human Coagulation Cascade Using Reduced Order Effective Kinetic Models (Open Access)

    DTIC Science & Technology

    2015-03-16

    We conducted a global sensitivity analysis, using the variance-based method of Sobol, to estimate which parameters controlled the performance of the reduced-order model; the shaded region around each total sensitivity value indicates the maximum uncertainty in that value estimated by the Sobol method.
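
    A generic variance-based Sobol analysis of the kind described in the snippet can be reproduced with the SALib package; the coagulation model itself is not available here, so a simple stand-in function is used and all parameter names and bounds are invented.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Stand-in model: the actual reduced-order coagulation model is not reproduced
# here; any deterministic function of the uncertain parameters can be analyzed
# the same way.
def model(params):
    k1, k2, k3 = params
    return k1 * np.exp(-k2) + k3 ** 2

problem = {
    "num_vars": 3,
    "names": ["k1", "k2", "k3"],
    "bounds": [[0.1, 1.0], [0.0, 2.0], [0.5, 1.5]],
}

# Saltelli sampling: N * (2D + 2) model evaluations for D parameters.
X = saltelli.sample(problem, 1024)
Y = np.array([model(x) for x in X])

Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order S1 = {s1:.3f}, total-order ST = {st:.3f}")
```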

  19. Atomic magnetometer-based ultra-sensitive magnetic microscopy

    NASA Astrophysics Data System (ADS)

    Kim, Young Jin; Savukov, Igor

    2016-03-01

    An atomic magnetometer (AM) based on lasers and alkali-metal vapor cells is currently the most sensitive non-cryogenic magnetic-field sensor. Many applications in neuroscience and other fields require high resolution, high sensitivity magnetic microscopic measurements. In order to meet this need we combined a cm-size spin-exchange relaxation-free AM with a flux guide (FG) to produce an ultra-sensitive FG-AM magnetic microscope. The FG serves to transmit the target magnetic flux to the AM thus enhancing both the sensitivity and resolution for tiny magnetic objects. In this talk, we will describe a prototype FG-AM device and present experimental and numerical tests of its sensitivity and resolution. We also demonstrate that an optimized FG-AM achieves high resolution and high sensitivity sufficient to detect a magnetic field of a single neuron in a few seconds, which would be an important milestone in neuroscience. We anticipate that this unique device can be applied to the detection of a single neuron, the detection of magnetic nano-particles, which in turn are very important for detection of target molecules in national security and medical diagnostics, and non-destructive testing.

  20. Comparison of sensitivity and reading time for the use of computer aided detection (CAD) of pulmonary nodules at MDCT as concurrent or second reader

    NASA Astrophysics Data System (ADS)

    Beyer, F.; Zierott, L.; Fallenberg, E. M.; Juergens, K.; Stoeckel, J.; Heindel, W.; Wormanns, D.

    2006-03-01

    Purpose: To compare sensitivity and reading time when using CAD as a second reader versus as a concurrent reader. Materials and Methods: Fifty chest MDCT scans acquired for clinical indications were analysed independently by four radiologists two times: first with CAD as a concurrent reader (CAD results displayed simultaneously with the radiologist's primary reading); then, after a median of 14 weeks, with CAD as a second reader (CAD results shown after completion of a reading session without CAD). A prototype version of Siemens LungCAD (Siemens, Malvern, USA) was used. Sensitivities and reading times for detecting nodules >= 4 mm were recorded for concurrent reading, reading without CAD and second reading. In a consensus conference, false positive findings were eliminated. Student's t-test was used to compare sensitivities and reading times. Results: 108 true positive nodules were found. Mean sensitivity was .68 for reading without CAD, .68 for concurrent reading and .75 for second reading. Differences in sensitivity were significant between concurrent and second reading (p<.001) and between reading without CAD and second reading (p=.001). Mean reading time for concurrent reading (274 s) was significantly shorter than for reading without CAD (294 s; p=.04) and for second reading (337 s; p<.001). New work to be presented: To our knowledge this is the first study that compares sensitivities and reading times between the use of CAD as a concurrent reader and as a second reader. Conclusion: CAD can be used either to speed up reading of chest CT cases for pulmonary nodules without loss of sensitivity (as a concurrent reader) or, but not both, to increase sensitivity at the cost of longer reading time (as a second reader).
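
    The sensitivity comparisons described above amount to paired comparisons across readers evaluated under different reading modes. A minimal sketch of such a paired t-test is shown below with invented per-reader sensitivities; it is not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-reader sensitivities for the three reading modes
# (values are illustrative, not the study data).
sens_no_cad     = np.array([0.66, 0.70, 0.65, 0.71])
sens_concurrent = np.array([0.67, 0.69, 0.66, 0.70])
sens_second     = np.array([0.74, 0.77, 0.73, 0.76])

# Paired (repeated-measures) t-tests between reading modes.
for label, a, b in [("concurrent vs second", sens_concurrent, sens_second),
                    ("no CAD vs second", sens_no_cad, sens_second)]:
    t, p = stats.ttest_rel(a, b)
    print(f"{label}: t = {t:.2f}, p = {p:.4f}")
```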

  1. Diffraction-Based Optical Switch

    NASA Technical Reports Server (NTRS)

    Sperno, Stevan M. (Inventor); Fuhr, Peter L. (Inventor); Schipper, John F. (Inventor)

    2005-01-01

    Method and system for controllably redirecting a light beam, having a central wavelength lambda, from a first light-receiving site to a second light-receiving site. A diffraction grating is attached to or part of a piezoelectric substrate, which is connected to one or two controllable voltage difference sources. When a substrate voltage difference is changed and the diffraction grating length in each of one or two directions is thereby changed, at least one of the diffraction angle, the diffraction order and the central wavelength is controllably changed. A diffracted light beam component, having a given wavelength, diffraction angle and diffraction order, that is initially received at a first light receiving site (e.g., a detector or optical fiber) is thereby controllably shifted or altered and can be received at a second light receiving site. A polynomially stepped, chirped grating is used in one embodiment. In another embodiment, an incident light beam, having at least one of first and second wavelengths, lambda1 and lambda2, is received and diffracted at a first diffraction grating to provide a first diffracted beam. The first diffracted beam is received and diffracted at a second diffraction grating to produce a second diffracted beam. The second diffracted beam is received at a light-sensitive transducer, having at least first and second spaced apart light detector elements that are positioned so that, when the incident light beam has wavelength lambda1 or lambda2 (lambda1 not equal to lambda2), the second diffracted beam is received at the first element or at the second element, respectively; change in a selected physical parameter at the second grating can also be sensed or measured. A sequence of spaced apart light detector elements can be positioned along a linear or curvilinear segment with equal or unequal spacing.
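
    The mechanism rests on the ordinary grating equation: straining the piezoelectric substrate changes the grating period d, which shifts the diffraction angle for a fixed wavelength and order. A small sketch of that relation follows, with all wavelengths, periods and strains chosen purely for illustration.

```python
import numpy as np

def diffraction_angle(wavelength, period, order, incidence_deg):
    """Solve the grating equation d*(sin(theta_i) + sin(theta_m)) = m*lambda
    for the diffracted angle theta_m (degrees)."""
    s = order * wavelength / period - np.sin(np.radians(incidence_deg))
    return np.degrees(np.arcsin(s))

lam = 1550e-9        # central wavelength (m), illustrative
d0 = 2.0e-6          # unstrained grating period (m), illustrative
m = 1                # diffraction order
theta_i = 10.0       # incidence angle (deg)

for strain in (0.0, 5e-4, 1e-3):   # piezoelectric strain applied to the substrate
    d = d0 * (1.0 + strain)
    print(f"strain = {strain:.1e}: theta_m = {diffraction_angle(lam, d, m, theta_i):.4f} deg")
```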

  2. Study on Hyperspectral Characteristics and Estimation Model of Soil Mercury Content

    NASA Astrophysics Data System (ADS)

    Liu, Jinbao; Dong, Zhenyu; Sun, Zenghui; Ma, Hongchao; Shi, Lei

    2017-12-01

    In this study, the mercury content of 44 soil samples from the Guan Zhong area of Shaanxi Province was used as the data source, and soil reflectance spectra were acquired with an ASD Field Spec HR spectrometer (350-2500 nm). The reflection characteristics of soils with different mercury contents and the effect of different pre-treatment methods on the establishment of a soil heavy metal spectral inversion model were compared. First-order differential, second-order differential and logarithmic reflectance transformations were carried out after pre-treatment with NOR, MSC and SNV, and the reflectance bands sensitive to mercury content under the different mathematical transformations were selected. A hyperspectral estimation model was then established by regression. The chemical analysis shows that there is serious Hg pollution in the study area. The results show that: (1) the reflectivity decreases with increasing mercury content, and the mercury-sensitive regions are located at 392-455 nm, 923-1040 nm and 1806-1969 nm. (2) Combining the NOR, MSC and SNV transformations with differential transformations can enhance the heavy metal information in the soil spectra, and combining highly correlated bands can improve the stability and prediction ability of the model. (3) The partial least squares regression model based on the logarithm of the original reflectance performs best and has the highest precision (Rc2 = 0.9912, RMSEC = 0.665; Rv2 = 0.9506, RMSEP = 1.93), enabling rapid prediction of the mercury content in this region.
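
    A partial least squares model on log-transformed reflectance, as reported best above, can be sketched with scikit-learn. The data below are synthetic stand-ins (random spectra with an artificial Hg-dependent feature), so the numbers will not match the paper; the point is only the preprocessing-plus-PLS workflow.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in for the 44 soil spectra: reflectance at 350-2500 nm with a
# weak linear dependence on Hg content plus noise (illustrative only).
n_samples, n_bands = 44, 2151
hg = rng.uniform(0.1, 5.0, n_samples)                 # Hg content (mg/kg)
base = rng.random(n_bands)
sensitivity = np.zeros(n_bands)
sensitivity[40:150] = -0.02                           # "sensitive bands" (illustrative)
reflectance = (0.3 + 0.5 * base + np.outer(hg, sensitivity)
               + 0.005 * rng.standard_normal((n_samples, n_bands)))

# Pre-treatment reported as best in the paper: logarithm of the reflectance.
X = np.log10(reflectance)

X_cal, X_val, y_cal, y_val = train_test_split(X, hg, test_size=0.3, random_state=1)
pls = PLSRegression(n_components=5)                   # latent variables: illustrative
pls.fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()

print(f"Rv^2  = {r2_score(y_val, y_pred):.3f}")
print(f"RMSEP = {np.sqrt(mean_squared_error(y_val, y_pred)):.3f}")
```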

  3. Binary photonic crystal for refractometric applications (TE case)

    NASA Astrophysics Data System (ADS)

    Taya, Sofyan A.; Shaheen, Somaia A.

    2018-04-01

    In this work, a binary photonic crystal is proposed as a refractometric sensor. The dispersion relation and the sensitivity are derived for transverse electric (TE) mode. In our analysis, the first layer is considered to be the analyte layer and the second layer is assumed to be left-handed material (LHM), dielectric or metal. It is found that the sensitivity of the LHM structure is the highest among other structures. It is possible for LHM photonic crystal to achieve a sensitivity improvement of 412% compared to conventional slab waveguide sensor.

  4. Effect of local magnetic field disturbances on inertial measurement units accuracy.

    PubMed

    Robert-Lachaine, Xavier; Mecheri, Hakim; Larue, Christian; Plamondon, André

    2017-09-01

    Inertial measurement units (IMUs), a practical motion analysis technology for field acquisition, have magnetometers to improve segment orientation estimation. However, sensitivity to magnetic disturbances can affect their accuracy. The objective of this study was to determine the joint angles accuracy of IMUs under different timing of magnetic disturbances of various durations and to evaluate a few correction methods. Kinematics from 12 individuals were obtained simultaneously with an Xsens system where an Optotrak cluster acting as the reference system was affixed to each IMU. A handling task was executed under normal laboratory conditions and imposed magnetic disturbances. Joint angle RMSE was used to conduct a three-way repeated measures analysis of variance in order to contrast the following disturbance factors: duration (0, 30, 60, 120 and 240 s), timing (during the disturbance, directly after it and a 30-second delay after it) and axis (X, Y and Z). The highest joint angle RMSE was observed on rotations about the Y longitudinal axis and during the longer disturbances. It stayed high directly after a disturbance, but returned close to baseline after a 30-second delay. When magnetic disturbances are experienced, waiting 30 s in a normal condition is recommended as a way to restore the IMUs' initial accuracy. The correction methods performed modestly or poorly in the reduction of joint angle RMSE. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Efficacy and safety of cytotoxic drug chemotherapy after first-line EGFR-TKI treatment in elderly patients with non-small-cell lung cancer harboring sensitive EGFR mutations.

    PubMed

    Imai, Hisao; Minemura, Hiroyuki; Sugiyama, Tomohide; Yamada, Yutaka; Kaira, Kyoichi; Kanazawa, Kenya; Kasai, Takashi; Kaburagi, Takayuki; Minato, Koichi

    2018-05-08

    Epidermal growth factor receptor-tyrosine kinase inhibitor (EGFR-TKI) is effective as first-line chemotherapy for patients with advanced non-small-cell lung cancer (NSCLC) harboring sensitive EGFR mutations. However, whether the efficacy of second-line cytotoxic drug chemotherapy after first-line EGFR-TKI treatment is similar to that of first-line cytotoxic drug chemotherapy in elderly patients aged ≥ 75 years harboring sensitive EGFR mutations is unclear. Therefore, we aimed to investigate the efficacy and safety of cytotoxic drug chemotherapy after first-line EGFR-TKI treatment in elderly patients with NSCLC harboring sensitive EGFR mutations. We retrospectively evaluated the clinical effects and safety profiles of second-line cytotoxic drug chemotherapy after first-line EGFR-TKI treatment in elderly patients with NSCLC harboring sensitive EGFR mutations (exon 19 deletion/exon 21 L858R mutation). Between April 2008 and December 2015, 78 elderly patients with advanced NSCLC harboring sensitive EGFR mutations received first-line EGFR-TKI at four Japanese institutions. Baseline characteristics, regimens, responses to first- and second-line treatments, whether or not patients received subsequent treatment, and if not, the reasons for non-administration were recorded. Overall, 20 patients with a median age of 79.5 years (range 75-85 years) were included in our analysis. The overall response, disease control, median progression-free survival, and overall survival rates were 15.0, 60.0%, 2.4, and 13.2 months, respectively. Common adverse events included leukopenia, neutropenia, anemia, thrombocytopenia, malaise, and anorexia. Major grade 3 or 4 toxicities included leukopenia (25.0%) and neutropenia (45.0%). No treatment-related deaths were noted. Second-line cytotoxic drug chemotherapy after first-line EGFR-TKI treatment among elderly patients with NSCLC harboring sensitive EGFR mutations was effective and safe and showed equivalent outcomes to first-line cytotoxic drug chemotherapy.

  6. The radiated noise from isotropic turbulence revisited

    NASA Technical Reports Server (NTRS)

    Lilley, Geoffrey M.

    1993-01-01

    The noise radiated from isotropic turbulence at low Mach numbers and high Reynolds numbers, as derived by Proudman (1952), was the first application of Lighthill's Theory of Aerodynamic Noise to a complete flow field. The theory presented by Proudman involves the assumption of the neglect of retarded time differences and so replaces the second-order retarded-time and space covariance of Lighthill's stress tensor, Tij, and in particular its second time derivative, by the equivalent simultaneous covariance. This assumption is a valid approximation in the derivation of the ∂²Tij/∂t² covariance at low Mach numbers, but is not justified when that covariance is reduced to the sum of products of the time derivatives of equivalent second-order velocity covariances as required when Gaussian statistics are assumed. The present paper removes these assumptions and finds that although the changes in the analysis are substantial, the change in the numerical result for the total acoustic power is small. The present paper also considers an alternative analysis which does not neglect retarded times. It makes use of the Lighthill relationship, whereby the fourth-order Tij retarded-time covariance is evaluated from the square of the similar second-order covariance, which is assumed known. In this derivation, no statistical assumptions are involved. This result, using distributions for the second-order space-time velocity-squared covariance based on the Direct Numerical Simulation (DNS) results of both Sarkar and Hussaini (1993) and Dubois (1993), is compared with the re-evaluation of Proudman's original model. These results are then compared with the sound power derived from a phenomenological model based on simple approximations to the retarded-time/space covariance of Txx. Finally, the recent numerical solutions of Sarkar and Hussaini (1993) for the acoustic power are compared with the results obtained from the analytic solutions.

  7. On the accuracy of Whitham's method. [for steady ideal gas flow past cones

    NASA Technical Reports Server (NTRS)

    Zahalak, G. I.; Myers, M. K.

    1974-01-01

    The steady flow of an ideal gas past a conical body is studied by the method of matched asymptotic expansions and by Whitham's method in order to assess the accuracy of the latter. It is found that while Whitham's method does not yield a correct asymptotic representation of the perturbation field to second order in regions where the flow ahead of the Mach cone of the apex is disturbed, it does correctly predict the changes of the second-order perturbation quantities across a shock (the first-order shock strength). The results of the analysis are illustrated by a special case of a flat, rectangular plate at incidence.

  8. Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos

    PubMed Central

    Santonja, F.; Chen-Charpentier, B.

    2012-01-01

    Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models consider that the parameters are deterministic variables. But in practice, the transmission parameters present large variability and it is not possible to determine them exactly, and it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we will apply the approach to an obesity epidemic model. PMID:22927889
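
    The non-intrusive flavor of this idea can be illustrated with a one-parameter toy model: expand the output in Hermite polynomials of a standard normal variable and compute the coefficients by Gaussian quadrature, from which the first two moments follow. The model, parameter values and expansion order below are all invented for illustration and are not the paper's obesity model.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from scipy.integrate import solve_ivp
from math import factorial, sqrt, pi

# Toy epidemic-type model with one uncertain parameter: the transmission rate
# beta ~ Normal(mu, sigma).  Output of interest: infected fraction at t = T.
mu_beta, sigma_beta = 0.30, 0.05
gamma, I0, T = 0.10, 0.01, 50.0

def output(beta):
    rhs = lambda t, I: beta * I * (1.0 - I) - gamma * I
    sol = solve_ivp(rhs, (0.0, T), [I0], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

# Non-intrusive polynomial chaos with probabilists' Hermite polynomials He_k(xi),
# xi ~ N(0,1) and beta = mu + sigma*xi.  Coefficients c_k = E[f(xi) He_k(xi)] / k!.
order, n_quad = 4, 10
nodes, weights = hermegauss(n_quad)          # quadrature for weight exp(-x^2/2)
f_vals = np.array([output(mu_beta + sigma_beta * x) for x in nodes])

coeffs = []
for k in range(order + 1):
    He_k = hermeval(nodes, [0.0] * k + [1.0])
    coeffs.append(np.sum(weights * f_vals * He_k) / (sqrt(2.0 * pi) * factorial(k)))

mean = coeffs[0]                                               # first-order moment
variance = sum(c**2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(f"PCE mean = {mean:.5f}, PCE std = {np.sqrt(variance):.5f}")
```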

  9. Study of nitrogen flowing afterglow with mercury vapor injection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazánková, V., E-mail: mazankova@fch.vutbr.cz; Krčma, F.; Trunec, D.

    2014-10-21

    The reaction kinetics in nitrogen flowing afterglow with mercury vapor addition was studied by optical emission spectroscopy. The DC flowing post-discharge in pure nitrogen was created in a quartz tube at a total gas pressure of 1000 Pa and a discharge power of 130 W. The mercury vapors were added into the afterglow at a distance of 30 cm behind the active discharge. The optical emission spectra were measured along the flow tube. Three nitrogen spectral systems – the first positive, the second positive, and the first negative – and, after the mercury vapor addition, also the mercury resonance line at 254 nm observed in the second order of the spectrum were identified. The measurement of the spatial dependence of the mercury line intensity showed very slow decay of its intensity, and the decay rate did not depend on the mercury concentration. In order to explain this behavior, a kinetic model for the reactions in the afterglow was developed. This model showed that the state Hg(6³P₁), which is the upper state of the mercury UV resonance line at 254 nm, is produced by excitation transfer from nitrogen N₂(A³Σu⁺) metastables to mercury atoms. However, the N₂(A³Σu⁺) metastables are also produced by the reactions following N atom recombination, and this limits the decay of the N₂(A³Σu⁺) metastable concentration and results in the very slow decay of the mercury resonance line intensity. It was found that N atoms are the most important particles in this late nitrogen afterglow; their volume recombination starts a chain of reactions which produce excited states of molecular nitrogen. In order to explain the decrease of the N atom concentration, it was also necessary to include the surface recombination of N atoms in the model. The surface recombination was considered as a first-order reaction, and a wall recombination probability γ = (1.35 ± 0.04) × 10⁻⁶ was determined from the experimental data. A sensitivity analysis was also applied to the kinetic model in order to reveal its main control parameters.
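
    The first-order wall-loss treatment mentioned at the end can be sketched with the usual kinetic-theory estimate k_wall ≈ γ·v̄/(2R) for a cylindrical tube (valid for small γ). The tube radius, temperature and flow velocity below are assumptions; only γ is taken from the abstract.

```python
import numpy as np

# First-order surface recombination of N atoms on the wall of a cylindrical
# flow tube: d[N]/dt = -k_wall * [N], with the kinetic-theory estimate
# k_wall ~ gamma * v_mean / (2 * R) valid for small gamma.
k_B = 1.380649e-23          # J/K
m_N = 14.007 * 1.66054e-27  # kg, mass of atomic nitrogen
T = 300.0                   # gas temperature (K), illustrative
R_tube = 0.01               # tube radius (m), illustrative
gamma = 1.35e-6             # wall recombination probability from the paper

v_mean = np.sqrt(8.0 * k_B * T / (np.pi * m_N))   # mean thermal speed
k_wall = gamma * v_mean / (2.0 * R_tube)          # first-order loss rate (1/s)

flow_velocity = 10.0                              # m/s along the tube, illustrative
z = np.linspace(0.0, 1.0, 6)                      # distance behind the discharge (m)
n_rel = np.exp(-k_wall * z / flow_velocity)       # relative N-atom concentration

print(f"mean speed = {v_mean:.0f} m/s, k_wall = {k_wall:.3e} s^-1")
for zi, ni in zip(z, n_rel):
    print(f"z = {zi:.1f} m: [N]/[N]0 = {ni:.4f}")
```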

  10. An uncertainty and sensitivity analysis approach for GIS-based multicriteria landslide susceptibility mapping.

    PubMed

    Feizizadeh, Bakhtiar; Blaschke, Thomas

    2014-03-04

    GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster-Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster-Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC operation yielded poor results.
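
    The first two stages (Monte Carlo perturbation of criterion weights followed by a weighted linear combination) can be sketched compactly; the criteria, weights and their standard deviations below are invented, and the AHP/OWA and Dempster-Shafer steps are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized criterion rasters (values in [0, 1]), flattened:
# e.g. slope, lithology, distance to faults, land cover, rainfall.
n_cells, n_criteria = 10_000, 5
criteria = rng.random((n_cells, n_criteria))

# AHP-derived mean weights (illustrative) and their uncertainties.
w_mean = np.array([0.35, 0.25, 0.18, 0.12, 0.10])
w_sd   = np.array([0.05, 0.04, 0.03, 0.02, 0.02])

n_runs = 1000
susceptibility = np.empty((n_runs, n_cells))
for i in range(n_runs):
    # Perturb the weights (Monte Carlo error propagation) and renormalize.
    w = np.clip(rng.normal(w_mean, w_sd), 0.0, None)
    w /= w.sum()
    # Weighted linear combination of the criteria.
    susceptibility[i] = criteria @ w

mean_map = susceptibility.mean(axis=0)          # expected susceptibility per cell
uncertainty_map = susceptibility.std(axis=0)    # spread induced by weight uncertainty
print("mean susceptibility range:", mean_map.min(), mean_map.max())
print("max per-cell standard deviation:", uncertainty_map.max())
```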

  11. Small Launch Vehicle Concept Development for Affordable Multi-Stage Inline Configurations

    NASA Technical Reports Server (NTRS)

    Beers, Benjamin R.; Waters, Eric D.; Philips, Alan D.; Threet, Grady E., Jr.

    2014-01-01

    The Advanced Concepts Office at NASA's George C. Marshall Space Flight Center conducted a study of two configurations of a three-stage, inline, liquid propellant small launch vehicle concept developed on the premise of maximizing affordability by targeting a specific payload capability range based on current and future industry demand. The initial configuration, NESC-1, employed liquid oxygen as the oxidizer and rocket propellant grade kerosene as the fuel in all three stages. The second and more heavily studied configuration, NESC-4, employed liquid oxygen and rocket propellant grade kerosene on the first and second stages and liquid oxygen and liquid methane fuel on the third stage. On both vehicles, sensitivity studies were first conducted on specific impulse and stage propellant mass fraction in order to baseline gear ratios and drive the focus of concept development. Subsequent sensitivity and trade studies on the NESC-4 concept investigated potential impacts to affordability due to changes in gross liftoff mass and/or vehicle complexity. Results are discussed at a high level to understand the impact severity of certain sensitivities and how those trade studies conducted can either affect cost, performance, or both.

  12. Space grating optical structure of the retina and RGB-color vision.

    PubMed

    Lauinger, Norbert

    2017-02-01

    Diffraction of light at the spatial cellular phase grating of the outer nuclear layer of the retina could produce Fresnel near-field interferences in three RGB diffraction orders accessible to photoreceptors (cones/rods). At perpendicular light incidence, the wavelengths of the RGB diffraction orders in photopic vision (a fundamental R-wave with two G and B harmonics) correspond to the peak wavelengths of the spectral brightness sensitivity curves of the cones at 559 nm (R), 537 nm (G), and 447 nm (B). In scotopic vision the R+G diffraction orders optically fuse at 512 nm, the peak value of the rod's spectral brightness sensitivity curve. The diffractive-optical transmission system with sender (resonator), space waves, and receiver antennae converts the spectral light components involved in imaging into RGB space. The colors seen at objects are diffractive-optical products in the eye, as the German philosopher A. Schopenhauer predicted. They are, secondly, related to the overall illumination in object space. The RGB transmission system is the missing link optically managing the spectral tuning of the RGB photopigments.

  13. Tumor tissue characterization using polarization-sensitive second harmonic generation microscopy

    NASA Astrophysics Data System (ADS)

    Tokarz, Danielle; Cisek, Richard; Golaraei, Ahmad; Krouglov, Serguei; Navab, Roya; Niu, Carolyn; Sakashita, Shingo; Yasufuku, Kazuhiro; Tsao, Ming-Sound; Asa, Sylvia L.; Barzda, Virginijus; Wilson, Brian C.

    2015-06-01

    Changes in the ultrastructure of collagen in various tumor and non-tumor human tissues including lung, pancreas and thyroid were investigated ex vivo by a polarization-sensitive second harmonic generation (SHG) microscopy technique referred to as polarization-in, polarization-out (PIPO) SHG. This involves measuring the orientation of the linear polarization of outgoing SHG as a function of the linear polarization orientation of incident laser radiation. From the PIPO SHG data, the second-order nonlinear optical susceptibility tensor component ratio, χ(2)ZZZ'/χ(2)ZXX', for each pixel of the SHG image was obtained and presented as color-coded maps. Further, the orientation of collagen fibers in the tissue was deduced. Since the χ(2)ZZZ'/χ(2)ZXX' values represent the organization of collagen in the tissue, these maps revealed areas of altered collagen structure (not simply concentration) within tissue sections. Statistically significant differences in χ(2)ZZZ'/χ(2)ZXX' were found between tumor and non-tumor tissues, which varied from organ to organ. Hence, PIPO SHG microscopy could potentially be used to aid pathologists in diagnosing cancer. Additionally, PIPO SHG microscopy could aid in characterizing the structure of collagen in other collagen-related biological processes such as wound repair.

  14. Collective Modes in a Trapped Gas from Second-Order Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lewis, William; Romatschke, Paul

    Navier-Stokes equations are often used to analyze collective oscillations and expansion dynamics of strongly interacting quantum gases. However, their use, for example, in the precision determination of transport properties such as the ratio of shear viscosity to entropy density (η/s) in strongly interacting Fermi gases is problematic. Second-order hydrodynamics addresses this by promoting the viscous stress tensor to a hydrodynamic variable that relaxes to the Navier-Stokes form on a timescale τπ. We derive the frequencies, damping rates, and spatial structure of collective oscillations up to the decapole mode of a harmonically trapped gas in this framework. We find that the damping of higher-order modes (i.e., beyond quadrupolar) exhibits greater sensitivity to shear viscosity. Thus, measurement of the hexapolar mode, for example, may lead to a stronger experimental constraint on η/s. Additionally, we find "non-hydrodynamic" modes not contained in a Navier-Stokes description. We calculate the excitation amplitudes of non-hydrodynamic modes, demonstrating that they should be observable. Non-hydrodynamic modes may have implications for the hydrodynamization timescale, the existence of quasi-particles, and universal transport behavior in strongly interacting quantum fluids.

  15. Saturation behavior: a general relationship described by a simple second-order differential equation.

    PubMed

    Kepner, Gordon R

    2010-04-13

    The numerous natural phenomena that exhibit saturation behavior, e.g., ligand binding and enzyme kinetics, have been approached, to date, via empirical and particular analyses. This paper presents a mechanism-free, and assumption-free, second-order differential equation, designed only to describe a typical relationship between the variables governing these phenomena. It develops a mathematical model for this relation, based solely on the analysis of the typical experimental data plot and its saturation characteristics. Its utility complements the traditional empirical approaches. For the general saturation curve, described in terms of its independent (x) and dependent (y) variables, a second-order differential equation is obtained that applies to any saturation phenomena. It shows that the driving factor for the basic saturation behavior is the probability of the interactive site being free, which is described quantitatively. Solving the equation relates the variables in terms of the two empirical constants common to all these phenomena, the initial slope of the data plot and the limiting value at saturation. A first-order differential equation for the slope emerged that led to the concept of the effective binding rate at the active site and its dependence on the calculable probability the interactive site is free. These results are illustrated using specific cases, including ligand binding and enzyme kinetics. This leads to a revised understanding of how to interpret the empirical constants, in terms of the variables pertinent to the phenomenon under study. The second-order differential equation revealed the basic underlying relations that describe these saturation phenomena, and the basic mathematical properties of the standard experimental data plot. It was shown how to integrate this differential equation, and define the common basic properties of these phenomena. The results regarding the importance of the slope and the new perspectives on the empirical constants governing the behavior of these phenomena led to an alternative perspective on saturation behavior kinetics. Their essential commonality was revealed by this analysis, based on the second-order differential equation.
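
    A concrete way to see the role of the two empirical constants is to write the familiar hyperbolic saturation curve directly in terms of the initial slope and the limiting value and fit it to data. The sketch below does this with invented binding data; it is an illustration of the idea, not the paper's differential-equation derivation.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturation(x, s0, y_max):
    """Hyperbolic saturation curve written directly in terms of the two
    empirical constants discussed in the paper: the initial slope s0 and
    the limiting value y_max (equivalent to y = y_max*x/(K + x) with
    K = y_max/s0)."""
    return y_max * x / (y_max / s0 + x)

# Hypothetical binding data: ligand concentration (uM) vs. fractional response.
x = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
y = np.array([0.00, 0.21, 0.34, 0.51, 0.66, 0.79, 0.88, 0.93])

(s0, y_max), _ = curve_fit(saturation, x, y, p0=[0.5, 1.0])
print(f"initial slope s0 = {s0:.3f} per uM, limiting value y_max = {y_max:.3f}")
print(f"equivalent half-saturation constant K = y_max/s0 = {y_max / s0:.2f} uM")
```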

  16. Application of the Finite Element Method in Atomic and Molecular Physics

    NASA Technical Reports Server (NTRS)

    Shertzer, Janine

    2007-01-01

    The finite element method (FEM) is a numerical algorithm for solving second-order differential equations. It has been successfully used to solve many problems in atomic and molecular physics, including bound state and scattering calculations. To illustrate the diversity of the method, we present here details of two applications. First, we calculate the non-adiabatic dipole polarizability of Hi by directly solving the first- and second-order equations of perturbation theory with FEM. In the second application, we calculate the scattering amplitude for e-H scattering (without partial wave analysis) by reducing the Schrodinger equation to a set of integro-differential equations, which are then solved with FEM.
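
    As a flavor of what FEM does for a second-order equation, the sketch below solves the one-dimensional model problem -u'' = f with piecewise-linear elements and compares against the exact solution. It is a textbook illustration only, far simpler than the atomic-physics calculations described above.

```python
import numpy as np

# Minimal 1D finite element solver for the model second-order problem
#   -u''(x) = f(x) on (0, 1),  u(0) = u(1) = 0,
# using piecewise-linear elements on a uniform mesh.
n_el = 64                      # number of elements
h = 1.0 / n_el
nodes = np.linspace(0.0, 1.0, n_el + 1)
f = lambda x: np.pi**2 * np.sin(np.pi * x)   # manufactured source term

n_int = n_el - 1               # interior unknowns
K = np.zeros((n_int, n_int))   # stiffness matrix
b = np.zeros(n_int)            # load vector

for e in range(n_el):          # assemble element by element
    x_mid = 0.5 * (nodes[e] + nodes[e + 1])
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    fe = f(x_mid) * h / 2.0 * np.array([1.0, 1.0])   # midpoint quadrature
    for a_loc, a_glob in enumerate((e, e + 1)):
        if a_glob in (0, n_el):                      # skip Dirichlet boundary nodes
            continue
        b[a_glob - 1] += fe[a_loc]
        for b_loc, b_glob in enumerate((e, e + 1)):
            if b_glob in (0, n_el):
                continue
            K[a_glob - 1, b_glob - 1] += ke[a_loc, b_loc]

u = np.zeros(n_el + 1)
u[1:-1] = np.linalg.solve(K, b)

exact = np.sin(np.pi * nodes)
print(f"max nodal error = {np.abs(u - exact).max():.2e}")   # roughly O(h^2)
```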

  17. Procedural learning in Parkinson's disease, specific language impairment, dyslexia, schizophrenia, developmental coordination disorder, and autism spectrum disorders: A second-order meta-analysis.

    PubMed

    Clark, Gillian M; Lum, Jarrad A G

    2017-10-01

    The serial reaction time task (SRTT) has been used to study procedural learning in clinical populations. In this report, second-order meta-analysis was used to investigate whether disorder type moderates performance on the SRTT. Using this approach to quantitatively summarise past research, it was tested whether autism spectrum disorder, developmental coordination disorder, dyslexia, Parkinson's disease, schizophrenia, and specific language impairment differentially affect procedural learning on the SRTT. The main analysis revealed disorder type moderated SRTT performance (p=0.010). This report demonstrates comparable levels of procedural learning impairment in developmental coordination disorder, dyslexia, Parkinson's disease, schizophrenia, and specific language impairment. However, in autism, procedural learning is spared. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Symmetry and singularity properties of second-order ordinary differential equations of Lie's type III

    NASA Astrophysics Data System (ADS)

    Andriopoulos, K.; Leach, P. G. L.

    2007-04-01

    We extend the work of Abraham-Shrauner [B. Abraham-Shrauner, Hidden symmetries and linearization of the modified Painleve-Ince equation, J. Math. Phys. 34 (1993) 4809-4816] on the linearization of the modified Painleve-Ince equation to a wider class of nonlinear second-order ordinary differential equations invariant under the symmetries of time translation and self-similarity. In the process we demonstrate a remarkable connection with the parameters obtained in the singularity analysis of this class of equations.

  19. Dual sensitivity mode system for monitoring processes and sensors

    DOEpatents

    Wilks, Alan D.; Wegerich, Stephan W.; Gross, Kenneth C.

    2000-01-01

    A method and system for analyzing a source of data. The system and method involves initially training a system using a selected data signal, calculating at least two levels of sensitivity using a pattern recognition methodology, activating a first mode of alarm sensitivity to monitor the data source, activating a second mode of alarm sensitivity to monitor the data source and generating a first alarm signal upon the first mode of sensitivity detecting an alarm condition and a second alarm signal upon the second mode of sensitivity detecting an associated alarm condition. The first alarm condition and second alarm condition can be acted upon by an operator and/or analyzed by a specialist or computer program.
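
    The dual-sensitivity idea can be illustrated with a much simpler monitor than the patent's pattern-recognition methodology: learn normal statistics from training data, derive a sensitive and a conservative alarm limit from them, and flag each separately. Everything below (thresholds, signal, fault) is an invented stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training phase: learn the normal behavior of the signal (here just its
# mean and standard deviation; the patent's pattern-recognition model is
# far more sophisticated and is not reproduced here).
train = rng.normal(10.0, 0.2, 5000)
mu, sigma = train.mean(), train.std(ddof=1)

# Two alarm sensitivities derived from the training statistics.
high_sensitivity_limit = 4.0 * sigma    # first mode: early, sensitive alarm
low_sensitivity_limit = 7.0 * sigma     # second mode: conservative alarm

# Monitoring phase: a slowly drifting sensor signal (illustrative fault).
t = np.arange(2000)
monitored = rng.normal(10.0, 0.2, t.size) + 0.004 * t

residual = np.abs(monitored - mu)
first_alarm = np.argmax(residual > high_sensitivity_limit)
second_alarm = np.argmax(residual > low_sensitivity_limit)
print(f"sensitive-mode alarm at sample {first_alarm}")
print(f"conservative-mode alarm at sample {second_alarm}")
```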

  20. Gradiometer Using Middle Loops as Sensing Elements in a Low-Field SQUID MRI System

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin; Hahn, Inseob; Ho Eom, Byeong

    2009-01-01

    A new gradiometer scheme uses middle loops as sensing elements in low-field superconducting quantum interference device (SQUID) magnetic resonance imaging (MRI). This design of a second-order gradiometer increases its sensitivity and makes it more uniform compared to the conventional side-loop sensing scheme with a comparable matching SQUID. The space between the two middle loops becomes the imaging volume, with the enclosing cryostat built accordingly.

  1. Lexical prosody beyond first-language boundary: Chinese lexical tone sensitivity predicts English reading comprehension.

    PubMed

    Choi, William; Tong, Xiuli; Cain, Kate

    2016-08-01

    This 1-year longitudinal study examined the role of Cantonese lexical tone sensitivity in predicting English reading comprehension and the pathways underlying their relation. Multiple measures of Cantonese lexical tone sensitivity, English lexical stress sensitivity, Cantonese segmental phonological awareness, general auditory sensitivity, English word reading, and English reading comprehension were administered to 133 Cantonese-English unbalanced bilingual second graders. Structural equation modeling analysis identified transfer of Cantonese lexical tone sensitivity to English reading comprehension. This transfer was realized through a direct pathway via English stress sensitivity and also an indirect pathway via English word reading. These results suggest that prosodic sensitivity is an important factor influencing English reading comprehension and that it needs to be incorporated into theoretical accounts of reading comprehension across languages. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Design study of beam position monitors for measuring second-order moments of charged particle beams

    NASA Astrophysics Data System (ADS)

    Yanagida, Kenichi; Suzuki, Shinsuke; Hanaki, Hirofumi

    2012-01-01

    This paper presents a theoretical investigation on the multipole moments of charged particle beams in two-dimensional polar coordinates. The theoretical description of multipole moments is based on a single-particle system that is expanded to a multiparticle system by superposition, i.e., summing over all single-particle results. This paper also presents an analysis and design method for a beam position monitor (BPM) that detects higher-order (multipole) moments of a charged particle beam. To calculate the electric fields, a numerical analysis based on the finite difference method was created and carried out. Validity of the numerical analysis was proven by comparing the numerical with the analytical results for a BPM with circular cross section. Six-electrode BPMs with circular and elliptical cross sections were designed for the SPring-8 linac. The results of the numerical calculations show that the second-order moment can be detected for beam sizes ≧420μm (circular) and ≧550μm (elliptical).
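
    The quantity such a six-electrode BPM is designed to resolve is essentially the difference of the beam's central second-order moments. The sketch below computes first- and second-order moments for an invented tilted Gaussian beam; sizes, tilt and offsets are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical particle distribution at the BPM: a tilted elliptical beam.
n = 100_000
sigma_x, sigma_y, tilt = 600e-6, 420e-6, np.radians(15.0)   # metres, illustrative
x0 = rng.normal(0.0, sigma_x, n)
y0 = rng.normal(0.0, sigma_y, n)
x = x0 * np.cos(tilt) - y0 * np.sin(tilt) + 150e-6   # add an offset (beam position)
y = x0 * np.sin(tilt) + y0 * np.cos(tilt) - 80e-6

# First-order moments: beam position.
mean_x, mean_y = x.mean(), y.mean()

# Central second-order moments and the quadrupole-like combination
# <x^2> - <y^2>, which is the quantity a multi-electrode BPM can resolve.
sig_xx = np.mean((x - mean_x) ** 2)
sig_yy = np.mean((y - mean_y) ** 2)
sig_xy = np.mean((x - mean_x) * (y - mean_y))

print(f"position: ({mean_x*1e6:.1f}, {mean_y*1e6:.1f}) um")
print(f"second-order moments: sxx-syy = {(sig_xx - sig_yy)*1e12:.1f} um^2, "
      f"sxy = {sig_xy*1e12:.1f} um^2")
```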

  3. Higher order aberrations and relative risk of symptoms after LASIK.

    PubMed

    Sharma, Munish; Wachler, Brian S Boxer; Chan, Colin C K

    2007-03-01

    To understand what level of higher order aberrations increases the relative risk of visual symptoms in patients after myopic LASIK. This study was a retrospective comparative analysis of 103 eyes of 62 patients divided in two groups, matched for age, gender, pupil size, and spherical equivalent refraction. The symptomatic group comprised 36 eyes of 24 patients after conventional LASIK with different laser systems evaluated in our referral clinic and the asymptomatic control group consisted of 67 eyes of 38 patients following LADARVision CustomCornea wavefront LASIK. Comparative analysis was performed for uncorrected visual acuity (UCVA), best spectacle-corrected visual acuity (BSCVA), contrast sensitivity, refractive cylinder, and higher order aberrations. Wavefront analysis was performed with the LADARWave aberrometer at 6.5-mm analysis for all eyes. Blurring of vision was the most common symptom (41.6%) followed by double image (19.4%), halo (16.7%), and fluctuation in vision (13.9%) in symptomatic patients. A statistically significant difference was noted in UCVA (P = .001), BSCVA (P = .001), contrast sensitivity (P < .001), and manifest cylinder (P = .001) in the two groups. The percentage difference between the symptomatic and control group mean root-mean-square (RMS) values ranged from 157% to 206% or 1.57 to 2.06 times greater. Patients with visual symptoms after LASIK have significantly lower visual acuity and contrast sensitivity and higher mean RMS values for higher order aberrations than patients without symptoms. Root-mean-square values of greater than two times the normal after-LASIK population for any given laser platform may increase the relative risk of symptoms.

  4. Health e-mavens: identifying active online health information users.

    PubMed

    Sun, Ye; Liu, Miao; Krakow, Melinda

    2016-10-01

    Given the rapid increase of Internet use for effective health communication, it is important for health practitioners to be able to identify and mobilize active users of online health information across various web-based health intervention programmes. We propose the concept 'health e-mavens' to characterize individuals actively engaged in online health information seeking and sharing activities. This study aimed to address three goals: (i) to test the factor structure of health e-mavenism, (ii) to assess the reliability and validity of this construct and (iii) to determine what predictors are associated with health e-mavenism. This study was a secondary analysis of nationally representative data from the 2010 Health Tracking Survey. We assessed the factor structure of health e-mavenism using confirmatory factor analysis and examined socio-demographic variables, health-related factors and use of technology as potential predictors of health e-mavenism through ordered regression analysis. Confirmatory factor analyses showed that a second-order two-factor structure best captured the health e-maven construct. Health e-mavenism comprised two second-order factors, each encompassing two first-order dimensions: information acquisition (consisting of information tracking and consulting) and information transmission (consisting of information posting and sharing). Both first-order and second-order factors exhibited good reliabilities. Several factors were found to be significant predictors of health e-mavenism. This study offers a starting point for further inquiries about health e-mavens. It is a fruitful construct for health promotion research in the age of new media technologies. We conclude with specific recommendations to further develop the health e-maven concept through continued empirical research. © 2015 The Authors. Health Expectations. Published by John Wiley & Sons Ltd.

  5. JWST ISIM Distortion Analysis Challenge

    NASA Technical Reports Server (NTRS)

    Cifie, Emmanuel; Matzinger, Liz; Kuhn, Jonathan; Fan, Terry

    2004-01-01

    Very tight distortion requirements are imposed on the JWST ISIM structure due to the sensitivity of the telescope's mirror segment and science instrument positioning. The ISIM structure is a three-dimensional truss with asymmetric gusseting and metal fittings. One of the primary challenges for ISIM's analysis team is predicting the thermal distortion of the structure, both from the bulk cooldown from ambient to cryo and from the smaller temperature changes within the cryogenic operating environment. As a first cut to estimate thermal distortions, a finite element model of bar elements was created. Elements representing joint areas and metal fittings use effective properties that match the behavior of the stack-up of the composite tube, gusset and adhesive under mechanical and thermal loads. These properties were derived by matching tip deflections of a simplified solid-model T-joint. Because of the structure's asymmetric gusseting, this effective-property model is a first attempt at predicting rotations that cannot be captured with a smeared CTE approach. In addition to the finite element analysis, several first-order calculations have been performed to gauge the feasibility of the material design. Because of the stringent thermal distortion requirements at cryogenic temperatures, a composite tube material with near-zero or negative CTE is required. A preliminary hand analysis of the contributions of the various components along the distortion path between the FGS and the other instruments, neglecting second-order effects, was performed. A plot of bounding tube longitudinal and transverse CTEs for thermal stability requirements was generated to help determine the feasibility of meeting these requirements. This analysis is a work in progress en route to a large-degree-of-freedom, high-fidelity FEA model for distortion analysis.

  6. Sensitivity analysis in a Lassa fever deterministic mathematical model

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed. This is done in order to determine the relative importance of the model parameters to the disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
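
    Sensitivity rankings of this kind are usually reported as normalized forward sensitivity indices, Υp = (∂R0/∂p)(p/R0). The sketch below computes them by central differences for an illustrative reproduction-number expression; it is not the paper's Lassa fever model, and the parameter values are invented.

```python
import numpy as np

# Illustrative reproduction-number expression (NOT the paper's model):
# R0 = beta * Lambda / (mu * (mu + gamma)), with
# beta = transmission rate, Lambda = recruitment (immigration) rate,
# mu = natural death rate, gamma = recovery rate.
def R0(p):
    return p["beta"] * p["Lambda"] / (p["mu"] * (p["mu"] + p["gamma"]))

params = {"beta": 0.2, "Lambda": 0.05, "mu": 0.02, "gamma": 0.1}

# Normalized forward sensitivity index: Y_p = (dR0/dp) * (p / R0),
# estimated here with central finite differences.
base = R0(params)
for name, value in params.items():
    h = 1e-6 * value
    up, down = dict(params), dict(params)
    up[name] = value + h
    down[name] = value - h
    dR0_dp = (R0(up) - R0(down)) / (2 * h)
    print(f"Y_{name} = {dR0_dp * value / base:+.3f}")
```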

  7. Discrete effect on the halfway bounce-back boundary condition of multiple-relaxation-time lattice Boltzmann model for convection-diffusion equations.

    PubMed

    Cui, Shuqi; Hong, Ning; Shi, Baochang; Chai, Zhenhua

    2016-04-01

    In this paper, we will focus on the multiple-relaxation-time (MRT) lattice Boltzmann model for two-dimensional convection-diffusion equations (CDEs), and analyze the discrete effect on the halfway bounce-back (HBB) boundary condition (or sometimes called bounce-back boundary condition) of the MRT model where three different discrete velocity models are considered. We first present a theoretical analysis on the discrete effect of the HBB boundary condition for the simple problems with a parabolic distribution in the x or y direction, and a numerical slip proportional to the second-order of lattice spacing is observed at the boundary, which means that the MRT model has a second-order convergence rate in space. The theoretical analysis also shows that the numerical slip can be eliminated in the MRT model through tuning the free relaxation parameter corresponding to the second-order moment, while it cannot be removed in the single-relaxation-time model or the Bhatnagar-Gross-Krook model unless the relaxation parameter related to the diffusion coefficient is set to be a special value. We then perform some simulations to confirm our theoretical results, and find that the numerical results are consistent with our theoretical analysis. Finally, we would also like to point out the present analysis can be extended to other boundary conditions of lattice Boltzmann models for CDEs.

  8. Bias of phencyclidine discrimination by the schedule of reinforcement.

    PubMed Central

    McMillan, D E; Wenger, G R

    1984-01-01

    Pigeons, trained to discriminate phencyclidine from saline under a procedure requiring the bird to track the location of a color, received cumulative doses of phencyclidine, pentobarbital, or d-amphetamine with a variety of schedules of reinforcement in effect (across phases). When the same second-order schedules were used to reinforce responding after either saline or phencyclidine administration, stimulus control by phencyclidine did not depend on the schedule parameter. When different second-order schedules were used that biased responding toward the phencyclidine-correlated key color, pigeons responded on the phencyclidine-correlated key at lower doses of phencyclidine and pentobarbital than when the second-order schedule biased responding toward the saline key color. A similar but less marked effect was obtained with d-amphetamine. Attempts to produce bias by changing reinforcement magnitude (duration of food availability) were less successful. A signal-detection analysis of dose-effect curves for phencyclidine under two of the second-order schedules employed suggested that at low doses of phencyclidine, response bias is a major determinant of responding. As doses were increased, position preferences occurred and response bias decreased; at higher doses both response bias and position preference decreased and discriminability increased. With low doses of pentobarbital, responding again was biased but increasing doses produced position preference with only small increases in discriminability. At low doses of d-amphetamine responding also was biased, but bias did not decrease consistently with dose nor did discriminability increase. These experiments suggest that the schedule of reinforcement can be used to bias responding toward or away from making the drug-correlated response in drug discrimination experiments, and that signal-detection analysis and analysis of responding at a position can be used to separate the discriminability of the drug state from other effects of the drug on responding. PMID:6481300
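
    The signal-detection part of the analysis separates discriminability (d') from response bias (criterion), both computed from hit and false-alarm rates. A minimal sketch follows with invented response counts; it is not the study's data or its exact analysis.

```python
import numpy as np
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Discriminability (d') and response bias (criterion c) from a 2x2
    outcome table, with a standard log-linear correction to avoid
    infinite z-scores when a rate is 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    return d_prime, criterion

# Hypothetical counts of drug-key vs. saline-key responses at a low and a
# moderate phencyclidine dose under one second-order schedule (illustrative).
for dose, counts in {"low dose": (12, 28, 9, 31), "moderate dose": (33, 7, 6, 34)}.items():
    d, c = sdt_indices(*counts)
    print(f"{dose}: d' = {d:.2f}, criterion c = {c:+.2f}")
```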

  9. Analysis of variation matrix array by bilinear least squares-residual bilinearization (BLLS-RBL) for resolving and quantifying of foodstuff dyes in a candy sample.

    PubMed

    Asadpour-Zeynali, Karim; Maryam Sajjadi, S; Taherzadeh, Fatemeh; Rahmanian, Reza

    2014-04-05

    The bilinear least squares (BLLS) method is one of the most suitable algorithms for second-order calibration. The original BLLS method is not applicable to second-order pH-spectral data when an analyte has more than one spectroscopically active species. Bilinear least squares-residual bilinearization (BLLS-RBL) was developed to achieve the second-order advantage for the analysis of complex mixtures. Although the modified method is useful, the pure profiles cannot be obtained; only linear combinations of them are obtained. Moreover, for prediction of the analyte in an unknown sample, the original RBL algorithm may diverge instead of converging to the desired analyte concentrations. Therefore, the Gauss-Newton RBL algorithm should be used, which is not as simple as the original protocol. In addition, the analyte concentration can be predicted on the basis of each of the equilibrating species of the component of interest, and these predictions are not exactly the same. The aim of the present work is to tackle the non-uniqueness problem in the second-order calibration of monoprotic acid mixtures and the divergence of RBL. Each pH-absorbance matrix was pretreated by subtracting the first spectrum from the other spectra in the data set to produce a full-rank array called the variation matrix. The variation matrices were then analyzed uniquely by the original BLLS-RBL, which is more parsimonious than its modified counterpart. The proposed method was applied to simulated data as well as to the analysis of real data. Sunset yellow and Carmosine, as monoprotic acids, were determined in a candy sample in the presence of unknown interference by this method. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. High-speed stimulated Raman scattering microscopy for studying the metabolic diversity of motile Euglena gracilis

    NASA Astrophysics Data System (ADS)

    Suzuki, Y.; Wakisaka, Y.; Iwata, O.; Nakashima, A.; Ito, T.; Hirose, M.; Domon, R.; Sugawara, M.; Tsumura, N.; Watarai, H.; Shimobaba, T.; Suzuki, K.; Goda, K.; Ozeki, Y.

    2017-02-01

    Microalgae have been receiving great attention for their ability to produce biomaterials that are applicable to food supplements, drugs, biodegradable plastics, and biofuels. Among such microalgae, Euglena gracilis has become a popular species by virtue of its capability of accumulating useful metabolites including paramylon and lipids. In order to maximize the production of desired metabolites, it is essential to find ideal culturing conditions and to develop efficient methods for genetic transformation. Achieving this requires understanding and controlling cell-to-cell variations in response to external stress, which in turn calls for chemically specific analysis of individual microalgal cells, including E. gracilis. However, conventional analytical tools such as fluorescence microscopy and spontaneous Raman scattering microscopy are not suitable for evaluating diverse populations of motile microalgae, being restricted by the requirement for fluorescent labels or by limited imaging speed, respectively. Here we demonstrate video-rate, label-free metabolite imaging of live E. gracilis using stimulated Raman scattering (SRS) - an optical spectroscopic method for probing the vibrational signatures of molecules with orders-of-magnitude higher sensitivity than spontaneous Raman scattering. Our high-speed SRS image acquisition (27 metabolite images per second) allows for population analysis of live E. gracilis cells cultured under nitrogen deficiency - a condition that promotes the accumulation of paramylon and lipids within the cell body. This fast imaging capability thus enables quantification and analysis of previously unresolvable cell-to-cell variations in metabolite accumulation across large populations of motile E. gracilis cells.

  11. Proton therapy versus intensity modulated x-ray therapy in the treatment of prostate cancer: Estimating secondary cancer risks

    NASA Astrophysics Data System (ADS)

    Fontenot, Jonas David

    External beam radiation therapy is used to treat nearly half of the more than 200,000 new cases of prostate cancer diagnosed in the United States each year. During a radiation therapy treatment, healthy tissues in the path of the therapeutic beam are exposed to high doses. In addition, the whole body is exposed to a low-dose bath of unwanted scatter radiation from the pelvis and leakage radiation from the treatment unit. As a result, survivors of radiation therapy for prostate cancer face an elevated risk of developing a radiogenic second cancer. Recently, proton therapy has been shown to reduce the dose delivered by the therapeutic beam to normal tissues during treatment compared to intensity modulated x-ray therapy (IMXT, the current standard of care). However, the magnitude of stray radiation doses from proton therapy, and their impact on the incidence of radiogenic second cancers, were not known. The risk of a radiogenic second cancer following proton therapy for prostate cancer relative to IMXT was determined for 3 patients of large, median, and small anatomical stature. Doses delivered to healthy tissues from the therapeutic beam were obtained from treatment planning system calculations. Stray doses from IMXT were taken from the literature, while stray doses from proton therapy were simulated using a Monte Carlo model of a passive scattering treatment unit and an anthropomorphic phantom. Baseline risk models were taken from the Biological Effects of Ionizing Radiation VII report. A sensitivity analysis was conducted to characterize the uncertainty of risk calculations to uncertainties in the risk model, the relative biological effectiveness (RBE) of neutrons for carcinogenesis, and inter-patient anatomical variations. The risk projections revealed that proton therapy carries a lower risk for radiogenic second cancer incidence following prostate irradiation compared to IMXT. The sensitivity analysis revealed that the results of the risk analysis depended only weakly on uncertainties in the risk model and inter-patient variations. Second cancer risks were sensitive to changes in the RBE of neutrons. However, the findings of the study were qualitatively consistent for all patient sizes and risk models considered, and for all neutron RBE values less than 100.

  12. Sensitivity of charge transport measurements to local inhomogeneities

    NASA Astrophysics Data System (ADS)

    Koon, Daniel; Wang, Fei; Hjorth Petersen, Dirch; Hansen, Ole

    2012-02-01

    We derive analytic expressions for the sensitivity of resistive and Hall measurements to local variations in a specimen's material properties in the combined linear limit of both small magnetic fields and small perturbations, presenting exact, algebraic expressions both for four-point probe measurements on an infinite plane and for symmetric, circular van der Pauw discs. We then generalize the results to obtain corrections to the sensitivities both for finite magnetic fields and for finite perturbations. The calculated functions match published results and computer simulations; they provide an intuitive, visual explanation for the experimental misassignment of carrier type in n-type ZnO and agree with published experimental results for holes in a uniform material. These results simplify the calculation and plotting of the sensitivities on an N×N grid from a problem of order N^5 to one of order N^3 in the arbitrary case, and of order N^2 in the handful of cases that can be solved exactly, putting a powerful tool for inhomogeneity analysis in the hands of the researcher: calculating the sensitivities requires little more than the solution of Laplace's equation on the specimen geometry.
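
    The point that computing sensitivities "requires little more than the solution of Laplace's equation on the specimen geometry" can be illustrated with a minimal finite-difference sketch. This is a generic Jacobi solver on a square specimen with fixed boundary potentials standing in for current contacts; it is not the authors' formulation, and the boundary values are illustrative:

    ```python
    import numpy as np

    def solve_laplace(N=50, n_iter=5000):
        """Jacobi iteration for Laplace's equation on an N x N grid."""
        phi = np.zeros((N, N))
        # Illustrative boundary conditions: unit potential difference between
        # two opposite edges, mimicking a pair of current contacts.
        phi[:, 0] = 1.0
        phi[:, -1] = 0.0
        for _ in range(n_iter):
            phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                      phi[1:-1, :-2] + phi[1:-1, 2:])
        return phi

    phi = solve_laplace()
    # The local current density (minus the potential gradient) is the main
    # ingredient of first-order sensitivity maps in this kind of analysis.
    Jy, Jx = np.gradient(-phi)
    ```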

  13. Interference-free spectrofluorometric quantification of aristolochic acid I and aristololactam I in five Chinese herbal medicines using chemical derivatization enhancement and second-order calibration methods

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Wu, Hai-Long; Yin, Xiao-Li; Gu, Hui-Wen; Xiao, Rong; Wang, Li; Fang, Huan; Yu, Ru-Qin

    2017-03-01

    A rapid, interference-free spectrofluorometric method, combining excitation-emission matrix fluorescence with second-order calibration based on the alternating penalty trilinear decomposition (APTLD) and self-weighted alternating trilinear decomposition (SWATLD) algorithms, was proposed for the simultaneous determination of nephrotoxic aristolochic acid I (AA-I) and aristololactam I (AL-I) in five Chinese herbal medicines. The method relies on a chemical derivatization that converts the non-fluorescent AA-I into the highly fluorescent AL-I, enabling sensitive and simultaneous quantification of both analytes. The variables of the derivatization reaction, which was carried out with zinc powder in acetous methanol aqueous solution, were studied and optimized to obtain the best quantification of AA-I and AL-I. Satisfactory spiked-recovery results were achieved for AA-I and AL-I, with average recoveries in the range of 100.4-103.8% and RMSEPs < 0.78 ng mL-1, validating the accuracy and reliability of the proposed method. The contents of AA-I and AL-I in the five herbal medicines obtained by the proposed method were also in good agreement with those of a validated LC-MS/MS method. Owing to the highly sensitive fluorescence detection, the limits of detection (LODs) of AA-I and AL-I compare favorably with those of the LC-MS/MS method, with LODs < 0.35 and 0.29 ng mL-1, respectively. The proposed strategy based on the APTLD and SWATLD algorithms, by virtue of the "second-order advantage", can be considered an attractive and green alternative for the quantification of AA-I and AL-I in complex herbal medicine matrices without any prior separation or clean-up steps.

  14. Energy dependence corrections to MOSFET dosimetric sensitivity.

    PubMed

    Cheung, T; Butson, M J; Yu, P K N

    2009-03-01

    Metal oxide semiconductor field effect transistors (MOSFETs) are dosimeters that are now frequently utilized in radiotherapy treatment applications. An improved MOSFET clinical semiconductor dosimetry system (CSDS), which utilizes improved packaging for the MOSFET device, has been studied for the energy dependence of its sensitivity in x-ray radiation measurement. The energy dependence from 50 kVp to 10 MV x-rays was found to vary by up to a factor of 3.2, with 75 kVp producing the highest sensitivity response. The detector's average life span in high-sensitivity mode is energy related and ranges from approximately 100 Gy for 75 kVp x-rays to approximately 300 Gy at 6 MV x-ray energy. The MOSFET detector has also been studied for sensitivity variations with integrated dose history. It was found to become less sensitive to radiation with age, and the magnitude of this effect is dependent on radiation energy, with lower energies producing a larger sensitivity reduction with integrated dose. The reduction in sensitivity is, however, reproducibly approximated by a slightly nonlinear, second-order polynomial function, allowing corrections to be made to readings to account for this effect and provide more accurate dose assessments both in phantom and in vivo.
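
    The second-order polynomial correction of sensitivity versus integrated dose can be sketched as below; the calibration numbers are illustrative placeholders, not data from the study:

    ```python
    import numpy as np

    # Illustrative calibration points: accumulated dose (Gy) vs. relative sensitivity.
    accumulated_dose = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
    relative_sensitivity = np.array([1.00, 0.97, 0.93, 0.88, 0.82])

    # Fit the slightly nonlinear, second-order polynomial described in the abstract.
    coeffs = np.polyfit(accumulated_dose, relative_sensitivity, deg=2)
    correction = np.poly1d(coeffs)

    # Correct a raw reading taken after 120 Gy of accumulated dose.
    raw_reading = 2.05  # arbitrary detector units
    corrected = raw_reading / correction(120.0)
    print(corrected)
    ```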

  15. Using Birth Order and Sibling Dynamics in Career Counseling.

    ERIC Educational Resources Information Center

    Bradley, Richard W.

    1982-01-01

    Discusses birth order as a determinant of occupational roles. Considers the first- and second-born experience in relation to occupational status and choice. Explores sibling dynamics and the need for striving for significance in the family. Discusses identification and analysis of family constellations. (RC)

  16. Confirmatory factor analysis of the Child Oral Health Impact Profile (Korean version).

    PubMed

    Cho, Young Il; Lee, Soonmook; Patton, Lauren L; Kim, Hae-Young

    2016-04-01

    Empirical support for the factor structure of the Child Oral Health Impact Profile (COHIP) has not been fully established. The purposes of this study were to evaluate the factor structure of the Korean version of the COHIP (COHIP-K) empirically using confirmatory factor analysis (CFA) based on the theoretical framework and then to assess whether any of the factors in the structure could be grouped into a simpler single second-order factor. Data were collected through self-reported COHIP-K responses from a representative community sample of 2,236 Korean children, 8-15 yr of age. Because a large inter-factor correlation of 0.92 was estimated in the original five-factor structure, the two strongly correlated factors were combined into one factor, resulting in a four-factor structure. The revised four-factor model showed a reasonable fit with appropriate inter-factor correlations. Additionally, the second-order model with four sub-factors was reasonable with sufficient fit and showed equal fit to the revised four-factor model. A cross-validation procedure confirmed the appropriateness of the findings. Our analysis empirically supported a four-factor structure of COHIP-K, a summarized second-order model, and the use of an integrated summary COHIP score. © 2016 Eur J Oral Sci.
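
    A second-order CFA of the kind described can be sketched with lavaan-style model syntax; the sketch below assumes the semopy package is available and uses hypothetical item and factor names together with random placeholder data (the real COHIP-K items and the four retained factors are not listed in the abstract):

    ```python
    import numpy as np
    import pandas as pd
    import semopy  # assumed available; accepts lavaan-like model syntax

    # Hypothetical measurement model: four first-order factors, one second-order factor.
    model_desc = """
    F1 =~ x1 + x2 + x3
    F2 =~ x4 + x5 + x6
    F3 =~ x7 + x8 + x9
    F4 =~ x10 + x11 + x12
    OHRQoL =~ F1 + F2 + F3 + F4
    """

    # Random placeholder responses; a real analysis would use the item-level data.
    data = pd.DataFrame(np.random.default_rng(0).normal(size=(300, 12)),
                        columns=[f"x{i}" for i in range(1, 13)])

    model = semopy.Model(model_desc)
    model.fit(data)
    print(model.inspect())  # loadings, factor covariances, and related estimates
    ```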

  17. Financial analysis of cardiovascular wellness program provided to self-insured company from pharmaceutical care provider's perspective.

    PubMed

    Wilson, Justin B; Osterhaus, Matt C; Farris, Karen B; Doucette, William R; Currie, Jay D; Bullock, Tammy; Kumbera, Patty

    2005-01-01

    To perform a retrospective financial analysis on the implementation of a self-insured company's wellness program from the pharmaceutical care provider's perspective and conduct sensitivity analyses to estimate costs versus revenues for pharmacies without resident pharmacists, program implementation for a second employer, the second year of the program, and a range of pharmacist wages. Cost-benefit and sensitivity analyses. Self-insured employer with headquarters in Canton, N.C.; 36 employees at a facility in Clinton, Iowa. Pharmacist-provided cardiovascular wellness program. Costs and revenues collected from pharmacy records, including pharmacy purchasing records, billing records, and pharmacists' time estimates. All costs and revenues were calculated for the development and first year of the intervention program. Costs included initial and follow-up screening supplies, office supplies, screening/group presentation time, service provision time, documentation/preparation time, travel expenses, claims submission time, and administrative fees. Revenues included initial screening revenues, follow-up screening revenues, group session revenues, and Heart Smart program revenues. For the development and first year of Heart Smart, net benefit to the pharmacy (revenues minus costs) amounted to $2,413. All sensitivity analyses showed a net benefit. For pharmacies without a resident pharmacist, the net benefit was $106; for Heart Smart in a second employer, the net benefit was $6,024; for the second year, the projected net benefit was $6,844; factoring in a lower pharmacist salary, the net benefit was $2,905; and for a higher pharmacist salary, the net benefit was $1,265. For the development and first year of Heart Smart, the revenues of the wellness program in a self-insured company outweighed the costs.
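
    The cost-benefit and sensitivity calculations reported here reduce to net benefit = revenues minus costs, recomputed under alternative assumptions. A minimal sketch; the revenue and cost components below are hypothetical, chosen only so the differences reproduce some of the net benefits quoted in the abstract:

    ```python
    def net_benefit(revenues: float, costs: float) -> float:
        """Net benefit from the provider's perspective: revenues minus costs."""
        return revenues - costs

    # Hypothetical scenario inputs; only the resulting net benefits ($2,413 base case,
    # $106 without a resident pharmacist, $1,265 with a higher salary) are reported.
    scenarios = {
        "base case": (10_000.0, 7_587.0),
        "no resident pharmacist": (10_000.0, 9_894.0),
        "higher pharmacist salary": (10_000.0, 8_735.0),
    }

    for name, (rev, cost) in scenarios.items():
        print(f"{name}: net benefit = ${net_benefit(rev, cost):,.0f}")
    ```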

  18. Theoretical study on second-harmonic generation of focused vortex beams

    NASA Astrophysics Data System (ADS)

    Tang, Daolong; Wang, Jing; Ma, Jingui; Zhou, Bingjie; Yuan, Peng; Xie, Guoqiang; Zhu, Heyuan; Qian, Liejia

    2018-03-01

    Second-harmonic generation (SHG) provides a promising route for generating vortex beams of both short wavelength and large topological charge. Here we theoretically investigate the efficiency optimization and beam characteristics of focused vortex-beam SHG. Owing to their increased beam divergence, vortex beams have distinct features in SHG optimization compared with a Gaussian beam. We show that, under the noncritical phase-matching condition, the Boyd and Kleinman prediction of the optimal focusing parameter for Gaussian-beam SHG remains valid for vortex-beam SHG. Under the critical phase-matching condition, however, which is sensitive to the beam divergence, the Boyd and Kleinman prediction is no longer valid; instead, the optimal focusing parameter for maximizing the SHG efficiency strongly depends on the vortex order. We also investigate the effects of the focusing and phase-matching conditions on the second-harmonic beam characteristics.
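
    For context, the Boyd and Kleinman focusing optimum referred to above comes from maximizing the standard focusing function h(sigma, xi) over the phase-mismatch parameter sigma and the focusing parameter xi = L/b. A numerical sketch of that textbook Gaussian-beam result (not the vortex-beam extension developed in the paper, and ignoring walk-off and absorption):

    ```python
    import numpy as np

    def h(sigma: float, xi: float, n: int = 4000) -> float:
        """Boyd-Kleinman focusing function (no walk-off, no absorption)."""
        tau = np.linspace(-xi, xi, n)
        integrand = np.exp(1j * sigma * tau) / (1.0 + 1j * tau)
        integral = np.trapz(integrand, tau)
        return np.abs(integral) ** 2 / (4.0 * xi)

    # Scan xi and, for each xi, optimize over the phase-mismatch parameter sigma.
    xis = np.linspace(0.5, 6.0, 111)
    sigmas = np.linspace(0.0, 2.0, 201)
    h_opt = [max(h(s, x) for s in sigmas) for x in xis]

    best = xis[int(np.argmax(h_opt))]
    print(best)  # close to the classic Gaussian-beam optimum xi ~ 2.84
    ```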

  19. Sensitivity Analysis of Cf-252 (sf) Neutron and Gamma Observables in CGMF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Austin Lewis; Talou, Patrick; Stetcu, Ionel

    CGMF is a Monte Carlo code that simulates the decay of primary fission fragments by emission of neutrons and gamma rays, according to the Hauser-Feshbach equations. As the CGMF code was recently integrated into the MCNP6.2 transport code, great emphasis has been placed on providing optimal parameters to CGMF such that many different observables are accurately represented. Of these observables, the prompt neutron spectrum, prompt neutron multiplicity, prompt gamma spectrum, and prompt gamma multiplicity are crucial for accurate transport simulations of criticality and nonproliferation applications. This contribution to the ongoing efforts to improve CGMF presents a study of the sensitivity of various neutron and gamma observables to several input parameters for Californium-252 spontaneous fission. Among the most influential parameters are those that affect the input yield distributions in fragment mass and total kinetic energy (TKE). A new scheme for representing Y(A,TKE) was implemented in CGMF using three fission modes, S1, S2 and SL. The sensitivity profiles were calculated for 17 total parameters, which show that the neutron multiplicity distribution is strongly affected by the TKE distribution of the fragments. The total excitation energy (TXE) of the fragments is shared according to a parameter RT, which is defined as the ratio of the light to heavy initial temperatures. The sensitivity profile of the neutron multiplicity shows a second order effect of RT on the mean neutron multiplicity. A final sensitivity profile was produced for the parameter alpha, which affects the spin of the fragments. Higher values of alpha lead to higher fragment spins, which inhibit the emission of neutrons. Understanding the sensitivity of the prompt neutron and gamma observables to the many CGMF input parameters provides a platform for the optimization of these parameters.
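
    A sensitivity profile of the kind described (the change in an observable per unit change in an input parameter) is often estimated with central differences around nominal parameter values. A generic sketch, with a placeholder model function standing in for a CGMF run (no real CGMF interface is implied, and the parameter names are illustrative):

    ```python
    import numpy as np

    def observable(params: np.ndarray) -> float:
        """Placeholder for an expensive model evaluation, e.g. the mean prompt
        neutron multiplicity returned by a fission-event simulation."""
        return 3.76 + 0.4 * params[0] - 0.05 * params[1] ** 2  # toy response

    nominal = np.array([1.0, 1.2])     # illustrative stand-ins for RT and a spin parameter
    rel_step = 0.05                    # 5% relative perturbation

    sensitivities = []
    for i, p in enumerate(nominal):
        up, down = nominal.copy(), nominal.copy()
        up[i] += rel_step * p
        down[i] -= rel_step * p
        # Central-difference estimate of d(observable)/d(parameter_i)
        sensitivities.append((observable(up) - observable(down)) / (2 * rel_step * p))

    print(sensitivities)
    ```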

  20. Validation of voxel-based morphometry (VBM) based on MRI

    NASA Astrophysics Data System (ADS)

    Yang, Xueyu; Chen, Kewei; Guo, Xiaojuan; Yao, Li

    2007-03-01

    Voxel-based morphometry (VBM) is an automated and objective image analysis technique for detecting differences in the regional concentration or volume of brain tissue composition based on structural magnetic resonance (MR) images. VBM has been used widely to evaluate brain morphometric differences between different populations, but until now there has been no evaluation system for its validation. In this study, a quantitative and objective evaluation system was established in order to assess VBM performance. We recruited twenty normal volunteers (10 males and 10 females, age range 20-26 years, mean age 22.6 years). Firstly, several focal lesions (hippocampus, frontal lobe, anterior cingulate, back of hippocampus, back of anterior cingulate) were simulated in selected brain regions using real MRI data. Secondly, optimized VBM was performed to detect structural differences between groups. Thirdly, one-way ANOVA and post-hoc tests were used to assess the accuracy and sensitivity of the VBM analysis. The results revealed that VBM was a good detection tool in the majority of brain regions, even in regions such as the hippocampus that have been controversial in VBM studies. Generally speaking, the more severe the focal lesion, the better the VBM performance. However, the size of the focal lesion had little effect on the VBM analysis.
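
    The one-way ANOVA plus post-hoc comparison step can be sketched with SciPy; the group arrays below are random placeholders for VBM output measures from different lesion-severity groups, and the Bonferroni post-hoc choice is illustrative rather than the study's:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Placeholder "detected difference" measures for three lesion-severity groups.
    mild = rng.normal(0.10, 0.03, 20)
    moderate = rng.normal(0.18, 0.03, 20)
    severe = rng.normal(0.30, 0.03, 20)

    f_stat, p_value = stats.f_oneway(mild, moderate, severe)
    print(f_stat, p_value)

    # A simple post-hoc option: pairwise t-tests with a Bonferroni correction.
    pairs = [("mild", mild, "moderate", moderate),
             ("mild", mild, "severe", severe),
             ("moderate", moderate, "severe", severe)]
    for name_a, a, name_b, b in pairs:
        t, p = stats.ttest_ind(a, b)
        print(name_a, "vs", name_b, "corrected p =", min(1.0, p * len(pairs)))
    ```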

  1. Computational Identification and Analysis of the Key Biosorbent Characteristics for the Biosorption Process of Reactive Black 5 onto Fungal Biomass

    PubMed Central

    Yang, Yu-Yi; Li, Ze-Li; Wang, Guan; Zhao, Xiao-Ping; Crowley, David E.; Zhao, Yu-Hua

    2012-01-01

    The performances of nine biosorbents derived from dead fungal biomass were investigated for their ability to remove Reactive Black 5 from aqueous solution. The biosorption data for removal of Reactive Black 5 were readily modeled using the Langmuir adsorption isotherm. Kinetic analysis based on both pseudo-second-order and Weber-Morris models indicated that intraparticle diffusion was the rate-limiting step for biosorption of Reactive Black 5 onto the biosorbents. Sorption capacities of the biosorbents were not correlated with the initial biosorption rates. Sensitivity analysis of the factors affecting biosorption, examined by an artificial neural network model, showed that pH was the most important parameter, explaining 22%, followed by the nitrogen content of the biosorbents (16%), the initial dye concentration (15%), and the carbon content of the biosorbents (10%). The biosorption capacities were not proportional to the surface areas of the sorbents, but were instead influenced by their chemical element composition. The main functional groups contributing to dye sorption were amine, carboxylic, and alcohol moieties. The data further suggest that differences in the carbon and nitrogen contents of biosorbents may be used as a selection index for identifying effective biosorbents from dead fungal biomass. PMID:22442697
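
    The Langmuir-isotherm and pseudo-second-order fits mentioned above are standard nonlinear regressions; a brief sketch with placeholder data (not the study's measurements) is shown below:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(C, q_max, K):
        """Langmuir isotherm: equilibrium uptake as a function of dye concentration."""
        return q_max * K * C / (1.0 + K * C)

    def pseudo_second_order(t, q_e, k2):
        """Integrated pseudo-second-order kinetic model for uptake versus time."""
        return (k2 * q_e ** 2 * t) / (1.0 + k2 * q_e * t)

    # Placeholder equilibrium data: dye concentration (mg/L) vs. uptake (mg/g).
    C = np.array([10, 25, 50, 100, 200, 400], dtype=float)
    q = np.array([12, 25, 40, 55, 66, 72], dtype=float)

    (q_max, K), _ = curve_fit(langmuir, C, q, p0=(80.0, 0.01))
    print(q_max, K)
    ```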

  2. Spectroscopic, DFT and Z-scan supported investigation of dicyanoisophorone based push-pull NLOphoric styryl dyes

    NASA Astrophysics Data System (ADS)

    Erande, Yogesh; Sreenath, Mavila C.; Chitrambalam, Subramaniyan; Joe, Isaac H.; Sekar, Nagaiyan

    2017-04-01

    The dicyanoisophorone acceptor-based NLOphores with intramolecular charge transfer (ICT) character are newly synthesised, characterised, and explored for linear and nonlinear optical (NLO) properties. The strong ICT character of these D-π-A styryl NLOphores is established with the support of emission solvatochromism, polarity functions, and Generalised Mulliken Hush (GMH) analysis. The first-, second- and third-order polarizabilities of these NLOphores are investigated by spectroscopic and TD-DFT computational approaches using the CAM-B3LYP/6-311+G(d,p) method. BLA and BOA values of these chromophores are evaluated from ground- and excited-state optimized geometries, and the respective structures are found to approach the cyanine limit. The third-order nonlinear susceptibility (χ(3)), along with the nonlinear absorption coefficient (β) and nonlinear refraction (n2), are evaluated for these NLOphores using the Z-scan experiment. All four chromophores exhibit large polarization anisotropy (Δα), first-order hyperpolarizability (β0), second-order hyperpolarizability (γ), and third-order nonlinear susceptibility (χ(3)). TGA analysis showed these NLOphores are stable up to 320 °C and hence can be used in device fabrication.

  3. Investigation of magnetic and magneto-transport properties of ferromagnetic-charge ordered core-shell nanostructures

    NASA Astrophysics Data System (ADS)

    Das, Kalipada

    2017-10-01

    In the present study, we address in detail the magnetic and magneto-transport properties of ferromagnetic-charge-ordered core-shell nanostructures. In these core-shell nanostructures, well-known half-metallic La0.67Sr0.33MnO3 nanoparticles (average particle size ~20 nm) are wrapped by a charge-ordered antiferromagnetic Pr0.67Ca0.33MnO3 (PCMO) matrix. The intrinsic properties of PCMO are markedly modified in such a core-shell form: the otherwise robust charge-ordered state of the PCMO matrix becomes fragile and melts at an external magnetic field (H) of ~20 kOe. The analysis of the magneto-transport data indicates a systematic reduction of the electron-electron and electron-magnon interactions in the presence of an external magnetic field in these nanostructures. A pronounced training effect appears in this phase-separated compound, which was analyzed by considering second-order tunneling through the grain boundaries of the nanostructures. Additionally, the analysis of the low-field magnetoconductance data supports the second-order tunneling picture and yields a value close to the universal limit (~1.33).

  4. Electromagnetic field analysis and modeling of a relative position detection sensor for high speed maglev trains.

    PubMed

    Xue, Song; He, Ning; Long, Zhiqiang

    2012-01-01

    The long stator track for high speed maglev trains has a tooth-slot structure. The sensor obtains precise relative position information for the traction system by detecting the long stator tooth-slot structure based on nondestructive detection technology. The magnetic field modeling of the sensor is a typical three-dimensional (3-D) electromagnetic problem with complex boundary conditions, and is studied semi-analytically in this paper. A second-order vector potential (SOVP) is introduced to simplify the vector field problem to a scalar field one, the solution of which can be expressed in terms of series expansions according to Multipole Theory (MT) and the New Equivalent Source (NES) method. The coefficients of the expansions are determined by the least squares method based on the boundary conditions. Then, the solution is compared to the simulation result through Finite Element Analysis (FEA). The comparison results show that the semi-analytical solution agrees approximately with the numerical solution. Finally, based on electromagnetic modeling, a difference coil structure is designed to improve the sensitivity and accuracy of the sensor.

  5. Electromagnetic Field Analysis and Modeling of a Relative Position Detection Sensor for High Speed Maglev Trains

    PubMed Central

    Xue, Song; He, Ning; Long, Zhiqiang

    2012-01-01

    The long stator track for high speed maglev trains has a tooth-slot structure. The sensor obtains precise relative position information for the traction system by detecting the long stator tooth-slot structure based on nondestructive detection technology. The magnetic field modeling of the sensor is a typical three-dimensional (3-D) electromagnetic problem with complex boundary conditions, and is studied semi-analytically in this paper. A second-order vector potential (SOVP) is introduced to simplify the vector field problem to a scalar field one, the solution of which can be expressed in terms of series expansions according to Multipole Theory (MT) and the New Equivalent Source (NES) method. The coefficients of the expansions are determined by the least squares method based on the boundary conditions. Then, the solution is compared to the simulation result through Finite Element Analysis (FEA). The comparison results show that the semi-analytical solution agrees approximately with the numerical solution. Finally, based on electromagnetic modeling, a difference coil structure is designed to improve the sensitivity and accuracy of the sensor. PMID:22778652

  6. A Comprehensive Evaluation of a Two-Channel Portable Monitor to “Rule in” Obstructive Sleep Apnea

    PubMed Central

    Ward, Kim L.; McArdle, Nigel; James, Alan; Bremner, Alexandra P.; Simpson, Laila; Cooper, Matthew N.; Palmer, Lyle J.; Fedson, Annette C.; Mukherjee, Sutapa; Hillman, David R.

    2015-01-01

    Study Objectives: We hypothesized that a dual-channel portable monitor (PM) device could accurately identify patients who have a high pretest probability of obstructive sleep apnea (OSA), and we evaluated factors that may contribute to variability between PM and polysomnography (PSG) results. Methods: Consecutive clinic patients (N = 104) with possible OSA completed a home PM study, a PM study simultaneous with laboratory PSG, and a second home PM study. Uniform data analysis methods were applied to both PM and PSG data. Primary outcomes of interest were the positive likelihood ratio (LR+) and sensitivity of the PM device to “rule-in” OSA, defined as an apnea-hypopnea index (AHI) ≥ 5 events/h on PSG. Effects of different test environment and study nights, and order of study and analysis methods (manual compared to automated) on PM diagnostic accuracy were assessed. Results: The PM has adequate LR+ (4.8), sensitivity (80%), and specificity (83%) for detecting OSA in the unattended home setting when benchmarked against laboratory PSG, with better LR+ (> 5) and specificity (100%) and unchanged sensitivity (80%) in the simultaneous laboratory comparison. There were no significant night-night (all p > 0.10) or study order effects (home or laboratory first, p = 0.08) on AHI measures. Manual PM data review improved case finding accuracy, although this was not statistically significant (all p > 0.07). Misclassification was more frequent where OSA was mild. Conclusions: Overall performance of the PM device is consistent with current recommended criteria for an “acceptable” device to confidently “rule-in” OSA (AHI ≥ 5 events/h) in a high pretest probability clinic population. Our data support the utility of simple two-channel diagnostic devices to confirm the diagnosis of OSA in the home environment. Commentary: A commentary on this article appears in this issue on page 411. Citation: Ward KL, McArdle N, James A, Bremner AP, Simpson L, Cooper MN, Palmer LJ, Fedson AC, Mukherjee S, Hillman DR. A comprehensive evaluation of a two-channel portable monitor to “rule in” obstructive sleep apnea. J Clin Sleep Med 2015;11(4):433–444. PMID:25580606
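
    The headline statistics (sensitivity, specificity, LR+) follow directly from a 2x2 table of PM results against the PSG reference. A small sketch with hypothetical counts, chosen only so the derived values match the percentages reported above (the underlying counts are not given in the abstract):

    ```python
    # Hypothetical 2x2 counts: PM result vs. PSG reference (AHI >= 5 events/h).
    true_pos, false_neg = 64, 16    # OSA on PSG
    false_pos, true_neg = 4, 20     # no OSA on PSG

    sensitivity = true_pos / (true_pos + false_neg)          # 0.80
    specificity = true_neg / (true_neg + false_pos)          # ~0.83
    lr_positive = sensitivity / (1.0 - specificity)          # ~4.8

    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, LR+={lr_positive:.1f}")
    ```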

  7. Material and morphology parameter sensitivity analysis in particulate composite materials

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyu; Oskay, Caglar

    2017-12-01

    This manuscript presents a novel parameter sensitivity analysis framework for damage and failure modeling of particulate composite materials subjected to dynamic loading. The proposed framework employs global sensitivity analysis to study the variance in the failure response as a function of model parameters. In view of the computational complexity of performing thousands of detailed microstructural simulations to characterize sensitivities, Gaussian process (GP) surrogate modeling is incorporated into the framework. In order to capture the discontinuity in response surfaces, the GP models are integrated with a support vector machine classification algorithm that identifies the discontinuities within response surfaces. The proposed framework is employed to quantify variability and sensitivities in the failure response of polymer bonded particulate energetic materials under dynamic loads to material properties and morphological parameters that define the material microstructure. Particular emphasis is placed on the identification of sensitivity to interfaces between the polymer binder and the energetic particles. The proposed framework has been demonstrated to identify the most consequential material and morphological parameters under vibrational and impact loads.
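
    A minimal sketch of the surrogate-plus-classifier idea, assuming scikit-learn and a toy response with a discontinuity; none of the actual material or morphology parameters of the study are represented, and the routing scheme is a simplification of the framework described:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    X = rng.uniform(0.0, 1.0, size=(200, 2))             # two normalized input parameters
    failed = X[:, 0] + 0.5 * X[:, 1] > 0.9               # toy failure criterion (discontinuity)
    y = np.where(failed, 1.0 + X[:, 0], 0.1 * X[:, 1])   # response jumps across the boundary

    # 1) Classify which side of the discontinuity a sample falls on.
    classifier = SVC(kernel="rbf").fit(X, failed)

    # 2) Fit a separate GP surrogate on each side of the boundary.
    gp_safe = GaussianProcessRegressor().fit(X[~failed], y[~failed])
    gp_fail = GaussianProcessRegressor().fit(X[failed], y[failed])

    # Predict a new point by routing it through the classifier first.
    x_new = np.array([[0.7, 0.6]])
    gp = gp_fail if classifier.predict(x_new)[0] else gp_safe
    print(gp.predict(x_new))
    ```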

  8. Sensitivity tests to define the source apportionment performance criteria in the DeltaSA tool

    NASA Astrophysics Data System (ADS)

    Pernigotti, Denise; Belis, Claudio A.

    2017-04-01

    Identification and quantification of the contribution of emission sources in a given area is a key task for the design of abatement strategies. Moreover, European member states are obliged to report this kind of information for zones where pollution levels exceed the limit values. At present, little is known about the performance and uncertainty of the variety of methodologies used for source apportionment and about the comparability between the results of studies using different approaches. The source apportionment Delta (DeltaSA) is a tool developed by the EC-JRC to support particulate matter source apportionment modellers in the identification of sources (for factor analysis studies) and/or in the measurement of their performance. Source identification is performed by the tool by measuring the proximity of any user chemical profile to preloaded repository data (SPECIATE and SPECIEUROPE). The model performance criteria are based on standard statistical indexes calculated by comparing participants' source contribution estimates and their time series with preloaded reference data. These preloaded data come from previous European SA intercomparison exercises: the first with real-world data (22 participants), the second with synthetic data (25 participants), and the last with real-world data, which was also extended to Chemical Transport Models (38 receptor models and 4 CTMs). The references used for the model performance assessment are 'true' values (predefined by JRC) for the synthetic exercise, while they are calculated as the ensemble average of the participants' results in the real-world intercomparisons. The candidates used for each source ensemble reference calculation were selected among participants' results on the basis of a number of consistency checks plus the similarity of their chemical profiles to the measured data in the repository. The estimation of the ensemble reference uncertainty is crucial in order to evaluate the users' performances against it. For this reason, a sensitivity analysis on different methods to estimate the ensemble references' uncertainties was performed by re-analyzing the synthetic intercomparison dataset, the only one where 'true' reference and ensemble reference contributions were both present. DeltaSA is now available on-line and will be presented together with a critical discussion of the sensitivity analysis on the ensemble reference uncertainty. In particular, the degree of mutual agreement among participants on the presence of a given source should be taken into account. Moreover, the importance of synthetic intercomparisons for catching common biases of receptor models will also be stressed.

  9. Integrated multidisciplinary design optimization using discrete sensitivity analysis for geometrically complex aeroelastic configurations

    NASA Astrophysics Data System (ADS)

    Newman, James Charles, III

    1997-10-01

    The first two steps in the development of an integrated multidisciplinary design optimization procedure capable of analyzing the nonlinear fluid flow about geometrically complex aeroelastic configurations have been accomplished in the present work. For the first step, a three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed. The advantage of unstructured grids, when compared with a structured-grid approach, is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the time-dependent, nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional cases and a Gauss-Seidel algorithm for the three-dimensional cases; at steady-state, similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Various surface parameterization techniques have been employed in the current study to control the shape of the design surface. Once this surface has been deformed, the interior volume of the unstructured grid is adapted by considering the mesh as a system of interconnected tension springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR, an advanced automatic-differentiation software tool. To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization have been performed for several two- and three-dimensional cases. In two dimensions, an initially symmetric NACA-0012 airfoil and a high-lift multielement airfoil were examined. For the three-dimensional configurations, an initially rectangular wing with uniform NACA-0012 cross-sections was optimized; in addition, a complete Boeing 747-200 aircraft was studied. Furthermore, the current study also examines the effect of inconsistency in the order of spatial accuracy between the nonlinear fluid and linear shape sensitivity equations. The second step was to develop a computationally efficient, high-fidelity, integrated static aeroelastic analysis procedure. To accomplish this, a structural analysis code was coupled with the aforementioned unstructured grid aerodynamic analysis solver. The use of an unstructured grid scheme for the aerodynamic analysis enhances the interaction compatibility with the wing structure. The structural analysis utilizes finite elements to model the wing so that accurate structural deflections may be obtained. In the current work, parameters have been introduced to control the interaction of the computational fluid dynamics and structural analyses; these control parameters permit extremely efficient static aeroelastic computations. To demonstrate and evaluate this procedure, static aeroelastic analysis results for a flexible wing in low subsonic, high subsonic (subcritical), transonic (supercritical), and supersonic flow conditions are presented.
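
    The preconditioned GMRES step referred to above is available off the shelf in SciPy; a tiny sketch on a small sparse test system (not the actual linearized Euler or sensitivity equations) is shown below:

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres, spilu, LinearOperator

    # Small nonsymmetric sparse system standing in for a linearized flow/sensitivity system.
    n = 500
    A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # Simple ILU preconditioner, applied through a LinearOperator.
    ilu = spilu(A)
    M = LinearOperator((n, n), matvec=ilu.solve)

    x, info = gmres(A, b, M=M)
    print(info, np.linalg.norm(A @ x - b))  # info == 0 indicates convergence
    ```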

  10. Impact and cost-effectiveness of a second tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis (Tdap) vaccine dose to prevent pertussis in the United States.

    PubMed

    Kamiya, Hajime; Cho, Bo-Hyun; Messonnier, Mark L; Clark, Thomas A; Liang, Jennifer L

    2016-04-04

    The United States experienced a substantial increase in reported pertussis cases over the last decade. Since 2005, persons 11 years and older have been routinely recommended to receive a single dose of tetanus toxoid, reduced diphtheria toxoid and acellular pertussis (Tdap) vaccine. The objective of this analysis was to evaluate the potential impact and cost-effectiveness of recommending a second dose of Tdap. A static cohort model was used to calculate the epidemiologic and economic impact of adding a second dose of Tdap at age 16 or 21 years. Projected costs and outcomes were examined from a societal perspective over a 20-year period. Quality-adjusted life years (QALYs) saved were calculated. Using baseline pertussis incidence from the National Notifiable Diseases Surveillance System, Tdap revaccination at either age 16 or 21 years would reduce outpatient visits by 433 (5%) and 285 (4%), and hospitalization cases by 7 (7%) and 5 (5%), respectively. The costs per QALY saved with a second dose of Tdap were approximately US $19.7 million (16 years) and $26.2 million (21 years). In sensitivity analyses, incidence most influenced the model; as incidence increased, the costs per QALY decreased. To a lesser degree, initial vaccine effectiveness and waning of effectiveness also affected cost outcomes. Multivariate sensitivity analyses showed that under a set of optimistic assumptions, the cost per QALY saved would be approximately $163,361 (16 years) and $204,556 (21 years). A second dose of Tdap resulted in a slight decrease in the number of cases and other outcomes, and that trend is more apparent when revaccinating at age 16 years than at age 21 years. Both revaccination strategies had a high cost per QALY saved even under optimistic assumptions in a multivariate sensitivity analysis. Published by Elsevier Ltd.
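
    The central cost-effectiveness quantity is the ratio of incremental cost to incremental QALYs gained. A toy sketch of that calculation; the inputs below are placeholders (only the resulting cost-per-QALY figures appear in the abstract), and the incidence loop merely mimics the direction of the one-way sensitivity analysis:

    ```python
    def cost_per_qaly(incremental_cost: float, qalys_gained: float) -> float:
        """Incremental cost-effectiveness ratio for the added vaccine dose."""
        return incremental_cost / qalys_gained

    # Placeholder inputs; scaling incidence up or down scales the QALYs saved,
    # so a higher incidence lowers the cost per QALY, as reported.
    for incidence_multiplier in (0.5, 1.0, 2.0):
        qalys = 5.0 * incidence_multiplier      # hypothetical QALYs saved by the program
        cost = 98_500_000.0                     # hypothetical net program cost
        print(incidence_multiplier, round(cost_per_qaly(cost, qalys)))
    ```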

  11. Diagnostic and prognostic value of baseline FDG PET/CT skeletal textural features in diffuse large B cell lymphoma.

    PubMed

    Aide, Nicolas; Talbot, Marjolaine; Fruchart, Christophe; Damaj, Gandhi; Lasnon, Charline

    2018-05-01

    Our purpose was to evaluate the diagnostic and prognostic value of skeletal textural features (TFs) on baseline FDG PET in diffuse large B cell lymphoma (DLBCL) patients. Eighty-two patients with DLBCL who underwent a bone marrow biopsy (BMB) and a PET scan between December 2008 and December 2015 were included. Two readers blinded to the BMB results visually assessed PET images for bone marrow involvement (BMI) in consensus, and a third observer drew a volume of interest (VOI) encompassing the axial skeleton and the pelvis, which was used to assess skeletal TFs. ROC analysis was used to determine the best TF able to diagnose BMI among four first-order, six second-order and 11 third-order metrics, which was then compared for diagnosis and prognosis in disease-free patients (BMB-/PET-) versus patients considered to have BMI (BMB+/PET-, BMB-/PET+, and BMB+/PET+). Twenty-two out of 82 patients (26.8%) had BMI: 13 BMB-/PET+, eight BMB+/PET+ and one BMB+/PET-. Among the nine BMB+ patients, one had discordant BMI identified by both visual and TF PET assessment. ROC analysis showed that SkewnessH, a first-order metric, was the best parameter for identifying BMI with sensitivity and specificity of 81.8% and 81.7%, respectively. SkewnessH demonstrated better discriminative power over BMB and PET visual analysis for patient stratification: hazard ratios (HR), 3.78 (P = 0.02) versus 2.81 (P = 0.06) for overall survival (OS) and HR, 3.17 (P = 0.03) versus 1.26 (P = 0.70) for progression-free survival (PFS). In multivariate analysis accounting for IPI score, bulky status, haemoglobin and SkewnessH, the only independent predictor of OS was the IPI score, while the only independent predictor of PFS was SkewnessH. The better discriminative power of skeletal heterogeneity for risk stratification compared to BMB and PET visual analysis in the overall population, and more specifically in BMB-/PET- patients, suggests that it can be useful to identify diagnostically overlooked BMI.
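
    Extracting a first-order texture metric such as skewness from the voxels inside the skeletal VOI, and choosing its cut-off by ROC analysis, can be sketched as follows; the data are random placeholders, the labels are toy stand-ins for bone-marrow involvement, and scikit-learn is assumed for the ROC step:

    ```python
    import numpy as np
    from scipy.stats import skew
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(3)

    # Placeholder data: 82 patients, each with a histogram of SUVs inside the skeletal VOI.
    labels = rng.integers(0, 2, 82)                    # toy bone-marrow involvement labels
    patients = [rng.gamma(shape=5.0 - 3.0 * y, scale=0.5, size=5000) for y in labels]

    # First-order texture feature: skewness of each patient's voxel-intensity histogram.
    skewness = np.array([skew(p) for p in patients])

    auc = roc_auc_score(labels, skewness)
    fpr, tpr, thresholds = roc_curve(labels, skewness)
    best = np.argmax(tpr - fpr)                        # Youden index for the cut-off
    print(auc, thresholds[best])
    ```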

  12. An Investigation of the Dynamic Response of a Seismically Stable Platform

    DTIC Science & Technology

    1982-08-01

    PAD. The controls on the system are of two types. A low frequency tilt control, with a 10 arc second sensitivity, 2-axis tiltmeter as sensor ... Inertial Sensors Structural Analysis Holloman AFB, N.M. Support to this effort includes structural analyses toward active servo frequency band. This report ... controlled to maintain a null position of a sensitive height sensor. The 6-degree-of-freedom high frequency controls are based on seismometers as sensors

  13. Multi-Fault Detection of Rolling Element Bearings under Harsh Working Condition Using IMF-Based Adaptive Envelope Order Analysis

    PubMed Central

    Zhao, Ming; Lin, Jing; Xu, Xiaoqiang; Li, Xuejun

    2014-01-01

    When operating under harsh conditions (e.g., time-varying speed and load, large shocks), the vibration signals of rolling element bearings typically exhibit a low signal-to-noise ratio and non-stationary statistical parameters, which cause difficulties for current diagnostic methods. As such, an IMF-based adaptive envelope order analysis (IMF-AEOA) is proposed for bearing fault detection under such conditions. This approach is established by combining ensemble empirical mode decomposition (EEMD), envelope order tracking, and fault-sensitive analysis. In this scheme, EEMD provides an effective way to adaptively decompose the raw vibration signal into IMFs with different frequency bands. Envelope order tracking is further employed to transform the envelope of each IMF to the angular domain to eliminate the spectral smearing induced by speed variation, which makes the bearing characteristic frequencies clearer and more discernible in the envelope order spectrum. Finally, a fault-sensitive matrix is established to select the optimal IMF containing the richest diagnostic information for final decision making. The effectiveness of IMF-AEOA is validated by simulated signals and experimental data from locomotive bearings. The results show that IMF-AEOA can accurately identify both single and multiple bearing faults even under time-varying rotating speed and large extraneous shocks. PMID:25353982
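
    The envelope order tracking step (envelope of an IMF, resampled from the time domain to the shaft-angle domain) can be sketched with SciPy's Hilbert transform; EEMD itself is omitted, and the signal, sampling rate, and speed profile below are synthetic placeholders:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 20_000                                   # sampling rate (Hz), illustrative
    t = np.arange(0, 2.0, 1 / fs)
    speed_hz = 20.0 + 5.0 * t                     # linearly varying shaft speed
    shaft_angle = 2 * np.pi * np.cumsum(speed_hz) / fs   # instantaneous shaft angle (rad)

    # Stand-in for one IMF: an amplitude-modulated resonance excited by bearing impacts.
    imf = (1.0 + 0.5 * np.cos(3.5 * shaft_angle)) * np.sin(2 * np.pi * 3000 * t)

    # Envelope via the Hilbert transform, then resampling onto a uniform angle grid
    # removes the spectral smearing caused by the speed variation.
    envelope = np.abs(hilbert(imf))
    uniform_angle = np.linspace(shaft_angle[0], shaft_angle[-1], len(shaft_angle))
    envelope_angular = np.interp(uniform_angle, shaft_angle, envelope)

    # Envelope order spectrum: the frequency axis is in orders (events per revolution).
    spectrum = np.abs(np.fft.rfft(envelope_angular - envelope_angular.mean()))
    d_rev = (uniform_angle[1] - uniform_angle[0]) / (2 * np.pi)
    orders = np.fft.rfftfreq(len(envelope_angular), d=d_rev)
    print(orders[np.argmax(spectrum)])            # near the 3.5x modulation order
    ```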

  14. Second-Order Analysis of Semiparametric Recurrent Event Processes

    PubMed Central

    Guan, Yongtao

    2011-01-01

    Summary A typical recurrent event dataset consists of an often large number of recurrent event processes, each of which contains multiple event times observed from an individual during a followup period. Such data have become increasingly available in medical and epidemiological studies. In this paper, we introduce novel procedures to conduct second-order analysis for a flexible class of semiparametric recurrent event processes. Such an analysis can provide useful information regarding the dependence structure within each recurrent event process. Specifically, we will use the proposed procedures to test whether the individual recurrent event processes are all Poisson processes and to suggest sensible alternative models for them if they are not. We apply these procedures to a well-known recurrent event dataset on chronic granulomatous disease and an epidemiological dataset on Meningococcal disease cases in Merseyside, UK to illustrate their practical value. PMID:21361885

  15. Background-free, high sensitivity staining of proteins in one- and two-dimensional sodium dodecyl sulfate-polyacrylamide gels using a luminescent ruthenium complex.

    PubMed

    Berggren, K; Chernokalskaya, E; Steinberg, T H; Kemper, C; Lopez, M F; Diwu, Z; Haugland, R P; Patton, W F

    2000-07-01

    SYPRO Ruby dye is a permanent stain comprised of ruthenium as part of an organic complex that interacts noncovalently with proteins. SYPRO Ruby Protein Gel Stain provides a sensitive, gentle, fluorescence-based method for detecting proteins in one-dimensional and two-dimensional sodium dodecyl sulfate-polyacrylamide gels. Proteins are fixed, stained for 3 h to overnight, and then rinsed in deionized water or dilute methanol/acetic acid solution for 30 min. The stain can be visualized using a wide range of excitation sources commonly used in image analysis systems, including a 302 nm UV-B transilluminator, 473 nm second harmonic generation (SHG) laser, 488 nm argon-ion laser, 532 nm yttrium-aluminum-garnet (YAG) laser, xenon arc lamp, blue fluorescent light bulb, or blue light-emitting diode (LED). The sensitivity of SYPRO Ruby Protein Gel Stain is superior to colloidal Coomassie Brilliant Blue (CBB) stain or monobromobimane labeling and comparable with the highest-sensitivity silver or zinc-imidazole staining procedures available. The linear dynamic range of SYPRO Ruby Protein Gel Stain extends over three orders of magnitude, which is vastly superior to silver, zinc-imidazole, monobromobimane, and CBB stains. The fluorescent stain does not contain superfluous chemicals (formaldehyde, glutaraldehyde, Tween-20) that frequently interfere with peptide identification in mass spectrometry. While peptide mass profiles are severely altered in protein samples prelabeled with monobromobimane, successful identification of proteins by peptide mass profiling using matrix-assisted laser desorption/ionization mass spectrometry was easily performed after protein detection with SYPRO Ruby Protein Gel Stain.

  16. Quantifying Parameter Sensitivity, Interaction and Transferability in Hydrologically Enhanced Versions of Noah-LSM over Transition Zones

    NASA Technical Reports Server (NTRS)

    Rosero, Enrique; Yang, Zong-Liang; Wagener, Thorsten; Gulden, Lindsey E.; Yatheendradas, Soni; Niu, Guo-Yue

    2009-01-01

    We use sensitivity analysis to identify the parameters that are most responsible for shaping land surface model (LSM) simulations and to understand the complex interactions in three versions of the Noah LSM: the standard version (STD), a version enhanced with a simple groundwater module (GW), and a version augmented with a dynamic phenology module (DV). We use warm-season, high-frequency, near-surface states and turbulent fluxes collected over nine sites in the US Southern Great Plains. We quantify changes in the pattern of sensitive parameters, the amount and nature of the interaction between parameters, and the covariance structure of the distribution of behavioral parameter sets. Using Sobol's total and first-order sensitivity indices, we show that very few parameters directly control the variance of the model output. Significant parameter interaction occurs, so that not only do the optimal parameter values differ between models, but the relationships between parameters change. GW decreases parameter interaction and appears to improve model realism, especially at wetter sites. DV increases parameter interaction and decreases identifiability, implying it is overparameterized and/or underconstrained. A case study at a wet site shows GW has two functional modes: one that mimics STD and a second in which GW improves model function by decoupling direct evaporation and baseflow. Unsupervised classification of the posterior distributions of behavioral parameter sets cannot group similar sites based solely on soil or vegetation type, helping to explain why transferability between sites and models is not straightforward. This evidence suggests a priori assignment of parameters should also consider climatic differences.
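
    Sobol total and first-order indices of the kind used here are commonly computed with the SALib package; a small sketch on a toy function (not Noah-LSM, with made-up parameter names and ranges) is given below, assuming SALib's Saltelli sampler:

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Toy problem standing in for a land-surface model parameter set.
    problem = {
        "num_vars": 3,
        "names": ["soil_depth", "stomatal_resistance", "roughness_length"],
        "bounds": [[0.1, 2.0], [20.0, 300.0], [0.01, 0.5]],
    }

    X = saltelli.sample(problem, 1024)

    # Placeholder model: a flux-like response with an interaction term.
    Y = 2.0 * X[:, 0] + 0.01 * X[:, 1] + 5.0 * X[:, 0] * X[:, 2]

    Si = sobol.analyze(problem, Y)
    print(Si["S1"])   # first-order indices
    print(Si["ST"])   # total-order indices
    ```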

  17. The effect of chemical information on the spatial distribution of fruit flies: II Parameterization, calibration, and sensitivity.

    PubMed

    de Gee, Maarten; Lof, Marjolein E; Hemerik, Lia

    2008-10-01

    In a companion paper (Lof et al., in Bull. Math. Biol., 2008), we describe a spatio-temporal model for insect behavior. This model includes chemical information for finding resources and conspecifics. As a model species, we used Drosophila melanogaster, because its behavior is documented comparatively well. We divide a population of Drosophila into three states: moving, searching, and settled. Our model describes the number of flies in each state, together with the concentrations of food odor and aggregation pheromone, in time and in two spatial dimensions. Thus, the model consists of 5 spatio-temporal dependent variables, together with their constituting relations. Although we tried to use the simplest submodels for the separate variables, the parameterization of the spatial model turned out to be quite difficult, even for this well-studied species. In the first part of this paper, we discuss the relevant results from the literature, and their possible implications for the parameterization of our model. Here, we focus on three essential aspects of modeling insect behavior. First, there is the fundamental discrepancy between the (lumped) measured behavioral properties (i.e., fruit fly displacements) and the (detailed) properties of the underlying mechanisms (i.e., dispersivity, sensory perception, and state transition) that are adopted as explanation. Detailed quantitative studies on insect behavior when reacting to infochemicals are scarce. Some information on dispersal can be used, but quantitative data on the transition between the three states could not be found. Second, a dose-response relation as used in human perception research is not available for the response of the insects to infochemicals; the behavioral response relations are known mostly in a qualitative manner, and the quantitative information that is available does not depend on infochemical concentration. We show how a commonly used Michaelis-Menten type dose-response relation (incorporating a saturation effect) can be adapted to the use of two different but interrelated stimuli (food odors and aggregation pheromone). Although we use all available information for its parameterization, this model is still overparameterized. Third, the spatio-temporal dispersion of infochemicals is hard to model: Modeling turbulent dispersal on a length scale of 10 m is notoriously difficult. Moreover, we have to reduce this inherently three-dimensional physical process to two dimensions in order to fit in the two-dimensional model for the insects. We investigate the consequences of this dimension reduction, and we demonstrate that it seriously affects the parameterization of the model for the infochemicals. In the second part of this paper, we present the results of a sensitivity analysis. This sensitivity analysis can be used in two manners: firstly, it tells us how general the simulation results are if variations in the parameters are allowed, and secondly, we can use it to infer which parameters need more precise quantification than is available now. It turns out that the short term outcome of our model is most sensitive to the food odor production rate and the fruit fly dispersivity. For the other parameters, the model is quite robust. The dependence of the model outcome with respect to the qualitative model choices cannot be investigated with a parameter sensitivity analysis. We conclude by suggesting some experimental setups that may contribute to answering this question.
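
    As an illustration of the kind of adaptation discussed (not the authors' actual functional form), a Michaelis-Menten-type response can be extended to two interrelated stimuli by letting both contribute to the same saturating term; everything below, including the parameter names and values, is hypothetical:

    ```python
    def response(food_odor: float, pheromone: float,
                 r_max: float = 1.0, half_sat: float = 0.5, weight: float = 2.0) -> float:
        """Hypothetical saturating response to two interrelated stimuli.

        Both the food-odor and (weighted) pheromone concentrations feed into one
        saturating term, so the response cannot exceed r_max no matter how strong
        the combined stimulus becomes.
        """
        combined = food_odor + weight * pheromone
        return r_max * combined / (half_sat + combined)

    # The response saturates as either stimulus grows.
    for f, p in [(0.0, 0.0), (0.2, 0.0), (0.2, 0.2), (5.0, 5.0)]:
        print(f, p, round(response(f, p), 3))
    ```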

  18. PTT analysis of polyps from FAP patients reveals a great majority of APC truncating mutations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luijt, R.B. van der; Khan, P.M.; Tops, C.M.J.

    The adenomatous polyposis coli (APC) gene plays an important role in colorectal carcinogenesis. Germline APC mutations are associated with familial adenomatous polyposis (FAP), an autosomal dominantly inherited predisposition to colorectal cancer, characterized by the development of numerous adenomatous polyps in the large intestine. In order to investigate whether somatic inactivation of the remaining APC allele is necessary for adenoma formation, we collected multiple adenomatous polyps from individual FAP patients and investigated the presence of somatic mutations in the APC gene. The analysis of somatic APC mutations in these tumor samples was performed using a rapid and sensitive assay, called the protein truncation test (PTT). Chain-terminating somatic APC mutations were detected in the great majority of the tumor samples investigated. As expected, these mutations were mainly located in the mutation cluster region (MCR) in exon 15. Our results confirm that somatic mutation of the second APC allele is required for adenoma formation in FAP. Interestingly, in the polyps investigated in our study, the second APC allele is somatically inactivated through point mutation leading to a stop codon rather than by loss of heterozygosity. The observation that somatic second hits in APC are required for tumor development in FAP is in apparent accordance with the Knudson hypothesis for classical tumor suppressor genes. However, it is yet unknown whether chain-terminating APC mutations lead to a truncated protein exerting a dominant-negative effect or whether these mutations result in a null allele. Further investigation of this important issue will hopefully provide a better understanding of the mechanism of action of the mutated APC alleles in colorectal carcinogenesis.

  19. Biomechanical Measures During Landing and Postural Stability Predict Second Anterior Cruciate Ligament Injury After Anterior Cruciate Ligament Reconstruction and Return to Sport

    PubMed Central

    Paterno, Mark V.; Schmitt, Laura C.; Ford, Kevin R.; Rauh, Mitchell J.; Myer, Gregory D.; Huang, Bin; Hewett, Timothy E.

    2016-01-01

    Background: Athletes who return to sport participation after anterior cruciate ligament reconstruction (ACLR) have a higher risk of a second anterior cruciate ligament injury (either reinjury or contralateral injury) compared with non–anterior cruciate ligament–injured athletes. Hypotheses: Prospective measures of neuromuscular control and postural stability after ACLR will predict relative increased risk for a second anterior cruciate ligament injury. Study Design: Cohort study (prognosis); Level of evidence, 2. Methods: Fifty-six athletes underwent a prospective biomechanical screening after ACLR using 3-dimensional motion analysis during a drop vertical jump maneuver and postural stability assessment before return to pivoting and cutting sports. After the initial test session, each subject was followed for 12 months for occurrence of a second anterior cruciate ligament injury. Lower extremity joint kinematics, kinetics, and postural stability were assessed and analyzed. Analysis of variance and logistic regression were used to identify predictors of a second anterior cruciate ligament injury. Results: Thirteen athletes suffered a subsequent second anterior cruciate ligament injury. Transverse plane hip kinetics and frontal plane knee kinematics during landing, sagittal plane knee moments at landing, and deficits in postural stability predicted a second injury in this population (C statistic = 0.94) with excellent sensitivity (0.92) and specificity (0.88). Specific predictive parameters included an increase in total frontal plane (valgus) movement, greater asymmetry in internal knee extensor moment at initial contact, and a deficit in single-leg postural stability of the involved limb, as measured by the Biodex stability system. Hip rotation moment independently predicted second anterior cruciate ligament injury (C = 0.81) with high sensitivity (0.77) and specificity (0.81). Conclusion: Altered neuromuscular control of the hip and knee during a dynamic landing task and postural stability deficits after ACLR are predictors of a second anterior cruciate ligament injury after an athlete is released to return to sport. PMID:20702858
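
    The prediction step (logistic regression with the C statistic, i.e. the area under the ROC curve, as the performance measure) can be sketched with scikit-learn on synthetic data; the feature comments mirror the abstract's predictors, but the values and the toy outcome model are placeholders:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)
    n = 56

    # Synthetic stand-ins for the reported predictors.
    X = np.column_stack([
        rng.normal(0, 1, n),   # frontal-plane (valgus) knee motion at landing
        rng.normal(0, 1, n),   # asymmetry in knee extensor moment at initial contact
        rng.normal(0, 1, n),   # hip rotation moment
        rng.normal(0, 1, n),   # single-leg postural stability deficit
    ])
    # Toy outcome: second ACL injury, loosely driven by the first and third predictors.
    p = 1 / (1 + np.exp(-(-1.5 + 1.2 * X[:, 0] + 0.9 * X[:, 2])))
    y = rng.binomial(1, p)

    model = LogisticRegression().fit(X, y)
    c_statistic = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(c_statistic)
    ```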

  20. System parameter identification from projection of inverse analysis

    NASA Astrophysics Data System (ADS)

    Liu, K.; Law, S. S.; Zhu, X. Q.

    2017-05-01

    The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is re-visited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbations can be identified with better accuracy compared with the conventional response sensitivity-based method.
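
    A minimal numpy sketch of the projected inverse-sensitivity idea: build the sensitivity matrix by finite differences, project the identification equation onto the leading principal components of analytical outputs from the known model, and solve for the parameter perturbation by least squares. The model function and dimensions are placeholders, and this single-pass solve omits the iterative model updating described in the paper:

    ```python
    import numpy as np

    def model_output(theta: np.ndarray) -> np.ndarray:
        """Placeholder system model mapping parameters to a response vector."""
        t = np.linspace(0, 1, 200)
        return theta[0] * np.sin(8 * t) + theta[1] * t ** 2 + theta[2] * np.exp(-3 * t)

    rng = np.random.default_rng(5)
    theta0 = np.array([1.0, 0.5, 2.0])                  # nominal (known) parameters
    y0 = model_output(theta0)

    # Sensitivity matrix from first-order finite differences (first-order Taylor series).
    eps = 1e-6
    S = np.column_stack([(model_output(theta0 + eps * e) - y0) / eps
                         for e in np.eye(theta0.size)])

    # Principal components of analytical outputs computed from the known model.
    ensemble = np.column_stack([model_output(theta0 + 0.1 * rng.standard_normal(3)) - y0
                                for _ in range(50)])
    U, _, _ = np.linalg.svd(ensemble, full_matrices=False)
    P = U[:, :3]                                        # retained principal components

    # "Measured" response of the perturbed system, with a little noise.
    true_dtheta = np.array([0.05, -0.02, 0.08])
    y_meas = model_output(theta0 + true_dtheta) + rng.normal(0, 1e-3, y0.size)

    # Identification equation projected onto the principal-component subspace.
    dtheta, *_ = np.linalg.lstsq(P.T @ S, P.T @ (y_meas - y0), rcond=None)
    print(dtheta)                                       # close to true_dtheta
    ```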
