Sample records for equivalent estimation second-order

  1. Unbalanced and Minimal Point Equivalent Estimation Second-Order Split-Plot Designs

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey

    2007-01-01

Restricting the randomization of hard-to-change factors in industrial experiments is often performed by employing a split-plot design structure. From an economic perspective, these designs minimize the experimental cost by reducing the number of resets of the hard-to-change factors. In this paper, unbalanced designs are considered for cases where the subplots are relatively expensive and the experimental apparatus accommodates an unequal number of runs per whole plot. We provide construction methods for unbalanced second-order split-plot designs that possess the equivalent estimation optimality property, providing best linear unbiased estimates of the parameters independent of the variance components. Unbalanced versions of the central composite and Box-Behnken designs are developed. For cases where the subplot cost approaches the whole-plot cost, minimal point designs are proposed and illustrated with a split-plot Notz design.

  2. Comparing Fit and Reliability Estimates of a Psychological Instrument Using Second-Order CFA, Bifactor, and Essentially Tau-Equivalent (Coefficient Alpha) Models via AMOS 22

    ERIC Educational Resources Information Center

    Black, Ryan A.; Yang, Yanyun; Beitra, Danette; McCaffrey, Stacey

    2015-01-01

    Estimation of composite reliability within a hierarchical modeling framework has recently become of particular interest given the growing recognition that the underlying assumptions of coefficient alpha are often untenable. Unfortunately, coefficient alpha remains the prominent estimate of reliability when estimating total scores from a scale with…

  3. The evaluation of the neutron dose equivalent in the two-bend maze.

    PubMed

    Tóth, Á Á; Petrović, B; Jovančević, N; Krmar, M; Rutonjski, L; Čudić, O

    2017-04-01

The purpose of this study was to explore the effect of the second bend of the maze on the neutron dose equivalent in a 15 MV linear accelerator vault with a two-bend maze. The two bends of the maze were covered by 32 points at which the neutron dose equivalent was measured. One available method for estimating the neutron dose equivalent at the entrance door of a two-bend maze was tested against the results of the measurements. The results of this study show that the neutron dose equivalent at the door of the two-bend maze was reduced by almost three orders of magnitude. The measured tenth-value distance (TVD) in the first bend (closer to the inner maze entrance) is about 5 m. This measured TVD is close to the TVD values usually used in the proposed models for estimating the neutron dose equivalent at the entrance door of a single-bend maze. The results also show that the TVD in the second bend (next to the maze entrance door) is significantly lower than the TVD values found in the first maze bend. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
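As an aside on the method: a tenth-value distance of this kind can be extracted from point measurements along a maze leg by fitting an exponential attenuation model H(d) = H0 * 10**(-d / TVD). A minimal sketch in Python, with hypothetical numbers rather than the paper's data:

```python
import math

def fit_tvd(distances, doses):
    """Least-squares slope of log10(dose) vs distance; TVD = -1/slope.

    Assumes exponential attenuation H(d) = H0 * 10**(-d / TVD).
    """
    n = len(distances)
    xbar = sum(distances) / n
    ybar = sum(math.log10(h) for h in doses) / n
    sxx = sum((x - xbar) ** 2 for x in distances)
    sxy = sum((x - xbar) * (math.log10(h) - ybar)
              for x, h in zip(distances, doses))
    return -sxx / sxy  # TVD = -1 / slope

# Hypothetical measurements along one maze leg with a true TVD of 5 m
d = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
h = [100.0 * 10 ** (-x / 5.0) for x in d]
tvd = fit_tvd(d, h)
```

With noiseless synthetic data the fit recovers the assumed 5 m exactly; real survey data would scatter about the fitted line.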

  4. Self-evaluation of decision-making: A general Bayesian framework for metacognitive computation.

    PubMed

    Fleming, Stephen M; Daw, Nathaniel D

    2017-01-01

    People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a "second-order" inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one's own actions to metacognitive judgments. In addition, the model provides insight into why subjects' metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
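The first-order versus second-order contrast described in this abstract can be illustrated with a minimal Gaussian signal-detection sketch (my own simplified reading of the framework, not the authors' code): when the same sample drives choice and confidence, confidence in the chosen option never falls below 0.5, whereas a distinct confidence sample, combined with the observed action, permits genuine error detection.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def first_order_confidence(x):
    """Confidence when the SAME sample x drives choice and confidence:
    P(s = sign(x) | x), with s in {+1, -1} and x ~ N(s, 1).
    This quantity can never fall below 0.5."""
    return 1.0 / (1.0 + math.exp(-2.0 * abs(x)))

def second_order_confidence(y, a):
    """Confidence from a DISTINCT sample y ~ N(s, 1), conditioning on the
    observed action a = +/-1 as if judging another actor's choice."""
    post = {}
    for s in (+1, -1):
        p_y = math.exp(-0.5 * (y - s) ** 2)   # likelihood of y given s
        p_a = phi(s) if a == +1 else phi(-s)  # P(action a | s), a = sign(x)
        post[s] = p_y * p_a
    return post[a] / (post[+1] + post[-1])
```

A strongly negative confidence sample after a rightward choice drives second-order confidence below 0.5 (an "error detected" report), something the first-order model cannot produce.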

  5. Electroencephalography in ellipsoidal geometry with fourth-order harmonics.

    PubMed

    Alcocer-Sosa, M; Gutierrez, D

    2016-08-01

We present a solution to the electroencephalography (EEG) forward problem of computing the scalp electric potentials for the case when the head's geometry is modeled using a four-shell ellipsoidal geometry and the brain sources with an equivalent current dipole (ECD). The proposed solution includes terms up to the fourth-order ellipsoidal harmonics, and we compare this new approximation against those that consider only up to second- and third-order harmonics. Our comparisons use as reference a solution in which a tessellated volume approximates the head and the forward problem is solved through the boundary element method (BEM). We also assess the solution to the inverse problem of estimating the magnitude of an ECD through the different harmonic approximations. Our results show that the fourth-order solution provides a better estimate of the ECD than the lower-order ones.

  6. Self-Evaluation of Decision-Making: A General Bayesian Framework for Metacognitive Computation

    PubMed Central

    2017-01-01

    People are often aware of their mistakes, and report levels of confidence in their choices that correlate with objective performance. These metacognitive assessments of decision quality are important for the guidance of behavior, particularly when external feedback is absent or sporadic. However, a computational framework that accounts for both confidence and error detection is lacking. In addition, accounts of dissociations between performance and metacognition have often relied on ad hoc assumptions, precluding a unified account of intact and impaired self-evaluation. Here we present a general Bayesian framework in which self-evaluation is cast as a “second-order” inference on a coupled but distinct decision system, computationally equivalent to inferring the performance of another actor. Second-order computation may ensue whenever there is a separation between internal states supporting decisions and confidence estimates over space and/or time. We contrast second-order computation against simpler first-order models in which the same internal state supports both decisions and confidence estimates. Through simulations we show that second-order computation provides a unified account of different types of self-evaluation often considered in separate literatures, such as confidence and error detection, and generates novel predictions about the contribution of one’s own actions to metacognitive judgments. In addition, the model provides insight into why subjects’ metacognition may sometimes be better or worse than task performance. We suggest that second-order computation may underpin self-evaluative judgments across a range of domains. PMID:28004960

  7. Equivalent Relaxations of Optimal Power Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, S; Low, SH; Teeraratkul, T

    2015-03-01

Several convex relaxations of the optimal power flow (OPF) problem have recently been developed using both bus injection models and branch flow models. In this paper, we prove relations among three convex relaxations: a semidefinite relaxation that computes a full matrix, a chordal relaxation based on a chordal extension of the network graph, and a second-order cone relaxation that computes the smallest partial matrix. We prove a bijection between the feasible sets of the OPF in the bus injection model and the branch flow model, establishing the equivalence of these two models and their second-order cone relaxations. Our results imply that, for radial networks, all these relaxations are equivalent and one should always solve the second-order cone relaxation. For mesh networks, the semidefinite relaxation and the chordal relaxation are equally tight and both are strictly tighter than the second-order cone relaxation. Therefore, for mesh networks, one should either solve the chordal relaxation or the SOCP relaxation, trading off tightness against the required computational effort. Simulations are used to illustrate these results.
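For intuition, the second-order cone relaxation referred to here replaces, per branch of the branch flow model, a quadratic equality with the rotated-cone inequality p^2 + q^2 <= l*v. A toy feasibility check (illustrative values only, not a full OPF solver):

```python
def in_soc(p, q, v, ell, tol=1e-9):
    """Rotated second-order cone constraint of the branch flow relaxation:
    p**2 + q**2 <= ell * v, with v (squared voltage magnitude) and
    ell (squared current magnitude) nonnegative."""
    return v >= -tol and ell >= -tol and p * p + q * q <= ell * v + tol

def relaxation_exact(p, q, v, ell, tol=1e-9):
    """The relaxation is exact (recovers the original OPF equality)
    when p**2 + q**2 == ell * v; for radial networks this holds at
    the optimum, per the result summarized above."""
    return abs(p * p + q * q - ell * v) <= tol
```

A point on the cone boundary satisfies both checks; a strictly interior point is feasible for the relaxation but not exact.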

  8. A novel equivalent definition of Caputo fractional derivative without singular kernel and superconvergent analysis

    NASA Astrophysics Data System (ADS)

    Liu, Zhengguang; Li, Xiaoli

    2018-05-01

In this article, we present a new second-order finite difference discrete scheme for a fractal mobile/immobile transport model based on an equivalent transformative Caputo formulation. The new transformative formulation removes the singular kernel to make the integral calculation more efficient. Furthermore, this definition remains valid when α is a positive integer. Besides, the T-Caputo derivative also helps us to increase the convergence rate of the discretization of the α-order (0 < α < 1) Caputo derivative from O(τ^(2-α)) to O(τ^(3-α)), where τ is the time step. For the numerical analysis, a Crank-Nicolson finite difference scheme to solve the fractal mobile/immobile transport model is introduced and analyzed. The unconditional stability and a priori estimates of the scheme are given rigorously. Moreover, the applicability and accuracy of the scheme are demonstrated by numerical experiments that support our theoretical analysis.
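For context, the classical L1 discretization of the (0 < α < 1) Caputo derivative — the O(τ^(2-α)) baseline that the transformed formulation improves upon — can be sketched as follows. For f(t) = t, whose Caputo derivative is t^(1-α)/Γ(2-α), the scheme happens to be exact:

```python
import math

def caputo_l1(f, t, alpha, n):
    """Standard L1 discretization of the Caputo derivative of order
    0 < alpha < 1 at time t, on n uniform steps of size tau = t/n;
    truncation error O(tau**(2 - alpha)). This is the classical scheme,
    not the paper's transformed T-Caputo formulation."""
    tau = t / n
    s = 0.0
    for k in range(n):
        b_k = (k + 1) ** (1 - alpha) - k ** (1 - alpha)  # L1 weights
        s += b_k * (f((n - k) * tau) - f((n - k - 1) * tau))
    return s / (tau ** alpha * math.gamma(2 - alpha))
```

For f(t) = t every backward difference equals τ, so the weighted sum telescopes to τ·n^(1-α) and the result collapses to t^(1-α)/Γ(2-α) up to rounding.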

  9. Thermally assisted OSL application for equivalent dose estimation; comparison of multiple equivalent dose values as well as saturation levels determined by luminescence and ESR techniques for a sedimentary sample collected from a fault gouge

    NASA Astrophysics Data System (ADS)

    Şahiner, Eren; Meriç, Niyazi; Polymeris, George S.

    2017-02-01

Equivalent dose (De) estimation constitutes the most important part of both trapped-charge dating techniques and dosimetry applications. In the present work, multiple independent equivalent dose estimation approaches were adopted, using both luminescence and ESR techniques; two different minerals were studied, namely quartz as well as feldspathic polymineral samples. The work is divided into three independent parts, depending on the type of signal employed. Firstly, different De estimation approaches were carried out on both polymineral and contaminated quartz, using single-aliquot regenerative-dose protocols employing conventional OSL and IRSL signals acquired at different temperatures. Secondly, ESR equivalent dose estimations using the additive dose procedure both at room temperature and at 90 K were discussed. Lastly, for the first time in the literature, a single-aliquot regenerative protocol employing a thermally assisted OSL signal originating from Very Deep Traps was applied to natural minerals. Rejection criteria such as recycling and recovery ratios are also presented. The SAR protocol, whenever applied, provided compatible De estimates with great accuracy, independent of either the type of mineral or the stimulation temperature. Low-temperature ESR signals resulting from Al and Ti centers indicate very large De values due to bleaching inability, associated with large uncertainty values. Additionally, dose saturation of the different approaches was investigated. For the signal arising from Very Deep Traps in quartz, saturation is extended by almost one order of magnitude. It is interesting that most of the De values yielded by the different luminescence signals agree with each other and that the ESR Ge center has very large D0 values. The results presented above strongly support the argument that the stability and the initial ESR signal of the Ge center are highly sample-dependent, without any instability problems for quartz derived from fault gouge.

  10. A Note on Substructuring Preconditioning for Nonconforming Finite Element Approximations of Second Order Elliptic Problems

    NASA Technical Reports Server (NTRS)

    Maliassov, Serguei

    1996-01-01

    In this paper an algebraic substructuring preconditioner is considered for nonconforming finite element approximations of second order elliptic problems in 3D domains with a piecewise constant diffusion coefficient. Using a substructuring idea and a block Gauss elimination, part of the unknowns is eliminated and the Schur complement obtained is preconditioned by a spectrally equivalent very sparse matrix. In the case of quasiuniform tetrahedral mesh an appropriate algebraic multigrid solver can be used to solve the problem with this matrix. Explicit estimates of condition numbers and implementation algorithms are established for the constructed preconditioner. It is shown that the condition number of the preconditioned matrix does not depend on either the mesh step size or the jump of the coefficient. Finally, numerical experiments are presented to illustrate the theory being developed.
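The block Gauss elimination step described above can be sketched for a small dense matrix (a toy 4x4 stand-in for the stiffness matrix; a real substructuring preconditioner would replace the Schur complement by a spectrally equivalent sparse matrix rather than form it exactly):

```python
def schur_complement_2x2(A):
    """Schur complement of the leading 2x2 block of a 4x4 matrix A:
    S = A22 - A21 * inv(A11) * A12, i.e. the matrix that remains for
    the second block of unknowns after block Gauss elimination."""
    a, b = A[0][0], A[0][1]
    c, d = A[1][0], A[1][1]
    det = a * d - b * c
    inv11 = [[d / det, -b / det], [-c / det, a / det]]
    A12 = [[A[0][2], A[0][3]], [A[1][2], A[1][3]]]
    A21 = [[A[2][0], A[2][1]], [A[3][0], A[3][1]]]
    A22 = [[A[2][2], A[2][3]], [A[3][2], A[3][3]]]

    def mul(X, Y):  # 2x2 matrix product
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    T = mul(A21, mul(inv11, A12))
    return [[A22[i][j] - T[i][j] for j in range(2)] for i in range(2)]
```

For a symmetric positive definite A the Schur complement is again symmetric positive definite, which is what makes preconditioning it (rather than all of A) attractive.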

  11. A numerical solution of a singular boundary value problem arising in boundary layer theory.

    PubMed

    Hu, Jiancheng

    2016-01-01

In this paper, a second-order nonlinear singular boundary value problem is presented, which is equivalent to the well-known Falkner-Skan equation. The one-dimensional third-order boundary value problem on the interval [Formula: see text] is equivalently transformed into a second-order boundary value problem on the finite interval [Formula: see text]. The finite difference method is utilized to solve the singular boundary value problem, requiring significantly less computational effort than other numerical methods. The numerical solutions obtained by the finite difference method are in agreement with those obtained by previous authors.
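As an illustration of the finite difference method on a linear model problem (not the nonlinear Falkner-Skan equation itself, which would require an outer iteration around such a solver), here is a second-order central-difference solve of u'' = f on (0, 1) with u(0) = u(1) = 0:

```python
import math

def solve_bvp(f, n):
    """Finite-difference solution of u'' = f(x) on (0, 1), u(0) = u(1) = 0,
    using the standard second-order central difference and the Thomas
    algorithm for the resulting tridiagonal system."""
    h = 1.0 / n
    # Interior equations: (u[i-1] - 2 u[i] + u[i+1]) = h**2 * f(x_i)
    a = [1.0] * (n - 1)   # sub-diagonal
    b = [-2.0] * (n - 1)  # diagonal
    c = [1.0] * (n - 1)   # super-diagonal
    d = [h * h * f(i * h) for i in range(1, n)]
    # Thomas algorithm: forward sweep ...
    for i in range(1, n - 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # ... then back substitution
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [0.0]
```

With f(x) = -pi^2 sin(pi x) the exact solution is sin(pi x), and the discrete error shrinks at the expected O(h^2) rate.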

  12. Theoretical study of homonuclear J coupling between quadrupolar spins: single-crystal, DOR, and J-resolved NMR.

    PubMed

    Perras, Frédéric A; Bryce, David L

    2014-05-01

    The theory describing homonuclear indirect nuclear spin-spin coupling (J) interactions between pairs of quadrupolar nuclei is outlined and supported by numerical calculations. The expected first-order multiplets for pairs of magnetically equivalent (A2), chemically equivalent (AA'), and non-equivalent (AX) quadrupolar nuclei are given. The various spectral changeovers from one first-order multiplet to another are investigated with numerical simulations using the SIMPSON program and the various thresholds defining each situation are given. The effects of chemical equivalence, as well as quadrupolar coupling, chemical shift differences, and dipolar coupling on double-rotation (DOR) and J-resolved NMR experiments for measuring homonuclear J coupling constants are investigated. The simulated J coupling multiplets under DOR conditions largely resemble the ideal multiplets predicted for single crystals, and a characteristic multiplet is expected for each of the A2, AA', and AX cases. The simulations demonstrate that it should be straightforward to distinguish between magnetic inequivalence and equivalence using J-resolved NMR, as was speculated previously. Additionally, it is shown that the second-order quadrupolar-dipolar cross-term does not affect the splittings in J-resolved experiments. Overall, the homonuclear J-resolved experiment for half-integer quadrupolar nuclei is demonstrated to be robust with respect to the effects of first- and second-order quadrupolar coupling, dipolar coupling, and chemical shift differences. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. A physical optics/equivalent currents model for the RCS of trihedral corner reflectors

    NASA Technical Reports Server (NTRS)

    Balanis, Constantine A.; Polycarpou, Anastasis C.

    1993-01-01

    The scattering in the interior regions of both square and triangular trihedral corner reflectors is examined. The theoretical model presented combines geometrical and physical optics (GO and PO), used to account for reflection terms, with equivalent edge currents (EEC), used to account for first-order diffractions from the edges. First-order, second-order, and third-order reflection terms are included. Calculating the first-order reflection terms involves integrating over the entire surface of the illuminated plate. Calculating the second- and third-order reflection terms, however, is much more difficult because the illuminated area is an arbitrary polygon whose shape is dependent upon the incident angles. The method for determining the area of integration is detailed. Extensive comparisons between the high-frequency model, Finite-Difference Time-Domain (FDTD) and experimental data are used for validation of the radar cross section (RCS) of both square and triangular trihedral reflectors.

  14. A novel method for identification of lithium-ion battery equivalent circuit model parameters considering electrochemical properties

    NASA Astrophysics Data System (ADS)

    Zhang, Xi; Lu, Jinling; Yuan, Shifei; Yang, Jun; Zhou, Xuan

    2017-03-01

This paper proposes a novel parameter identification method for the lithium-ion (Li-ion) battery equivalent circuit model (ECM) that considers the electrochemical properties. An improved pseudo-two-dimensional (P2D) model is established on the basis of partial differential equations (PDEs): the electrolyte potential is simplified from a nonlinear to a linear expression, and the terminal voltage is decomposed into the electrolyte potential, open circuit voltage (OCV), overpotential of the electrodes, internal resistance drop, and so on. The model order reduction is implemented by simplifying the PDEs using the Laplace transform, inverse Laplace transform, Padé approximation, etc. A unified second-order transfer function between cell voltage and current is obtained for comparability with that of the ECM. The final objective is to obtain the relationship between the ECM resistances/capacitances and the electrochemical parameters, so that ECM precision can be improved under various conditions by incorporating the battery's interior properties for further applications, e.g., SOC estimation. Finally, simulation and experimental results prove the correctness and validity of the proposed methodology.
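A second-order Thevenin-type ECM of the kind referred to here (series resistance plus two RC pairs) is straightforward to simulate; the parameter values below are illustrative, not identified from the paper's P2D reduction:

```python
import math

def simulate_ecm(current, dt, ocv, r0, pairs):
    """Terminal voltage of a second-order equivalent circuit model:
    v = OCV - i*R0 - v_RC1 - v_RC2, with each RC branch updated by the
    exact discretization of dv/dt = -v/(R*C) + i/C for piecewise-constant
    current. `pairs` is a list of (R, C) tuples; OCV is held fixed."""
    v_rc = [0.0] * len(pairs)
    out = []
    for i_k in current:
        for j, (r, c) in enumerate(pairs):
            a = math.exp(-dt / (r * c))
            v_rc[j] = a * v_rc[j] + r * (1.0 - a) * i_k
        out.append(ocv - i_k * r0 - sum(v_rc))
    return out

# 1 A constant discharge; steady state -> OCV - i*(R0 + R1 + R2)
v = simulate_ecm([1.0] * 1000, 1.0, ocv=3.7, r0=0.05,
                 pairs=[(0.02, 1000.0), (0.05, 100.0)])
```

Under constant current the branch voltages relax toward i*R_j with time constants R_j*C_j, so the terminal voltage settles at OCV minus the drop over the total series resistance.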

  15. Equivalent magnetization over the World's Ocean and the World Digital Magnetic Anomaly Map

    NASA Astrophysics Data System (ADS)

    Dyment, Jerome; Choi, Yujin; Hamoudi, Mohamed; Thébault, Erwan; Quesnel, Yoann; Roest, Walter; Lesur, Vincent

    2014-05-01

As a by-product of our recent work to build a candidate model over the oceans for the second version of the World Digital Magnetic Anomaly Map (WDMAM), we derived global distributions of the equivalent magnetization in oceanic domains. In a first step, we use classic point-source forward modeling on a spherical Earth to build a forward model of the marine magnetic anomalies at sea surface. We estimate magnetization vectors using the age map of the ocean floor, the relative plate motions, the apparent polar wander path for Africa, and a geomagnetic reversal time scale. We assume two possible magnetized source geometries, both involving a 1 km-thick layer bearing a 10 A/m magnetization, either on a regular spherical shell with a constant, 5 km-deep bathymetry (simple geometry) or following the topography of the oceanic basement as defined by the bathymetry and sedimentary thickness (realistic geometry). Adding a present-day geomagnetic field model allows the computation of our initial magnetic anomaly model. In a second step, we adjust this model to the existing marine magnetic anomaly data, in order to make it consistent with these data. To do so, we extract synthetic magnetic anomalies along the ship tracks for which real data are available and we compare quantitatively the measured and computed anomalies on 100, 200 or 400 km-long sliding windows (depending on the spreading rate). Among the possible comparison criteria, we discard the maximal range (too dependent on local values) and the correlation and coherency (the geographical adjustment between model and data not being accurate enough) in favor of the standard deviation around the mean value.
The ratio between the standard deviations of data and model on each sliding window represents an estimate of the magnetization ratio causing the anomalies, which we interpolate to adjust the initial magnetic anomaly model to the data and thereby compute a final model to be included in our WDMAM candidate over the oceanic regions lacking data. The above ratio, after division by the 10 A/m magnetization used in the model, represents an estimate of the equivalent magnetization under the considered magnetized source geometry. The resulting distributions of equivalent magnetization are further discussed in terms of mid-ocean ridges, the presence of hotspots and oceanic plateaus, and the age of the oceanic lithosphere. Global marine magnetic data sets and models represent a useful tool to assess first-order magnetic properties of the oceanic lithosphere.
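The sliding-window standard-deviation ratio described above can be sketched as follows (synthetic series, window length in samples rather than kilometres, and the 10 A/m model magnetization stated in the abstract):

```python
def equivalent_magnetization(data, model, window, m_model=10.0):
    """Per-window ratio of standard deviations of measured vs modeled
    anomalies, scaled by the model magnetization (10 A/m here), as a
    sketch of the equivalent-magnetization estimate. The paper's windows
    are 100-400 km long depending on spreading rate; here `window` is
    simply a sample count."""
    def std(xs):
        mu = sum(xs) / len(xs)
        return (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5

    out = []
    for i in range(len(data) - window + 1):
        d, m = data[i:i + window], model[i:i + window]
        out.append(m_model * std(d) / std(m))
    return out
```

Because the ratio is built from standard deviations about the window mean, a constant offset between data and model does not bias the estimate; only the amplitude ratio matters.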

  16. The Bassi Rebay 1 scheme is a special case of the Symmetric Interior Penalty formulation for discontinuous Galerkin discretisations with Gauss-Lobatto points

    NASA Astrophysics Data System (ADS)

    Manzanero, Juan; Rueda-Ramírez, Andrés M.; Rubio, Gonzalo; Ferrer, Esteban

    2018-06-01

    In the discontinuous Galerkin (DG) community, several formulations have been proposed to solve PDEs involving second-order spatial derivatives (e.g. elliptic problems). In this paper, we show that, when the discretisation is restricted to the usage of Gauss-Lobatto points, there are important similarities between two common choices: the Bassi-Rebay 1 (BR1) method, and the Symmetric Interior Penalty (SIP) formulation. This equivalence enables the extrapolation of properties from one scheme to the other: a sharper estimation of the minimum penalty parameter for the SIP stability (compared to the more general estimate proposed by Shahbazi [1]), more efficient implementations of the BR1 scheme, and the compactness of the BR1 method for straight quadrilateral and hexahedral meshes.

  17. Peripheral Refraction, Peripheral Eye Length, and Retinal Shape in Myopia.

    PubMed

    Verkicharla, Pavan K; Suheimat, Marwan; Schmid, Katrina L; Atchison, David A

    2016-09-01

    To investigate how peripheral refraction and peripheral eye length are related to retinal shape. Relative peripheral refraction (RPR) and relative peripheral eye length (RPEL) were determined in 36 young adults (M +0.75D to -5.25D) along horizontal and vertical visual field meridians out to ±35° and ±30°, respectively. Retinal shape was determined in terms of vertex radius of curvature Rv, asphericity Q, and equivalent radius of curvature REq using a partial coherence interferometry method involving peripheral eye lengths and model eye raytracing. Second-order polynomial fits were applied to RPR and RPEL as functions of visual field position. Linear regressions were determined for the fits' second order coefficients and for retinal shape estimates as functions of central spherical refraction. Linear regressions investigated relationships of RPR and RPEL with retinal shape estimates. Peripheral refraction, peripheral eye lengths, and retinal shapes were significantly affected by meridian and refraction. More positive (hyperopic) relative peripheral refraction, more negative RPELs, and steeper retinas were found along the horizontal than along the vertical meridian and in myopes than in emmetropes. RPR and RPEL, as represented by their second-order fit coefficients, correlated significantly with retinal shape represented by REq. Effects of meridian and refraction on RPR and RPEL patterns are consistent with effects on retinal shape. Patterns derived from one of these predict the others: more positive (hyperopic) RPR predicts more negative RPEL and steeper retinas, more negative RPEL predicts more positive relative peripheral refraction and steeper retinas, and steeper retinas derived from peripheral eye lengths predict more positive RPR.
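The second-order polynomial fits mentioned above amount, per meridian, to a quadratic regression of RPR (or RPEL) on field angle, whose second-order coefficient is the quantity correlated with retinal shape. A sketch with hypothetical values lying on an exact quadratic profile (real data would scatter about such a curve):

```python
def quad_fit(xs, ys):
    """Least-squares quadratic fit y ~ c0 + c1*x + c2*x**2 via the
    3x3 normal equations, solved with Cramer's rule."""
    s = [sum(x ** k for x in xs) for k in range(5)]            # power sums
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    M = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    b = [t[0], t[1], t[2]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3(M)
    coeffs = []
    for col in range(3):
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][col] = b[r]
        coeffs.append(det3(Mi) / D)
    return coeffs  # [c0, c1, c2]

# Hypothetical RPR values (dioptres) at field angles (degrees)
angles = [-35.0, -25.0, -15.0, 0.0, 15.0, 25.0, 35.0]
rpr = [0.0008 * a ** 2 - 0.25 for a in angles]
c0, c1, c2 = quad_fit(angles, rpr)
```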

  18. Generation of a non-zero discord bipartite state with classical second-order interference.

    PubMed

    Choi, Yujun; Hong, Kang-Hee; Lim, Hyang-Tag; Yune, Jiwon; Kwon, Osung; Han, Sang-Wook; Oh, Kyunghwan; Kim, Yoon-Ho; Kim, Yong-Su; Moon, Sung

    2017-02-06

We report an investigation of quantum discord in classical second-order interference. In particular, we theoretically show that a bipartite state with discord D = 0.311 can be generated via classical second-order interference. We also experimentally verify the theory by obtaining a non-zero discord state with D = 0.197 ± 0.060. Together with the fact that the nonclassicalities originating from physical constraints and from information-theoretic perspectives are not equivalent, this result provides insight into the nature of quantum discord.

  19. Identification of Low Order Equivalent System Models From Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    Identification of low order equivalent system dynamic models from flight test data was studied. Inputs were pilot control deflections, and outputs were aircraft responses, so the models characterized the total aircraft response including bare airframe and flight control system. Theoretical investigations were conducted and related to results found in the literature. Low order equivalent system modeling techniques using output error and equation error parameter estimation in the frequency domain were developed and validated on simulation data. It was found that some common difficulties encountered in identifying closed loop low order equivalent system models from flight test data could be overcome using the developed techniques. Implications for data requirements and experiment design were discussed. The developed methods were demonstrated using realistic simulation cases, then applied to closed loop flight test data from the NASA F-18 High Alpha Research Vehicle.

  20. Neural Net Gains Estimation Based on an Equivalent Model

    PubMed Central

    Aguilar Cruz, Karen Alicia; Medel Juárez, José de Jesús; Fernández Muñoz, José Luis; Esmeralda Vigueras Velázquez, Midory

    2016-01-01

A model of an Equivalent Artificial Neural Net (EANN) describes the gains set, viewed as parameters in a layer, and this consideration is a reproducible process, applicable to a neuron in a neural net (NN). The EANN helps to estimate the NN gains or parameters, so we propose two methods to determine them. The first considers a fuzzy inference combined with the traditional Kalman filter, obtaining the equivalent model and estimating in a fuzzy sense the gains matrix A and the proper gain K into the traditional filter identification. The second develops a direct estimation in state space, describing an EANN using the expected value and the recursive description of the gains estimation. Finally, a comparison of both descriptions is performed, highlighting that the analytical method describes the neural net coefficients in a direct form, whereas the other technique requires selecting from the Knowledge Base (KB) the factors based on the functional error and the reference signal built with the past information of the system. PMID:27366146
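A scalar stand-in for the recursive gain estimation described here (plain noiseless recursive least squares; the paper's fuzzy inference and matrix Kalman machinery are beyond a short sketch):

```python
def rls_gain(inputs, outputs, p0=1000.0):
    """Recursive least-squares estimate of a single gain a in y = a*u,
    a minimal scalar analogue of the state-space gain estimation
    described above (no process noise, flat prior via large p0)."""
    a_hat, p = 0.0, p0
    for u, y in zip(inputs, outputs):
        k = p * u / (1.0 + u * p * u)   # estimator gain
        a_hat += k * (y - a_hat * u)    # innovation update
        p = (1.0 - k * u) * p           # covariance update
    return a_hat
```

With noiseless data the estimate converges to the true gain up to the small regularization introduced by the finite prior covariance p0.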

  21. Neural Net Gains Estimation Based on an Equivalent Model.

    PubMed

    Aguilar Cruz, Karen Alicia; Medel Juárez, José de Jesús; Fernández Muñoz, José Luis; Esmeralda Vigueras Velázquez, Midory

    2016-01-01

A model of an Equivalent Artificial Neural Net (EANN) describes the gains set, viewed as parameters in a layer, and this consideration is a reproducible process, applicable to a neuron in a neural net (NN). The EANN helps to estimate the NN gains or parameters, so we propose two methods to determine them. The first considers a fuzzy inference combined with the traditional Kalman filter, obtaining the equivalent model and estimating in a fuzzy sense the gains matrix A and the proper gain K into the traditional filter identification. The second develops a direct estimation in state space, describing an EANN using the expected value and the recursive description of the gains estimation. Finally, a comparison of both descriptions is performed, highlighting that the analytical method describes the neural net coefficients in a direct form, whereas the other technique requires selecting from the Knowledge Base (KB) the factors based on the functional error and the reference signal built with the past information of the system.

  22. Determination of nongeometric effects: equivalence between Artmann's and Tamir's generalized methods.

    PubMed

    Perez, Liliana I; Echarri, Rodolfo M; Garea, María T; Santiago, Guillermo D

    2011-03-01

    This work shows that all first- and second-order nongeometric effects on propagation, total or partial reflection, and transmission can be understood and evaluated considering the superposition of two plane waves. It also shows that this description yields results that are qualitatively and quantitatively compatible with those obtained by Fourier analysis of beams with Gaussian intensity distribution in any type of interface. In order to show this equivalence, we start by describing the first- and second-order nongeometric effects, and we calculate them analytically by superposing two plane waves. Finally, these results are compared with those obtained for the nongeometric effects of Gaussian beams in isotropic interfaces and are applied to different types of interfaces. A simple analytical expression for the angular shift is obtained considering the transmission of an extraordinary beam in a uniaxial-isotropic interface.

  23. Carbon and energy saving markets in compressed air

    NASA Astrophysics Data System (ADS)

    Cipollone, R.

    2015-08-01

CO2 reduction and fossil fuel saving represent two of the cornerstones of the environmental commitments of all the countries of the world. The first commitment is of a medium- to long-term type and unequivocally calls for a new energy era. The second phases out fossil fuel technologies over time to favour an energy transition. In order to sustain the two efforts, new immaterial markets have been established in almost all the countries of the world, whose exchanges (purchases and sales) concern CO2 emissions and equivalent fossil fuels that have not been emitted or burned. This paper examines in depth two aspects not yet exploited: specific CO2 emissions and equivalent fossil fuel burned as a function of compressed air produced. Reference is made to current compressor technology, carefully analysing CAGI (Compressed Air and Gas Institute) data and integrating it with the contribution of PNEUROP (the European association of manufacturers of compressors, vacuum pumps, pneumatic tools and allied equipment) on the European compressor market. On the basis of energy-saving estimates that could be put in place, this article also estimates the financial value of the CO2 emissions and fossil fuels avoided.

  4. Computational micromechanics of woven composites

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Saigal, Sunil; Zeng, Xiaogang

    1991-01-01

The bounds on the equivalent elastic material properties of a composite are addressed by a unified energy approach which is valid for unidirectional as well as 2D and 3D woven composites. The unit cell considered is assumed to consist, first, of the actual composite arrangement of the fibers and matrix material, and then, of an equivalent pseudohomogeneous material. Equating the strain energies of the two arrangements yields an estimate of the upper bound for the equivalent material properties; successive increases in the order of the displacement field assumed in the composite arrangement produce successively improved upper-bound estimates.

  5. Generalized Israel junction conditions for a fourth-order brane world

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balcerzak, Adam; Dabrowski, Mariusz P.

    2008-01-15

We discuss a general fourth-order theory of gravity on the brane. In general, the formulation of the junction conditions (except for Euler characteristics such as the Gauss-Bonnet term) leads to higher powers of the delta function and requires regularization. We suggest a way to avoid this problem by requiring the metric and its first derivative to be regular at the brane, the second derivative to have a kink, the third derivative of the metric to have a step-function discontinuity, and only the fourth derivative of the metric to give a delta function contribution to the field equations. Alternatively, we discuss the reduction of the fourth-order gravity to a second-order theory by introducing an extra tensor field. We formulate the appropriate junction conditions on the brane. We prove the equivalence of both theories. In particular, we prove the equivalence of the junction conditions with different assumptions related to the continuity of the metric along the brane.

  6. Recent Development of Multigrid Algorithms for Mixed and Nonconforming Methods for Second Order Elliptic Problems

    NASA Technical Reports Server (NTRS)

    Chen, Zhangxin; Ewing, Richard E.

    1996-01-01

    Multigrid algorithms for nonconforming and mixed finite element methods for second order elliptic problems on triangular and rectangular finite elements are considered. The construction of several coarse-to-fine intergrid transfer operators for nonconforming multigrid algorithms is discussed. The equivalence between the nonconforming and mixed finite element methods with and without projection of the coefficient of the differential problems into finite element spaces is described.

  7. Estimating Distance in Real and Virtual Environments: Does Order Make a Difference?

    PubMed Central

    Ziemer, Christine J.; Plumert, Jodie M.; Cremer, James F.; Kearney, Joseph K.

    2010-01-01

    This investigation examined how the order in which people experience real and virtual environments influences their distance estimates. Participants made two sets of distance estimates in one of the following conditions: 1) real environment first, virtual environment second; 2) virtual environment first, real environment second; 3) real environment first, real environment second; or 4) virtual environment first, virtual environment second. In Experiment 1, participants imagined how long it would take to walk to targets in real and virtual environments. Participants’ first estimates were significantly more accurate in the real than in the virtual environment. When the second environment was the same as the first environment (real-real and virtual-virtual), participants’ second estimates were also more accurate in the real than in the virtual environment. When the second environment differed from the first environment (real-virtual and virtual-real), however, participants’ second estimates did not differ significantly across the two environments. A second experiment in which participants walked blindfolded to targets in the real environment and imagined how long it would take to walk to targets in the virtual environment replicated these results. These subtle, yet persistent order effects suggest that memory can play an important role in distance perception. PMID:19525540

  8. Equivalent magnetization over the World's Ocean

    NASA Astrophysics Data System (ADS)

    Dyment, J.; Choi, Y.; Hamoudi, M.; Erwan, T.; Lesur, V.

    2014-12-01

    As a by-product of our recent work to build a candidate model over the oceans for the World Digital Magnetic Anomaly Map (WDMAM) version 2, we derived global distributions of the equivalent magnetization in oceanic domains. In a first step, we use classic point-source forward modeling on a spherical Earth to build a forward model of the marine magnetic anomalies at sea surface. We estimate magnetization vectors using the age map of the ocean floor, the relative plate motions, the apparent polar wander path for Africa, and a geomagnetic reversal time scale. As the magnetized source geometry, we assume a 1 km-thick layer bearing a 10 A/m magnetization that follows the topography of the oceanic basement as defined by the bathymetry and sediment thickness. Adding a present-day geomagnetic field model allows the computation of our initial magnetic anomaly model. In a second step, we adjust this model to the existing marine magnetic anomaly data in order to make it consistent with these data. To do so, we extract synthetic magnetic anomalies along the ship tracks for which real data are available, and we compare the measured and computed anomalies quantitatively on 100, 200 or 400 km-long sliding windows (depending on the spreading rate). Among the possible comparison criteria, we discard the maximal range (too dependent on local values) and the correlation and coherency (the geographical adjustment between model and data not being accurate enough) in favor of the standard deviation around the mean value. The ratio between the standard deviations of data and model on each sliding window represents an estimate of the magnetization ratio causing the anomalies, which we interpolate to adjust the initial magnetic anomaly model to the data and thereby compute a final model to be included in our WDMAM candidate over the oceanic regions lacking data. This ratio, after division by the 10 A/m magnetization used in the model, represents an estimate of the equivalent magnetization under the assumed magnetized source geometry. The resulting distributions of equivalent magnetization are discussed in terms of mid-ocean ridges, the presence of hotspots and oceanic plateaus, and the age of the oceanic lithosphere. Global marine magnetic data sets and models represent a useful tool to assess first-order magnetic properties of the oceanic lithosphere.
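    The windowed standard-deviation ratio described above can be sketched in a few lines (a toy illustration with synthetic anomaly profiles; the window length, amplitudes, and function names are assumptions, not the authors' code, and non-overlapping windows stand in for the paper's sliding windows):

```python
import numpy as np

def magnetization_ratio(data, model, window):
    """Ratio of standard deviations of measured vs modeled anomaly,
    computed over consecutive (non-overlapping) windows."""
    ratios = []
    for i in range(0, len(data) - window + 1, window):
        sd_data = np.std(data[i:i + window])
        sd_model = np.std(model[i:i + window])
        ratios.append(sd_data / sd_model)
    return np.array(ratios)

# toy track: modeled anomaly from the 10 A/m source layer, and a
# "measured" anomaly with half the amplitude
x = np.linspace(0, 20 * np.pi, 2000)
model = 100.0 * np.sin(x)   # nT
data = 50.0 * np.sin(x)     # nT
r = magnetization_ratio(data, model, window=200)

# equivalent magnetization = ratio * (10 A/m assumed in the model)
print(np.round(r.mean() * 10.0, 2))   # 5.0 A/m
```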

  9. Nonlinear estimation theory applied to orbit determination

    NASA Technical Reports Server (NTRS)

    Choe, C. Y.

    1972-01-01

    The development of an approximate nonlinear filter using the Martingale theory and appropriate smoothing properties is considered. Both the first order and the second order moments were estimated. The filter developed can be classified as a modified Gaussian second order filter. Its performance was evaluated in a simulated study of the problem of estimating the state of an interplanetary space vehicle during both a simulated Jupiter flyby and a simulated Jupiter orbiter mission. In addition to the modified Gaussian second order filter, the modified truncated second order filter was also evaluated in the simulated study. Results obtained with each of these filters were compared with numerical results obtained with the extended Kalman filter and the performance of each filter is determined by comparison with the actual estimation errors. The simulations were designed to determine the effects of the second order terms in the dynamic state relations, the observation state relations, and the Kalman gain compensation term. It is shown that the Kalman gain-compensated filter which includes only the Kalman gain compensation term is superior to all of the other filters.

  10. Reduction of the two dimensional stationary Navier-Stokes problem to a sequence of Fredholm integral equations of the second kind

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.

    1981-01-01

    Present approaches to solving the stationary Navier-Stokes equations are of limited value; however, there does exist an equivalent representation of the problem that has significant potential in solving such problems. This is due to the fact that the equivalent representation consists of a sequence of Fredholm integral equations of the second kind, and the solving of this type of problem is very well developed. For the problem in this form, there is an excellent chance to also determine explicit error estimates, since bounded, rather than unbounded, linear operators are dealt with.

  11. Aircraft Fault Detection Using Real-Time Frequency Response Estimation

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2016-01-01

    A real-time method for estimating time-varying aircraft frequency responses from input and output measurements was demonstrated. The Bat-4 subscale airplane was used with NASA Langley Research Center's AirSTAR unmanned aerial flight test facility to conduct flight tests and collect data for dynamic modeling. Orthogonal phase-optimized multisine inputs, summed with pilot stick and pedal inputs, were used to excite the responses. The aircraft was tested in its normal configuration and with emulated failures, which included a stuck left ruddervator and an increased command path latency. No prior knowledge of a dynamic model was used or available for the estimation. The longitudinal short period dynamics were investigated in this work. Time-varying frequency responses and stability margins were tracked well using a 20 second sliding window of data, as compared to a post-flight analysis using output error parameter estimation and a low-order equivalent system model. This method could be used in a real-time fault detection system, or for other applications of dynamic modeling such as real-time verification of stability margins during envelope expansion tests.
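    The core of such an estimate, Fourier coefficients of input and output evaluated at the multisine frequencies over a data window, can be sketched as follows (a toy pure-gain "plant" stands in for the aircraft dynamics; all names and values are illustrative, not taken from the flight tests):

```python
import numpy as np

def frf_estimate(u, y, freqs, fs):
    """Frequency response at the excitation frequencies, from one window
    of input u and output y, via discrete Fourier coefficients."""
    t = np.arange(len(u)) / fs
    H = []
    for f in freqs:
        e = np.exp(-2j * np.pi * f * t)
        H.append(np.sum(y * e) / np.sum(u * e))
    return np.array(H)

fs = 200.0                                  # sample rate, Hz
t = np.arange(0, 20.0, 1 / fs)
freqs = [0.5, 1.0, 2.0]                     # multisine frequencies, Hz
u = sum(np.sin(2 * np.pi * f * t + k) for k, f in enumerate(freqs))
y = 2.0 * u                                 # toy plant: pure gain of 2

H = frf_estimate(u, y, freqs, fs)
print(np.round(np.abs(H), 3))               # [2. 2. 2.]
```

A sliding-window version would simply re-run `frf_estimate` on the most recent 20 s of data as new samples arrive.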

  12. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
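    The central idea, estimating a possibly non-Gaussian source pdf with a kernel density estimator instead of assuming Gaussianity, can be illustrated in one dimension (this sketch assumes SciPy's `gaussian_kde` and is not the paper's MEG pipeline):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# toy non-Gaussian "source amplitude" samples: a bimodal mixture
samples = np.concatenate([rng.normal(-2.0, 0.5, 5000),
                          rng.normal(+2.0, 0.5, 5000)])

kde = gaussian_kde(samples)        # kernel density estimate of the pdf

# the KDE recovers the bimodal shape: density at a mode exceeds
# density at the overall mean (0), where a single Gaussian would peak
p_mode, p_mid = kde(2.0)[0], kde(0.0)[0]
print(p_mode > p_mid)              # True
```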

  13. Periodic solutions of second-order nonlinear difference equations containing a small parameter. II - Equivalent linearization

    NASA Technical Reports Server (NTRS)

    Mickens, R. E.

    1985-01-01

    The classical method of equivalent linearization is extended to a particular class of nonlinear difference equations. It is shown that the method can be used to obtain an approximation of the periodic solutions of these equations. In particular, the parameters of the limit cycle and the limit points can be determined. Three examples illustrating the method are presented.

  14. Boundary conditions estimation on a road network using compressed sensing.

    DOT National Transportation Integrated Search

    2016-02-01

    This report presents a new boundary condition estimation framework for transportation networks in which : the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a : Hamilton-Jacobi equation, we pose th...

  15. Estimation Model for Magnetic Properties of Stamped Electrical Steel Sheet

    NASA Astrophysics Data System (ADS)

    Kashiwara, Yoshiyuki; Fujimura, Hiroshi; Okamura, Kazuo; Imanishi, Kenji; Yashiki, Hiroyoshi

    Minimizing the deterioration of the magnetic properties of electrical steel sheets during the stamping of iron cores is necessary to maintain core performance. First, the influence of plastic strain and stress on magnetic properties was studied with test pieces in which plastic strain was added uniformly and no residual stress was induced. Because the influence of plastic strain is expressed by the equivalent plastic strain, the influence of load stress was investigated at each equivalent plastic strain state. Secondly, the elastic limit was determined to be about 60% of the macroscopic yield point (MYP), and it was found to agree with the stress limit that induces irreversible deterioration of magnetic properties. Simulation models are therefore proposed in which plastic deformation begins beyond the elastic limit and magnetic properties deteriorate steeply. Points also considered in the deformation analysis are the strain-rate sensitivity of the flow stress, anisotropy under deformation, and the influence of stress triaxiality on fracture. Finally, the proposed models were shown to be valid, because the magnetic properties of 5 mm-wide rectangular sheets stamped out of non-oriented electrical steel sheet (35A250 JIS grade) can be estimated with good accuracy. It is concluded that the elastic limit must be taken into account in both stamping process simulation and magnetic field calculation.

  16. Edge Pushing is Equivalent to Vertex Elimination for Computing Hessians

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Mu; Pothen, Alex; Hovland, Paul

    We prove the equivalence of two different Hessian evaluation algorithms in AD. The first is the Edge Pushing algorithm of Gower and Mello, which may be viewed as a second-order Reverse mode algorithm for computing the Hessian. In earlier work, we have derived the Edge Pushing algorithm by exploiting a Reverse mode invariant based on the concept of live variables in compiler theory. The second algorithm is based on eliminating vertices in a computational graph of the gradient, in which intermediate variables are successively eliminated from the graph and the weights of the edges are updated suitably. We prove that if the vertices are eliminated in a reverse topological order while preserving symmetry in the computational graph of the gradient, then the Vertex Elimination algorithm and the Edge Pushing algorithm perform identical computations. In this sense, the two algorithms are equivalent. This insight, which unifies two seemingly disparate approaches to Hessian computation, could lead to improved algorithms and implementations for computing Hessians.

  17. A-posteriori error estimation for second order mechanical systems

    NASA Astrophysics Data System (ADS)

    Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter

    2012-06-01

    One important issue in the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge of the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first-order systems is extended to error estimation for mechanical second-order systems. Due to the special second-order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be used for moment-matching-based, Gramian-matrix-based or modal-based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.

  18. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    PubMed

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
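    A back-of-the-envelope planning calculation along these lines might look as follows (combining the efficiency ratio and the at-most-1.25 conversion factor into one cluster count is my reading of the abstract, not a formula quoted from the paper):

```python
import math

def clusters_needed(k_equal, rel_eff, conversion=1.25):
    """Clusters required when cluster sizes vary and the treatment effect
    is tested with second-order PQL, starting from k_equal clusters given
    by a closed-form first-order MQL formula.

    rel_eff:    relative efficiency of unequal vs equal cluster sizes (<= 1)
    conversion: variance inflation of the PQL2 estimator vs MQL1
                (the paper's simulations found at most about 1.25)
    """
    return math.ceil(k_equal * conversion / rel_eff)

# e.g. 40 clusters from the MQL1 formula and a 12% efficiency loss:
print(clusters_needed(40, rel_eff=0.88))            # 57
# with equal clusters and no conversion, nothing changes:
print(clusters_needed(40, rel_eff=1.0, conversion=1.0))   # 40
```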

  19. Michelson interferometer with diffractively-coupled arm resonators in second-order Littrow configuration.

    PubMed

    Britzger, Michael; Wimmer, Maximilian H; Khalaidovski, Alexander; Friedrich, Daniel; Kroker, Stefanie; Brückner, Frank; Kley, Ernst-Bernhard; Tünnermann, Andreas; Danzmann, Karsten; Schnabel, Roman

    2012-11-05

    Michelson-type laser-interferometric gravitational-wave (GW) observatories employ very high light powers as well as transmissively-coupled Fabry-Perot arm resonators in order to realize high measurement sensitivities. Due to the absorption in the transmissive optics, high powers lead to thermal lensing and hence to thermal distortions of the laser beam profile, which sets a limit on the maximal light power employable in GW observatories. Here, we propose and realize a Michelson-type laser interferometer with arm resonators whose coupling components are all-reflective second-order Littrow gratings. In principle such gratings allow high finesse values of the resonators but avoid bulk transmission of the laser light and thus the corresponding thermal beam distortion. The gratings used have three diffraction orders, which leads to the creation of a second signal port. We theoretically analyze the signal response of the proposed topology and show that it is equivalent to a conventional Michelson-type interferometer. In our proof-of-principle experiment we generated phase-modulation signals inside the arm resonators and detected them simultaneously at the two signal ports. The sum signal was shown to be equivalent to a single-output-port Michelson interferometer with transmissively-coupled arm cavities, taking into account optical loss. The proposed and demonstrated topology is a possible approach for future all-reflective GW observatory designs.

  20. Second-order sliding mode control with experimental application.

    PubMed

    Eker, Ilyas

    2010-07-01

    In this article, a second-order sliding mode control (2-SMC) is proposed for second-order uncertain plants using equivalent control approach to improve the performance of control systems. A Proportional + Integral + Derivative (PID) sliding surface is used for the sliding mode. The sliding mode control law is derived using direct Lyapunov stability approach and asymptotic stability is proved theoretically. The performance of the closed-loop system is analysed through an experimental application to an electromechanical plant to show the feasibility and effectiveness of the proposed second-order sliding mode control and factors involved in the design. The second-order plant parameters are experimentally determined using input-output measured data. The results of the experimental application are presented to make a quantitative comparison with the traditional (first-order) sliding mode control (SMC) and PID control. It is demonstrated that the proposed 2-SMC system improves the performance of the closed-loop system with better tracking specifications in the case of external disturbances, better behavior of the output and faster convergence of the sliding surface while maintaining the stability. 2010 ISA. Published by Elsevier Ltd. All rights reserved.
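    A minimal sketch of sliding mode control with a PID sliding surface, on a toy double-integrator plant (the gains, the tanh boundary layer replacing sign() to soften chattering, and the constant disturbance are my own choices, not the article's experimental design):

```python
import numpy as np

Kp, Ki, Kd = 4.0, 2.0, 1.0   # PID sliding-surface gains
K, eps = 2.0, 0.05           # switching gain and boundary-layer width
d = 0.2                      # constant matched disturbance
dt, T = 0.001, 10.0
r = 1.0                      # constant position reference

x = xd = ei = 0.0            # position, velocity, error integral
for _ in range(int(T / dt)):
    e = r - x
    ed = -xd                                  # d(e)/dt for constant r
    s = Kd * ed + Kp * e + Ki * ei            # PID sliding surface
    # equivalent-control part plus smoothed switching part
    u = (Kp * ed + Ki * e) / Kd + K * np.tanh(s / eps)
    xdd = u + d                               # plant: x'' = u + d
    x += xd * dt
    xd += xdd * dt
    ei += e * dt

print(round(r - x, 3))   # tracking error driven near zero
```

Despite the unknown disturbance `d`, the switching term keeps the state on the surface and the integral term in the surface removes the steady-state error.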

  1. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
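    The batch/sequential equivalence can be illustrated on the simplest case, a constant scalar state with no process noise, where a Kalman filter started from a diffuse prior reproduces the batch least-squares estimate (a toy illustration, not the paper's epoch-state estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
x_true, R = 3.0, 0.25                     # constant state, meas. variance
z = x_true + rng.normal(0, np.sqrt(R), 50)

# batch least-squares (epoch-state) estimate: the sample mean
x_batch = z.mean()

# sequential Kalman filter: no process noise, near-diffuse prior
x_hat, P = 0.0, 1e9
for zi in z:
    Kg = P / (P + R)                      # Kalman gain
    x_hat += Kg * (zi - x_hat)            # measurement update
    P *= (1 - Kg)

print(np.isclose(x_hat, x_batch, atol=1e-6))   # True: estimates coincide
```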

  2. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2012-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  3. Order Reduction, Projectability and Constraints of Second-Order Field Theories and Higher-Order Mechanics

    NASA Astrophysics Data System (ADS)

    Gaset, Jordi; Román-Roy, Narciso

    2016-12-01

    The projectability of Poincaré-Cartan forms in a third-order jet bundle J3π onto a lower-order jet bundle is a consequence of the degenerate character of the corresponding Lagrangian. This fact is analyzed using the constraint algorithm for the associated Euler-Lagrange equations in J3π. The results are applied to study the Hilbert Lagrangian for the Einstein equations (in vacuum) from a multisymplectic point of view. Thus we show how these equations are a consequence of the application of the constraint algorithm to the geometric field equations, while the other constraints are related to the fact that this second-order theory is equivalent to a first-order theory. Furthermore, the case of higher-order mechanics is also studied as a particular situation.

  4. Axioms of adaptivity

    PubMed Central

    Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.

    2014-01-01

    This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Solely four axioms guarantee the optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution, and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson (2007). Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390

  5. Punjabis Learning English: Word Order. TEAL Occasional Papers, Vol. l, 1977.

    ERIC Educational Resources Information Center

    Seesahai, Maureen

    When teaching English as a second language to speakers of Punjabi, it is useful for the teacher to have some knowledge of the students' native language. This paper analyzes the differences in word order between English and Punjabi. The five basic sentence patterns in English are contrasted with the equivalent sentence patterns in Punjabi.…

  6. Spacecraft attitude determination using a second-order nonlinear filter

    NASA Technical Reports Server (NTRS)

    Vathsal, S.

    1987-01-01

    The stringent attitude determination accuracy and faster slew maneuver requirements demanded by present-day spacecraft control systems motivate the development of recursive nonlinear filters for attitude estimation. This paper presents the second-order filter development for the estimation of the attitude quaternion using three-axis gyro and star tracker measurement data. Performance comparisons have been made by computer simulation of system models and filter mechanization. It is shown that the second-order filter consistently performs better than the extended Kalman filter when the performance index of the root-sum-square estimation error of the quaternion vector is compared. The second-order filter identifies the gyro drift rates faster than the extended Kalman filter. The uniqueness of this algorithm is the online generation of the time-varying process and measurement noise covariance matrices, derived as functions of the process and measurement nonlinearity, respectively.

  7. The Semipalatinsk nuclear test site: a first analysis of solid cancer incidence (selected sites) due to test-related radiation.

    PubMed

    Gusev, B I; Rosenson, R I; Abylkassimova, Z N

    1998-10-01

    Since 1956, cancer incidences have been analysed in several rayons of the Semipalatinsk oblast, with cross-sectional analyses conducted every 5 years. Data on different tumor localizations were recorded within a heavily contaminated so-called main area of nine villages (estimated average effective equivalent dose of about 2000 mSv) and a so-called control area (estimated average effective equivalent dose of about 70 mSv), each including approximately 10000 persons. Up to 1970, the excess cancer incidence in the exposed villages was observed to increase; after 1970, a decrease was noted, followed by a second increase in the late 1980s. The main sites of excess cancer included the esophagus, stomach, and liver. Up to 1970, esophageal cancer incidence was predominant, but it decreased thereafter, while the incidence of stomach and liver cancers increased. The second peak of excess cancer rates was mainly due to lung, breast, and thyroid carcinomas.

  8. The radiated noise from isotropic turbulence revisited

    NASA Technical Reports Server (NTRS)

    Lilley, Geoffrey M.

    1993-01-01

    The noise radiated from isotropic turbulence at low Mach numbers and high Reynolds numbers, as derived by Proudman (1952), was the first application of Lighthill's Theory of Aerodynamic Noise to a complete flow field. The theory presented by Proudman involves the assumption of the neglect of retarded time differences, and so replaces the second-order retarded-time and space covariance of Lighthill's stress tensor, Tij, and in particular its second time derivative, by the equivalent simultaneous covariance. This assumption is a valid approximation in the derivation of the covariance of the second time derivative of Tij at low Mach numbers, but is not justified when that covariance is reduced to the sum of products of the time derivatives of equivalent second-order velocity covariances, as required when Gaussian statistics are assumed. The present paper removes these assumptions and finds that although the changes in the analysis are substantial, the change in the numerical result for the total acoustic power is small. The present paper also considers an alternative analysis which does not neglect retarded times. It makes use of the Lighthill relationship, whereby the fourth-order Tij retarded-time covariance is evaluated from the square of the similar second-order covariance, which is assumed known. In this derivation, no statistical assumptions are involved. This result, using distributions for the second-order space-time velocity-squared covariance based on the Direct Numerical Simulation (DNS) results of both Sarkar and Hussaini (1993) and Dubois (1993), is compared with the re-evaluation of Proudman's original model. These results are then compared with the sound power derived from a phenomenological model based on simple approximations to the retarded-time/space covariance of Txx. Finally, the recent numerical solutions of Sarkar and Hussaini (1993) for the acoustic power are compared with the results obtained from the analytic solutions.

  9. A study of the applicability of nucleation theory to quasi-thermodynamic transitions of second and higher Ehrenfest-order

    NASA Technical Reports Server (NTRS)

    Barker, R. E., Jr.; Campbell, K. W.

    1985-01-01

    The applicability of classical nucleation theory to second (and higher) order thermodynamic transitions in the Ehrenfest sense has been investigated and expressions have been derived upon which the qualitative and quantitative success of the basic approach must ultimately depend. The expressions describe the effect of temperature undercooling, hydrostatic pressure, and tensile stress upon the critical parameters, the critical nucleus size, and critical free energy barrier, for nucleation in a thermodynamic transition of any general order. These expressions are then specialized for the case of first and second order transitions. The expressions for the case of undercooling are then used in conjunction with literature data to estimate values for the critical quantities in a system undergoing a pseudo-second order transition (the glass transition in polystyrene). Methods of estimating the interfacial energy gamma in systems undergoing a first and second order transition are also discussed.

  10. Quantitative and qualitative estimates of cross-border tobacco shopping and tobacco smuggling in France.

    PubMed

    Lakhdar, C Ben

    2008-02-01

    In France, cigarette sales have fallen sharply, especially in border areas, since the price increases of 2003 and 2004. It was proposed that these falls were not due to people quitting smoking but rather to increased cross-border sales of tobacco and/or smuggling. This paper aims to test this proposition. Three approaches have been used. First, cigarette sales data from French sources for the period 1999-2006 were collected, and a simulation of the changes seen within these sales was carried out in order to estimate what the sales situation would have looked like without the presence of foreign tobacco. Second, the amounts of tobacco that the French population reported consuming were compared with registered tobacco sales. Finally, in order to identify the countries of origin of foreign tobacco entering France, we collected a random sample of cigarette packs from a waste collection centre. According to the first method, cross-border shopping and smuggling of tobacco accounted for 8635 tonnes of tobacco in 2004, 9934 in 2005, and 9930 in 2006, i.e., between 14% and 17% of total sales. The second method gave larger results: the difference between registered cigarette sales and cigarettes declared as being smoked was around 12,000 to 13,000 tonnes in 2005, equivalent to 20% of legal sales. The collection of cigarette packs at a waste collection centre showed that foreign cigarettes accounted for 18.6% of our sample in 2005 and 15.5% in 2006. France seems mainly to be a victim of cross-border purchasing of tobacco products, with the contraband market for tobacco remaining modest. In order to avoid cross-border purchases, an increased harmonization of national policies on the taxation of tobacco products needs to be envisaged by the European Union.

  11. Exam Question Exchange.

    ERIC Educational Resources Information Center

    Alexander, John J., Ed.

    1978-01-01

    Two exam questions are presented. One, suitable for advanced undergraduate or beginning graduate courses in organic chemistry, is on equivalent expressions for the description of several pericyclic reactions. The second, for general chemistry students, asks for an estimation of the rate of decay of a million-year-old Uranium-238 sample. (BB)

  12. Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.

    2018-03-01

    Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physico-chemical processes that affect the fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique based on obtaining an equivalent circuit that not only correctly fits the experimental spectra, but whose elements have a mechanistic physical meaning. In order to obtain the aforementioned electric equivalent circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra. A two-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
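
    The 12 candidate circuits themselves are not reproduced in this record. Purely as an illustration of the approach, a minimal Randles-type cell (a common mechanistic starting point, not necessarily the circuit selected in the paper) can be sketched as:

```python
import numpy as np

def randles_impedance(freq_hz, r_ohmic, r_ct, c_dl):
    """Impedance of a minimal Randles-type equivalent circuit: an ohmic
    resistance in series with a charge-transfer resistance in parallel
    with a double-layer capacitance. Each element carries a mechanistic
    meaning (membrane/contact losses, activation losses, double-layer
    charging), which is the spirit of the modelling described above."""
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    return r_ohmic + r_ct / (1.0 + 1j * w * r_ct * c_dl)

# High-frequency intercept -> r_ohmic; low-frequency intercept -> r_ohmic + r_ct.
# Parameter values below are illustrative, not fitted values from the paper.
z = randles_impedance([1e-3, 1e6], r_ohmic=0.01, r_ct=0.05, c_dl=0.1)
```

    Fitting such a circuit to measured spectra, and then checking that the fitted element values are physically plausible, is what distinguishes a mechanistic equivalent circuit from a purely empirical one.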

  13. B{sub K} with two flavors of dynamical overlap fermions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aoki, S.; Riken BNL Research Center, Brookhaven National Laboratory, Upton, New York 11973; Fukaya, H.

    2008-05-01

    We present a two-flavor QCD calculation of B{sub K} on a 16{sup 3}x32 lattice at a{approx}0.12 fm (or equivalently a{sup -1}=1.67 GeV). Both valence and sea quarks are described by the overlap fermion formulation. The matching factor is calculated nonperturbatively with the so-called RI/MOM scheme. We find that the lattice data are well described by the next-to-leading order (NLO) partially quenched chiral perturbation theory (PQChPT) up to around a half of the strange quark mass (m{sub s}{sup phys}/2). The data at quark masses heavier than m{sub s}{sup phys}/2 are fitted including a part of the next-to-next-to-leading order terms. We obtain B{sub K}{sup MS}(2 GeV)=0.537(4)(40), where the first error is statistical and the second is an estimate of systematic uncertainties from finite volume, fixing topology, the matching factor, and the scale setting.

  14. Observed galaxy number counts on the lightcone up to second order: I. Main result

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertacca, Daniele; Maartens, Roy; Clarkson, Chris, E-mail: daniele.bertacca@gmail.com, E-mail: roy.maartens@gmail.com, E-mail: chris.clarkson@gmail.com

    2014-09-01

    We present the galaxy number overdensity up to second order in redshift space on cosmological scales for a concordance model. The result contains all general relativistic effects up to second order that arise from observing on the past light cone, including all redshift effects, lensing distortions from convergence and shear, and contributions from velocities, Sachs-Wolfe, integrated SW and time-delay terms. This result will be important for accurate calculation of the bias on estimates of non-Gaussianity and on precision parameter estimates, introduced by nonlinear projection effects.

  15. A simplified fractional order impedance model and parameter identification method for lithium-ion batteries

    PubMed Central

    Yang, Qingxia; Xu, Jun; Cao, Binggang; Li, Xiuqing

    2017-01-01

    Identification of internal parameters of lithium-ion batteries is a useful tool to evaluate battery performance, and requires an effective model and algorithm. Based on the least square genetic algorithm, a simplified fractional order impedance model for lithium-ion batteries and the corresponding parameter identification method were developed. The simplified model was derived from the analysis of the electrochemical impedance spectroscopy data and the transient response of lithium-ion batteries with different states of charge. In order to identify the parameters of the model, an equivalent tracking system was established, and the method of least square genetic algorithm was applied using the time-domain test data. Experiments and computer simulations were carried out to verify the effectiveness and accuracy of the proposed model and parameter identification method. Compared with a second-order resistance-capacitance (2-RC) model and recursive least squares method, small tracing voltage fluctuations were observed. The maximum battery voltage tracing error for the proposed model and parameter identification method is within 0.5%; this demonstrates the good performance of the model and the efficiency of the least square genetic algorithm to estimate the internal parameters of lithium-ion batteries. PMID:28212405
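
    The basic fractional-order building block underlying such simplified impedance models is the constant-phase element. A sketch of its impedance (parameter values are illustrative, not fitted values from the paper):

```python
import numpy as np

def cpe_impedance(freq_hz, q, alpha):
    """Impedance of a constant-phase element (CPE), the elementary
    fractional-order circuit element: Z = 1 / (Q * (j*w)**alpha).
    alpha = 1 recovers an ideal capacitor of capacitance Q;
    alpha = 0 recovers a pure resistor of value 1/Q."""
    w = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    return 1.0 / (q * (1j * w) ** alpha)

# Illustrative parameters only; |Z| falls monotonically with frequency.
z = cpe_impedance([0.1, 1.0, 10.0], q=0.5, alpha=0.85)
```

    Replacing the ideal capacitors of a 2-RC model with CPEs of order alpha < 1 is what gives fractional-order models their improved fit to depressed impedance arcs with fewer parameters.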

  16. Second order kinetic theory of parallel momentum transport in collisionless drift wave turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yang, E-mail: lyang13@mails.tsinghua.edu.cn; Southwestern Institute of Physics, Chengdu 610041; Gao, Zhe

    A second order kinetic model for turbulent ion parallel momentum transport is presented. A new nonresonant second order parallel momentum flux term is calculated. The resonant component of the ion parallel electrostatic force is the momentum source, while the nonresonant component of the ion parallel electrostatic force compensates for that of the nonresonant second order parallel momentum flux. The resonant component of the kinetic momentum flux can be divided into three parts, including the pinch term, the diffusive term, and the residual stress. By reassembling the pinch term and the residual stress, the residual stress can be considered as amore » pinch term of parallel wave-particle resonant velocity, and, therefore, may be called as “resonant velocity pinch” term. Considering the resonant component of the ion parallel electrostatic force is the transfer rate between resonant ions and waves (or, equivalently, nonresonant ions), a conservation equation of the parallel momentum of resonant ions and waves is obtained.« less

  17. An estimator for the standard deviation of a natural frequency. II.

    NASA Technical Reports Server (NTRS)

    Schiff, A. J.; Bogdanoff, J. L.

    1971-01-01

    A method has been presented for estimating the variability of a system's natural frequencies arising from the variability of the system's parameters. The only information required to obtain the estimates is the member variability, in the form of second-order properties, and the natural frequencies and mode shapes of the mean system. It has also been established for the systems studied by means of Monte Carlo estimates that the specification of second-order properties is an adequate description of member variability.
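
    For a single-degree-of-freedom example, the idea of propagating second-order member properties through the mean system can be sketched as follows (the mass-spring values are hypothetical, and independence of the parameters is assumed):

```python
import math

def natural_freq_std(k, m, sd_k, sd_m):
    """First-order (second-moment) estimate of the standard deviation of
    the natural frequency w = sqrt(k/m) of a one-DOF oscillator, given
    the member variability sd_k, sd_m: only second-order properties of
    the members and the mean system are needed, as in the approach above."""
    w = math.sqrt(k / m)
    dw_dk = 0.5 * w / k       # sensitivity of w to stiffness
    dw_dm = -0.5 * w / m      # sensitivity of w to mass
    return w, math.sqrt((dw_dk * sd_k) ** 2 + (dw_dm * sd_m) ** 2)

# Hypothetical mean system: k = 100 N/m +/- 5, m = 1 kg +/- 0.02
w_mean, w_sd = natural_freq_std(100.0, 1.0, 5.0, 0.02)
```

    For multi-DOF systems the same idea applies with the sensitivities evaluated from the mode shapes of the mean system, which is what the paper's Monte Carlo comparison validates.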

  18. The Historical Loss Scale: Longitudinal measurement equivalence and prospective links to anxiety among North American indigenous adolescents.

    PubMed

    Armenta, Brian E; Whitbeck, Les B; Habecker, Patrick N

    2016-01-01

    Thoughts of historical loss (i.e., the loss of culture, land, and people as a result of colonization) are conceptualized as a contributor to the contemporary distress experienced by North American Indigenous populations. Although discussions of historical loss and related constructs (e.g., historical trauma) are widespread within the Indigenous literature, empirical efforts to understand the consequence of historical loss are limited, partially because of the lack of valid assessments. In this study we evaluated the longitudinal measurement properties of the Historical Loss Scale (HLS)-a standardized measure that was developed to systematically examine the frequency with which Indigenous individuals think about historical loss-among a sample of North American Indigenous adolescents. We also test the hypothesis that thoughts of historical loss can be psychologically distressing. Via face-to-face interviews, 636 Indigenous adolescents from a single cultural group completed the HLS and a measure of anxiety at 4 time-points, which were separated by 1- to 2-year intervals (Mage = 12.09 years, SD = .86, 50.0% girls at baseline). Responses to the HLS were explained well by 3-factor (i.e., cultural loss, loss of people, and cultural mistreatment) and second-order factor structures. Both of these factor structures held full longitudinal metric (i.e., factor loadings) and scalar (i.e., intercepts) equivalence. In addition, using the second-order factor structure, more frequent thoughts of historical loss were associated with increased anxiety. The identified 3-factor and second-order HLS structures held full longitudinal measurement equivalence. Moreover, as predicted, our results suggest that historical loss can be psychologically distressing for Indigenous adolescents.

  19. Variability in hand-arm vibration during grinding operations.

    PubMed

    Liljelind, Ingrid; Wahlström, Jens; Nilsson, Leif; Toomingas, Allan; Burström, Lage

    2011-04-01

    Measurements of exposure to vibrations from hand-held tools are often conducted on a single occasion. However, repeated measurements may be crucial for estimating the actual dose with good precision. In addition, knowledge of determinants of exposure could be used to improve working conditions. The aim of this study was to assess hand-arm vibration (HAV) exposure during different grinding operations, in order to obtain estimates of the variance components and to evaluate the effect of work postures. Ten experienced operators used two compressed air-driven angle grinders of the same make in a simulated work task at a workplace. One part of the study consisted of using a grinder while assuming two different working postures: at a standard work bench (low) and on a wall with arms elevated and the work area adjusted to each operator's height (high). The workers repeated the task three times. In another part of the study, investigating the wheel wear, for each grinder, the operators used two new grinding wheels and with each wheel the operator performed two consecutive 1-min grinding tasks. Both grinding tasks were conducted on weld puddles of mild steel on a piece of mild steel. Measurements were taken according to ISO-standard 5349 [the equivalent hand-arm-weighted acceleration (m s(-2)) averaged over 1 min]. Mixed- and random-effects models were used to investigate the influence of the fixed variables and to estimate variance components. The equivalent hand-arm-weighted acceleration assessed when the task was performed on the bench and at the wall was 3.2 and 3.3 m s(-2), respectively. In the mixed-effects model, work posture was not a significant variable. The variables 'operator' and 'grinder' together explained only 12% of the exposure variability and 'grinding wheel' explained 47%; the residual variability of 41% remained unexplained. When the effect of grinding wheel wear was investigated in the random-effects model, 37% of the variability was associated with the wheel while minimal variability was associated with the operator or the grinder and 37% was unexplained. The interaction effect of grinder and operator explained 18% of the variability. In the wheel wear test, the equivalent hand-arm-weighted accelerations for Grinder 1 during the first and second grinding minutes were 3.4 and 2.9 m s(-2), respectively, and for Grinder 2, they were 3.1 and 2.9 m s(-2), respectively. For Grinder 1, the equivalent hand-arm-weighted acceleration during the first grinding minute was significantly higher (P = 0.04) than during the second minute. Work posture during grinding operations does not appear to affect the level of HAV. Grinding wheels explained much of the variability in this study, but almost 40% of the variance remained unexplained. The considerable variability in the equivalent hand-arm-weighted acceleration has an impact on the risk assessment at both the group and the individual level.
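
    The accelerations above are equivalent frequency-weighted values per ISO 5349. For risk assessment, ISO 5349-1 normalizes exposure to an 8-hour reference day; a minimal sketch of that normalization (the 2-hour daily duration in the example is hypothetical, not a figure from the study):

```python
import math

def daily_exposure_a8(a_hv, exposure_hours):
    """ISO 5349-1 daily vibration exposure A(8): the equivalent
    frequency-weighted hand-arm acceleration normalized to an
    8-hour reference duration, A(8) = a_hv * sqrt(T / 8)."""
    return a_hv * math.sqrt(exposure_hours / 8.0)

# e.g. grinding at 3.2 m/s^2 for a (hypothetical) 2 h per day:
a8 = daily_exposure_a8(3.2, 2.0)   # -> 1.6 m/s^2
```

    The square-root time dependence means that the measurement variability reported above propagates directly, in proportion, into the A(8) value used for risk assessment.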

  20. Integration of second cancer risk calculations in a radiotherapy treatment planning system

    NASA Astrophysics Data System (ADS)

    Hartmann, M.; Schneider, U.

    2014-03-01

    Second cancer risk in patients, in particular in children, who were treated with radiotherapy is an important side effect. It should be minimized by selecting an appropriate treatment plan for the patient. The objectives of this study were to integrate a risk model for radiation induced cancer into a treatment planning system, making it possible to judge different treatment plans with regard to second cancer induction, and to quantify the potential reduction in predicted risk. A model for radiation induced cancer including fractionation effects, which is valid for doses in the radiotherapy range, was integrated into a treatment planning system. From the three-dimensional (3D) dose distribution the 3D risk equivalent dose (RED) was calculated on an organ specific basis. In addition to RED, further risk coefficients such as OED (organ equivalent dose), EAR (excess absolute risk) and LAR (lifetime attributable risk) are computed. A risk model for radiation induced cancer was successfully integrated into a treatment planning system. Several risk coefficients can be viewed and used to identify critical situations where a plan can be optimised. Risk-volume-histograms and organ specific risks were calculated for different treatment plans and were used in combination with NTCP estimates for plan evaluation. It is concluded that the integration of second cancer risk estimates in a commercial treatment planning system is feasible. It can be used in addition to NTCP modelling for optimising treatment plans so that they result in the lowest possible second cancer risk for a patient.

  1. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGES

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
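
    The Cholesky/SVD formulation itself is not reproduced in this record. As a sketch of the quantity being estimated, the ordinary coherence functions follow directly from the cross-spectral density matrix:

```python
import numpy as np

def ordinary_coherence(csd):
    """Ordinary coherence functions computed from an estimated
    cross-spectral density matrix G (Hermitian, with input and output
    records stacked): gamma2[i, j] = |G_ij|**2 / (G_ii * G_jj)."""
    g = np.asarray(csd)
    d = np.real(np.diag(g))        # auto-spectra are real
    return np.abs(g) ** 2 / np.outer(d, d)

# A rank-one CSD matrix (every record driven by one common source)
# is fully coherent: every ordinary coherence equals 1.
v = np.array([1.0, 2.0 + 1.0j, 0.5j])
gamma2 = ordinary_coherence(np.outer(v, v.conj()))
```

    The partial and multiple coherences discussed above generalize this by conditioning out subsets of inputs, which is where the Cholesky (or SVD) factorization of the full matrix enters.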

  2. Treatment Characteristics of Second Order Structure of Proteins Using Low-Pressure Oxygen RF Plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayashi, Nobuya; Nakahigashi, Akari; Kawaguchi, Ryutaro

    2010-10-13

    Removal of proteins from the surfaces of medical equipment is attempted using oxygen plasma produced by RF discharge. FTIR spectra indicate that the C-H and N-H bonding in the casein protein is reduced after irradiation with oxygen plasma. Also, second order structures of the protein, such as the {alpha}-helix and {beta}-sheet, are modified by the oxygen plasma. Complete removal of casein protein at a concentration of 0.016 mg/cm{sup 2}, which is equivalent to the remnants found on medical equipment, requires two hours while avoiding damage to the equipment.

  3. Oxidative Recession, Sulfur Release, and Al2O3 Spallation for Y-Doped Alloys

    NASA Technical Reports Server (NTRS)

    Smialek, James L.

    2001-01-01

    Second-order spallation phenomena have been noted for Y-doped Rene'N5 after long term oxidation at 1150 degrees C. The reason for this behavior has not been conclusively identified. A mass equivalence analysis has shown that the surface recession resulting from oxidation has the potential of releasing about 0.15 monolayer of sulfur for every 1 mg/sq cm of oxygen reacted for an alloy containing 5 ppmw of sulfur. This amount is significant in comparison to levels that have been shown to result in first-order spallation behavior for undoped alloys. Oxidative recession is therefore speculated to be a contributing source of sulfur and second-order spallation for Y-doped alloys.

  4. Dynamic Monte Carlo description of thermal desorption processes

    NASA Astrophysics Data System (ADS)

    Weinketz, Sieghard

    1994-07-01

    The applicability of the dynamic Monte Carlo method of Fichthorn and Weinberg, in which the time evolution of a system is described in terms of the absolute number of different microscopic possible events and their associated transition rates, is discussed for the case of thermal desorption simulations. It is shown that the definition of the time increment at each successful event leads naturally to the macroscopic differential equation of desorption, in the case of simple first- and second-order processes in which the only possible events are desorption and diffusion. This equivalence is numerically demonstrated for a second-order case. In the sequence, the equivalence of this method with the Monte Carlo method of Sales and Zgrablich for more complex desorption processes, allowing for lateral interactions between adsorbates, is shown, even though the dynamic Monte Carlo method does not bear their limitation of a rapid surface diffusion condition, thus being able to describe a more complex ``kinetics'' of surface reactive processes, and therefore be applied to a wider class of phenomena, such as surface catalysis.
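
    A minimal sketch of the Fichthorn-Weinberg time increment for the second-order case discussed above, with desorption as the only event type (the rate constant and particle counts are illustrative):

```python
import math
import random

def second_order_desorption(n0, k, t_end, seed=0):
    """Minimal dynamic (kinetic) Monte Carlo run for a pure second-order
    (recombinative) desorption process. The only event is desorption of
    a pair, with total rate R = k * N * (N - 1); after each successful
    event the clock advances by dt = -ln(u) / R, the Fichthorn-Weinberg
    increment, which reproduces the macroscopic law dN/dt = -2*k*N**2
    for large N. Returns the particle count remaining at t_end."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > 1:
        total_rate = k * n * (n - 1)
        t += -math.log(1.0 - rng.random()) / total_rate
        if t > t_end:
            break
        n -= 2               # one event desorbs a pair
    return n

# Compare with the ODE solution N(t) = N0 / (1 + 2*k*N0*t):
n_final = second_order_desorption(n0=10000, k=1e-6, t_end=0.1)
```

    With these illustrative numbers the macroscopic law predicts roughly 9980 particles remaining, and the stochastic run fluctuates around that value; this is the numerical equivalence demonstrated in the paper.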

  5. Constrained State Estimation for Individual Localization in Wireless Body Sensor Networks

    PubMed Central

    Feng, Xiaoxue; Snoussi, Hichem; Liang, Yan; Jiao, Lianmeng

    2014-01-01

    Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health-care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing effective location-aware applications such as these. In this paper the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. A priori knowledge about the geometry among the on-body nodes is incorporated as an additional constraint into the traditional filtering system. The analytical expression of state estimation with a linear constraint, exploiting the additional information, is derived. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint to the linear case. Examples comparing the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for higher order nonlinearity, as presented in this paper, outperforms the first-order solution, and that constrained IMM-EKF achieves better estimation than IMM-EKF without constraints. Another Brownian motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without constraints. PMID:25390408
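
    For the linear case, the standard analytical expression is the minimum-variance projection of the unconstrained estimate onto the constraint surface. A sketch (the two-node geometry constraint and all numbers are hypothetical, for illustration only):

```python
import numpy as np

def project_estimate(x, p, d_mat, d_vec):
    """Minimum-variance projection of an unconstrained estimate x
    (with covariance p) onto the linear equality constraint D x = d:
    x_c = x - P D^T (D P D^T)^-1 (D x - d)."""
    x = np.asarray(x, float)
    gain = p @ d_mat.T @ np.linalg.inv(d_mat @ p @ d_mat.T)
    return x - gain @ (d_mat @ x - d_vec)

# Hypothetical geometry constraint: two on-body node coordinates
# must sum to a known total of 4.0.
x_hat = np.array([1.2, 2.9])            # unconstrained estimate (sums to 4.1)
P = np.eye(2)                           # its covariance
D = np.array([[1.0, 1.0]])
d = np.array([4.0])
x_c = project_estimate(x_hat, P, D, d)  # satisfies D @ x_c = d exactly
```

    The first- and second-order treatments above apply this same projection after linearizing a nonlinear constraint about the current estimate.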

  6. Exploiting Fractional Order PID Controller Methods in Improving the Performance of Integer Order PID Controllers: A GA Based Approach

    NASA Astrophysics Data System (ADS)

    Mukherjee, Bijoy K.; Metia, Santanu

    2009-10-01

    The paper is divided into three parts. The first part gives a brief introduction to the overall paper, to fractional order PID (PIλDμ) controllers and to Genetic Algorithm (GA). In the second part, it is first studied how the performance of an integer order PID controller deteriorates when implemented with lossy capacitors in its analog realization. Thereafter it is shown that the lossy capacitors can be effectively modeled by fractional order terms. Then, a novel GA based method is proposed to tune the controller parameters such that the original performance is retained even though realized with the same lossy capacitors. Simulation results are presented to validate the usefulness of the method. Some Ziegler-Nichols type tuning rules for the design of fractional order PID controllers have been proposed in the literature [11]. In the third part, a novel GA based method is proposed which shows how equivalent integer order PID controllers can be obtained which give performance levels similar to those of the fractional order PID controllers, thereby removing the complexity involved in the implementation of the latter. It is shown with extensive simulation results that the equivalent integer order PID controllers more or less retain the robustness and iso-damping properties of the original fractional order PID controllers. Simulation results also show that the equivalent integer order PID controllers are more robust than the normal Ziegler-Nichols tuned PID controllers.

  7. Prediction of the turbulent wake with second-order closure

    NASA Technical Reports Server (NTRS)

    Taulbee, D. B.; Lumley, J. L.

    1981-01-01

    A turbulence field was envisioned whose energy-containing scales would be Gaussian in the absence of inhomogeneity, gravity, etc. An equation was constructed for a function equivalent to the probability density, the second moment of which corresponded to the accepted modeled form of the Reynolds stress equation. The third-moment equations obtained from this were simplified by the assumption of weak inhomogeneity. Calculations with this model are presented, as well as interpretations of the results.

  8. Perturbative Out of Equilibrium Quantum Field Theory beyond the Gradient Approximation and Generalized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Ozaki, H.

    2004-01-01

    Using the closed-time-path formalism, we construct perturbative frameworks, in terms of the quasiparticle picture, for studying quasi-uniform relativistic quantum field systems near equilibrium and non-equilibrium quasistationary systems. We employ the derivative expansion and retain terms up to second order, i.e., one order higher than the gradient approximation. After constructing the self-energy resummed propagator, we formulate two kinds of mutually equivalent perturbative frameworks: the first is formulated on the basis of the ``bare'' number density function, and the second on the basis of the ``physical'' number density function. In the course of constructing the second framework, the generalized Boltzmann equations directly emerge, describing the evolution of the system.

  9. Phantom-derived estimation of effective dose equivalent from X rays with and without a lead apron.

    PubMed

    Mateya, C F; Claycamp, H G

    1997-06-01

    Organ dose equivalents were measured in a humanoid phantom in order to estimate the effective dose equivalent (H(E)) and effective dose (E) from low-energy x rays in the presence or absence of a protective lead apron. Plane-parallel irradiation conditions were approximated using direct x-ray beams of 76 and 104 kVp, and the resulting dosimetry data were adjusted to model exposure conditions in fluoroscopy settings. Values of H(E) and E estimated under shielded conditions were compared to the results of several recent studies that used combinations of measured and calculated dosimetry to model exposures to radiologists. While the estimates of H(E) and E without the lead apron were within 0.2 to 20% of expected values, estimates based on personal monitors worn at the (phantom) waist (underneath the apron) underestimated either H(E) or E, while monitors placed at the neck (above the apron) significantly overestimated both quantities. Also, the experimentally determined H(E) and E were 1.4 to 3.3 times greater than might be estimated using recently reported "two-monitor" algorithms for the estimation of effective dose quantities. The results suggest that accurate estimation of either H(E) or E from personal monitors under conditions of partial-body exposure remains problematic and is likely to require the use of multiple monitors.

  10. Causal dissipation for the relativistic dynamics of ideal gases

    NASA Astrophysics Data System (ADS)

    Freistühler, Heinrich; Temple, Blake

    2017-05-01

    We derive a general class of relativistic dissipation tensors by requiring that, combined with the relativistic Euler equations, they form a second-order system of partial differential equations which is symmetric hyperbolic in a second-order sense when written in the natural Godunov variables that make the Euler equations symmetric hyperbolic in the first-order sense. We show that this class contains a unique element representing a causal formulation of relativistic dissipative fluid dynamics which (i) is equivalent to the classical descriptions by Eckart and Landau to first order in the coefficients of viscosity and heat conduction and (ii) has its signal speeds bounded sharply by the speed of light. Based on these properties, we propose this system as a natural candidate for the relativistic counterpart of the classical Navier-Stokes equations.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ip, Hiu Yan; Schmidt, Fabian, E-mail: iphys@mpa-garching.mpg.de, E-mail: fabians@mpa-garching.mpg.de

    Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the 'separate universe' paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  14. A new neural network model for solving random interval linear programming problems.

    PubMed

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and that it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique.

  15. Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data

    PubMed Central

    CHEN, SHUAI; ZHAO, HONGWEI

    2013-01-01

    Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm, that was originally used for explaining the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has a quite small bias in many realistic settings. We conduct numerical studies to examine the finite sample property of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
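
    The redistribute-to-the-right idea underlying these estimators can be sketched in a few lines: each observation starts with mass 1/n, and the mass of every censored observation is passed equally to the observations strictly to its right, so the mass left on the events reproduces the Kaplan-Meier drops. A minimal sketch (survival times rather than costs; ties not handled):

```python
# Redistribute-to-the-right weights for right-censored data (sketch).

def rtr_weights(times, events):
    """times sorted ascending; events[i] = 1 for an event, 0 for censoring."""
    n = len(times)
    w = [1.0 / n] * n
    for i in range(n):
        if events[i] == 0:  # censored: pass mass to observations on the right
            right = range(i + 1, n)
            if len(right) > 0:
                share = w[i] / len(right)
                for j in right:
                    w[j] += share
            w[i] = 0.0
    return w

# Example: events at t = 1, 3, 4 and a censoring at t = 2.
w = rtr_weights([1, 2, 3, 4], [1, 0, 1, 1])
# Kaplan-Meier gives S(1) = 3/4, S(3) = 3/8, S(4) = 0, i.e. event masses
# 1/4, 3/8, 3/8 -- matching w = [0.25, 0.0, 0.375, 0.375].
```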

  16. Lag-One Autocorrelation in Short Series: Estimation and Hypotheses Testing

    ERIC Educational Resources Information Center

    Solanas, Antonio; Manolov, Rumen; Sierra, Vicenta

    2010-01-01

    In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision in the estimation of autocorrelation is…
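
    For reference, the conventional moment estimator of the first-order autoregressive parameter, the baseline that the reviewed estimators refine, is the lag-one sample autocorrelation:

```python
# Conventional lag-one autocorrelation estimator:
# r1 = sum_{t<n} (x_t - xbar)(x_{t+1} - xbar) / sum_t (x_t - xbar)^2

def lag1_autocorr(x):
    n = len(x)
    xbar = sum(x) / n
    d = [xi - xbar for xi in x]
    num = sum(d[t] * d[t + 1] for t in range(n - 1))
    den = sum(di * di for di in d)
    return num / den

r1 = lag1_autocorr([1.0, 2.0, 3.0, 4.0, 5.0])  # 4/10 = 0.4
```

    In the short series studied here this estimator is known to be badly biased, which is what motivates the alternatives.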

  17. The motion of an Earth satellite after imposition of a non-holonomic third-order constraint

    NASA Astrophysics Data System (ADS)

    Dodonov, V. V.; Soltakhanov, Sh. Kh.; Yushkov, M. P.

    2018-05-01

    We consider the motion of an Earth satellite in the case when, starting from a certain instant of time, the magnitude of its acceleration remains unchanged. This requirement is equivalent to a second-order nonlinear non-holonomic constraint imposed on the satellite motion. The results of calculations are given for the motion of three Soviet satellites, two of which are located on highly elliptical orbits.

  18. Inverse sequential detection of parameter changes in developing time series

    NASA Technical Reports Server (NTRS)

    Radok, Uwe; Brown, Timothy J.

    1992-01-01

    Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second is that of erroneously accepting the estimates for the enlarged set ('type 2 error'). A more stable combined 'no change' probability, which always falls between 0.5 and 0, is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT, Wald 1945) in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the combined probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.
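
    The ordinary (forward) Wald SPRT that this note inverts accumulates a log-likelihood ratio until it leaves a fixed interval. A minimal sketch for a Gaussian mean with known variance:

```python
# Forward Wald SPRT for N(mu1, sigma) vs. N(mu0, sigma): accumulate the
# log-likelihood ratio and stop when it crosses log A or log B.

from math import log

def sprt(samples, mu0, mu1, sigma=1.0, alpha=0.05, beta=0.05):
    a = log((1.0 - beta) / alpha)   # upper threshold: accept H1
    b = log(beta / (1.0 - alpha))   # lower threshold: accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += (mu1 - mu0) / sigma**2 * (x - 0.5 * (mu0 + mu1))
        if llr >= a:
            return "H1", n
        if llr <= b:
            return "H0", n
    return "continue", len(samples)

# Each sample equal to mu1 = 1 adds 0.5 to the LLR; log(19) ~ 2.944 is
# crossed at the sixth sample.
decision, n = sprt([1.0] * 10, mu0=0.0, mu1=1.0)
```

    The 'inverted' test of the abstract computes the error probabilities from the data instead of prescribing alpha and beta in advance.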

  19. Modeling of second order space charge driven coherent sum and difference instabilities

    NASA Astrophysics Data System (ADS)

    Yuan, Yao-Shuo; Boine-Frankenheim, Oliver; Hofmann, Ingo

    2017-10-01

    Second order coherent oscillation modes in intense particle beams play an important role for beam stability in linear or circular accelerators. In addition to the well-known second order even envelope modes and their instability, coupled even envelope modes and odd (skew) modes have recently been shown in [Phys. Plasmas 23, 090705 (2016), 10.1063/1.4963851] to lead to parametric instabilities in periodic focusing lattices with sufficiently different tunes. While that work relied partly on the usual envelope equations and partly on particle-in-cell (PIC) simulation, we revisit these modes here and show that the complete set of second order even and odd mode phenomena can be obtained in a unifying approach by using a single set of linearized rms moment equations based on "Chernin's equations." This has the advantage that accurate information on growth rates can be obtained and gathered in a "tune diagram." In periodic focusing we retrieve the parametric sum instabilities of coupled even and of odd modes. The stop bands obtained from these equations are compared with results from PIC simulations for waterbag beams and found to show very good agreement. The "tilting instability" obtained in constant focusing confirms the equivalence of this method with the linearized Vlasov-Poisson system evaluated in second order.

  20. Free energy reconstruction from steered dynamics without post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin

    2010-09-20

    Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in Fe-{alpha}. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.

  1. Comparison of day snorkeling, night snorkeling, and electrofishing to estimate bull trout abundance and size structure in a second-order Idaho stream

    Treesearch

    Russell F. Thurow; Daniel J. Schill

    1996-01-01

    Biologists lack sufficient information to develop protocols for sampling the abundance and size structure of bull trout Salvelinus confluentus. We compared summer estimates of the abundance and size structure of bull trout in a second-order central Idaho stream, derived by day snorkeling, night snorkeling, and electrofishing. We also examined the influence of water...

  2. On the Mathematical Modeling of Single and Multiple Scattering of Ultrasonic Guided Waves by Small Scatterers: A Structural Health Monitoring Measurement Model

    NASA Astrophysics Data System (ADS)

    Strom, Brandon William

    In an effort to assist in the paradigm shift from schedule-based maintenance to condition-based maintenance, we derive measurement models to be used within structural health monitoring algorithms. Our models are physics based, and use scattered Lamb waves to detect and quantify pitting corrosion. After covering the basics of Lamb waves and the reciprocity theorem, we develop a technique for the scattered wave solution. The first application is two-dimensional, and is employed in two different ways. The first approach integrates a traction distribution and replaces it by an equivalent force. The second approach is higher order and uses the actual traction distribution. We find that the equivalent force version of the solution technique holds well for small pits at low frequencies. The second application is three-dimensional. The equivalent force caused by the scattered wave of an arbitrary equivalent force is calculated. We obtain functions for the scattered wave displacements as a function of equivalent forces, equivalent forces as a function of incident wave, and scattered wave amplitudes as a function of incident amplitude. The third application uses self-consistency to derive governing equations for the scattered waves due to multiple corrosion pits. We decouple the implicit set of equations and solve explicitly by using a recursive series solution. Alternatively, we solve via an undetermined coefficient method which results in an interaction operator and solution via matrix inversion. The general solution is given for N pits including mode conversion. We show that the two approaches are equivalent, and give a solution for three pits. Various approximations are advanced to simplify the problem while retaining the leading order physics. As a final application, we use the multiple scattering model to investigate resonance of Lamb waves. We begin with a one-dimensional problem and progress to a three-dimensional problem.
A directed graph enables interpretation of the interaction operator, and we show that a series solution converges due to loss of energy in the system. We see that there are four causes of resonance and plot the modulation depth as a function of spacing between the pits.

  3. Second-order singular perturbative theory for gravitational lenses

    NASA Astrophysics Data System (ADS)

    Alard, C.

    2018-03-01

    The extension of the singular perturbative approach to the second order is presented in this paper. The general expansion to the second order is derived. The second-order expansion is considered as a small correction to the first-order expansion. Using this approach, it is demonstrated that in practice the second-order expansion is reducible to a first-order expansion via a re-definition of the first-order perturbative fields. Even if in usual applications the second-order correction is small, the reducibility of the second-order expansion to the first-order expansion indicates a potential degeneracy issue. In general, this degeneracy is hard to break. A useful and simple second-order approximation is the thin source approximation, which offers a direct estimation of the correction. The practical application of the corrections derived in this paper is illustrated by using an elliptical NFW lens model. The second-order perturbative expansion provides a noticeable improvement, even for the simplest case of thin source approximation. To conclude, it is clear that for accurate modelling of gravitational lenses using the perturbative method the second-order perturbative expansion should be considered. In particular, an evaluation of the degeneracy due to the second-order term should be performed, for which the thin source approximation is particularly useful.

  4. Reduced Order Podolsky Model

    NASA Astrophysics Data System (ADS)

    Thibes, Ronaldo

    2017-02-01

    We perform the canonical and path integral quantizations of a lower-order derivatives model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at classical and quantum levels. Concerning the dynamical time evolution, we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivatives order permeating the equations of motion, Dirac brackets and effective action.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kılıç, Emre, E-mail: emre.kilic@tum.de; Eibert, Thomas F.

    An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allows the utilization of the Lorentz reciprocity and the Poynting's theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data and promising reconstruction results are obtained.

  6. Adjustment of activity coefficients as a function of changes in temperature, using the specific interaction theory

    NASA Astrophysics Data System (ADS)

    Giffaut, Eric; Vitorge, Pierre; Capdevila, Helene

    1994-10-01

    The aim of this work is to propose and to check approximations for calculating, from only a few experimental measurements, the influences of ionic strength I and temperature T on Gibbs energy G, formal redox potential E and standard equilibrium constant K. Series expansions vs. T are first used: S and C_p°/2T are typically the first- and second-order terms in -G. In the same way, -ΔH and T²ΔC_p/2 are the first- and second-order terms of R ln K expansions vs. 1/T. This type of approximation is discussed for E of the M(4+)/M(3+), MO2(2+)/MO2(+) and MO2(CO3)3(4-)/MO2(CO3)3(5-) couples (M ≡ U or Pu) measured from 5 to 70 °C, for the standard ΔG of some solid U compounds, calculated from 17 to 117 °C, and for ΔC_p, ΔG and log K of the CO2(aq)/HCO3(-) equilibrium from 0 to 150 °C. Excess functions X^ex are then calculated from activity coefficients γ: enthalpy H or heat capacity C_p adjustment as a function of I changes is needed only when the γ adjustment as a function of T changes is needed. The variations in the specific interaction theory coefficient ε with T are small and roughly linear for the above redox equilibria and for the mean γ of chloride electrolytes: first-order expansion seems enough to deduce ε, and then the excess functions G^ex, S^ex and H^ex, in this T range; but second-order expansion is more consistent for estimation of C_p^ex.
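
    The expansion of R ln K in powers of u = 1/T - 1/T0 described above can be checked numerically. The sketch below uses hypothetical thermodynamic values (not the paper's data) and compares the first- and second-order truncations against the exact constant-ΔCp result:

```python
# R ln K as a series in u = 1/T - 1/T0: first-order coefficient -dH(T0),
# second-order coefficient T0^2 * dCp / 2. Values are hypothetical.

from math import log

R = 8.314                       # J/(mol K)
T0, T = 298.15, 323.15          # K
dH0, dCp = 50000.0, 200.0       # dH(T0) and dCp (assumed constant)
lnK0 = 1.0                      # ln K at T0

# Exact result for constant dCp: dH(T) = dH0 + dCp*(T - T0),
# dS(T) = dS0 + dCp*ln(T/T0), ln K = (-dH(T) + T*dS(T)) / (R*T).
dS0 = (R * T0 * lnK0 + dH0) / T0
dH = dH0 + dCp * (T - T0)
dS = dS0 + dCp * log(T / T0)
lnK_exact = (-dH + T * dS) / (R * T)

u = 1.0 / T - 1.0 / T0
lnK_1st = lnK0 - dH0 * u / R                        # first-order truncation
lnK_2nd = lnK_1st + T0**2 * dCp / 2.0 * u**2 / R    # add second-order term
```

    Over this modest temperature range the second-order truncation lands much closer to the exact value than the first-order one, which is the point the abstract makes for C_p^ex.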

  7. Estimating three-dimensional energy transfer in isotropic turbulence

    NASA Technical Reports Server (NTRS)

    Li, K. S.; Helland, K. N.; Rosenblatt, M.

    1980-01-01

    To obtain an estimate of the spectral transfer function that indicates the rate of decay of energy, an x-wire probe was set at a fixed position, and two single wire probes were set at a number of locations in the same plane perpendicular to the mean flow in the wind tunnel. The locations of the single wire probes are determined by pseudo-random numbers (Monte Carlo). Second order spectra and cross spectra are estimated. The assumption of isotropy relative to second order spectra is examined. Third order spectra are also estimated corresponding to the positions specified. A Monte Carlo Fourier transformation of the downstream bispectra corresponding to integration across the plane perpendicular to the flow is carried out assuming isotropy. Further integration is carried out over spherical energy shells.

  8. Monetising the provision of informal long-term care by elderly people: estimates for European out-of-home caregivers based on the well-being valuation method.

    PubMed

    Schneider, Ulrike; Kleindienst, Julia

    2016-09-01

    Providing informal care can be both a burden and a source of satisfaction. To understand the welfare effect on caregivers, we need an estimate of the 'shadow value' of informal care, an imputed value for the non-market activity. We use data from the 2006-2007 Survey of Health Ageing and Retirement in Europe which offers the needed details on 29,471 individuals in Austria, Belgium, the Czech Republic, Denmark, France, Germany, Italy, the Netherlands, Poland, Spain, Sweden and Switzerland. Of these, 9768 are unpaid non-co-resident caregivers. To estimate net costs, we follow the subjective well-being valuation method, modelling respondents' life satisfaction as a product of informal care provision, income and personal characteristics, then expressing the relation between satisfaction and care as a monetary amount. We estimate a positive net effect of providing moderate amounts of informal care, equivalent to €93 for an hour of care/week provided by a caregiver at the median income. The net effect appears to turn negative for higher care burdens (over 30 hours/week). Interestingly, the effects of differences in care situation are at least an order of magnitude larger. We find that carers providing personal care are significantly more satisfied than those primarily giving help with housework, a difference equivalent to €811 a year at the median income. The article makes two unique contributions to knowledge. The first is its quantifying a net benefit to moderately time-intensive out-of-home caregivers. The second is its clear demonstration of the importance of heterogeneity of care burden on different subgroups. Care-giving context and specific activities matter greatly, pointing to the need for further work on targeting interventions at those caregivers most in need of them. © 2015 John Wiley & Sons Ltd.
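
    The monetisation step of the well-being valuation method can be sketched as a compensating-income calculation: with life satisfaction modelled as LS = ... + b_care·care + b_lny·ln(income), the income change equivalent to the care effect at income y0 solves b_care = b_lny·ln(y1/y0). The coefficients below are hypothetical, not the paper's estimates:

```python
# Compensating-income step of the well-being valuation method (sketch).
# b_care: regression coefficient of the care indicator on life satisfaction;
# b_lny:  coefficient of ln(income); y0: reference (e.g. median) income.
# Both coefficients here are made up for illustration.

from math import exp

def shadow_value(b_care, b_lny, y0):
    # solve b_care = b_lny * ln(y1 / y0) for y1, return the income change
    return y0 * (exp(b_care / b_lny) - 1.0)

v = shadow_value(b_care=0.02, b_lny=0.5, y0=20000.0)  # about 816.2
```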

  9. Demand for satellite-provided domestic communications services up to the year 2000

    NASA Technical Reports Server (NTRS)

    Stevenson, S.; Poley, W.; Lekan, J.; Salzman, J. A.

    1984-01-01

    Three fixed service telecommunications demand assessment studies were completed for NASA by The Western Union Telegraph Company and the U.S. Telephone and Telegraph Corporation. They provided forecasts of the total U.S. domestic demand, from 1980 to the year 2000, for voice, data, and video services. That portion that is technically and economically suitable for transmission by satellite systems, both large trunking systems and customer premises services (CPS) systems was also estimated. In order to provide a single set of forecasts a NASA synthesis of the above studies was conducted. The services, associated forecast techniques, and data bases employed by both contractors were examined, those elements of each judged to be the most appropriate were selected, and new forecasts were made. The demand for voice, data, and video services was first forecast in fundamental units of call-seconds, bits/year, and channels, respectively. Transmission technology characteristics and capabilities were then forecast, and the fundamental demand converted to an equivalent transmission capacity. The potential demand for satellite-provided services was found to grow by a factor of 6, from 400 to 2400 equivalent 36 MHz satellite transponders over the 20-year period. About 80 percent of this was found to be more appropriate for trunking systems and 20 percent CPS.

  10. Demand for satellite-provided domestic communications services up to the year 2000

    NASA Astrophysics Data System (ADS)

    Stevenson, S.; Poley, W.; Lekan, J.; Salzman, J. A.

    1984-11-01

    Three fixed service telecommunications demand assessment studies were completed for NASA by The Western Union Telegraph Company and the U.S. Telephone and Telegraph Corporation. They provided forecasts of the total U.S. domestic demand, from 1980 to the year 2000, for voice, data, and video services. That portion that is technically and economically suitable for transmission by satellite systems, both large trunking systems and customer premises services (CPS) systems was also estimated. In order to provide a single set of forecasts a NASA synthesis of the above studies was conducted. The services, associated forecast techniques, and data bases employed by both contractors were examined, those elements of each judged to be the most appropriate were selected, and new forecasts were made. The demand for voice, data, and video services was first forecast in fundamental units of call-seconds, bits/year, and channels, respectively. Transmission technology characteristics and capabilities were then forecast, and the fundamental demand converted to an equivalent transmission capacity. The potential demand for satellite-provided services was found to grow by a factor of 6, from 400 to 2400 equivalent 36 MHz satellite transponders over the 20-year period. About 80 percent of this was found to be more appropriate for trunking systems and 20 percent CPS.

  11. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Several methods currently exist for estimating parameter sensitivities, but they either require difficult-to-obtain second-order information or do not return reliable estimates for the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
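
    The differencing idea is easy to state: perturb the fixed parameter, re-solve the optimization, and difference the optimizers. A toy sketch (with the inner solve done analytically rather than by an RQP code):

```python
# Central-difference estimate of the parameter sensitivity dx*/dp.
# Toy problem: min_x (x - p)^2 + p*x. Stationarity 2(x - p) + p = 0
# gives x*(p) = p/2, so the exact sensitivity is 0.5.

def solve(p):
    # closed-form optimizer of the toy problem; in practice this would be
    # a call to an optimization code such as an RQP solver
    return p - p / 2.0

def sensitivity(p, h=1e-4):
    # central difference of the re-solved optimizers
    return (solve(p + h) - solve(p - h)) / (2.0 * h)

s = sensitivity(p=3.0)  # approximately 0.5
```

    The caveat raised in the abstract applies directly here: if the perturbation p ± h changes the set of active constraints, the two solves lie on different branches of x*(p) and the difference quotient is no longer a valid derivative estimate.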

  12. Heavy ion contributions to organ dose equivalent for the 1977 galactic cosmic ray spectrum

    NASA Astrophysics Data System (ADS)

    Walker, Steven A.; Townsend, Lawrence W.; Norbury, John W.

    2013-05-01

    Estimates of organ dose equivalents for the skin, eye lens, blood forming organs, central nervous system, and heart of female astronauts from exposures to the 1977 solar minimum galactic cosmic radiation spectrum for various shielding geometries involving simple spheres and locations within the Space Transportation System (space shuttle) and the International Space Station (ISS) are made using the HZETRN 2010 space radiation transport code. The dose equivalent contributions are broken down by charge groups in order to better understand the sources of the exposures to these organs. For thin shields, contributions from ions heavier than alpha particles comprise at least half of the organ dose equivalent. For thick shields, such as the ISS locations, heavy ions contribute less than 30% and in some cases less than 10% of the organ dose equivalent. Secondary neutron production contributions in thick shields also tend to be as large, or larger, than the heavy ion contributions to the organ dose equivalents.

  13. Assessing the Accuracy of Passive Microwave Estimates of Snow Water Equivalent in Data-Scarce Regions for Use in Water Resource Applications: A Case Study in the Upper Helmand Watershed, Afghanistan

    DTIC Science & Technology

    2011-03-01

    to remotely sensed SCA and SWE. The first analysis, a comparison to SCA imagery, tests the model's ability to correctly estimate the snow extent...remotely sensed data (Congalton and Green 2009). The producer's accuracies consistently show the model underestimating the snow extent at the end...and K. Green. 2009. Assessing the accuracy of remotely sensed data: principles and practices, Second edition. CRC Press, Taylor & Francis Group

  14. Technical note: Equivalent genomic models with a residual polygenic effect.

    PubMed

    Liu, Z; Goddard, M E; Hayes, B J; Reinhardt, F; Reents, R

    2016-03-01

    Routine genomic evaluations in animal breeding are usually based on either a BLUP with genomic relationship matrix (GBLUP) or single nucleotide polymorphism (SNP) BLUP model. For a multi-step genomic evaluation, these 2 alternative genomic models were proven to give equivalent predictions for genomic reference animals. The model equivalence was verified also for young genotyped animals without phenotypes. Due to incomplete linkage disequilibrium of SNP markers to genes or causal mutations responsible for genetic inheritance of quantitative traits, SNP markers cannot explain all the genetic variance. A residual polygenic effect is normally fitted in the genomic model to account for the incomplete linkage disequilibrium. In this study, we start by showing the proof that the multi-step GBLUP and SNP BLUP models are equivalent for the reference animals, when they have a residual polygenic effect included. Second, the equivalence of both multi-step genomic models with a residual polygenic effect was also verified for young genotyped animals without phenotypes. Additionally, we derived formulas to convert genomic estimated breeding values of the GBLUP model to its components, direct genomic values and residual polygenic effect. Third, we made a proof that the equivalence of these 2 genomic models with a residual polygenic effect holds also for single-step genomic evaluation. Both the single-step GBLUP and SNP BLUP models lead to equal prediction for genotyped animals with phenotypes (e.g., reference animals), as well as for (young) genotyped animals without phenotypes. Finally, these 2 single-step genomic models with a residual polygenic effect were proven to be equivalent for estimation of SNP effects, too. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  15. Estimation in SEM: A Concrete Example

    ERIC Educational Resources Information Center

    Ferron, John M.; Hess, Melinda R.

    2007-01-01

    A concrete example is used to illustrate maximum likelihood estimation of a structural equation model with two unknown parameters. The fitting function is found for the example, as are the vector of first-order partial derivatives, the matrix of second-order partial derivatives, and the estimates obtained from each iteration of the Newton-Raphson…
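
    A concrete one-parameter analogue of the Newton-Raphson iteration described here is ML estimation of a Poisson rate, where the score (first derivative) and the second derivative of the log-likelihood are available in closed form:

```python
# Newton-Raphson ML estimation of a Poisson rate lam from counts x_1..x_n.
# Score: dl/dlam = sum(x)/lam - n; second derivative: -sum(x)/lam^2.
# The known answer is the sample mean, which the iteration converges to.

def poisson_mle(xs, lam=1.0, tol=1e-10, max_iter=100):
    n, s = len(xs), sum(xs)
    for _ in range(max_iter):
        score = s / lam - n           # first-order partial derivative
        hess = -s / lam**2            # second-order partial derivative
        step = score / hess
        lam -= step                   # Newton-Raphson update
        if abs(step) < tol:
            break
    return lam

lam_hat = poisson_mle([2, 3, 4, 7])  # converges to the sample mean, 4.0
```

    The SEM case in the article is the same scheme with the scalar derivatives replaced by the gradient vector and Hessian matrix of the fitting function.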

  16. Second-harmonic generation in shear wave beams with different polarizations

    NASA Astrophysics Data System (ADS)

    Spratt, Kyle S.; Ilinskii, Yurii A.; Zabolotskaya, Evgenia A.; Hamilton, Mark F.

    2015-10-01

    A coupled pair of nonlinear parabolic equations was derived by Zabolotskaya [1] that model the transverse components of the particle motion in a collimated shear wave beam propagating in an isotropic elastic solid. Like the KZK equation, the parabolic equation for shear wave beams accounts consistently for the leading order effects of diffraction, viscosity and nonlinearity. The nonlinearity includes a cubic nonlinear term that is equivalent to that present in plane shear waves, as well as a quadratic nonlinear term that is unique to diffracting beams. The work by Wochner et al. [2] considered shear wave beams with translational polarizations (linear, circular and elliptical), wherein second-order nonlinear effects vanish and the leading order nonlinear effect is third-harmonic generation by the cubic nonlinearity. The purpose of the current work is to investigate the quadratic nonlinear term present in the parabolic equation for shear wave beams by considering second-harmonic generation in Gaussian beams as a second-order nonlinear effect using standard perturbation theory. In order for second-order nonlinear effects to be present, a broader class of source polarizations must be considered that includes not only the familiar translational polarizations, but also polarizations accounting for stretching, shearing and rotation of the source plane. It is found that the polarization of the second harmonic generated by the quadratic nonlinearity is not necessarily the same as the polarization of the source-frequency beam, and we are able to derive a general analytic solution for second-harmonic generation from a Gaussian source condition that gives explicitly the relationship between the polarization of the source-frequency beam and the polarization of the second harmonic.

  17. The equivalence of three techniques for estimating ground reflectance from LANDSAT digital count data

    NASA Technical Reports Server (NTRS)

    Richardson, A. J. (Principal Investigator)

    1983-01-01

    The equivalence of three separate investigations that related LANDSAT digital count (DC) to ground measured reflectance (R) was demonstrated. One investigator related DC data to the cosZ, where Z is the solar zenith angle, for surfaces of constant R. The second investigator corrected the DC data to the solar zenith angle of 39 degrees before relating to surface R. Both of these investigators used LANDSAT 1 and 2 data from overpass dates 1972 through 1977. A third investigator calculated the relation between DC and R based on atmospheric radiative transfer theory. The equation coefficients obtained from these three investigators for all four LANDSAT MSS bands were shown to be equivalent although differences in ground reflectance measurement procedures have created coefficient variations among the three investigations. These relations should be useful for testing atmospheric radiative transfer theory.

  18. Stray radiation dose and second cancer risk for a pediatric patient receiving craniospinal irradiation with proton beams

    PubMed Central

    Taddei, Phillip J; Mirkovic, Dragan; Fontenot, Jonas D; Giebeler, Annelise; Zheng, Yuanshui; Kornguth, David; Mohan, Radhe; Newhauser, Wayne D

    2014-01-01

    Proton beam radiotherapy unavoidably exposes healthy tissue to stray radiation emanating from the treatment unit and secondary radiation produced within the patient. These exposures provide no known benefit and may increase a patient's risk of developing a radiogenic cancer. The aims of this study were to calculate doses to major organs and tissues and to estimate second cancer risk from stray radiation following craniospinal irradiation (CSI) with proton therapy. This was accomplished using detailed Monte Carlo simulations of a passive-scattering proton treatment unit and a voxelized phantom to represent the patient. Equivalent doses, effective dose and corresponding risk for developing a fatal second cancer were calculated for a 10-year-old boy who received proton therapy. The proton treatment comprised CSI at 30.6 Gy plus a boost of 23.4 Gy to the clinical target volume. The predicted effective dose from stray radiation was 418 mSv, of which 344 mSv was from neutrons originating outside the patient; the remaining 74 mSv was caused by neutrons originating within the patient. This effective dose corresponds to an attributable lifetime risk of a fatal second cancer of 3.4%. The equivalent doses that predominated the effective dose from stray radiation were in the lungs, stomach and colon. These results establish a baseline estimate of the stray radiation dose and corresponding risk for a pediatric patient undergoing proton CSI and support the suitability of passively-scattered proton beams for the treatment of central nervous system tumors in pediatric patients. PMID:19305045

  19. Hydrogen-Bonding Catalysis and Inhibition by Simple Solvents in the Stereoselective Kinetic Epoxide-Opening Spirocyclization of Glycal Epoxides to Form Spiroketals

    PubMed Central

    Wurst, Jacqueline M.; Liu, Guodong; Tan, Derek S.

    2011-01-01

    Mechanistic investigations of a MeOH-induced kinetic epoxide-opening spirocyclization of glycal epoxides have revealed dramatic, specific roles for simple solvents in hydrogen-bonding catalysis of this reaction to form spiroketal products stereoselectively with inversion of configuration at the anomeric carbon. A series of electronically-tuned C1-aryl glycal epoxides was used to study the mechanism of this reaction based on differential reaction rates and inherent preferences for SN2 versus SN1 reaction manifolds. Hammett analysis of reaction kinetics with these substrates is consistent with an SN2 or SN2-like mechanism (ρ = −1.3 vs. ρ = −5.1 for corresponding SN1 reactions of these substrates). Notably, the spirocyclization reaction is second-order dependent on MeOH and the glycal ring oxygen is required for second-order MeOH catalysis. However, acetone cosolvent is a first-order inhibitor of the reaction. A transition state consistent with the experimental data is proposed in which one equivalent of MeOH activates the epoxide electrophile via a hydrogen bond while a second equivalent of MeOH chelates the sidechain nucleophile and glycal ring oxygen. A paradoxical previous observation that decreased MeOH concentration leads to increased competing intermolecular methyl glycoside formation is resolved by the finding that this side reaction is only first-order dependent on MeOH. This study highlights the unusual abilities of simple solvents to act as hydrogen-bonding catalysts and inhibitors in epoxide-opening reactions, providing both stereoselectivity and discrimination between competing reaction manifolds. This spirocyclization reaction provides efficient, stereocontrolled access to spiroketals that are key structural motifs in natural products. PMID:21539313

  20. A model for dispersion from area sources in convective turbulence. [for air pollution

    NASA Technical Reports Server (NTRS)

    Crane, G.; Panofsky, H. A.; Zeman, O.

    1977-01-01

    Four independent estimates of the vertical distribution of the eddy coefficient for dispersion of a passive contaminant from an extensive area source in a convective layer have been presented. The estimates were based on the following methods: (1) a second-order closure prediction, (2) field data of pollutant concentrations over Los Angeles, (3) lab measurements of particle dispersion, and (4) assumption of equality between momentum and mass transfer coefficients in the free convective limit. It is suggested that K-values estimated both from second-order closure theory and from Los Angeles measurements are systematically underestimated.

  1. Hermitian Hamiltonian equivalent to a given non-Hermitian one: manifestation of spectral singularity.

    PubMed

    Samsonov, Boris F

    2013-04-28

    One of the simplest non-Hermitian Hamiltonians, first proposed by Schwartz in 1960, that may possess a spectral singularity is analysed from the point of view of the non-Hermitian generalization of quantum mechanics. It is shown that the η operator, being a second-order differential operator, has supersymmetric structure. Asymptotic behaviour of the eigenfunctions of a Hermitian Hamiltonian equivalent to the given non-Hermitian one is found. As a result, the corresponding scattering matrix and cross section are given explicitly. It is demonstrated that the possible presence of a spectral singularity in the spectrum of the non-Hermitian Hamiltonian may be detected as a resonance in the scattering cross section of its Hermitian counterpart. Nevertheless, just at the singular point, the equivalent Hermitian Hamiltonian becomes undetermined.

  2. Evaluation of the kinetic oxidation of aqueous volatile organic compounds by permanganate.

    PubMed

    Mahmoodlu, Mojtaba G; Hassanizadeh, S Majid; Hartog, Niels

    2014-07-01

    The use of permanganate solutions for in-situ chemical oxidation (ISCO) is a well-established groundwater remediation technology, particularly for targeting chlorinated ethenes. The kinetics of the oxidation reactions is an important ISCO remediation design aspect that affects the efficiency and oxidant persistence. The overall rate of the ISCO reaction between oxidant and contaminant is typically described using a second-order kinetic model, while the second-order rate constant is determined experimentally by means of a pseudo-first-order approach. However, earlier studies of chlorinated hydrocarbons have yielded a wide range of values for the second-order rate constants. Also, there is limited insight into the kinetics of permanganate reactions with fuel-derived groundwater contaminants such as toluene and ethanol. In this study, batch experiments were carried out to investigate and compare the oxidation kinetics of aqueous trichloroethylene (TCE), ethanol, and toluene in an aqueous potassium permanganate solution. The overall second-order rate constants were determined directly by fitting a second-order model to the data, instead of using the typical pseudo-first-order approach. The second-order reaction rate constants (M^(-1) s^(-1)) for TCE, toluene, and ethanol were 8.0×10^(-1), 2.5×10^(-4), and 6.5×10^(-4), respectively. Results showed that the inappropriate use of the pseudo-first-order approach in several previous studies produced biased estimates of the second-order rate constants. In our study, this error was expressed as a function of the extent (P/N) to which the reactant concentrations deviated from the stoichiometric ratio of each oxidation reaction. The error associated with the inappropriate use of the pseudo-first-order approach is negatively correlated with the P/N ratio and reached up to 25% of the estimated second-order rate constant in some previous studies of TCE oxidation. Based on our results, a similar relation is valid for the other volatile organic compounds studied. Copyright © 2013 Elsevier B.V. All rights reserved.
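
    The direct second-order fit described above can be sketched as follows: simulate 1:1 second-order consumption of contaminant A by oxidant B, then recover k from the integrated rate law for unequal initial concentrations, ln(B·A0 / (A·B0)) = (B0 − A0)·k·t. All concentrations and the rate constant are illustrative, not the paper's data:

```python
# Minimal sketch (not the paper's fitting code) of recovering a
# second-order rate constant k from concentration-vs-time data.
import math

def simulate(k, A0, B0, t_end, dt=1e-3):
    """Euler integration of dA/dt = dB/dt = -k*A*B (1:1 stoichiometry)."""
    A, B, t = A0, B0, 0.0
    samples = []
    while t <= t_end:
        samples.append((t, A, B))
        r = k * A * B * dt
        A -= r
        B -= r
        t += dt
    return samples

def fit_k(samples, A0, B0):
    """Least-squares slope of the linearized integrated rate law
    ln(B*A0/(A*B0)) = (B0 - A0)*k*t, fitted through the origin."""
    num = sum(t * math.log(B * A0 / (A * B0)) for t, A, B in samples)
    den = sum(t * t for t, _, _ in samples)
    return (num / den) / (B0 - A0)

k_true = 0.5  # M^-1 s^-1, illustrative
data = simulate(k_true, A0=0.01, B0=0.05, t_end=100.0)
k_est = fit_k(data, A0=0.01, B0=0.05)
print(f"true k = {k_true}, fitted k = {k_est:.3f} M^-1 s^-1")
```

Note that B − A stays constant under 1:1 stoichiometry, which is what makes the linearization above exact; the pseudo-first-order shortcut instead assumes B barely changes at all, which fails when the concentrations approach the stoichiometric ratio.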

  3. 7 CFR 58.236 - Pasteurization and heat treatment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... condensing at a minimum temperature of 161 °F. for at least 15 seconds or its equivalent in bacterial.... for 15 seconds or its equivalent in bacterial destruction. (2) All buttermilk to be used in the... temperature of 161 °F for 15 seconds or its equivalent in bacterial destruction. (b) Heat treatment—(1) High...

  4. Discrete Kalman filtering equations of second-order form for control-structure interaction simulations

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Alvin, K. F.; Belvin, W. Keith

    1991-01-01

    A second-order form of discrete Kalman filtering equations is proposed as a candidate state estimator for efficient simulations of control-structure interactions in coupled physical coordinate configurations as opposed to decoupled modal coordinates. The resulting matrix equation of the present state estimator consists of the same symmetric, sparse N x N coupled matrices of the governing structural dynamics equations as opposed to unsymmetric 2N x 2N state space-based estimators. Thus, in addition to substantial computational efficiency improvement, the present estimator can be applied to control-structure design optimization for which the physical coordinates associated with the mass, damping and stiffness matrices of the structure are needed instead of modal coordinates.

  5. Online Estimation of Model Parameters of Lithium-Ion Battery Using the Cubature Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tian, Yong; Yan, Rusheng; Tian, Jindong; Zhou, Shijie; Hu, Chao

    2017-11-01

    Online estimation of state variables, including state-of-charge (SOC), state-of-energy (SOE) and state-of-health (SOH), is crucial for the operational safety of lithium-ion batteries. In order to improve the estimation accuracy of these state variables, a precise battery model needs to be established. As the lithium-ion battery is a nonlinear time-varying system, the model parameters vary significantly with factors such as ambient temperature, discharge rate and depth of discharge. This paper presents an online estimation method of model parameters for lithium-ion batteries based on the cubature Kalman filter. The commonly used first-order resistor-capacitor equivalent circuit model is selected as the battery model, based on which the model parameters are estimated online. Experimental results show that the presented method can accurately track the parameter variations under different scenarios.
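
    The first-order resistor-capacitor (Thevenin) equivalent circuit named above has a simple exact discrete form that is worth writing out; the parameter values here are illustrative placeholders, not values fitted in the paper:

```python
# Sketch of the first-order RC equivalent circuit battery model:
# terminal voltage = OCV - R0*i - v1, where the polarization voltage v1
# obeys the exact discrete recurrence v1 <- a*v1 + R1*(1-a)*i with
# a = exp(-dt/(R1*C1)). All parameter values are illustrative.
import math

def terminal_voltage(ocv, current, dt, R0, R1, C1, v1=0.0):
    """Simulate terminal voltage over a constant-OCV segment.
    current > 0 means discharge; v1 is the RC polarization voltage."""
    a = math.exp(-dt / (R1 * C1))
    out = []
    for i in current:
        v1 = a * v1 + R1 * (1.0 - a) * i   # exact step for constant i over dt
        out.append(ocv - R0 * i - v1)
    return out

# 1 A discharge pulse for 100 s, sampled at 1 s
v = terminal_voltage(ocv=3.7, current=[1.0] * 100, dt=1.0,
                     R0=0.05, R1=0.02, C1=2000.0)
print(f"initial drop: {3.7 - v[0]:.4f} V, after 100 s: {3.7 - v[-1]:.4f} V")
```

A filter such as the cubature Kalman filter would treat R0, R1 and C1 (and OCV) as slowly varying quantities to be estimated from measured current and voltage; the model above supplies its measurement equation.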

  6. Nonlinear estimation theory applied to the interplanetary orbit determination problem.

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Choe, C. Y.

    1972-01-01

    Martingale theory and appropriate smoothing properties of Loeve (1953) have been used to develop a modified Gaussian second-order filter. The performance of the filter is evaluated through numerical simulation of a Jupiter flyby mission. The observations used in the simulation are on-board measurements of the angle between Jupiter and a fixed star taken at discrete time intervals. In the numerical study, the influence of each of the second-order terms is evaluated. Five filter algorithms are used in the simulations. Four of the filters are the modified Gaussian second-order filter and three approximations derived by neglecting one or more of the second-order terms in the equations. The fifth filter is the extended Kalman-Bucy filter which is obtained by neglecting all of the second-order terms.
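
    The distinction drawn above between the Gaussian second-order filter and the extended Kalman-Bucy filter can be illustrated in the scalar case (this is a generic textbook illustration, not the paper's filter equations): for a nonlinear propagation x' = f(x) with mean m and variance P, a second-order filter propagates the mean as f(m) + ½f''(m)P, while the EKF keeps only f(m).

```python
# Scalar illustration of a second-order mean-propagation term versus the
# EKF approximation. For f(x) = x^2 the second-order form is exact, since
# E[x^2] = m^2 + P for any distribution with mean m and variance P.

def propagate_mean(f, d2f, m, P, second_order=True):
    """Propagated mean: f(m) plus the optional second-order correction."""
    correction = 0.5 * d2f(m) * P if second_order else 0.0
    return f(m) + correction

f = lambda x: x * x     # illustrative nonlinear propagation
d2f = lambda x: 2.0     # its second derivative
m, P = 1.0, 0.1
second = propagate_mean(f, d2f, m, P)                       # m^2 + P
ekf = propagate_mean(f, d2f, m, P, second_order=False)      # m^2 only
print(second, ekf)
```

Neglecting this and the analogous second-order terms in the covariance and gain equations is exactly how the fifth (EKF) algorithm in the study is obtained from the modified Gaussian second-order filter.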

  7. On the equivalence of generalized least-squares approaches to the evaluation of measurement comparisons

    NASA Astrophysics Data System (ADS)

    Koo, A.; Clare, J. F.

    2012-06-01

    Analysis of CIPM international comparisons is increasingly being carried out using a model-based approach that leads naturally to a generalized least-squares (GLS) solution. While this method offers the advantages of being easier to audit and having general applicability to any form of comparison protocol, there is a lack of consensus over aspects of its implementation. Two significant results are presented that show the equivalence of three differing approaches discussed by or applied in comparisons run by Consultative Committees of the CIPM. Both results depend on a mathematical condition equivalent to the requirement that any two artefacts in the comparison are linked through a sequence of measurements of overlapping pairs of artefacts. The first result is that a GLS estimator excluding all sources of error common to all measurements of a participant is equal to the GLS estimator incorporating all sources of error, including those associated with any bias in the standards or procedures of the measuring laboratory. The second result identifies the component of uncertainty in the estimate of bias that arises from possible systematic effects in the participants' measurement standards and procedures. The expression so obtained is a generalization of an expression previously published for a one-artefact comparison with no inter-participant correlations, to one for a comparison comprising any number of repeat measurements of multiple artefacts and allowing for inter-laboratory correlations.
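
    The GLS estimator underlying this model-based approach is the familiar beta = (X^T V^-1 X)^-1 X^T V^-1 y, with parameter covariance (X^T V^-1 X)^-1. A toy numerical sketch (the design matrix and covariance below are invented for illustration, not a CIPM comparison protocol):

```python
# Generic generalized least-squares estimate for y = X*beta + e with
# cov(e) = V. The three "measurements" of one artefact value and their
# covariance are toy numbers chosen only to show the mechanics.
import numpy as np

def gls(X, y, V):
    """Return the GLS estimate of beta and its covariance matrix."""
    Vinv = np.linalg.inv(V)
    cov = np.linalg.inv(X.T @ Vinv @ X)
    beta = cov @ X.T @ Vinv @ y
    return beta, cov

X = np.array([[1.0], [1.0], [1.0]])       # three measurements of one value
y = np.array([10.1, 9.9, 10.3])
V = np.array([[0.04, 0.01, 0.0],          # first two labs correlated
              [0.01, 0.04, 0.0],
              [0.0,  0.0,  0.09]])
beta, cov = gls(X, y, V)
print(f"estimate = {beta[0]:.3f}, standard uncertainty = {np.sqrt(cov[0, 0]):.3f}")
```

With a richer design matrix linking participants and artefacts, the same two lines of algebra analyse an entire comparison, which is why the approach is easy to audit.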

  8. Self-acceleration and matter content in bicosmology from Noether symmetries

    NASA Astrophysics Data System (ADS)

    Bouhmadi-López, Mariam; Capozziello, Salvatore; Martín-Moruno, Prado

    2018-04-01

    In bigravity, when taking into account the potential existence of matter fields minimally coupled to the second gravitation sector, the dynamics of our Universe depends on some matter that cannot be observed in a direct way. In this paper, we assume the existence of a Noether symmetry in bigravity cosmologies in order to constrain the dynamics of that matter. By imposing this assumption we obtain cosmological models with interesting phenomenology. In fact, considering that our universe is filled with standard matter and radiation, we show that the existence of a Noether symmetry implies that either the dynamics of the second sector decouples, making the model equivalent to general relativity (GR), or the cosmological evolution of our universe tends to a de Sitter state with the vacuum energy in it given by the conserved quantity associated with the symmetry. The physical consequences of the genuine bigravity models obtained are briefly discussed. We also point out that the first model, which is equivalent to GR, may be favored due to the potential appearance of instabilities in the second model.

  9. Estimating equivalence with quantile regression

    USGS Publications Warehouse

    Cade, B.S.

    2011-01-01

    Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
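
    The interval-inclusion logic of equivalence testing at a quantile can be sketched with a bootstrap confidence interval: declare equivalence if the interval for the treatment-control quantile difference lies entirely inside a pre-specified equivalence region. The data and equivalence region below are simulated, not the arsenic or vegetation data analyzed in the paper, and the bootstrap stands in for the quantile-regression intervals the author uses:

```python
# Sketch of equivalence-by-interval-inclusion at a single quantile,
# using a percentile bootstrap on simulated two-group data.
import numpy as np

rng = np.random.default_rng(42)

def quantile_diff_ci(treat, control, q, n_boot=2000, alpha=0.05):
    """Bootstrap percentile CI for quantile(treat, q) - quantile(control, q)."""
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        t = rng.choice(treat, size=treat.size, replace=True)
        c = rng.choice(control, size=control.size, replace=True)
        diffs[b] = np.quantile(t, q) - np.quantile(c, q)
    return np.quantile(diffs, alpha / 2), np.quantile(diffs, 1 - alpha / 2)

control = rng.normal(10.0, 2.0, size=200)
treat = rng.normal(10.2, 2.0, size=200)   # small, arguably negligible shift
region = (-1.0, 1.0)                      # pre-specified equivalence region

lo, hi = quantile_diff_ci(treat, control, q=0.9)
equivalent = region[0] < lo and hi < region[1]
print(f"90th-percentile difference CI: ({lo:.2f}, {hi:.2f}); equivalent: {equivalent}")
```

Repeating this across several quantiles is what reveals differences in heterogeneous distributions that a comparison of means would miss.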

  10. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner base. In the method, objective functions combining the symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.

  11. Transformation of body force localized near the surface of a half-space into equivalent surface stresses.

    PubMed

    Rouge, Clémence; Lhémery, Alain; Ségur, Damien

    2013-10-01

    An electromagnetic acoustic transducer (EMAT) or a laser used to generate elastic waves in a component is often described as a source of body force confined in a layer close to the surface. On the other hand, models for elastic wave radiation more efficiently handle sources described as distributions of surface stresses. Equivalent surface stresses can be obtained by integrating the body force with respect to depth. They are assumed to generate the same field as the one that would be generated by the body force. Such an integration scheme can be applied to Lorentz force for conventional EMAT configuration. When applied to magnetostrictive force generated by an EMAT in a ferromagnetic material, the same scheme fails, predicting a null stress. Transforming body force into equivalent surface stresses therefore, requires taking into account higher order terms of the force moments, the zeroth order being the simple force integration over the depth. In this paper, such a transformation is derived up to the second order, assuming that body forces are localized at depths shorter than the ultrasonic wavelength. Two formulations are obtained, each having some advantages depending on the application sought. They apply regardless of the nature of the force considered.

  12. Vibrational spectroscopy and microscopic imaging: novel approaches for comparing barrier physical properties in native and human skin equivalents.

    PubMed

    Yu, Guo; Zhang, Guojin; Flach, Carol R; Mendelsohn, Richard

    2013-06-01

    Vibrational spectroscopy and imaging have been used to compare barrier properties in human skin, porcine skin, and two human skin equivalents, Epiderm 200X with an enhanced barrier and Epiderm 200 with a normal barrier. Three structural characterizations were performed. First, chain packing and conformational order were compared in isolated human stratum corneum (SC), isolated porcine SC, and in the Epiderm 200X surface layers. The infrared (IR) spectrum of isolated human SC revealed a large proportion of orthorhombically packed lipid chains at physiological temperatures along with a thermotropic phase transition to a state with hexagonally packed chains. In contrast, the lipid phase at physiological temperatures in both porcine SC and in Epiderm 200X, although dominated by conformationally ordered chains, lacked significant levels of orthorhombic subcell packing. Second, confocal Raman imaging of cholesterol bands showed extensive formation of cholesterol-enriched pockets within the human skin equivalents (HSEs). Finally, IR imaging tracked lipid barrier dimensions as well as the spatial disposition of ordered lipids in human SC and Epiderm 200X. These approaches provide a useful set of experiments for exploring structural differences between excised human skin and HSEs, which in turn may provide a rationale for the functional differences observed among these preparations.

  13. Structure and Stability of One-Dimensional Detonations in Ethylene-Air Mixtures

    NASA Technical Reports Server (NTRS)

    Yungster, S.; Radhakrishnan, K.; Perkins, High D. (Technical Monitor)

    2003-01-01

    The propagation of one-dimensional detonations in ethylene-air mixtures is investigated numerically by solving the one-dimensional Euler equations with detailed finite-rate chemistry. The numerical method is based on a second-order spatially accurate total-variation-diminishing scheme and a point implicit, first-order-accurate, time marching algorithm. The ethylene-air combustion is modeled with a 20-species, 36-step reaction mechanism. A multi-level, dynamically adaptive grid is utilized, in order to resolve the structure of the detonation. Parametric studies over an equivalence ratio range of 0.5 < phi < 3 for different initial pressures and degrees of detonation overdrive demonstrate that the detonation is unstable for low degrees of overdrive, but the dynamics of wave propagation varies with fuel-air equivalence ratio. For equivalence ratios less than approximately 1.2 the detonation exhibits a short-period oscillatory mode, characterized by high-frequency, low-amplitude waves. Richer mixtures (phi > 1.2) exhibit a low-frequency mode that includes large fluctuations in the detonation wave speed; that is, a galloping propagation mode is established. At high degrees of overdrive, stable detonation wave propagation is obtained. A modified McVey-Toong short-period wave-interaction theory is in excellent agreement with the numerical simulations.

  14. Critical study of higher order numerical methods for solving the boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Wornom, S. F.

    1978-01-01

    A fourth order box method is presented for calculating numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations. The method, which is the natural extension of the second order box scheme to fourth order, was demonstrated with application to the incompressible, laminar and turbulent, boundary layer equations. The efficiency of the present method is compared with two point and three point higher order methods, namely, the Keller box scheme with Richardson extrapolation, the method of deferred corrections, a three point spline method, and a modified finite element method. For equivalent accuracy, numerical results show the present method to be more efficient than higher order methods for both laminar and turbulent flows.
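
    The Richardson extrapolation mentioned above combines two second-order solutions at step sizes h and h/2 so that the leading error term cancels, giving fourth-order accuracy: A4 = (4·A(h/2) − A(h)) / 3. A minimal sketch using the (second-order) trapezoidal rule on a model integral, rather than the boundary-layer equations themselves:

```python
# Richardson extrapolation of a second-order method to fourth order,
# illustrated with the composite trapezoidal rule on integral of sin(x)
# over [0, pi] (exact value 2). The combination (4*A(h/2) - A(h))/3
# cancels the O(h^2) error term of the trapezoidal rule.
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

a, b, exact = 0.0, math.pi, 2.0
A_h = trapezoid(math.sin, a, b, 8)        # second-order, step h
A_h2 = trapezoid(math.sin, a, b, 16)      # second-order, step h/2
A4 = (4.0 * A_h2 - A_h) / 3.0             # fourth-order combination
print(f"h error: {abs(A_h - exact):.2e}, extrapolated error: {abs(A4 - exact):.2e}")
```

The trade-off the paper studies is exactly this: extrapolation reaches fourth order cheaply but needs two grids, whereas a genuine fourth-order box scheme achieves the accuracy on a single grid.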

  15. Estimation of Handling Qualities Parameters of the Tu-144 Supersonic Transport Aircraft from Flight Test Data

    NASA Technical Reports Server (NTRS)

    Curry, Timothy J.; Batterson, James G. (Technical Monitor)

    2000-01-01

    Low order equivalent system (LOES) models for the Tu-144 supersonic transport aircraft were identified from flight test data. The mathematical models were given in terms of transfer functions with a time delay by the military standard MIL-STD-1797A, "Flying Qualities of Piloted Aircraft," and the handling qualities were predicted from the estimated transfer function coefficients. The coefficients and the time delay in the transfer functions were estimated using a nonlinear equation error formulation in the frequency domain. Flight test data from pitch, roll, and yaw frequency sweeps at various flight conditions were used for parameter estimation. Flight test results are presented in terms of the estimated parameter values, their standard errors, and output fits in the time domain. Data from doublet maneuvers at the same flight conditions were used to assess the predictive capabilities of the identified models. The identified transfer function models fit the measured data well and demonstrated good prediction capabilities. The Tu-144 was predicted to be between levels 2 and 3 for all longitudinal maneuvers and level 1 for all lateral maneuvers. High estimates of the equivalent time delay in the transfer function model caused the poor longitudinal ratings.

  16. A Novel Approach for Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Chen, Ya-Chin; Juang, Jer-Nan

    1998-01-01

    Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction was proposed and extensively implemented in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property and offers significant deviations from the standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.

  17. Space Vehicle Guidance, Navigation, Control, and Estimation Operations Technologies

    DTIC Science & Technology

    2018-03-29

    angular position around the ellipse, and the out-of-plane amplitude and angular position. These elements are explicitly relatable to the six rectangular...quasi) second order relative orbital elements are explored. One theory uses the expanded solution form and introduces several instantaneous ellipses...In each case, the theory quantifies distortion of the first order relative orbital elements when including second order effects. The new variables are

  18. Green operators for low regularity spacetimes

    NASA Astrophysics Data System (ADS)

    Sanchez Sanchez, Yafet; Vickers, James

    2018-02-01

    In this paper we define and construct advanced and retarded Green operators for the wave operator on spacetimes with low regularity. In order to do so we require that the spacetime satisfies the condition of generalised hyperbolicity, which is equivalent to well-posedness of the classical inhomogeneous problem with zero initial data where weak solutions are properly supported. Moreover, we provide an explicit formula for the kernel of the Green operators in terms of an arbitrary eigenbasis of H^1 and a suitable Green matrix that solves a system of second order ODEs.

  19. Weyl gravity revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Álvarez, Enrique; González-Martín, Sergio, E-mail: enrique.alvarez@uam.es, E-mail: sergio.gonzalez.martin@csic.es

    2017-02-01

    The on shell equivalence of first order and second order formalisms for the Einstein-Hilbert action does not hold for those actions quadratic in curvature. It would seem that by considering the connection and the metric as independent dynamical variables, there are no quartic propagators for any dynamical variable. This suggests that it is possible to get both renormalizability and unitarity along these lines. We have studied a particular instance of those theories, namely Weyl gravity. In this first paper we show that it is not possible to implement this program with the Weyl connection alone.

  20. Application of Hermitian time-dependent coupled-cluster response Ansätze of second order to excitation energies and frequency-dependent dipole polarizabilities

    NASA Astrophysics Data System (ADS)

    Wälz, Gero; Kats, Daniel; Usvyat, Denis; Korona, Tatiana; Schütz, Martin

    2012-11-01

    Linear-response methods, based on the time-dependent variational coupled-cluster or the unitary coupled-cluster model, and truncated at the second order according to the Møller-Plesset partitioning, i.e., the TD-VCC[2] and TD-UCC[2] linear-response methods, are presented and compared. For both of these methods a Hermitian eigenvalue problem has to be solved to obtain excitation energies and state eigenvectors. The excitation energies thus are guaranteed always to be real valued, and the eigenvectors are mutually orthogonal, in contrast to response theories based on “traditional” coupled-cluster models. It turned out that the TD-UCC[2] working equations for excitation energies and polarizabilities are equivalent to those of the second-order algebraic diagrammatic construction scheme ADC(2). Numerical tests are carried out by calculating TD-VCC[2] and TD-UCC[2] excitation energies and frequency-dependent dipole polarizabilities for several test systems and by comparing them to the corresponding values obtained from other second- and higher-order methods. It turns out that the TD-VCC[2] polarizabilities in the frequency regions away from the poles are of a similar accuracy as for other second-order methods, as expected from the perturbative analysis of the TD-VCC[2] polarizability expression. On the other hand, the TD-VCC[2] excitation energies are systematically too low relative to other second-order methods (including TD-UCC[2]). On the basis of these results and an analysis presented in this work, we conjecture that the perturbative expansion of the Jacobian converges more slowly for the TD-VCC formalism than for TD-UCC or for response theories based on traditional coupled-cluster models.

  1. Prevalence of consanguineous marriages in Syria.

    PubMed

    Othman, Hasan; Saadat, Mostafa

    2009-09-01

    Consanguineous marriage is the union of individuals having at least one common ancestor. The present cross-sectional study was carried out to document the prevalence and types of consanguineous marriages in the Syrian Arab Republic. Data on consanguineous marriages were collected using a simple questionnaire. The total number of couples in this study was 67,958 (urban areas: 36,574 couples; rural areas: 31,384 couples) from the following provinces: Damascus, Hamah, Tartous, Latakia, Al Raqa, Homs, Edlep and Aleppo. In each province urban and rural areas were surveyed. Consanguineous marriage was classified by the degree of relationship between couples: double first cousins (F=1/8), first cousins (F=1/16), second cousins (F=1/64) and beyond second cousins (F<1/64). The coefficient of inbreeding (F) was calculated for each couple and the mean coefficient of inbreeding (alpha) estimated for the population of each province, stratified by rural and urban areas. The results showed that the overall frequency of consanguinity was 30.3% in urban and 39.8% in rural areas. The total rate of consanguinity was found to be 35.4%. The equivalent mean inbreeding coefficient (alpha) was 0.0203 and 0.0265 in urban and rural areas, respectively. The mean proportion of consanguineous marriages ranged from 67.5% in Al Raqa province to 22.1% in Latakia province. The alpha-value ranged from 0.0358 to 0.0127 in these two provinces, respectively. The western and north-western provinces (including Tartous, Lattakia and Edlep) recorded lower levels of inbreeding than the central, northern and southern provinces. The overall alpha-value was estimated to be about 0.0236 for the studied populations. First cousin marriages (20.9%) were the most common type of consanguineous marriage, followed by double first cousin (7.8%) and second cousin marriages (3.3%); beyond second cousin was the least common type.
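
    The reported mean inbreeding coefficient is the frequency-weighted sum alpha = sum over marriage types of p_i * F_i. Recomputing it from the overall proportions quoted above (the small beyond-second-cousin contribution, F < 1/64, is omitted here, so the result slightly undershoots the reported 0.0236):

```python
# Mean coefficient of inbreeding (alpha) as a frequency-weighted sum of
# kinship coefficients, using the overall proportions from the abstract.
# Beyond-second-cousin unions (F < 1/64) are omitted, so this is a
# slight underestimate of the reported overall value of 0.0236.

marriage_types = {
    "double first cousin": (0.078, 1 / 8),   # (proportion, F)
    "first cousin":        (0.209, 1 / 16),
    "second cousin":       (0.033, 1 / 64),
}

alpha = sum(p * F for p, F in marriage_types.values())
print(f"alpha ~ {alpha:.4f}")
```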

  2. The performance of the new enhanced-resolution satellite passive microwave dataset applied for snow water equivalent estimation

    NASA Astrophysics Data System (ADS)

    Pan, J.; Durand, M. T.; Jiang, L.; Liu, D.

    2017-12-01

    The newly processed NASA MEaSUREs Calibrated Enhanced-Resolution Brightness Temperature (CETB) dataset, reconstructed using the antenna measurement response function (MRF), offers significantly improved fine-resolution measurements with better georegistration for time-series observations and an equivalent field of view (FOV) for frequencies sharing the same nominal spatial resolution. We aim to assess its potential for global snow observation, and therefore test its performance for characterizing snow properties, especially snow water equivalent (SWE), over large areas. In this research, two candidate SWE algorithms are tested in China for the years 2005 to 2010 using the reprocessed TB from the Advanced Microwave Scanning Radiometer for EOS (AMSR-E), with the results evaluated against daily snow depth measurements at over 700 national synoptic stations. The first algorithm is the SWE retrieval algorithm used for the FengYun (FY)-3 Microwave Radiation Imager. This algorithm uses the multi-channel TB to calculate SWE for three major snow regions in China, with coefficients adapted for different land cover types. The second algorithm is the newly established Bayesian Algorithm for SWE Estimation with Passive Microwave measurements (BASE-PM). This algorithm uses a physically based snow radiative transfer model to find the histogram of the most likely snow properties matching the multi-frequency TB from 10.65 to 90 GHz. It estimates snow depth and grain size at the same time and showed a 30 mm SWE RMS error against the ground radiometer measurements at Sodankylä. This study is the first attempt to test it spatially for satellite data. The use of this algorithm benefits from the high resolution and the spatial consistency between frequencies embedded in the new dataset. This research will answer three questions. First, to what extent can CETB increase the heterogeneity in the mapped SWE? Second, will the SWE estimation error statistics be improved using this high-resolution dataset? Third, how will the SWE retrieval accuracy be improved using CETB and the new SWE retrieval techniques?

  3. Preliminary ex vivo feasibility study on targeted cell surgery by high intensity focused ultrasound (HIFU).

    PubMed

    Wang, Zhi Biao; Wu, Junru; Fang, Liao Qiong; Wang, Hua; Li, Fa Qi; Tian, Yun Bo; Gong, Xiao Bo; Zhang, Hong; Zhang, Lian; Feng, Ruo

    2011-04-01

    High intensity focused ultrasound (HIFU) has become a new noninvasive surgical modality in medicine. A portion of tissue seated inside a patient's body may experience coagulative necrosis after a few seconds of insonification by high intensity focused ultrasound (US) generated by an extracorporeal focusing US transducer. The region of tissue affected by coagulative necrosis (CN) usually has an ellipsoidal shape when the thermal effect due to US absorption plays the dominant role; its long and short axes are parallel and perpendicular to the US propagation direction, respectively. Numerical computations using a nonlinear Gaussian beam model to describe the sound field in the focal zone, together with ex vivo experiments, showed that the short and long axes of the tissue experiencing CN can be as small as 50 μm and 250 μm, respectively, after a one-second US pulse exposure (spatial- and pulse-averaged acoustic power on the order of tens of watts; local spatial- and temporal-pulse-averaged intensity on the order of 3×10^4 W/cm^2) generated by a 1.6 MHz HIFU transducer of 12 cm diameter and 11 cm geometric focal length (f-number = 0.92). The concept of thermal dose in cumulative equivalent minutes was used to describe the possible tissue coagulative necrosis generated by HIFU. The number of cells suffering CN was estimated to be on the order of 40. This result suggests that HIFU can interact with tens of cells at or near its focal zone while keeping the neighboring cells minimally affected, and thus targeted cell surgery may be achievable. Copyright © 2010 Elsevier B.V. All rights reserved.

  4. A conformal mapping based fractional order approach for sub-optimal tuning of PID controllers with guaranteed dominant pole placement

    NASA Astrophysics Data System (ADS)

    Saha, Suman; Das, Saptarshi; Das, Shantanu; Gupta, Amitava

    2012-09-01

    A novel conformal mapping based fractional order (FO) methodology is developed in this paper for tuning existing classical (integer order) Proportional Integral Derivative (PID) controllers, especially for sluggish and oscillatory second order systems. The conventional pole placement tuning via the Linear Quadratic Regulator (LQR) method is extended to open loop oscillatory systems as well. The locations of the open loop zeros of a fractional order PID (FOPID or PIλDμ) controller have been approximated vis-à-vis an LQR-tuned conventional integer order PID controller, to achieve an equivalent integer order PID control system. This approach eases analog/digital realization of a FOPID controller by its integer order counterpart while preserving the advantages of the fractional order controller. It is shown that a decrease in the integro-differential operators of the FOPID/PIλDμ controller pushes the open loop zeros of the equivalent PID controller towards regions of greater damping, which gives a trajectory of the controller zeros and dominant closed loop poles. This trajectory is termed the "M-curve". This phenomenon is used to design a two-stage tuning algorithm which significantly reduces the existing PID controller's effort compared to a single-stage LQR based pole placement method at a desired closed loop damping and frequency.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powers, Andrew R.; Ghiviriga, Ion; Abboud, Khalil A.

    This report outlines the investigation of the iClick mechanism between gold(I)-azides and gold(I)-acetylides to yield digold triazolates. Isolation of digold triazolate complexes offers compelling support for the role of two copper(I) ions in CuAAC. In addition, a kinetic investigation reveals the reaction is first order in both Au(I)-N3 and Au(I)-C≡C-R, thus second order overall. A Hammett plot with ρ = 1.02(5) signifies that electron-withdrawing groups accelerate the cycloaddition by facilitating the coordination of the second gold ion in a π-complex. Rate inhibition by the addition of free triphenylphosphine to the reaction indicates that ligand dissociation is a prerequisite for the reaction. The mechanistic conclusions mirror those proposed for the CuAAC reaction.

  6. Double gate impact ionization MOS transistor: Proposal and investigation

    NASA Astrophysics Data System (ADS)

    Yang, Zhaonian; Zhang, Yue; Yang, Yuan; Yu, Ningmei

    2017-02-01

    In this paper, a double gate impact ionization MOS (DG-IMOS) transistor with improved performance is proposed and investigated by TCAD simulation. In the proposed design, a second gate is introduced into a conventional impact ionization MOS (IMOS) transistor, which lengthens the equivalent channel length and suppresses band-to-band tunneling. The OFF-state leakage current is reduced by over four orders of magnitude. In the ON-state, the second gate is negatively biased in order to enhance the electric field in the intrinsic region. As a result, the operating voltage does not increase with the increase in channel length. The simulation results verify that the proposed DG-IMOS achieves a better switching characteristic than the conventional IMOS. Lastly, the application of the DG-IMOS is discussed theoretically.

  7. On-line adaptive battery impedance parameter and state estimation considering physical principles in reduced order equivalent circuit battery models part 2. Parameter and state estimation

    NASA Astrophysics Data System (ADS)

    Fleischer, Christian; Waag, Wladislaw; Heyn, Hans-Martin; Sauer, Dirk Uwe

    2014-09-01

    Lithium-ion battery systems employed in high power demanding systems such as electric vehicles require a sophisticated monitoring system to ensure safe and reliable operation. Three major states of the battery are of special interest and need to be constantly monitored. These include: battery state of charge (SoC), battery state of health (capacity fade determination, SoH), and state of function (power fade determination, SoF). The second paper concludes the series by presenting a multi-stage online parameter identification technique based on a weighted recursive least quadratic squares parameter estimator to determine the parameters of the proposed battery model from the first paper during operation. A novel mutation based algorithm is developed to determine the nonlinear current dependency of the charge-transfer resistance. The influence of diffusion is determined by an on-line identification technique and verified on several batteries at different operation conditions. This method guarantees a short response time and, together with its fully recursive structure, assures a long-term stable monitoring of the battery parameters. The relative dynamic voltage prediction error of the algorithm is reduced to 2%. The changes of parameters are used to determine the states of the battery. The algorithm is real-time capable and can be implemented on embedded systems.
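
    The abstract does not reproduce the identification algorithm itself; as a minimal sketch of the core idea, a weighted (exponentially forgetting) recursive least squares estimator can track equivalent-circuit model parameters online. The toy model, gains, and forgetting factor below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One weighted RLS step: regressor phi, measurement y, forgetting factor lam.

    Exponential forgetting down-weights old samples so slowly drifting
    battery parameters can be tracked during operation."""
    k = P @ phi / (lam + phi @ P @ phi)    # gain vector
    theta = theta + k * (y - phi @ theta)  # parameter update
    P = (P - np.outer(k, phi @ P)) / lam   # covariance update
    return theta, P

# Toy identification: y = a*x1 + b*x2 with true a = 2.0, b = -0.5
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = np.eye(2) * 1e3
for _ in range(500):
    phi = rng.normal(size=2)
    y = 2.0 * phi[0] - 0.5 * phi[1] + 1e-3 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
```

    After 500 noisy samples the estimate converges close to the true parameters; the same recursion, with current- and voltage-derived regressors, underlies online impedance parameter tracking.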

  8. A New Method for Estimating Bacterial Abundances in Natural Samples using Sublimation

    NASA Technical Reports Server (NTRS)

    Glavin, Daniel P.; Cleaves, H. James; Schubert, Michael; Aubrey, Andrew; Bada, Jeffrey L.

    2004-01-01

    We have developed a new method based on the sublimation of adenine from Escherichia coli to estimate bacterial cell counts in natural samples. To demonstrate this technique, several types of natural samples, including beach sand, seawater, deep-sea sediment, and two soil samples from the Atacama Desert, were heated to a temperature of 500 °C for several seconds under reduced pressure. The sublimate was collected on a cold finger and the amount of adenine released from the samples was then determined by high performance liquid chromatography (HPLC) with UV absorbance detection. Based on the total amount of adenine recovered from DNA and RNA in these samples, we estimated bacterial cell counts ranging from approximately 10^5 to 10^9 E. coli cell equivalents per gram. For most of these samples, the sublimation-based cell counts were in agreement with total bacterial counts obtained by traditional DAPI staining. The simplicity and robustness of the sublimation technique compared to the DAPI staining method makes this approach particularly attractive for use in spacecraft instrumentation. NASA is currently planning to send a lander to Mars in 2009 in order to assess whether or not organic compounds, especially those that might be associated with life, are present in Martian surface samples. Based on our analyses of the Atacama Desert soil samples, several million bacterial cells per gram of Martian soil should be detectable using this sublimation technique.

  9. An analytical model of leakage neutron equivalent dose for passively-scattered proton radiotherapy and validation with measurements.

    PubMed

    Schneider, Christopher; Newhauser, Wayne; Farah, Jad

    2015-05-18

    Exposure to stray neutrons increases the risk of second cancer development after proton therapy. Previously reported analytical models of this exposure were difficult to configure and had not been investigated below 100 MeV proton energy. The purposes of this study were to test an analytical model of neutron equivalent dose per therapeutic absorbed dose (H/D) at 75 MeV and to improve the model by reducing the number of configuration parameters and making it continuous in proton energy from 100 to 250 MeV. To develop the analytical model, we used previously published H/D values in water from Monte Carlo simulations of a general-purpose beamline for proton energies from 100 to 250 MeV. We also configured and tested the model on in-air neutron equivalent doses measured for a 75 MeV ocular beamline. Predicted H/D values from the analytical model and Monte Carlo agreed well from 100 to 250 MeV (10% average difference). Predicted H/D values from the analytical model also agreed well with measurements at 75 MeV (15% average difference). The results indicate that analytical models can give fast, reliable calculations of neutron exposure after proton therapy. This ability is absent in treatment planning systems but vital to second cancer risk estimation.

  10. Quantum-state reconstruction by maximizing likelihood and entropy.

    PubMed

    Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk

    2011-07-08

    Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored.

  11. Equal-Curvature X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Zhang, William

    2002-01-01

    We introduce a new type of x-ray telescope design: an Equal-Curvature telescope. We simply add a second-order axial sag to the base grazing-incidence cone-cone telescope. The radius of curvature of the sag term is the same on the primary surface and on the secondary surface. The design is optimized so that the on-axis image spot at the focal plane is minimized. The on-axis RMS (root mean square) spot diameter of the two studied telescopes is less than 0.2 arc-seconds. The off-axis performance is comparable to equivalent Wolter type 1 telescopes.

  12. Estimation of local scale dispersion from local breakthrough curves during a tracer test in a heterogeneous aquifer: the Lagrangian approach.

    PubMed

    Vanderborght, Jan; Vereecken, Harry

    2002-01-01

    The local scale dispersion tensor, D_d, is a controlling parameter for the dilution of concentrations in a solute plume that is displaced by groundwater flow in a heterogeneous aquifer. In this paper, we estimate the local scale dispersion from time series or breakthrough curves, BTCs, of Br concentrations that were measured at several points in a fluvial aquifer during a natural gradient tracer test at Krauthausen. Locally measured BTCs were characterized by equivalent convection dispersion parameters: equivalent velocity, v_eq(x), and expected equivalent dispersivity, <λ_eq(x)>. A Lagrangian framework was used to approximately predict these equivalent parameters in terms of the spatial covariance of the log-transformed conductivity and the local scale dispersion coefficient. The approximate Lagrangian theory illustrates that <λ_eq(x)> increases with increasing travel distance and is much larger than the local scale dispersivity, λ_d. A sensitivity analysis indicates that <λ_eq(x)> is predominantly determined by the transverse component of the local scale dispersion and by the correlation scale of the hydraulic conductivity in the transverse-to-flow direction, whereas it is relatively insensitive to the longitudinal component of the local scale dispersion. By comparing predicted <λ_eq(x)> for a range of D_d values with <λ_eq(x)> obtained from locally measured BTCs, the transverse component of D_d, D_dT, was estimated. The estimated transverse local scale dispersivity, λ_dT = D_dT/U1 (U1 = mean advection velocity), is on the order of 10^1-10^2 mm, which is relatively large but realistic for the fluvial gravel sediments at Krauthausen.
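
    Equivalent convection-dispersion parameters of a locally measured BTC are commonly obtained from its temporal moments: equivalent velocity x/mu1 and equivalent dispersivity x*var/(2*mu1^2), where mu1 and var are the mean and variance of arrival time. The sketch below uses this standard moment method (an assumption; the paper's exact fitting procedure may differ) and checks it on a synthetic BTC generated from the one-dimensional CDE travel-time density.

```python
import numpy as np

def cde_equivalent_params(t, c, x):
    """Moment-based equivalent CDE parameters from a BTC c(t) at travel distance x.
    Assumes a uniformly spaced time grid."""
    dt = t[1] - t[0]
    m0 = np.sum(c) * dt                  # zeroth temporal moment
    mu1 = np.sum(t * c) * dt / m0        # mean arrival time
    var = np.sum((t - mu1) ** 2 * c) * dt / m0
    v_eq = x / mu1                       # equivalent velocity
    lam_eq = x * var / (2.0 * mu1 ** 2)  # equivalent dispersivity
    return v_eq, lam_eq

# Synthetic BTC from the 1-D CDE travel-time density with v = 0.5 m/d, lambda = 0.1 m
x, v, lam = 4.0, 0.5, 0.1
D = lam * v
t = np.linspace(0.01, 60.0, 6000)
c = x / np.sqrt(4.0 * np.pi * D * t**3) * np.exp(-(x - v * t) ** 2 / (4.0 * D * t))
v_est, lam_est = cde_equivalent_params(t, c, x)
```

    On this synthetic curve the moment estimates recover the input velocity and dispersivity, illustrating why the equivalent dispersivity grows with travel distance when real (heterogeneous-aquifer) BTCs are wider than the local-scale CDE predicts.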

  13. Spatio-temporal variability of snow water equivalent in the extra-tropical Andes Cordillera from distributed energy balance modeling and remotely sensed snow cover

    NASA Astrophysics Data System (ADS)

    Cornwell, E.; Molotch, N. P.; McPhee, J.

    2016-01-01

    Seasonal snow cover is the primary water source for human use and ecosystems along the extratropical Andes Cordillera. Despite its importance, relatively little research has been devoted to understanding the properties, distribution and variability of this natural resource. This research provides high-resolution (500 m), daily distributed estimates of end-of-winter and spring snow water equivalent over a 152 000 km² domain that includes the mountainous reaches of central Chile and Argentina. Remotely sensed fractional snow-covered area and other relevant forcings are combined with extrapolated data from meteorological stations and a simplified physically based energy balance model in order to obtain melt-season melt fluxes that are then aggregated to estimate the end-of-winter (or peak) snow water equivalent (SWE). Peak SWE estimates show an overall coefficient of determination R² of 0.68 and RMSE of 274 mm compared to observations at 12 automatic snow water equivalent sensors distributed across the model domain, with R² values between 0.32 and 0.88. Regional estimates of peak SWE accumulation show differential patterns strongly modulated by elevation, latitude and position relative to the continental divide. The spatial distribution of peak SWE shows that the 4000-5000 m a.s.l. elevation band is significant for snow accumulation, despite having a smaller surface area than the 3000-4000 m a.s.l. band. On average, maximum snow accumulation is observed in early September in the western Andes, and in early October on the eastern side of the continental divide. The results presented here have the potential of informing applications such as seasonal forecast model assessment and improvement, regional climate model validation, as well as evaluation of observational networks and water resource infrastructure development.
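
    The reconstruction idea, accumulating modeled melt over the period when remote sensing still shows snow in order to back-calculate the peak SWE, can be sketched as follows. This is a schematic of the general approach, not the authors' model; the array names and values are illustrative.

```python
import numpy as np

def reconstruct_peak_swe(melt_flux, fsca):
    """Peak (end-of-winter) SWE as the sum of daily melt weighted by snow cover.

    melt_flux: potential daily melt per (day, pixel) in mm
    fsca:      remotely sensed fractional snow-covered area in [0, 1]
    """
    return np.sum(melt_flux * fsca, axis=0)

# Toy example: a 5-day melt season at a single pixel
melt = np.array([[0.0], [10.0], [20.0], [15.0], [5.0]])
fsca = np.array([[1.0], [1.0], [0.8], [0.5], [0.0]])
swe_peak = reconstruct_peak_swe(melt, fsca)  # 10 + 16 + 7.5 = 33.5 mm
```

    Once the pixel is snow-free (fsca = 0), further modeled melt no longer contributes, so the accumulated total equals the water that must have been present at peak accumulation.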

  14. Dispersive wave propagation in two-dimensional rigid periodic blocky materials with elastic interfaces

    NASA Astrophysics Data System (ADS)

    Bacigalupo, Andrea; Gambarotta, Luigi

    2017-05-01

    Dispersive waves in two-dimensional blocky materials with periodic microstructure, made up of equal rigid units having polygonal centro-symmetric shape with mass and gyroscopic inertia and connected with each other through homogeneous linear interfaces, have been analyzed. The acoustic behavior of the resulting discrete Lagrangian model has been obtained through a Floquet-Bloch approach. From the resulting eigenproblem, derived via the Euler-Lagrange equations for harmonic wave propagation, two acoustic branches and an optical branch are obtained in the frequency spectrum. A micropolar continuum model approximating the Lagrangian model has been derived based on a second-order Taylor expansion of the generalized macro-displacement field. The constitutive equations of the equivalent micropolar continuum have been obtained, with the peculiarity that the positive definiteness of the second-order symmetric tensor associated to the curvature vector is not guaranteed and depends both on the ratio between the local tangent and normal stiffness and on the block shape. The same results have been obtained through an extended Hamiltonian derivation of the equations of motion for the equivalent continuum, which is related to the Hill-Mandel macro-homogeneity condition. Moreover, it is shown that the Hermitian matrix governing the eigenproblem of harmonic wave propagation in the micropolar model is exact up to the second order in the norm of the wave vector with respect to the same matrix from the discrete model. To appreciate the acoustic behavior of some relevant blocky materials and to understand the reliability and validity limits of the micropolar continuum model, some blocky patterns have been analyzed: rhombic and hexagonal assemblages and running bond masonry. From the results obtained in the examples, the micropolar model turns out to be particularly accurate in describing the dispersion functions for wavelengths greater than 3-4 times the characteristic dimension of the block. Finally, given that the positive definiteness of the second-order elastic tensor of the micropolar model is not guaranteed, the hyperbolicity of the equations of motion has been investigated by considering the Legendre-Hadamard ellipticity conditions, which require real values for the wave velocity.

  15. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

  16. Geometry of Conservation Laws for a Class of Parabolic Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Clelland, Jeanne Nielsen

    1996-08-01

    I consider the problem of computing the space of conservation laws for a second-order, parabolic partial differential equation for one function of three independent variables. The PDE is formulated as an exterior differential system ℐ on a 12-manifold M, and its conservation laws are identified with the vector space of closed 3-forms in the infinite prolongation of ℐ, modulo the so-called "trivial" conservation laws. I use the tools of exterior differential systems and Cartan's method of equivalence to study the structure of the space of conservation laws. My main result is: Theorem. Any conservation law for a second-order, parabolic PDE for one function of three independent variables can be represented by a closed 3-form in the differential ideal ℐ on the original 12-manifold M. I show that if a nontrivial conservation law exists, then ℐ has a deprolongation to an equivalent system 𝒥 on a 7-manifold N, and any conservation law for ℐ can be expressed as a closed 3-form on N which lies in 𝒥. Furthermore, any such system in the real analytic category is locally equivalent to a system generated by a (parabolic) equation of the form A(u_xx u_yy - u_xy^2) + B_1 u_xx + 2 B_2 u_xy + B_3 u_yy + C = 0, where A, B_i, C are functions of x, y, t, u, u_x, u_y, u_t. I compute the space of conservation laws for several examples, and I begin the process of analyzing the general case using Cartan's method of equivalence. I show that the non-linearizable equation u_t = (1/2) e^{-u} (u_xx + u_yy) has an infinite-dimensional space of conservation laws. This stands in contrast to the two-variable case, for which Bryant and Griffiths showed that any equation whose space of conservation laws has dimension 4 or more is locally equivalent to a linear equation, i.e., is linearizable.

  17. Sachs' free data in real connection variables

    NASA Astrophysics Data System (ADS)

    De Paoli, Elena; Speziale, Simone

    2017-11-01

    We discuss the Hamiltonian dynamics of general relativity with real connection variables on a null foliation, and use the Newman-Penrose formalism to shed light on the geometric meaning of the various constraints. We identify the equivalent of Sachs' constraint-free initial data as projections of connection components related to null rotations, i.e. the translational part of the ISO(2) group stabilising the internal null direction soldered to the hypersurface. A pair of second-class constraints reduces these connection components to the shear of a null geodesic congruence, thus establishing equivalence with the second-order formalism, which we show in detail at the level of symplectic potentials. A special feature of the first-order formulation is that Sachs' propagating equations for the shear, away from the initial hypersurface, are turned into tertiary constraints; their role is to preserve the relation between connection and shear under retarded time evolution. The conversion of wave-like propagating equations into constraints is possible thanks to an algebraic Bianchi identity; the same one that allows one to describe the radiative data at future null infinity in terms of a shear of a (non-geodesic) asymptotic null vector field in the physical spacetime. Finally, we compute the modification to the spin coefficients and the null congruence in the presence of torsion.

  18. Consanguineous marriages in Afghanistan.

    PubMed

    Saify, Khyber; Saadat, Mostafa

    2012-01-01

    The present cross-sectional study was done in order to illustrate the prevalence and types of consanguineous marriages in Afghanistan populations. Data on types of marriages were collected using a simple questionnaire. The total number of couples in the study was 7140, from the following provinces: Badakhshan, Baghlan, Balkh, Bamyan, Kabul, Kunduz, Samangan and Takhar. Consanguineous marriages were classified by the degree of relationship between couples: double first cousins, first cousins, first cousins once removed, second cousins and beyond second cousins. The coefficient of inbreeding (F) was calculated for each couple and the mean coefficient of inbreeding (α) estimated for each population. The proportion of consanguineous marriages in the country was 46.2%, ranging from 38.2% in Kabul province to 51.2% in Bamyan province. The equivalent mean inbreeding coefficient (α) was 0.0277, and ranged from 0.0221 to 0.0293 in these two regions. There were significant differences between provinces in the frequencies of different types of marriages (p < 0.001). First cousin marriages (27.8%) were the most common type of consanguineous marriage, followed by double first cousin (6.9%), second cousin (5.8%), beyond second cousin (3.9%) and first cousin once removed (1.8%). There were significant differences between ethnic groups in the types of marriages (χ² = 177.6, df = 25, p < 0.001). Tajiks (Sunni) and Turkmens (also Pashtuns) showed the lowest (α = 0.0250) and highest (α = 0.0297) mean inbreeding coefficients, respectively, among the ethnic groups in Afghanistan. The study shows that Afghanistan's populations, like other Islamic populations, have a high level of consanguinity.
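
    The mean inbreeding coefficient can be recovered from the reported marriage-type proportions as a weighted sum of the standard autosomal F values; assigning F = 1/128 to "beyond second cousin" unions is an assumption made here for the sketch, and the weighted sum closely reproduces the reported country-wide α.

```python
# Standard autosomal coefficients of inbreeding per marriage type;
# the "beyond second cousin" value of 1/128 is an assumption for this sketch.
F = {"double first cousin": 1 / 8, "first cousin": 1 / 16,
     "first cousin once removed": 1 / 32, "second cousin": 1 / 64,
     "beyond second cousin": 1 / 128}

# Proportions of all marriages reported in the abstract
p = {"double first cousin": 0.069, "first cousin": 0.278,
     "first cousin once removed": 0.018, "second cousin": 0.058,
     "beyond second cousin": 0.039}

# Population mean inbreeding coefficient: alpha = sum_i p_i * F_i
alpha = sum(p[k] * F[k] for k in F)  # ~0.0278, close to the reported 0.0277
```
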

  19. Second-Order Two-Sided Estimates in Nonlinear Elliptic Problems

    NASA Astrophysics Data System (ADS)

    Cianchi, Andrea; Maz'ya, Vladimir G.

    2018-05-01

    Best possible second-order regularity is established for solutions to p-Laplacian type equations with p ∈ (1, ∞) and a square-integrable right-hand side. Our results provide a nonlinear counterpart of the classical L²-coercivity theory for linear problems, which is missing in the existing literature. Both local and global estimates are obtained. The latter apply to solutions to either Dirichlet or Neumann boundary value problems. Minimal regularity on the boundary of the domain is required, although our conclusions are new even for smooth domains. If the domain is convex, no regularity of its boundary is needed at all.
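
    Schematically, the setting is the p-Laplacian problem with square-integrable datum, and a second-order two-sided estimate of this kind bounds a nonlinear "stress field" built from the gradient. The notation below is generic, not copied from the paper:

```latex
% p-Laplacian Dirichlet problem with L^2 right-hand side
-\operatorname{div}\!\left(|\nabla u|^{p-2}\nabla u\right) = f
  \quad \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,
% two-sided second-order estimate: the field |\nabla u|^{p-2}\nabla u
% gains one order of (square-integrable) differentiability
c\,\|f\|_{L^{2}(\Omega)}
  \;\le\; \big\| |\nabla u|^{p-2}\nabla u \big\|_{W^{1,2}(\Omega)}
  \;\le\; C\,\|f\|_{L^{2}(\Omega)}.
```

    For p = 2 this reduces to the classical statement that Δu ∈ L² forces u to have two square-integrable derivatives, which is the linear coercivity theory the abstract refers to.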

  20. Membrane voltage changes in passive dendritic trees: a tapering equivalent cylinder model.

    PubMed

    Poznański, R R

    1988-01-01

    An exponentially tapering equivalent cylinder model is employed in order to approximate the loss of the dendritic trunk parameter observed from anatomical data on apical and basilar dendrites of CA1 and CA3 hippocampal pyramidal neurons. This model allows dendritic trees with a relative paucity of branching to be treated. In particular, terminal branches are not required to end at the same electrotonic distance. The Laplace transform method is used to obtain analytic expressions for the Green's function corresponding to an instantaneous pulse of current injected at a single point along a tapering equivalent cylinder with sealed ends. The time course of the voltage in response to an arbitrary input is computed using the Green's function in a convolution integral. Examples of current input considered are (1) an infinitesimally brief (Dirac delta function) pulse and (2) a step pulse. It is demonstrated that inputs located on a tapering equivalent cylinder are more effective at the soma than identically placed inputs on a nontapering equivalent cylinder. Asymptotic solutions are derived to enable the voltage response behaviour over both relatively short and long time periods to be analysed. Semilogarithmic plots of these solutions provide a basis for estimating the membrane time constant τm from experimental transients. Transient voltage decrement from a clamped soma reveals that tapering tends to reduce the error associated with inadequate voltage clamping of the dendritic membrane. A formula is derived which shows that tapering tends to increase the estimate of the electrotonic length parameter L.
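
    The convolution step, computing the voltage as V(t) = (G * I)(t) for an arbitrary input current, can be sketched numerically. The tapering-cylinder Green's function is an infinite series; the single-exponential stand-in below (time constant tau_m) is an assumption that keeps the sketch minimal.

```python
import numpy as np

def voltage_response(G, I, dt):
    """V(t) = (G * I)(t): discrete convolution of the Green's function
    with the input current, truncated to the original time grid."""
    return np.convolve(G, I)[: len(G)] * dt

# Stand-in Green's function: a single exponential with time constant tau_m
# (an assumption; the tapering-cylinder G is a series of such modes).
dt, tau_m = 0.01, 1.0
t = np.arange(0.0, 10.0, dt)
G = np.exp(-t / tau_m)
I_step = np.ones_like(t)             # step current switched on at t = 0
V = voltage_response(G, I_step, dt)  # charging curve tau_m*(1 - exp(-t/tau_m))
```

    A delta-pulse input would return the Green's function itself, which is the case (1) mentioned in the abstract; the step response here is case (2).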

  1. Reduced complexity structural modeling for automated airframe synthesis

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1987-01-01

    A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional cantilever and joined-wing configurations.

  2. Estimating Equivalency of Explosives Through A Thermochemical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maienschein, J L

    2002-07-08

    The Cheetah thermochemical computer code provides an accurate method for estimating the TNT equivalency of any explosive, evaluated either with respect to peak pressure or the quasi-static pressure at long time in a confined volume. Cheetah calculates the detonation energy and heat of combustion for virtually any explosive (pure or formulation). Comparing the detonation energy for an explosive with that of TNT allows estimation of the TNT equivalency with respect to peak pressure, while comparison of the heat of combustion allows estimation of TNT equivalency with respect to quasi-static pressure. We discuss the methodology, present results for many explosives, and show comparisons with equivalency data from other sources.
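
    The equivalency calculation itself is a simple ratio of energies; the sketch below illustrates it with made-up energy densities (the real inputs would come from Cheetah, and both numbers here are assumptions, not Cheetah output).

```python
def tnt_equivalency(e_explosive, e_tnt):
    """TNT equivalency as a ratio of energies: use detonation energies for the
    peak-pressure equivalency, heats of combustion for the quasi-static one."""
    return e_explosive / e_tnt

# Illustrative (assumed) detonation energy densities in kJ/cm^3
E_TNT = 4.3
E_OTHER = 5.0
eq_peak = tnt_equivalency(E_OTHER, E_TNT)  # ~1.16 times TNT on peak pressure
```
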

  3. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing errors show that fourth-order polynomial interpolation provides the best fit to option prices, yielding the lowest error.
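
    The LOOCV selection criterion described above can be sketched directly: refit the interpolant with each observation held out, predict the held-out point, and average the squared pricing errors. The synthetic smile and polynomial degrees below are illustrative assumptions, not the study's option chains.

```python
import numpy as np

def loocv_error(strikes, values, degree):
    """Mean squared leave-one-out error of a polynomial fit of given degree."""
    errs = []
    for i in range(len(strikes)):
        mask = np.arange(len(strikes)) != i         # hold out observation i
        coef = np.polyfit(strikes[mask], values[mask], degree)
        pred = np.polyval(coef, strikes[i])
        errs.append((pred - values[i]) ** 2)
    return float(np.mean(errs))

# Synthetic volatility smile: quadratic in strike plus small noise
rng = np.random.default_rng(1)
K = np.linspace(80.0, 120.0, 15)
iv = 0.2 + 0.0005 * (K - 100.0) ** 2 + 0.001 * rng.normal(size=K.size)
e2 = loocv_error(K, iv, 2)  # second-order polynomial
e4 = loocv_error(K, iv, 4)  # fourth-order polynomial
```

    Whichever degree (or spline) yields the smallest LOOCV error is selected; the criterion penalizes overfitting automatically because each point is predicted from a fit that never saw it.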

  4. Modeling the global positioning system signal propagation through the ionosphere

    NASA Technical Reports Server (NTRS)

    Bassiri, S.; Hajj, G. A.

    1992-01-01

    Based on realistic modeling of the electron density of the ionosphere and using a dipole moment approximation for the Earth's magnetic field, one is able to estimate the effect of the ionosphere on the Global Positioning System (GPS) signal for a ground user. The lowest order effect, which is on the order of 0.1-100 m of group delay, is subtracted out by forming a linear combination of the dual frequencies of the GPS signal. One is left with second- and third-order effects that are estimated typically to be approximately 0-2 cm and approximately 0-2 mm at zenith, respectively, depending on the geographical location, the time of day, the time of year, the solar cycle, and the relative geometry of the magnetic field and the line of sight. Given the total electron content along a line of sight, the authors derive an approximation to the second-order term which is accurate to approximately 90 percent within the magnetic dipole moment model; this approximation can be used to reduce the second-order term to the millimeter level, thus potentially improving precise positioning in space and on the ground. The induced group delay, or phase advance, due to second- and third-order effects is examined for two ground receivers located at equatorial and mid-latitude regions tracking several GPS satellites.
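The removal of the lowest-order term by the dual-frequency linear combination can be illustrated numerically. The first-order group delay model 40.3·TEC/f² is standard; the TEC value and geometric range below are illustrative assumptions:

```python
# First-order ionospheric group delay (metres): d = 40.3 * TEC / f**2,
# with TEC in electrons/m^2 and f in Hz.
F1 = 1575.42e6   # GPS L1 frequency (Hz)
F2 = 1227.60e6   # GPS L2 frequency (Hz)

def iono_delay(tec, f):
    return 40.3 * tec / f**2

def ionosphere_free(p1, p2):
    """Dual-frequency linear combination that cancels the 1/f^2 term exactly."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

true_range = 2.0e7                       # geometric range (m), illustrative
tec = 5.0e17                             # slant TEC (electrons/m^2), illustrative

p1 = true_range + iono_delay(tec, F1)    # ≈8 m of first-order delay on L1
p2 = true_range + iono_delay(tec, F2)    # ≈13 m of first-order delay on L2
p_if = ionosphere_free(p1, p2)           # first-order term removed

print(p1 - true_range, p2 - true_range, p_if - true_range)
```

The second- and third-order terms discussed in the abstract scale as 1/f³ and 1/f⁴ and therefore survive this combination, which is why they must be modeled separately.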

  5. Estimation of the REV Size and Equivalent Permeability Coefficient of Fractured Rock Masses with an Emphasis on Comparing the Radial and Unidirectional Flow Configurations

    NASA Astrophysics Data System (ADS)

    Wang, Zhechao; Li, Wei; Bi, Liping; Qiao, Liping; Liu, Richeng; Liu, Jie

    2018-05-01

A method to estimate the representative elementary volume (REV) size for the permeability and equivalent permeability coefficient of rock mass with a radial flow configuration was developed. The estimations of the REV size and equivalent permeability for the rock mass around an underground oil storage facility using a radial flow configuration were compared with those using a unidirectional flow configuration. The REV sizes estimated using the unidirectional flow configuration are much higher than those estimated using the radial flow configuration. The equivalent permeability coefficient estimated using the radial flow configuration is unique, while those estimated using the unidirectional flow configuration depend on the boundary conditions and flow directions. The influences of the fracture trace length, spacing and gap on the REV size and equivalent permeability coefficient were investigated. The REV size for the permeability of fractured rock mass increases with increasing mean trace length and fracture spacing. The influence of the fracture gap length on the REV size is insignificant. The equivalent permeability coefficient decreases with the fracture spacing, while the influences of the fracture trace length and gap length do not show a definite trend. The applicability of the proposed method to the prediction of groundwater inflow into rock caverns was verified using the measured groundwater inflow into the facility. The permeability coefficient estimated using the radial flow configuration is more similar to the representative equivalent permeability coefficient than those estimated with different boundary conditions using the unidirectional flow configuration.

  6. Carbon footprint of a music festival

    NASA Astrophysics Data System (ADS)

    Schafer, K. V.

    2009-12-01

In an effort to curb CO2 and, by extension, greenhouse gas emissions, various initiatives have been taken statewide, nationally, and internationally. However, benchmarks and metrics are not clearly defined for CO2 and CO2-equivalent accounting. The objective of this study is to estimate the carbon footprint of the Lincoln Park Music Festival, which occurs annually in Newark, NJ. This festival runs for three days each summer and consists of music, food vendors, merchandise, and a green marketplace. In order to determine the carbon footprint generated by transportation, surveys of participants were analyzed. Of the approximately 40,000 participants in 2009, 3.3% were surveyed. About 30% of respondents commuted to the festival by car, with an average traveling distance of 10 miles. Transportation accounted for an estimated 188 metric tons of CO2 emissions for all three days combined. Trash at the music festival was weighed, its components estimated, and the potential CO2 emission calculated if incinerated. 63% of the trash was found to be carbon-based, equivalent to three metric tons of CO2 if incinerated. The majority of the trash (>60%) could have been recycled, thus significantly reducing the carbon footprint. To limit the carbon footprint of this festival, alternative transport options would be advisable, as transport accounted for the largest proportion of the carbon footprint.
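The transport accounting above reduces to simple arithmetic. A rough sketch follows; the per-vehicle-mile emission factor and single occupancy are assumptions for illustration, not the study's survey-derived inputs (the study's own estimate was 188 t):

```python
def transport_emissions_t(participants, car_share, avg_one_way_miles,
                          kg_co2_per_vehicle_mile=0.4, occupancy=1.0):
    """Rough transport CO2 estimate in metric tons (illustrative factors)."""
    vehicles = participants * car_share / occupancy
    miles = vehicles * 2 * avg_one_way_miles        # round trip
    return miles * kg_co2_per_vehicle_mile / 1000.0 # kg -> metric tons

est = transport_emissions_t(40_000, 0.30, 10.0)
print(est)
```

With these assumed factors the sketch yields a figure of the same order of magnitude as the study's estimate; real accountings weight by surveyed vehicle types and occupancies.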

  7. Using open robust design models to estimate temporary emigration from capture-recapture data.

    PubMed

    Kendall, W L; Bjorkland, R

    2001-12-01

    Capture-recapture studies are crucial in many circumstances for estimating demographic parameters for wildlife and fish populations. Pollock's robust design, involving multiple sampling occasions per period of interest, provides several advantages over classical approaches. This includes the ability to estimate the probability of being present and available for detection, which in some situations is equivalent to breeding probability. We present a model for estimating availability for detection that relaxes two assumptions required in previous approaches. The first is that the sampled population is closed to additions and deletions across samples within a period of interest. The second is that each member of the population has the same probability of being available for detection in a given period. We apply our model to estimate survival and breeding probability in a study of hawksbill sea turtles (Eretmochelys imbricata), where previous approaches are not appropriate.

  8. Using open robust design models to estimate temporary emigration from capture-recapture data

    USGS Publications Warehouse

    Kendall, W.L.; Bjorkland, R.

    2001-01-01

    Capture-recapture studies are crucial in many circumstances for estimating demographic parameters for wildlife and fish populations. Pollock's robust design, involving multiple sampling occasions per period of interest, provides several advantages over classical approaches. This includes the ability to estimate the probability of being present and available for detection, which in some situations is equivalent to breeding probability. We present a model for estimating availability for detection that relaxes two assumptions required in previous approaches. The first is that the sampled population is closed to additions and deletions across samples within a period of interest. The second is that each member of the population has the same probability of being available for detection in a given period. We apply our model to estimate survival and breeding probability in a study of hawksbill sea turtles (Eretmochelys imbricata), where previous approaches are not appropriate.

  9. A direct method for synthesizing low-order optimal feedback control laws with application to flutter suppression

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Newsom, J. R.; Abel, I.

    1980-01-01

    A direct method of synthesizing a low-order optimal feedback control law for a high order system is presented. A nonlinear programming algorithm is employed to search for the control law design variables that minimize a performance index defined by a weighted sum of mean square steady state responses and control inputs. The controller is shown to be equivalent to a partial state estimator. The method is applied to the problem of active flutter suppression. Numerical results are presented for a 20th order system representing an aeroelastic wind-tunnel wing model. Low-order controllers (fourth and sixth order) are compared with a full order (20th order) optimal controller and found to provide near optimal performance with adequate stability margins.

  10. Development of a nonlinear vortex method

    NASA Technical Reports Server (NTRS)

    Kandil, O. A.

    1982-01-01

A steady and unsteady Nonlinear Hybrid Vortex (NHV) method for low-aspect-ratio wings at large angles of attack is developed. The method uses vortex panels with a first-order vorticity distribution (equivalent to a second-order doublet distribution) to calculate the induced velocity in the near field using closed-form expressions. In the far field, the distributed vorticity is reduced to concentrated vortex lines and the simpler Biot-Savart law is employed. The method is applied to rectangular wings in steady and unsteady flows without any restriction on the order of magnitude of the disturbances in the flow field. The numerical results show that the method accurately predicts the distributed aerodynamic loads and that it is of acceptable computational efficiency.

  11. Design and validation of a high-order weighted-frequency fourier linear combiner-based Kalman filter for parkinsonian tremor estimation.

    PubMed

    Zhou, Y; Jenkins, M E; Naish, M D; Trejos, A L

    2016-08-01

    The design of a tremor estimator is an important part of designing mechanical tremor suppression orthoses. A number of tremor estimators have been developed and applied with the assumption that tremor is a mono-frequency signal. However, recent experimental studies have shown that Parkinsonian tremor consists of multiple frequencies, and that the second and third harmonics make a large contribution to the tremor. Thus, the current estimators may have limited performance on estimation of the tremor harmonics. In this paper, a high-order tremor estimation algorithm is proposed and compared with its lower-order counterpart and a widely used estimator, the Weighted-frequency Fourier Linear Combiner (WFLC), using 18 Parkinsonian tremor data sets. The results show that the proposed estimator has better performance than its lower-order counterpart and the WFLC. The percentage estimation accuracy of the proposed estimator is 85±2.9%, an average improvement of 13% over the lower-order counterpart. The proposed algorithm holds promise for use in wearable tremor suppression devices.
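The abstract's central point, that a mono-frequency model misses the second and third harmonics, can be illustrated with a plain least-squares harmonic fit. This is not the paper's WFLC/Kalman estimator; the 5 Hz tremor signal and its harmonic amplitudes are synthetic:

```python
import numpy as np

def harmonic_fit(signal, t, f0, n_harmonics):
    """Least-squares amplitude/phase fit at f0 and its first n harmonics."""
    cols = []
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * f0 * t))
        cols.append(np.cos(2 * np.pi * k * f0 * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return A @ coef                                   # reconstructed tremor

fs = 200.0
t = np.arange(0, 2.0, 1 / fs)
# Synthetic "tremor": 5 Hz fundamental plus 2nd and 3rd harmonics
tremor = (1.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
          + 0.3 * np.sin(2 * np.pi * 15 * t))

est1 = harmonic_fit(tremor, t, 5.0, 1)   # mono-frequency model
est3 = harmonic_fit(tremor, t, 5.0, 3)   # model including harmonics

err1 = np.sqrt(np.mean((tremor - est1) ** 2))
err3 = np.sqrt(np.mean((tremor - est3) ** 2))
print(err1, err3)
```

The mono-frequency model leaves the entire harmonic content as residual error, which is exactly the limitation the proposed high-order estimator addresses.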

  12. Full-order optimal compensators for flow control: the multiple inputs case

    NASA Astrophysics Data System (ADS)

    Semeraro, Onofrio; Pralits, Jan O.

    2018-03-01

    Flow control has been the subject of numerous experimental and theoretical works. We analyze full-order, optimal controllers for large dynamical systems in the presence of multiple actuators and sensors. The full-order controllers do not require any preliminary model reduction or low-order approximation: this feature allows us to assess the optimal performance of an actuated flow without relying on any estimation process or further hypothesis on the disturbances. We start from the original technique proposed by Bewley et al. (Meccanica 51(12):2997-3014, 2016. https://doi.org/10.1007/s11012-016-0547-3), the adjoint of the direct-adjoint (ADA) algorithm. The algorithm is iterative and allows bypassing the solution of the algebraic Riccati equation associated with the optimal control problem, typically infeasible for large systems. In this numerical work, we extend the ADA iteration into a more general framework that includes the design of controllers with multiple, coupled inputs and robust controllers (H_{∞} methods). First, we demonstrate our results by showing the analytical equivalence between the full Riccati solutions and the ADA approximations in the multiple inputs case. In the second part of the article, we analyze the performance of the algorithm in terms of convergence of the solution, by comparing it with analogous techniques. We find an excellent scalability with the number of inputs (actuators), making the method a viable way for full-order control design in complex settings. Finally, the applicability of the algorithm to fluid mechanics problems is shown using the linearized Kuramoto-Sivashinsky equation and the Kármán vortex street past a two-dimensional cylinder.

  13. A comparison of two methods of in vivo dosimetry for a high energy neutron beam.

    PubMed

    Blake, S W; Bonnett, D E; Finch, J

    1990-06-01

    Two methods of in vivo dosimetry have been compared in a high energy neutron beam. These were activation dosimetry and thermoluminescence dosimetry (TLD). Their suitability was determined by comparison with estimates of total dose, obtained using a tissue equivalent ionization chamber. Measurements were made on the central axis and a profile of a 10 x 10 cm square field and also behind a shielding block in order to simulate conditions of clinical use. The TLD system was found to provide the best estimate of total dose.

  14. Estimation of U.S. Timber Harvest Using Roundwood Equivalents

    Treesearch

    James Howard

    2006-01-01

    This report details the procedure used to estimate the roundwood products portion of U.S. annual timber harvest levels by using roundwood equivalents. National-level U.S. forest products data published by trade associations and State and Federal Government organizations were used to estimate the roundwood equivalent of national roundwood products production. The...

  15. A Kramers-Moyal approach to the analysis of third-order noise with applications in option valuation.

    PubMed

    Popescu, Dan M; Lipan, Ovidiu

    2015-01-01

    We propose the use of the Kramers-Moyal expansion in the analysis of third-order noise. In particular, we show how the approach can be applied in the theoretical study of option valuation. Despite Pawula's theorem, which states that a truncated model may exhibit poor statistical properties, we show that for a third-order Kramers-Moyal truncation model of an option's and its underlier's price, important properties emerge: (i) the option price can be written in a closed analytical form that involves the Airy function, (ii) the price is a positive function for positive skewness in the distribution, (iii) for negative skewness, the price becomes negative only for price values that are close to zero. Moreover, using third-order noise in option valuation reveals additional properties: (iv) the inconsistencies between two popular option pricing approaches (using a "delta-hedged" portfolio and using an option replicating portfolio) that are otherwise equivalent up to the second moment, (v) the ability to develop a measure R of how accurately an option can be replicated by a mixture of the underlying stocks and cash, (vi) further limitations of second-order models revealed by introducing third-order noise.

  16. A Kramers-Moyal Approach to the Analysis of Third-Order Noise with Applications in Option Valuation

    PubMed Central

    Popescu, Dan M.; Lipan, Ovidiu

    2015-01-01

    We propose the use of the Kramers-Moyal expansion in the analysis of third-order noise. In particular, we show how the approach can be applied in the theoretical study of option valuation. Despite Pawula’s theorem, which states that a truncated model may exhibit poor statistical properties, we show that for a third-order Kramers-Moyal truncation model of an option’s and its underlier’s price, important properties emerge: (i) the option price can be written in a closed analytical form that involves the Airy function, (ii) the price is a positive function for positive skewness in the distribution, (iii) for negative skewness, the price becomes negative only for price values that are close to zero. Moreover, using third-order noise in option valuation reveals additional properties: (iv) the inconsistencies between two popular option pricing approaches (using a “delta-hedged” portfolio and using an option replicating portfolio) that are otherwise equivalent up to the second moment, (v) the ability to develop a measure R of how accurately an option can be replicated by a mixture of the underlying stocks and cash, (vi) further limitations of second-order models revealed by introducing third-order noise. PMID:25625856

  17. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods, such as composite likelihood and Palm likelihood, usually suffer from a loss of information because correlation among pairs is ignored. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data.
Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. However, the original second-order quasi-likelihood is barely feasible due to the intense computation and high memory required to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only performs better in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we study the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to more general setups than the original quasi-likelihood method.

  18. Oracle estimation of parametric models under boundary constraints.

    PubMed

    Wong, Kin Yau; Goldberg, Yair; Fine, Jason P

    2016-12-01

In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if they are in the interior, the inference is equivalent to that obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.
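The oracle behavior described above can be caricatured in a toy one-parameter sketch (not the authors' estimator): an adaptive penalty weight 1/|θ̂|^γ shrinks near-boundary estimates onto the boundary at 0 while leaving estimates far from the boundary nearly untouched:

```python
def adaptive_shrink(theta_hat, lam, gamma=1.0):
    """Toy adaptive-lasso shrinkage of a nonnegative parameter estimate.

    The penalty weight 1/|theta_hat|**gamma grows as the estimate
    approaches the boundary, so near-boundary estimates are snapped
    to 0 while interior estimates are barely moved.
    """
    if theta_hat == 0.0:
        return 0.0
    weight = 1.0 / abs(theta_hat) ** gamma
    return max(theta_hat - lam * weight, 0.0)

print(adaptive_shrink(0.05, 0.01))   # near the boundary: shrunk to 0
print(adaptive_shrink(2.00, 0.01))   # interior: essentially unchanged
```

This mirrors the dichotomy in the abstract: estimates shrunk to 0 get boundary-type inference, the rest get ordinary interior inference.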

  19. Aging Theories for Establishing Safe Life Spans of Airborne Critical Structural Components

    NASA Technical Reports Server (NTRS)

    Ko, William L.

    2003-01-01

    New aging theories have been developed to establish the safe life span of airborne critical structural components such as B-52B aircraft pylon hooks for carrying air-launch drop-test vehicles. The new aging theories use the equivalent-constant-amplitude loading spectrum to represent the actual random loading spectrum with the same damaging effect. The crack growth due to random loading cycling of the first flight is calculated using the half-cycle theory, and then extrapolated to all the crack growths of the subsequent flights. The predictions of the new aging theories (finite difference aging theory and closed-form aging theory) are compared with the classical flight-test life theory and the previously developed Ko first- and Ko second-order aging theories. The new aging theories predict the number of safe flights as considerably lower than that predicted by the classical aging theory, and slightly lower than those predicted by the Ko first- and Ko second-order aging theories due to the inclusion of all the higher order terms.

  20. Simulation of scattered fields: Some guidelines for the equivalent source method

    NASA Astrophysics Data System (ADS)

    Gounot, Yves J. R.; Musafir, Ricardo E.

    2011-07-01

    Three different approaches of the equivalent source method for simulating scattered fields are compared: two of them deal with monopole sets, the other with multipole expansions. In the first monopole approach, the sources have fixed positions given by specific rules, while in the second one (ESGA), the optimal positions are determined via a genetic algorithm. The 'pros and cons' of each of these approaches are discussed with the aim of providing practical guidelines for the user. It is shown that while both monopole techniques furnish quite good pressure field reconstructions with simple source arrangements, ESGA requires a number of monopoles significantly smaller and, with equal number of sources, yields a better precision. As for the multipole technique, the main advantage is that in principle any precision can be reached, provided the source order is sufficiently high. On the other hand, the results point out that the lack of rules for determining the proper multipole order necessary for a desired precision may constitute a handicap for the user.

  1. A mechanical model of metatarsal stress fracture during distance running.

    PubMed

    Gross, T S; Bunch, R P

    1989-01-01

A model of metatarsal mechanics has been proposed as a link between the high incidence of second and third metatarsal stress fractures and the large stresses measured beneath the second and third metatarsal heads during distance running. Eight discrete piezoelectric vertical stress transducers were used to record the forefoot stresses of 21 male distance runners. Plantar forces were estimated from load-bearing area estimates derived from footprints. The highest forces were estimated beneath the second and first metatarsal heads (341.1 N and 279.1 N, respectively). Treating the toe as a hinged cantilever and the metatarsal as a proximally attached rigid cantilever allowed estimation of metatarsal midshaft bending strain, shear, and axial forces. Bending strain was estimated to be greatest in the second metatarsal (6662 με), a value 6.9 times greater than the estimated first metatarsal strain. Predicted third, fourth, and fifth metatarsal strains ranged between 4832 and 5241 με. Shear force estimates were also greatest in the second metatarsal (203.0 N). Axial forces were highest in the first metatarsal (593.2 N) due to large hallux forces relative to the remaining toes. Although a first-order model, these data highlight the structural demands placed upon the second metatarsal, a location of high metatarsal stress fracture incidence during distance running.
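The rigid-cantilever strain estimate follows ε = M·c/(E·I) with midshaft bending moment M = F·L. A minimal sketch; the bone length, section properties, and elastic modulus below are hypothetical placeholders, not the study's values:

```python
def bending_microstrain(force_n, length_m, c_m, e_pa, i_m4):
    """Midshaft bending strain of a rigid-cantilever bone model,
    in microstrain: eps = M*c/(E*I), with moment M = F*L."""
    moment = force_n * length_m          # N*m
    return moment * c_m / (e_pa * i_m4) * 1e6

# Hypothetical second-metatarsal properties (illustrative only):
# 203 N shear load, 35 mm lever arm, 4 mm outer-fiber distance,
# 17 GPa cortical bone modulus, 2.5e-10 m^4 second moment of area.
eps = bending_microstrain(force_n=203.0, length_m=0.035,
                          c_m=0.004, e_pa=17e9, i_m4=2.5e-10)
print(round(eps))
```

Even with placeholder geometry, the result lands in the thousands of microstrain, the same order as the strains reported in the abstract.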

  2. Estimation of absolute solvent and solvation shell entropies via permutation reduction

    NASA Astrophysics Data System (ADS)

    Reinhard, Friedemann; Grubmüller, Helmut

    2007-01-01

Despite its prominent contribution to the free energy of solvated macromolecules such as proteins or DNA, and although principally contained within molecular dynamics simulations, the entropy of the solvation shell is inaccessible to straightforward application of established entropy estimation methods. The complication is twofold. First, the configurational space density of such systems is too complex for a sufficiently accurate fit. Second, and in contrast to the internal macromolecular dynamics, the configurational space volume explored by the diffusive motion of the solvent molecules is too large to be exhaustively sampled by current simulation techniques. Here, we develop a method to overcome the second problem and to significantly alleviate the first one. We propose to exploit the permutation symmetry of the solvent by transforming the trajectory in a way that renders established estimation methods applicable, such as the quasiharmonic approximation or principal component analysis. Our permutation-reduced approach involves a combinatorial problem, which is solved through its equivalence with the linear assignment problem, for which O(N³) methods exist. From test simulations of dense Lennard-Jones gases, enhanced convergence and improved entropy estimates are obtained. Moreover, our approach renders diffusive systems accessible to improved fit functions.
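The relabeling step, matching each identical solvent molecule in a trajectory frame to a reference configuration, is exactly the linear assignment problem. A minimal sketch, assuming SciPy's O(N³) assignment solver is available:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def permute_to_reference(ref, frame):
    """Relabel identical particles in `frame` to minimize the total
    squared displacement from `ref` (the linear assignment problem)."""
    # cost[i, j] = squared distance between ref particle i and frame particle j
    diff = ref[:, None, :] - frame[None, :, :]
    cost = np.sum(diff ** 2, axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return frame[cols]                       # frame rows reordered to match ref

rng = np.random.default_rng(0)
ref = rng.random((5, 3))                     # reference positions, 5 particles in 3D
perm = rng.permutation(5)
frame = ref[perm] + 0.01 * rng.standard_normal((5, 3))   # shuffled + thermal jitter

relabeled = permute_to_reference(ref, frame)
print(np.sum((relabeled - ref) ** 2), np.sum((frame - ref) ** 2))
```

After relabeling, each particle stays close to one reference site across frames, which is what makes quasiharmonic or PCA entropy estimates applicable again.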

  3. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.

  4. Daylight time-resolved photographs of lightning.

    PubMed

Orville, R E; Lala, G G; Idone, V P

    1978-07-07

Lightning dart leaders and return strokes have been recorded in daylight with both good spatial resolution and good time resolution as part of the Thunderstorm Research International Program. The resulting time-resolved photographs are apparently equivalent to the best data obtained earlier only at night. Average two-dimensional return stroke velocities in four subsequent strokes between the ground and a height of 1400 meters were approximately 1.3 × 10^8 meters per second. The estimated systematic error is 10 to 15 percent.

  5. The equilibrium-diffusion limit for radiation hydrodynamics

    DOE PAGES

    Ferguson, J. M.; Morel, J. E.; Lowrie, R.

    2017-07-27

The equilibrium-diffusion approximation (EDA) is used to describe certain radiation-hydrodynamic (RH) environments. When this is done the RH equations reduce to a simplified set of equations. The EDA can be derived by asymptotically analyzing the full set of RH equations in the equilibrium-diffusion limit. Here, we derive the EDA this way and show that it and the associated set of simplified equations are both first-order accurate with transport corrections occurring at second order. Having established the EDA's first-order accuracy we then analyze the grey nonequilibrium-diffusion approximation and the grey Eddington approximation and show that they both preserve this first-order accuracy. Further, these approximations preserve the EDA's first-order accuracy when made in either the comoving-frame (CMF) or the lab-frame (LF). And while analyzing the Eddington approximation, we found that the CMF and LF radiation-source equations are equivalent when neglecting O(β²) terms and compared in the LF. Of course, the radiation pressures are not equivalent. It is expected that simplified physical models and numerical discretizations of the RH equations that do not preserve this first-order accuracy will not retain the correct equilibrium-diffusion solutions. As a practical example, we show that nonequilibrium-diffusion radiative-shock solutions devolve to equilibrium-diffusion solutions when the asymptotic parameter is small.

  6. Stability of nonuniform rotor blades in hover using a mixed formulation

    NASA Technical Reports Server (NTRS)

    Stephens, W. B.; Hodges, D. H.; Avila, J. H.; Kung, R. M.

    1980-01-01

    A mixed formulation for calculating static equilibrium and stability eigenvalues of nonuniform rotor blades in hover is presented. The static equilibrium equations are nonlinear and are solved by an accurate and efficient collocation method. The linearized perturbation equations are solved by a one step, second order integration scheme. The numerical results correlate very well with published results from a nearly identical stability analysis based on a displacement formulation. Slight differences in the results are traced to terms in the equations that relate moments to derivatives of rotations. With the present ordering scheme, in which terms of the order of squares of rotations are neglected with respect to unity, it is not possible to achieve completely equivalent models based on mixed and displacement formulations. The one step methods reveal that a second order Taylor expansion is necessary to achieve good convergence for nonuniform rotating blades. Numerical results for a hypothetical nonuniform blade, including the nonlinear static equilibrium solution, were obtained with no more effort or computer time than that required for a uniform blade.

  7. Consensus for second-order multi-agent systems with position sampled data

    NASA Astrophysics Data System (ADS)

    Wang, Rusheng; Gao, Lixin; Chen, Wenhai; Dai, Dameng

    2016-10-01

    In this paper, the consensus problem with position sampled data for second-order multi-agent systems is investigated. The interaction topology among the agents is depicted by a directed graph. The full-order and reduced-order observers with position sampled data are proposed, by which two kinds of sampled data-based consensus protocols are constructed. With the provided sampled protocols, the consensus convergence analysis of a continuous-time multi-agent system is equivalently transformed into that of a discrete-time system. Then, by using matrix theory and a sampled control analysis method, some sufficient and necessary consensus conditions based on the coupling parameters, spectrum of the Laplacian matrix and sampling period are obtained. While the sampling period tends to zero, our established necessary and sufficient conditions are degenerated to the continuous-time protocol case, which are consistent with the existing result for the continuous-time case. Finally, the effectiveness of our established results is illustrated by a simple simulation example. Project supported by the Natural Science Foundation of Zhejiang Province, China (Grant No. LY13F030005) and the National Natural Science Foundation of China (Grant No. 61501331).
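A minimal sampled-data sketch of the idea follows: two double-integrator agents share only sampled positions, and each approximates velocity by a backward difference of consecutive samples. The gains and sampling period are assumptions for illustration; this is not the paper's observer-based protocol:

```python
import numpy as np

h = 0.1          # sampling period (assumed)
a, b = 1.0, 2.0  # position / velocity-estimate coupling gains (assumed)
steps = 300

x = np.array([1.0, -1.0])   # agent positions
v = np.array([0.0, 0.0])    # agent velocities (not directly measured)
x_prev = x.copy()           # previous position sample (zero initial estimate)

for _ in range(steps):
    v_est = (x - x_prev) / h                     # velocity from sampled positions
    # For two agents, x[::-1] gives each agent its neighbor's state:
    # u_i = -a * (x_i - x_j) - b * (v_est_i - v_est_j)
    u = -a * (x - x[::-1]) - b * (v_est - v_est[::-1])
    x_prev = x.copy()
    x = x + h * v + 0.5 * h**2 * u               # zero-order-hold update
    v = v + h * u

print(abs(x[0] - x[1]))                          # position disagreement
```

With these gains and this sampling period the disagreement decays to zero; as the abstract notes, whether a given sampling period preserves consensus is exactly the question the paper's necessary and sufficient conditions answer.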

  8. Quantifying aflatoxins in peanuts using fluorescence spectroscopy coupled with multi-way methods: Resurrecting second-order advantage in excitation-emission matrices with rank overlap problem

    NASA Astrophysics Data System (ADS)

    Sajjadi, S. Maryam; Abdollahi, Hamid; Rahmanian, Reza; Bagheri, Leila

    2016-03-01

    A rapid, simple and inexpensive method using fluorescence spectroscopy coupled with multi-way methods for the determination of aflatoxins B1 and B2 in peanuts has been developed. In this method, aflatoxins are extracted with a mixture of water and methanol (90:10) and then monitored by fluorescence spectroscopy, producing EEMs. Although the combination of EEMs and multi-way methods is commonly used to determine analytes in complex chemical systems with unknown interference(s), a rank overlap problem in the excitation and emission profiles may limit the application of this strategy. If there is rank overlap in only one mode, several three-way algorithms, such as PARAFAC under some constraints, can resolve this kind of data successfully. However, the analysis of EEM data is impossible when some species have rank overlap in both modes, because the information in the data matrix is then equivalent to zero-order data for those species, which is the case in our study. Aflatoxins B1 and B2 have the same shape of spectral profiles in both the excitation and emission modes, and we propose creating third-order data for each sample using the solvent as an additional selectivity mode. This third-order data is, in turn, converted to second-order data by augmentation, which resurrects the second-order advantage of the original EEMs. The three-way data are constructed by stacking the augmented data in the third way and then analyzed by two powerful second-order calibration methods (BLLS-RBL and PARAFAC) to quantify the analytes in four kinds of peanut samples. The results of both methods are in good agreement and reasonable recoveries are obtained.

  9. Spectrum Modal Analysis for the Detection of Low-Altitude Windshear with Airborne Doppler Radar

    NASA Technical Reports Server (NTRS)

    Kunkel, Matthew W.

    1992-01-01

    A major obstacle in the estimation of windspeed patterns associated with low-altitude windshear with an airborne pulsed Doppler radar system is the presence of strong levels of ground clutter which can strongly bias a windspeed estimate. Typical solutions attempt to remove the clutter energy from the return through clutter rejection filtering. Proposed is a method whereby both the weather and clutter modes present in a return spectrum can be identified to yield an unbiased estimate of the weather mode without the need for clutter rejection filtering. An attempt will be made to show that modeling through a second order extended Prony approach is sufficient for the identification of the weather mode. A pattern recognition approach to windspeed estimation from the identified modes is derived and applied to both simulated and actual flight data. Comparisons between windspeed estimates derived from modal analysis and the pulse-pair estimator are included as well as associated hazard factors. Also included is a computationally attractive method for estimating windspeeds directly from the coefficients of a second-order autoregressive model. Extensions and recommendations for further study are included.
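
    The closing idea of the abstract, estimating a dominant frequency directly from the coefficients of a second-order autoregressive model, can be sketched as follows. This is a generic illustration, not the paper's algorithm: an AR(2) model is fitted to a synthetic tone via the Yule-Walker equations, and the dominant frequency is read off the pole angle. The sampling rate and tone frequency are assumed values; an airborne Doppler radar would further convert frequency to windspeed via the radar wavelength.

```python
import cmath, math

# Fit an AR(2) model by Yule-Walker and read the dominant frequency off the
# pole angle. Synthetic tone; fs and f0 are assumed values for illustration.
fs, f0, N = 1000.0, 100.0, 2000
x = [math.cos(2 * math.pi * f0 * n / fs) for n in range(N)]

def autocorr(x, k):
    return sum(x[n] * x[n + k] for n in range(len(x) - k)) / len(x)

r0, r1, r2 = autocorr(x, 0), autocorr(x, 1), autocorr(x, 2)

# Yule-Walker for AR(2): [r0 r1; r1 r0] [a1; a2] = [r1; r2], solved by Cramer.
det = r0 * r0 - r1 * r1
a1 = (r0 * r1 - r1 * r2) / det
a2 = (r0 * r2 - r1 * r1) / det

# Poles of 1 - a1 z^-1 - a2 z^-2, i.e. roots of z^2 - a1 z - a2 = 0; the
# pole angle gives the dominant frequency.
z = (a1 + cmath.sqrt(a1 * a1 + 4 * a2)) / 2
f_est = abs(cmath.phase(z)) * fs / (2 * math.pi)
print(f_est)   # close to f0
```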

  10. Calculated organ doses from selected prostate treatment plans using Monte Carlo simulations and an anatomically realistic computational phantom

    PubMed Central

    Bednarz, Bryan; Hancox, Cindy; Xu, X George

    2012-01-01

    There is growing concern about radiation-induced second cancers associated with radiation treatments. Particular attention has been focused on the risk to patients treated with intensity-modulated radiation therapy (IMRT) due primarily to increased monitor units. To address this concern we have combined a detailed medical linear accelerator model of the Varian Clinac 2100 C with anatomically realistic computational phantoms to calculate organ doses from selected treatment plans. This paper describes the application to calculate organ-averaged equivalent doses using a computational phantom for three different treatments of prostate cancer: a 4-field box treatment, the same box treatment plus a 6-field 3D-CRT boost treatment and a 7-field IMRT treatment. The equivalent doses per MU to those organs that have shown a predilection for second cancers were compared between the different treatment techniques. In addition, the dependence of photon and neutron equivalent doses on gantry angle and energy was investigated. The results indicate that the box treatment plus 6-field boost delivered the highest intermediate- and low-level photon doses per treatment MU to the patient primarily due to the elevated patient scatter contribution as a result of an increase in integral dose delivered by this treatment. In most organs the contribution of neutron dose to the total equivalent dose for the 3D-CRT treatments was less than the contribution of photon dose, except for the lung, esophagus, thyroid and brain. The total equivalent dose per MU to each organ was calculated by summing the photon and neutron dose contributions. For all organs non-adjacent to the primary beam, the equivalent doses per MU from the IMRT treatment were less than the doses from the 3D-CRT treatments. This is due to the increase in the integral dose and the added neutron dose to these organs from the 18 MV treatments. 
However, depending on the application technique and optimization used, the required MU values for IMRT treatments can be two to three times greater than for 3D-CRT. Therefore, the total equivalent dose in most organs would be higher for the IMRT treatment than for the box treatment, and comparable to the organ doses from the box treatment plus the 6-field boost. This is the first time that organ dose data for an adult male patient of the ICRP reference anatomy have been calculated and documented. The tools presented in this paper can be used to estimate the second cancer risk to patients undergoing radiation treatment. PMID:19671968

  11. Calculated organ doses from selected prostate treatment plans using Monte Carlo simulations and an anatomically realistic computational phantom

    NASA Astrophysics Data System (ADS)

    Bednarz, Bryan; Hancox, Cindy; Xu, X. George

    2009-09-01

    There is growing concern about radiation-induced second cancers associated with radiation treatments. Particular attention has been focused on the risk to patients treated with intensity-modulated radiation therapy (IMRT) due primarily to increased monitor units. To address this concern we have combined a detailed medical linear accelerator model of the Varian Clinac 2100 C with anatomically realistic computational phantoms to calculate organ doses from selected treatment plans. This paper describes the application to calculate organ-averaged equivalent doses using a computational phantom for three different treatments of prostate cancer: a 4-field box treatment, the same box treatment plus a 6-field 3D-CRT boost treatment and a 7-field IMRT treatment. The equivalent doses per MU to those organs that have shown a predilection for second cancers were compared between the different treatment techniques. In addition, the dependence of photon and neutron equivalent doses on gantry angle and energy was investigated. The results indicate that the box treatment plus 6-field boost delivered the highest intermediate- and low-level photon doses per treatment MU to the patient primarily due to the elevated patient scatter contribution as a result of an increase in integral dose delivered by this treatment. In most organs the contribution of neutron dose to the total equivalent dose for the 3D-CRT treatments was less than the contribution of photon dose, except for the lung, esophagus, thyroid and brain. The total equivalent dose per MU to each organ was calculated by summing the photon and neutron dose contributions. For all organs non-adjacent to the primary beam, the equivalent doses per MU from the IMRT treatment were less than the doses from the 3D-CRT treatments. This is due to the increase in the integral dose and the added neutron dose to these organs from the 18 MV treatments. 
However, depending on the application technique and optimization used, the required MU values for IMRT treatments can be two to three times greater than for 3D-CRT. Therefore, the total equivalent dose in most organs would be higher for the IMRT treatment than for the box treatment, and comparable to the organ doses from the box treatment plus the 6-field boost. This is the first time that organ dose data for an adult male patient of the ICRP reference anatomy have been calculated and documented. The tools presented in this paper can be used to estimate the second cancer risk to patients undergoing radiation treatment.

  12. Dose Equivalents for Antipsychotic Drugs: The DDD Method.

    PubMed

    Leucht, Stefan; Samara, Myrto; Heres, Stephan; Davis, John M

    2016-07-01

    Dose equivalents of antipsychotics are an important but difficult-to-define concept, because all methods have weaknesses and strengths. We calculated dose equivalents based on defined daily doses (DDDs) presented by the World Health Organization's Collaborating Centre for Drug Statistics Methodology. Doses equivalent to 1 mg olanzapine, 1 mg risperidone, 1 mg haloperidol, and 100 mg chlorpromazine were presented and compared with the results of 3 other methods for defining dose equivalence (the "minimum effective dose method," the "classical mean dose method," and an international consensus statement). We presented dose equivalents for 57 first-generation and second-generation antipsychotic drugs, available as oral, parenteral, or depot formulations. Overall, the identified equivalent doses were comparable with those of the other methods, but there were also outliers. The major strengths of this method for defining dose equivalence are that DDDs are available for most drugs, including old antipsychotics, that they are based on a variety of sources, and that DDDs are an internationally accepted measure. The major limitation is that the information used to estimate DDDs is likely to differ between the drugs. Moreover, this information is not publicly available, so it cannot be reviewed. The WHO stresses that DDDs are mainly a standardized measure of drug consumption, and their use as a measure of dose equivalence can therefore be misleading. We therefore recommend that, if alternative, more "scientific" dose equivalence methods are available for a drug, they should be preferred to DDDs. Moreover, our summary can be a useful resource for pharmacovigilance studies.
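
    The DDD method reduces to simple arithmetic: express a dose as a fraction of its drug's defined daily dose, then map that fraction onto another drug's DDD. The sketch below is illustrative; the DDD values are oral values as listed in the WHO ATC/DDD index at the time of writing and should be verified against the current index before any real use.

```python
# Illustrative DDD-based dose conversion. The DDD values below are oral values
# from the WHO ATC/DDD index (verify against the current index).
DDD_MG = {
    "olanzapine": 10.0,
    "risperidone": 5.0,
    "haloperidol": 8.0,
    "chlorpromazine": 300.0,
}

def equivalent_dose(dose_mg, from_drug, to_drug):
    """Dose of to_drug occupying the same fraction of its DDD as dose_mg of from_drug."""
    fraction_of_ddd = dose_mg / DDD_MG[from_drug]
    return fraction_of_ddd * DDD_MG[to_drug]

# 10 mg olanzapine is one full DDD, i.e. 300 mg chlorpromazine by this method.
print(equivalent_dose(10.0, "olanzapine", "chlorpromazine"))  # -> 300.0
```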

  13. A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then deal with one second-order equation employing the finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space, taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we obtain optimal a priori error estimates in the L²- and H¹-norms for both the scalar unknown u and the diffusion term w = -Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both the semidiscrete and fully discrete schemes. PMID:23864831

  14. A new linearized Crank-Nicolson mixed element scheme for the extended Fisher-Kolmogorov equation.

    PubMed

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then deal with one second-order equation employing the finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space, taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we obtain optimal a priori error estimates in the L²- and H¹-norms for both the scalar unknown u and the diffusion term w = -Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both the semidiscrete and fully discrete schemes.
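
    As a much simpler illustration of the Crank-Nicolson time discretization used in these records (plain finite differences on the 1D heat equation, not the paper's mixed finite element scheme), the sketch below advances u_t = u_xx with the trapezoidal rule and checks the result against the exact decaying sine mode; the grid sizes are assumed values.

```python
import math

# Crank-Nicolson on the 1D heat equation u_t = u_xx, u(0)=u(1)=0,
# u(x,0) = sin(pi x); exact solution is exp(-pi^2 t) sin(pi x).
# Illustrates the time scheme only; grid sizes are assumed values.
M, dt, t_end = 20, 0.01, 0.1       # spatial intervals, time step, final time
dx = 1.0 / M
r = dt / (2 * dx * dx)
u = [math.sin(math.pi * i * dx) for i in range(M + 1)]

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a/b/c are the sub-/main/super-diagonals."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = M - 1                          # number of interior unknowns
a, b, c = [-r] * n, [1 + 2 * r] * n, [-r] * n
for _ in range(round(t_end / dt)):
    # (I + r d2) u^k on the right, (I - r d2) u^{k+1} on the left
    rhs = [r * u[i - 1] + (1 - 2 * r) * u[i] + r * u[i + 1] for i in range(1, M)]
    u = [0.0] + thomas(a, b, c, rhs) + [0.0]

exact = [math.exp(-math.pi ** 2 * t_end) * math.sin(math.pi * i * dx)
         for i in range(M + 1)]
err = max(abs(ui - ei) for ui, ei in zip(u, exact))
print(err)   # small: the scheme is second-order accurate in both dt and dx
```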

  15. Interpretation of the instantaneous frequency of phonocardiogram signals

    NASA Astrophysics Data System (ADS)

    Rey, Alexis B.

    2005-06-01

    Short-time Fourier transforms, the Wigner-Ville distribution, and wavelet transforms have been commonly used when dealing with non-stationary signals, and they are known as time-frequency distributions. It is also common to investigate the behaviour of phonocardiogram (PCG) signals as a means of predicting some of the pathologies of the human heart. To this end, this paper analyzes the relationship between the instantaneous frequency (IF) of a PCG signal and the aforementioned time-frequency distributions; three algorithms using Matlab functions have been developed: the first estimates the IF using the normalized linear moment, the second estimates the IF using the periodic first moment, and the third computes the WVD. The computation of the STFT spectrogram is carried out with a built-in Matlab function. Several simulations of the spectrogram for a set of PCG signals and the estimation of the IF are shown, and their relationship is validated through correlation. Finally, the second algorithm is the better choice because its estimate is not biased, whereas the WVD is very computationally demanding and offers no benefit, since estimating the IF with this TFD gives a result equivalent to using the derivative of the phase of the analytic signal, which is also less computationally demanding.
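
    The benchmark mentioned at the end, estimating the instantaneous frequency from the derivative of the phase of the analytic signal, can be sketched with a DFT-based Hilbert transform. This is a generic illustration on a synthetic tone, not the paper's Matlab code; the plain O(N²) DFT keeps the sketch dependency-free, and an FFT would be used in practice.

```python
import cmath, math

# Estimate instantaneous frequency (IF) as the derivative of the phase of the
# analytic signal, built with a DFT-based Hilbert transform. Synthetic tone;
# fs and f0 are assumed values (f0 sits on a DFT bin to avoid leakage).
fs, N = 1000.0, 256
f0 = 20 * fs / N                      # 78.125 Hz
x = [math.cos(2 * math.pi * f0 * n / fs) for n in range(N)]

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

X = dft(x)
H = [0.0] * N                         # analytic-signal filter: zero the
H[0] = 1.0                            # negative frequencies, double the
H[N // 2] = 1.0                       # positive ones (N even here)
for k in range(1, N // 2):
    H[k] = 2.0
z = idft([X[k] * H[k] for k in range(N)])

phase = [cmath.phase(val) for val in z]
inst_f = []
for n in range(1, N):                 # wrapped phase difference -> IF in Hz
    d = phase[n] - phase[n - 1]
    d = (d + math.pi) % (2 * math.pi) - math.pi
    inst_f.append(d * fs / (2 * math.pi))

mid = inst_f[N // 4: 3 * N // 4]      # ignore edge samples
f_est = sum(mid) / len(mid)
print(f_est)   # close to f0
```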

  16. Fourth order difference methods for hyperbolic IBVP's

    NASA Technical Reports Server (NTRS)

    Gustafsson, Bertil; Olsson, Pelle

    1994-01-01

    Fourth order difference approximations of initial-boundary value problems for hyperbolic partial differential equations are considered. We use the method of lines approach with both explicit and compact implicit difference operators in space. The explicit operator satisfies an energy estimate leading to strict stability. For the implicit operator we develop boundary conditions and give a complete proof of strong stability using the Laplace transform technique. We also present numerical experiments for the linear advection equation and Burgers' equation with discontinuities in the solution or in its derivative. The first equation is used for modeling contact discontinuities in fluid dynamics, the second one for modeling shocks and rarefaction waves. The time discretization is done with a third order Runge-Kutta TVD method. For solutions with discontinuities in the solution itself we add a filter based on second order viscosity. In the case of the nonlinear Burgers' equation we use a flux splitting technique that results in an energy estimate for certain difference approximations, in which case an entropy condition is also fulfilled. In particular we shall demonstrate that the unsplit conservative form produces a non-physical shock instead of the physically correct rarefaction wave. In the numerical experiments we compare our fourth order methods with a standard second order one and with a third order TVD method. The results show that the fourth order methods are the only ones that give good results for all the considered test problems.
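
    For a concrete flavour of the accuracy gap the abstract reports, here is the classical five-point fourth-order central difference for a first derivative compared with the standard second-order one. The function and step size are assumed for illustration; the paper's operators also include boundary closures, which this sketch omits.

```python
import math

# Interior stencils only: second-order vs fourth-order central differences
# for f'(x), checked on f = sin at x = 1 (assumed values for illustration).
f, x, h = math.sin, 1.0, 0.01

d2 = (f(x + h) - f(x - h)) / (2 * h)                                   # O(h^2)
d4 = (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)   # O(h^4)

exact = math.cos(x)
print(abs(d2 - exact), abs(d4 - exact))  # the fourth-order error is far smaller
```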

  17. Dosimetric assessment from 212Pb inhalation at a thorium purification plant.

    PubMed

    Campos, M P; Pecequilo, B R S

    2004-01-01

    At the Instituto de Pesquisas Energeticas e Nucleares (IPEN), Sao Paulo, Brazil, there is a facility (a thorium purification plant) where materials with high thorium concentrations are manipulated. In order to later estimate the lung cancer risk for the workers, the thoron daughter (212Pb) levels were assessed, along with the committed effective dose and committed equivalent dose to the lung for workers on site. A total of 28 air filter samples were measured by total alpha counting using the modified Kusnetz method to determine the 212Pb concentration. The committed effective dose and lung committed equivalent dose due to 212Pb inhalation were derived from compartmental analysis following the ICRP 66 lung compartmental model and the ICRP 67 lead metabolic model.

  18. Optimum structural sizing of conventional cantilever and joined wing configurations using equivalent beam models

    NASA Technical Reports Server (NTRS)

    Hajela, P.; Chen, J. L.

    1986-01-01

    The present paper describes an approach for the optimum sizing of single and joined wing structures that is based on representing the built-up finite element model of the structure by an equivalent beam model. The low order beam model is computationally more efficient in an environment that requires repetitive analysis of several trial designs. The design procedure is implemented in a computer program that requires geometry and loading data typically available from an aerodynamic synthesis program, to create the finite element model of the lifting surface and an equivalent beam model. A fully stressed design procedure is used to obtain rapid estimates of the optimum structural weight for the beam model for a given geometry, and a qualitative description of the material distribution over the wing structure. The synthesis procedure is demonstrated for representative single wing and joined wing structures.

  19. Modification of the USLE K factor for soil erodibility assessment on calcareous soils in Iran

    NASA Astrophysics Data System (ADS)

    Ostovari, Yaser; Ghorbani-Dashtaki, Shoja; Bahrami, Hossein-Ali; Naderi, Mehdi; Dematte, Jose Alexandre M.; Kerry, Ruth

    2016-11-01

    The measurement of soil erodibility (K) in the field is tedious, time-consuming and expensive; therefore, its prediction through pedotransfer functions (PTFs) could be far less costly and time-consuming. The aim of this study was to develop new PTFs to estimate the K factor using multiple linear regression, Mamdani fuzzy inference systems, and artificial neural networks. For this purpose, K was measured in 40 erosion plots with natural rainfall. Various soil properties including the soil particle size distribution, calcium carbonate equivalent, organic matter, permeability, and wet-aggregate stability were measured. The results showed that the mean measured K was 0.014 t h MJ⁻¹ mm⁻¹, 2.08 times less than the mean K estimated using the USLE model (0.030 t h MJ⁻¹ mm⁻¹). Permeability, wet-aggregate stability, very fine sand, and calcium carbonate were selected as independent variables by forward stepwise regression in order to assess the ability of multiple linear regression, Mamdani fuzzy inference systems and artificial neural networks to predict K. The calcium carbonate equivalent, which is not accounted for in the USLE model, had a significant impact on K in multiple linear regression due to its strong influence on the stability of aggregates and soil permeability. Statistical indices in the validation and calibration datasets determined that the artificial neural networks method, with the highest R², lowest RMSE, and lowest ME, was the best model for estimating the K factor. A strong correlation (R² = 0.81, n = 40, p < 0.05) between the K estimated from multiple linear regression and the measured K indicates that the use of calcium carbonate equivalent as a predictor variable gives a better estimation of K in areas with calcareous soils.
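
    The multiple-linear-regression branch of the study's approach can be sketched with the normal equations. The data below are synthetic, not the study's measurements; the predictor names merely mirror those selected in the abstract (permeability, wet-aggregate stability, calcium carbonate equivalent), and the coefficients are assumed.

```python
import random

# Fit K from a few soil predictors by multiple linear regression (normal
# equations, Gaussian elimination). Synthetic data; coefficients are assumed.
random.seed(0)
true_b = [0.005, 0.8, -0.3, 0.2]   # intercept + 3 slopes (assumed values)
rows = []
for _ in range(40):                # 40 plots, like the study's sample size
    perm, wsa, caco3 = random.random(), random.random(), random.random()
    k = (true_b[0] + true_b[1] * perm + true_b[2] * wsa + true_b[3] * caco3
         + random.gauss(0, 0.001))
    rows.append(([1.0, perm, wsa, caco3], k))

p = 4
# Normal equations X^T X b = X^T y.
A = [[sum(x[i] * x[j] for x, _ in rows) for j in range(p)] for i in range(p)]
g = [sum(x[i] * y for x, y in rows) for i in range(p)]
for col in range(p):               # forward elimination with partial pivoting
    piv = max(range(col, p), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    g[col], g[piv] = g[piv], g[col]
    for r in range(col + 1, p):
        m = A[r][col] / A[col][col]
        for c in range(col, p):
            A[r][c] -= m * A[col][c]
        g[r] -= m * g[col]
b = [0.0] * p
for r in range(p - 1, -1, -1):     # back substitution
    b[r] = (g[r] - sum(A[r][c] * b[c] for c in range(r + 1, p))) / A[r][r]

print(b)   # close to true_b
```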

  20. An Energy-Equivalent d+/d− Damage Model with Enhanced Microcrack Closure-Reopening Capabilities for Cohesive-Frictional Materials

    PubMed Central

    Cervera, Miguel; Tesei, Claudia

    2017-01-01

    In this paper, an energy-equivalent orthotropic d+/d− damage model for cohesive-frictional materials is formulated. Two essential mechanical features are addressed, the damage-induced anisotropy and the microcrack closure-reopening (MCR) effects, in order to provide an enhancement of the original d+/d− model proposed by Faria et al. 1998, while keeping its high algorithmic efficiency unaltered. First, in order to ensure the symmetry and positive definiteness of the secant operator, the new formulation is developed in an energy-equivalence framework. This proves thermodynamic consistency and allows one to describe a fundamental feature of the orthotropic damage models, i.e., the reduction of the Poisson’s ratio throughout the damage process. Secondly, a “multidirectional” damage procedure is presented to extend the MCR capabilities of the original model. The fundamental aspects of this approach, devised for generic cyclic conditions, lie in maintaining only two scalar damage variables in the constitutive law, while preserving memory of the degradation directionality. The enhanced unilateral capabilities are explored with reference to the problem of a panel subjected to in-plane cyclic shear, with or without vertical pre-compression; depending on the ratio between shear and pre-compression, an absent, a partial or a complete stiffness recovery is simulated with the new multidirectional procedure. PMID:28772793

  1. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  2. On the Lighthill relationship and sound generation from isotropic turbulence

    NASA Technical Reports Server (NTRS)

    Zhou, YE; Praskovsky, Alexander; Oncley, Steven

    1994-01-01

    In 1952, Lighthill developed a theory for determining the sound generated by a turbulent motion of a fluid. With some statistical assumptions, Proudman applied this theory to estimate the acoustic power of isotropic turbulence. Recently, Lighthill established a simple relationship that relates the fourth-order retarded time and space covariance of his stress tensor to the corresponding second-order covariance and the turbulent flatness factor, without making statistical assumptions for a homogeneous turbulence. Lilley revisited Proudman's work and applied the Lighthill relationship to evaluate directly the radiated acoustic power from isotropic turbulence. After choosing the time separation dependence in the two-point velocity time and space covariance based on the insights gained from direct numerical simulations, Lilley concluded that the Proudman constant is determined by the turbulent flatness factor and the second-order spatial velocity covariance. In order to estimate the Proudman constant at high Reynolds numbers, we analyzed a unique data set of measurements in a large wind tunnel and the atmospheric surface layer that covers a range of Taylor-microscale Reynolds numbers 2.0 × 10³ ≤ Rλ ≤ 12.7 × 10³. Our measurements demonstrate that the Lighthill relationship is a good approximation, providing additional support for Lilley's approach. The flatness factor is found to be between 2.7 and 3.3, and the second-order spatial velocity covariance is obtained. Based on these experimental data, the Proudman constant is estimated to be between 0.68 and 3.68.

  3. Invariant classification of second-order conformally flat superintegrable systems

    NASA Astrophysics Data System (ADS)

    Capel, J. J.; Kress, J. M.

    2014-12-01

    In this paper we continue the work of Kalnins et al. in classifying all second-order conformally superintegrable (Laplace-type) systems over conformally flat spaces, using tools from algebraic geometry and classical invariant theory. The results obtained show, through Stäckel equivalence, that the list of known nondegenerate superintegrable systems over three-dimensional conformally flat spaces is complete. In particular, a seven-dimensional manifold is determined such that each point corresponds to a conformal class of superintegrable systems. This manifold is foliated by the nonlinear action of the conformal group in three dimensions. Two systems lie in the same conformal class if and only if they lie in the same leaf of the foliation. This foliation is explicitly described using algebraic varieties formed from representations of the conformal group. The proof of these results relies heavily on Gröbner basis calculations using the computer algebra software packages Maple and Singular.

  4. New classes of modified teleparallel gravity models

    NASA Astrophysics Data System (ADS)

    Bahamonde, Sebastian; Böhmer, Christian G.; Krššák, Martin

    2017-12-01

    New classes of modified teleparallel theories of gravity are introduced. The action of this theory is constructed to be a function of the irreducible parts of torsion, f(Tax, Tten, Tvec), where Tax, Tten and Tvec are the squares of the axial, tensor and vector components of torsion, respectively. This is the most general (well-motivated) second order teleparallel theory of gravity that can be constructed from the torsion tensor. Different particular second order theories can be recovered from this theory, such as new general relativity, conformal teleparallel gravity or f(T) gravity. Additionally, the boundary term B, which connects the Ricci scalar with the torsion scalar via R = -T + B, can also be incorporated into the action. By performing a conformal transformation, it is shown that the two unique theories which have an Einstein frame are either the teleparallel equivalent of general relativity or f(-T + B) = f(R) gravity, as expected.

  5. Au-iClick mirrors the mechanism of copper catalyzed azide–alkyne cycloaddition (CuAAC)

    DOE PAGES

    Powers, Andrew R.; Ghiviriga, Ion; Abboud, Khalil A.; ...

    2015-07-20

    This report outlines the investigation of the iClick mechanism between gold(I) azides and gold(I) acetylides to yield digold triazolates. Isolation of digold triazolate complexes offers compelling support for the role of two copper(I) ions in CuAAC. In addition, a kinetic investigation reveals the reaction is first order in both Au(I)-N3 and Au(I)-C≡C-R, and thus second order overall. A Hammett plot with ρ = 1.02(5) signifies that electron-withdrawing groups accelerate the cycloaddition by facilitating the coordination of the second gold ion in a π-complex. Rate inhibition by the addition of free triphenylphosphine to the reaction indicates that ligand dissociation is a prerequisite for the reaction. The mechanistic conclusions mirror those proposed for the CuAAC reaction.
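
    The reported kinetics (first order in each gold reagent, hence second order overall) imply, for equal initial concentrations, the integrated rate law 1/[A](t) = 1/[A]0 + kt. The sketch below checks that law against direct numerical integration; the rate constant and concentrations are assumed values, not the study's.

```python
# Second-order kinetics, rate = k[A][B], for the special case [A]0 = [B]0,
# where [B] = [A] at all times and 1/[A](t) = 1/[A]0 + k t.
# k, [A]0, dt and t_end are assumed values for illustration.
k, A0, dt, t_end = 0.5, 1.0, 1e-4, 10.0   # L mol^-1 s^-1, mol L^-1, s, s

A = A0
for _ in range(round(t_end / dt)):
    A += dt * (-k * A * A)                # forward Euler on d[A]/dt = -k[A]^2

analytic = 1.0 / (1.0 / A0 + k * t_end)   # integrated second-order rate law
print(A, analytic)                        # the two agree closely
```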

  6. Complex Chern-Simons from M5-branes on the squashed three-sphere

    NASA Astrophysics Data System (ADS)

    Córdova, Clay; Jafferis, Daniel L.

    2017-11-01

    We derive an equivalence between the (2,0) superconformal M5-brane field theory dimensionally reduced on a squashed three-sphere, and Chern-Simons theory with complex gauge group. In the reduction, the massless fermions obtain an action which is second order in derivatives and are reinterpreted as ghosts for gauge fixing the emergent non-compact gauge symmetry. A squashing parameter in the geometry controls the imaginary part of the complex Chern-Simons level.

  7. [Preliminary investigation on emission of PCDD/Fs and DL-PCBs through flue gas from coke plants in China].

    PubMed

    Sun, Peng-Cheng; Li, Xiao-Lu; Cheng, Gang; Lu, Yong; Wu, Chang-Min; Luo, Jin-Hong

    2014-07-01

    According to the Stockholm Convention, polychlorinated dibenzo-p-dioxins/dibenzofurans (PCDD/Fs) and dioxin-like polychlorinated biphenyls (DL-PCBs) are classified as unintentionally produced persistent organic pollutants (UP-POPs), collectively named dioxins. Coke production, a thermal process involving organic matter, metals and chlorine, is considered a potential source of dioxins. Intensive studies on the emission of dioxins from the coking industry are still very scarce. In order to estimate the emission properties of dioxins from coke production, an isotope dilution HRGC/HRMS technique was used to determine the concentration of dioxins in flue gas during the heating of coal. Three results were obtained. First, total toxic equivalents at each stationary emission source were in the range of 3.9-30.0 pg m⁻³ (WHO-TEQ) for dioxins, which is lower than for other thermal processes such as municipal solid waste incineration. Second, higher chlorinated PCDD/Fs were the dominant congeners. Third, emissions of dioxins were dependent on the coking pattern: stamp-charged coking and taller coking chambers may lead to lower emissions.

  8. Relevant Scatterers Characterization in SAR Images

    NASA Astrophysics Data System (ADS)

    Chaabouni, Houda; Datcu, Mihai

    2006-11-01

    Recognizing scenes in single-look, meter-resolution Synthetic Aperture Radar (SAR) images requires the capability to identify relevant signal signatures under conditions of variable image acquisition geometry and arbitrary object poses and configurations. Among the methods to detect relevant scatterers in SAR images is internal coherence. Splitting the SAR spectrum in azimuth generates a series of images that preserve high coherence only for particular object scattering. The detection of relevant scatterers can then be done by correlation studies or Independent Component Analysis (ICA) methods. The present article reviews the state of the art in SAR internal correlation analysis and proposes further extensions using elements of information-theoretic inference applied to complex-valued signals. The set of azimuth-look images is analyzed using mutual information measures, and an equivalent channel capacity is derived. The localization of the "target" requires analysis in a small image window, resulting in imprecise estimation of the second-order statistics of the signal. For better precision, a Hausdorff measure is introduced. The method is applied to detect and characterize relevant objects in urban areas.

  9. Cost-Value Analysis and the SAVE: A Work in Progress, But an Option for Localised Decision Making?

    PubMed

    Karnon, Jonathan; Partington, Andrew

    2015-12-01

    Cost-value analysis aims to address the limitations of the quality-adjusted life-year (QALY) by incorporating the strength of public concerns for fairness in the allocation of scarce health care resources. To date, the measurement of value has focused on equity weights to reflect societal preferences for the allocation of QALY gains. Another approach is to use a non-QALY-based measure of value, such as an outcome 'equivalent to saving the life of a young person' (a SAVE). This paper assesses the feasibility and validity of using the SAVE as a measure of value for the economic evaluation of health care technologies. A web-based person trade-off (PTO) survey was designed and implemented to estimate equivalent SAVEs for outcome events associated with the progression and treatment of early-stage breast cancer. The estimated equivalent SAVEs were applied to the outputs of an existing decision analytic model for early breast cancer. The web-based PTO survey was undertaken by 1094 respondents. Validation tests showed that 68% of eligible responses revealed consistent ordering of responses and 32% displayed ordinal transitivity, while 37% of respondents showing consistency and ordinal transitivity approached cardinal transitivity. Using consistent and ordinally transitive responses, the mean incremental cost per SAVE gained was £3.72 million. Further research is required to improve the validity of the SAVE, which may include a simpler web-based survey format or a face-to-face format to facilitate more informed responses. A validated method for estimating equivalent SAVEs is unlikely to replace the QALY as the globally preferred measure of outcome, but the SAVE may provide a useful alternative for localized decision makers with relatively small, constrained budgets, for example in programme budgeting and marginal analysis.

  10. Chemistry and kinetics of I2 loss in urine distillate and humidity condensate

    NASA Technical Reports Server (NTRS)

    Atwater, James E.; Wheeler, Richard R., Jr.; Olivadoti, J. T.; Sauer, Richard L.

    1992-01-01

    Time-resolved molecular absorption spectrophotometry of iodinated ersatz humidity condensates and iodinated ersatz urine distillates across the UV and visible spectral regions is used to investigate the chemistry and kinetics of I2 loss in urine distillate and humidity condensate. Single-contaminant systems at equivalent concentrations are also employed to study rates of iodine loss. Pseudo-first-order rate constants are identified for ersatz contaminant model mixtures and for individual reactive constituents. The second-order bimolecular reaction of elemental iodine with formic acid, producing carbon dioxide and iodide anion, is identified as the primary mechanism underlying the decay of residual I2 in ersatz humidity condensate.

  11. Neutrons in active proton therapy: Parameterization of dose and dose equivalent.

    PubMed

    Schneider, Uwe; Hälg, Roger A; Lomax, Tony

    2017-06-01

    One of the essential elements of an epidemiological study to decide whether proton therapy may be associated with increased or decreased subsequent malignancies compared to photon therapy is the ability to estimate all doses to non-target tissues, including the neutron dose. This work therefore aims to predict, for patients treated with proton pencil beam scanning, the spatially localized neutron doses and dose equivalents. The proton pencil beam of Gantry 1 at the Paul Scherrer Institute (PSI) was Monte Carlo simulated using GEANT. Based on the simulated neutron dose and neutron spectra, an analytical mechanistic dose model was developed. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed model in order to calculate the neutron component of the delivered dose distribution for each treated patient. The neutron dose was estimated for two example patient cases. The analytical neutron dose model represents the three-dimensional Monte Carlo simulated dose distribution up to 85 cm from the proton pencil beam with satisfactory precision. The root mean square error between Monte Carlo simulation and model is largest for 138 MeV protons and is 19% and 20% for dose and dose equivalent, respectively. The model was successfully integrated into the PSI treatment planning system. On average, the neutron dose increases by 10% or 65% when using 160 MeV or 177 MeV protons instead of 138 MeV; for the neutron dose equivalent the increases are 8% and 57%. The presented neutron dose calculations allow for estimates of dose that can be used in subsequent epidemiological studies or, should the need arise, to estimate the neutron dose at any point where a subsequent secondary tumour may occur. It was found that the neutron dose to the patient increases markedly with proton energy. Copyright © 2016. Published by Elsevier GmbH.

  12. Equivalence of the Kelvin-Planck statement of the second law and the principle of entropy increase

    NASA Astrophysics Data System (ADS)

    Sarasua, L. G.; Abal, G.

    2016-09-01

    We present a demonstration of the equivalence between the Kelvin-Planck statement of the second law and the principle of entropy increase. Despite the fundamental importance of these two statements, a rigorous treatment to establish their equivalence is missing in standard physics textbooks. The argument is valid under very general conditions, but is simple and suited to an undergraduate course.

  13. Characterizing a porous road pavement using surface impedance measurement: a guided numerical inversion procedure.

    PubMed

    Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel

    2013-12-01

    This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. This procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties of porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis for this model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. This approach is tested on data obtained for porous bituminous concrete and using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
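
    The inversion step (step two above) can be illustrated with a generic simulated-annealing fit. The sketch below is a minimal illustration, not the authors' implementation: the impedance model is a toy two-parameter stand-in for the Zwikker and Kosten model, and the parameter ranges, cooling schedule, and step sizes are invented for the example.

```python
import math
import random

random.seed(0)

# Toy two-parameter stand-in for an equivalent-fluid surface-impedance model
# (purely illustrative; the Zwikker and Kosten model is more involved).
def model_impedance(freq, porosity, resistivity):
    return resistivity / (porosity * freq) + porosity * math.sqrt(freq)

FREQS = [200.0, 400.0, 800.0, 1600.0]          # selected frequency range [Hz]
TRUE = (0.25, 5000.0)                          # "unknown" ground truth
measured = [model_impedance(f, *TRUE) for f in FREQS]

def misfit(params):
    # Sum-of-squares mismatch between modeled and measured impedance.
    return sum((model_impedance(f, *params) - z) ** 2
               for f, z in zip(FREQS, measured))

def anneal(bounds, n_iter=20000, t0=1000.0):
    # Start from the middle of the admissible parameter ranges.
    current = tuple((lo + hi) / 2.0 for lo, hi in bounds)
    cost = misfit(current)
    best, best_cost = current, cost
    for k in range(n_iter):
        temp = t0 * (1.0 - k / n_iter)         # linear cooling schedule
        cand = tuple(min(hi, max(lo, p + random.gauss(0.0, 0.02 * (hi - lo))))
                     for p, (lo, hi) in zip(current, bounds))
        c = misfit(cand)
        # Accept improvements always; worse moves with Metropolis probability.
        if c < cost or random.random() < math.exp(-(c - cost) / max(temp, 1e-9)):
            current, cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
    return best

est = anneal([(0.05, 0.6), (1000.0, 20000.0)])  # porosity, flow-resistivity-like
```

The repeated sensitivity-analysis/inversion cycle of the paper would wrap this routine, freeing one parameter at a time over its own frequency band.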

  14. Deviation pattern approach for optimizing perturbative terms of QCD renormalization group invariant observables

    NASA Astrophysics Data System (ADS)

    Khellat, M. R.; Mirjalili, A.

    2017-03-01

    We first consider the idea of renormalization group-induced estimates, in the context of optimization procedures, for the Brodsky-Lepage-Mackenzie approach to generate higher-order contributions to QCD perturbative series. Second, we develop the deviation pattern approach (DPA), in which, through a series of comparisons between lower-order RG-induced estimates and the corresponding analytical calculations, one can modify higher-order RG-induced estimates. Finally, using the normal estimation procedure and the DPA, we obtain estimates of the α_s^4 corrections for the Bjorken sum rule of polarized deep-inelastic scattering and for the non-singlet contribution to the Adler function.

  15. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    NASA Astrophysics Data System (ADS)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate estimation of battery parameters to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Secondly, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, to ensure filter stability and reduce the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model and thus contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment provides a robust and accurate battery model and on-line parameter estimation.
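
    A minimal sketch of the SOC-estimation loop described above, assuming a hypothetical first-order RC (Thevenin) equivalent circuit with a linear open-circuit-voltage curve and made-up parameter values; the paper's recursive-least-squares initialization and proportional-integral error adjustment are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters for a first-order RC (Thevenin) equivalent circuit.
DT = 1.0                      # sample time [s]
Q = 3600.0                    # capacity [A s] (1 Ah)
R0, R1, C1 = 0.05, 0.03, 2000.0
a = np.exp(-DT / (R1 * C1))   # RC relaxation factor per step

def ocv(soc):
    # Illustrative linear open-circuit-voltage curve (slope 0.9 V per unit SOC);
    # with a linear OCV the EKF coincides with a plain Kalman filter.
    return 3.0 + 0.9 * soc

# --- simulate a noisy constant-current discharge as ground truth ---
n_steps, i_load = 1200, 1.0
soc_true, v1_true = 1.0, 0.0
socs, volts = [], []
for _ in range(n_steps):
    soc_true -= DT * i_load / Q
    v1_true = a * v1_true + R1 * (1 - a) * i_load
    socs.append(soc_true)
    volts.append(ocv(soc_true) - R0 * i_load - v1_true + rng.normal(0.0, 0.005))

# --- extended Kalman filter on the state x = [SOC, V1] ---
x = np.array([0.8, 0.0])              # deliberately wrong initial SOC
P = np.diag([0.1, 0.01])              # initial state covariance
F = np.array([[1.0, 0.0], [0.0, a]])  # state-transition Jacobian
Qn = np.diag([1e-10, 1e-8])           # process noise
Rn = 0.005 ** 2                       # measurement noise variance
for k in range(n_steps):
    # Predict: coulomb counting for SOC, relaxation for the RC voltage.
    x = np.array([x[0] - DT * i_load / Q, a * x[1] + R1 * (1 - a) * i_load])
    P = F @ P @ F.T + Qn
    # Update against the terminal-voltage measurement h(x) = OCV(SOC) - R0*i - V1.
    H = np.array([0.9, -1.0])         # measurement Jacobian [d(ocv)/dSOC, -1]
    innov = volts[k] - (ocv(x[0]) - R0 * i_load - x[1])
    S = H @ P @ H + Rn
    K = P @ H / S
    x = x + K * innov
    P = P - np.outer(K, H @ P)

soc_estimate = x[0]
```

Despite the wrong initial SOC, the filter pulls the estimate onto the true trajectory within a few updates because the terminal voltage directly observes SOC through the OCV slope.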

  16. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    NASA Astrophysics Data System (ADS)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional that is linked to the second-order derivative of the pdf. As we introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
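
    The idea of plugging an analytical approximation of the second-derivative functional into the bandwidth formula can be sketched as follows. Under a Gaussian reference density the functional R(f'') = ∫ f''(x)² dx has a closed form, which, plugged into the AMISE-optimal bandwidth, recovers the familiar normal-reference rule; the paper's faster iterative scheme is not reproduced here.

```python
import math
import random

random.seed(2)

# Normal-reference plug-in: for a Gaussian pdf the second-derivative functional
# has the closed form R(f'') = 3 / (8 * sqrt(pi) * sigma**5). Plugging it into
# the AMISE-optimal bandwidth h = (R(K) / (n * mu2(K)**2 * R(f'')))**(1/5)
# for a Gaussian kernel (R(K) = 1/(2 sqrt(pi)), mu2(K) = 1) recovers the
# familiar ~1.06 * sigma * n**(-1/5) rule.
def plugin_bandwidth(sample):
    n = len(sample)
    mean = sum(sample) / n
    sigma = math.sqrt(sum((s - mean) ** 2 for s in sample) / (n - 1))
    r_f2 = 3.0 / (8.0 * math.sqrt(math.pi) * sigma ** 5)
    r_k = 1.0 / (2.0 * math.sqrt(math.pi))
    return (r_k / (n * r_f2)) ** 0.2

def kde(sample, h, x):
    # Gaussian-kernel density estimate at point x.
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in sample) \
        / (len(sample) * h * math.sqrt(2.0 * math.pi))

data = [random.gauss(0.0, 1.0) for _ in range(500)]
h = plugin_bandwidth(data)
```

For hard-to-estimate distributions the Gaussian reference is replaced by an iteratively refined pilot estimate, which is where the iterations mentioned in the abstract come in.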

  17. An investigation of using an RQP based method to calculate parameter sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many modern algorithms for engineering optimization. The most common application of problem sensitivities has been the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al. 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second-order information about the Lagrangian at the current point, and (2) the assumption of no change in the active set of constraints. The first of these two problems is addressed here, and a new algorithm is proposed that does not require explicit calculation of second-order information.

  18. Eikonal solutions to optical model coupled-channel equations

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Khandelwal, Govind S.; Maung, Khin M.; Townsend, Lawrence W.; Wilson, John W.

    1988-01-01

    Methods of solution are presented for the Eikonal form of the nucleus-nucleus coupled-channel scattering amplitudes. Analytic solutions are obtained for the second-order optical potential for elastic scattering. A numerical comparison is made between the first and second order optical model solutions for elastic and inelastic scattering of H-1 and He-4 on C-12. The effects of bound-state excitations on total and reaction cross sections are also estimated.

  19. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  20. Navy Public Works Administration.

    DTIC Science & Technology

    1980-06-01

    Real Property by Lease or Space Controlled or to be Leased by the GSA SECNAVINST 11011.18. Subj: Leasing of Department of the Navy Non-Excess Real... equivalent of the specific job order. It is normally initiated by the Control Section Inspector/Estimator or other specifically authorized personnel...49 U.S.C. 1431 An Act to establish a means for effective coordination of Federal research and activities in noise control, to authorize the

  1. Age-dependence of the average and equivalent refractive indices of the crystalline lens

    PubMed Central

    Charman, W. Neil; Atchison, David A.

    2013-01-01

    Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from the lens center. The average index along the lens axis was estimated by integration. The equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then by backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remain constant with age, at 1.415 and 1.37, respectively, the average axial refractive index increased (1.408 to 1.411) and the equivalent index decreased (1.425 to 1.420) as age increased from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of the index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474
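
    The averaging step lends itself to a short worked example. If the axial index is modeled as a power function of the normalized distance x from the lens center, n(x) = n_edge + (n_center - n_edge)(1 - x^p), integration gives the average index n_center - (n_center - n_edge)/(p + 1). The exponent values below are illustrative choices, not taken from the paper, that happen to reproduce the reported 1.408-to-1.411 trend:

```python
# Power-law axial index profile n(x) = n_edge + (n_center - n_edge) * (1 - x**p),
# with x the normalized distance from the lens center (x = 0) to the edge (x = 1).
n_center, n_edge = 1.415, 1.37

def average_index(p):
    # Integral of n(x) over x in [0, 1]:
    #   mean n = n_center - (n_center - n_edge) / (p + 1)
    return n_center - (n_center - n_edge) / (p + 1.0)

# Hypothetical exponents for a younger and an older lens: a larger p
# (a flatter index plateau around the center) raises the average index.
young, old = average_index(5.0), average_index(9.0)   # about 1.4075 and 1.4105
```

The age trend thus follows directly from the gradient flattening with age, even with the center and edge indices held fixed.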

  2. On the maximum principle for complete second-order elliptic operators in general domains

    NASA Astrophysics Data System (ADS)

    Vitolo, Antonio

    This paper is concerned with the maximum principle for second-order linear elliptic equations in wide generality. By means of a geometric condition previously stressed by Berestycki-Nirenberg-Varadhan, Cabré was able to improve the classical ABP estimate, obtaining the maximum principle also in unbounded domains, such as infinite strips and open connected cones with closure different from the whole space. Here we introduce a new geometric condition that extends the result to a more general class of domains including the complements of hypersurfaces, such as the cut plane. The methods developed here allow us to deal with complete second-order equations, where the admissible first-order term, forced to be zero in a preceding result with Cafagna, depends on the geometry of the domain.

  3. Process for magnetic beneficiating petroleum cracking catalyst

    DOEpatents

    Doctor, R.D.

    1993-10-05

    A process is described for beneficiating a particulate zeolite petroleum cracking catalyst having metal values in excess of 1000 ppm nickel equivalents. The particulate catalyst is passed through a magnetic field in the range of from about 2 Tesla to about 5 Tesla generated by a superconducting quadrupole open-gradient magnetic system for a time sufficient to effect separation of said catalyst into a plurality of zones having different nickel equivalent concentrations. A first zone has nickel equivalents of about 6,000 ppm and greater, a second zone has nickel equivalents in the range of from about 2000 ppm to about 6000 ppm, and a third zone has nickel equivalents of about 2000 ppm and less. The zones of catalyst are separated and the second zone material is recycled to a fluidized bed of zeolite petroleum cracking catalyst. The low nickel equivalent zone is treated while the high nickel equivalent zone is discarded. 1 figure.

  4. Process for magnetic beneficiating petroleum cracking catalyst

    DOEpatents

    Doctor, Richard D.

    1993-01-01

    A process for beneficiating a particulate zeolite petroleum cracking catalyst having metal values in excess of 1000 ppm nickel equivalents. The particulate catalyst is passed through a magnetic field in the range of from about 2 Tesla to about 5 Tesla generated by a superconducting quadrupole open-gradient magnetic system for a time sufficient to effect separation of said catalyst into a plurality of zones having different nickel equivalent concentrations. A first zone has nickel equivalents of about 6,000 ppm and greater, a second zone has nickel equivalents in the range of from about 2000 ppm to about 6000 ppm, and a third zone has nickel equivalents of about 2000 ppm and less. The zones of catalyst are separated and the second zone material is recycled to a fluidized bed of zeolite petroleum cracking catalyst. The low nickel equivalent zone is treated while the high nickel equivalent zone is discarded.

  5. Multi-transmitter multi-receiver null coupled systems for inductive detection and characterization of metallic objects

    NASA Astrophysics Data System (ADS)

    Smith, J. Torquil; Morrison, H. Frank; Doolittle, Lawrence R.; Tseng, Hung-Wen

    2007-03-01

    Equivalent dipole polarizabilities are a succinct way to summarize the inductive response of an isolated conductive body at distances greater than the scale of the body. Their estimation requires measurement of secondary magnetic fields due to currents induced in the body by time-varying magnetic fields in at least three linearly independent (e.g., orthogonal) directions. Secondary fields due to an object are typically orders of magnitude smaller than the primary inducing fields near the primary field sources (transmitters). Receiver coils may be oriented orthogonal to the primary fields from one or two transmitters, nulling their response to those fields, but simultaneously nulling to the fields of additional transmitters is problematic. If transmitter coils are constructed symmetrically with respect to inversion in a point, their magnetic fields are symmetric with respect to that point. If receiver coils are operated in pairs symmetric with respect to inversion in the same point, then their differenced output is insensitive to the primary fields of any symmetrically constructed transmitters, allowing nulling to three (or more) transmitters. With a sufficient number of receiver pairs, object equivalent dipole polarizabilities can be estimated in situ from measurements at a single instrument sitting, eliminating the effects of inaccurate instrument location on polarizability estimates. The method is illustrated with data from a multi-transmitter multi-receiver system with primary field nulling through differenced receiver pairs, interpreted in terms of principal equivalent dipole polarizabilities as a function of time.

  6. Consensus Algorithms for Networks of Systems with Second- and Higher-Order Dynamics

    NASA Astrophysics Data System (ADS)

    Fruhnert, Michael

    This thesis considers homogeneous networks of linear systems. We consider linear feedback controllers and require that the directed graph associated with the network contains a spanning tree and that the systems are stabilizable. We show that, in continuous time, consensus with a guaranteed rate of convergence can always be achieved using linear state feedback. For networks of continuous-time second-order systems, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Hurwitz. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. Based on the conditions found, methods to compute feedback gains are proposed. We show that gains can be chosen such that consensus is achieved robustly over a variety of communication structures and system dynamics. We also consider the use of static output feedback. For networks of discrete-time second-order systems, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Schur. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. We show that consensus can always be achieved for marginally stable systems and discretized systems. Simple conditions for consensus-achieving controllers are obtained when the Laplacian eigenvalues are all real. For networks of continuous-time time-variant higher-order systems, we show that uniform consensus can always be achieved if the systems are quadratically stabilizable. In this case, we provide a simple condition to obtain a linear feedback control. For networks of discrete-time higher-order systems, we show that constant gains can be chosen such that consensus is achieved for a variety of network topologies.
First, we develop simple results for networks of time-invariant systems and networks of time-variant systems that are given in controllable canonical form. Second, we formulate the problem in terms of Linear Matrix Inequalities (LMIs). The condition found simplifies the design process and avoids the parallel solution of multiple LMIs. The result yields a modified Algebraic Riccati Equation (ARE) for which we present an equivalent LMI condition.
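
    The second-order consensus setting discussed above can be sketched with the standard relative-state feedback law u = -L x - g L v applied to double-integrator agents; the graph, gain, and initial conditions below are arbitrary illustrations, not taken from the thesis.

```python
import numpy as np

# Path graph 1-2-3-4: its Laplacian is symmetric and the graph contains a
# spanning tree, so second-order consensus is achievable with relative
# position/velocity feedback  u = -L x - g * L v.
L = np.array([[1.0, -1.0, 0.0, 0.0],
              [-1.0, 2.0, -1.0, 0.0],
              [0.0, -1.0, 2.0, -1.0],
              [0.0, 0.0, -1.0, 1.0]])

x = np.array([3.0, -1.0, 0.5, 2.0])   # initial positions
v = np.array([0.2, -0.4, 0.1, 0.1])   # initial velocities (zero mean)
g, dt = 1.0, 0.01                     # velocity-feedback gain, Euler step
for _ in range(5000):                 # integrate double-integrator agents to t = 50
    u = -L @ x - g * (L @ v)          # linear state-feedback control
    x = x + dt * v
    v = v + dt * u
# All agents converge to the average initial position (here 1.125) because the
# Laplacian is symmetric and the average initial velocity is zero.
```

Per eigenmode of L, the closed loop reduces to the second-order polynomial s² + gλs + λ, which is exactly the Hurwitz question the thesis analyzes for complex Laplacian eigenvalues.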

  7. Radiation exposure from consumer products and miscellaneous sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1977-01-01

    This review of the literature indicates that there is a variety of consumer products and miscellaneous sources of radiation that result in exposure to the U.S. population. A summary of the number of people exposed to each such source, an estimate of the resulting dose equivalents to the exposed population, and an estimate of the average annual population dose equivalent are tabulated. A review of the data in this table shows that the total average annual contribution to the whole-body dose equivalent of the U.S. population from consumer products is less than 5 mrem; about 70 percent of this arises from the presence of naturally-occurring radionuclides in building materials. Some of the consumer product sources contribute exposure mainly to localized tissues or organs. Such localized estimates include: 0.5 to 1 mrem to the average annual population lung dose equivalent (generalized); 2 rem to the average annual population bronchial epithelial dose equivalent (localized); and 10 to 15 rem to the average annual population basal mucosal dose equivalent (basal mucosa of the gum). Based on these estimates, these sources may be classified as those where many people are involved and the dose equivalent is relatively large, those where many people are involved but the dose equivalent is relatively small, and those where the dose equivalent is relatively large but the number of people involved is small.

  8. Enabling real-time ultrasound imaging of soft tissue mechanical properties by simplification of the shear wave motion equation.

    PubMed

    Engel, Aaron J; Bashford, Gregory R

    2015-08-01

    Ultrasound-based shear wave elastography (SWE) is a technique used for non-invasive characterization and imaging of soft tissue mechanical properties. Robust estimation of shear wave propagation speed is essential for imaging of soft tissue mechanical properties. In this study we propose to estimate shear wave speed by inversion of the first-order wave equation following directional filtering. This approach relies on estimation of first-order derivatives, which allows for accurate estimation using smaller smoothing filters than when estimating second-order derivatives. The performance was compared to three current methods used to estimate shear wave propagation speed: direct inversion of the wave equation (DIWE), time-to-peak (TTP), and cross-correlation (CC). The shear wave speed of three homogeneous phantoms of different elastic moduli (gelatin by weight of 5%, 7%, and 9%) was measured with each method. The proposed method was shown to produce shear speed estimates comparable to the conventional methods (standard deviations of measurements being 0.13 m/s, 0.05 m/s, and 0.12 m/s), but with simpler processing and usually less time (by a factor of 1, 13, and 20 for DIWE, CC, and TTP, respectively). The proposed method was able to produce a 2-D speed estimate from a single direction of wave propagation in about four seconds using an off-the-shelf PC, showing the feasibility of performing real-time or near real-time elasticity imaging with dedicated hardware.
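
    The core of the proposed estimator, inverting the first-order wave equation after directional filtering, can be sketched on synthetic data. For a wave travelling one way, u(x, t) = f(x - ct), the equation ∂u/∂t + c ∂u/∂x = 0 gives c from first derivatives only; a least-squares ratio stabilizes the pointwise division. Grid, pulse shape, and speed below are synthetic values, not from the paper.

```python
import numpy as np

# Synthetic one-way shear wave u(x, t) = f(x - c t); after directional
# filtering such a field satisfies the first-order equation
#   du/dt + c * du/dx = 0   =>   c = -(du/dt) / (du/dx),
# so only first-order derivatives are needed.
c_true = 2.5                               # shear wave speed [m/s]
x = np.linspace(0.0, 0.04, 400)            # lateral position [m]
t = np.linspace(0.0, 0.01, 300)            # slow time [s]
X, T = np.meshgrid(x, t, indexing="ij")
u = np.exp(-(((X - 0.005) - c_true * T) / 0.002) ** 2)  # travelling Gaussian pulse

ut = np.gradient(u, t, axis=1)             # first-order time derivative
ux = np.gradient(u, x, axis=0)             # first-order space derivative

# Least-squares ratio over well-excited samples, instead of a pointwise
# division that would blow up wherever du/dx is near zero.
mask = np.abs(ux) > 0.1 * np.abs(ux).max()
c_est = -np.sum(ut[mask] * ux[mask]) / np.sum(ux[mask] ** 2)
```

Because only first differences are taken, the derivative noise amplification is milder than in full second-derivative wave-equation inversion, which is the practical advantage the abstract claims.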

  9. Quantifying precambrian crustal extraction: The root is the answer

    USGS Publications Warehouse

    Abbott, D.; Sparks, D.; Herzberg, C.; Mooney, W.; Nikishin, A.; Zhang, Y.-S.

    2000-01-01

    We use two different methods to estimate the total amount of continental crust that was extracted by the end of the Archean and the Proterozoic. The first method uses the sum of the seismic thickness of the crust, the eroded thickness of the crust, and the trapped melt within the lithospheric root to estimate the total crustal volume. This summation method yields an average equivalent thickness of Archean crust of 49 ± 6 km and an average equivalent thickness of Proterozoic crust of 48 ± 9 km. Between 7 and 9% of this crust never reached the surface, but remained within the continental root as congealed, iron-rich komatiitic melt. The second method uses experimental models of melting, mantle xenolith compositions, and corrected lithospheric thickness to estimate the amount of crust extracted through time. This melt column method reveals that the average equivalent thickness of Archean crust was 65 ± 6 km, and the average equivalent thickness of Early Proterozoic crust was 60 ± 7 km. It is likely that some of this crust remained trapped within the lithospheric root. The discrepancy between the two estimates is attributed to uncertainties in estimates of the amount of trapped, congealed melt, overall crustal erosion, and crustal recycling. Overall, we find that between 29 and 45% of continental crust was extracted by the end of the Archean, most likely by 2.7 Ga. Between 51 and 79% of continental crust was extracted by the end of the Early Proterozoic, most likely by 1.8-2.0 Ga. Our results are most consistent with geochemical models that call upon moderate amounts of recycling of early extracted continental crust coupled with continuing crustal growth (e.g. McLennan, S.M., Taylor, S.R., 1982. Geochemical constraints on the growth of the continental crust. Journal of Geology, 90, 347-361; Veizer, J., Jansen, S.L., 1985. Basement and sedimentary recycling - 2: time dimension to global tectonics. Journal of Geology 93(6), 625-643).
Trapped, congealed, iron-rich melt within the lithospheric root may represent some of the iron that is 'missing' from the lower crust. The lower crust within Archean cratons may also have an unexpectedly low iron content because it was extracted from more primitive, undepleted mantle. (C) 2000 Elsevier Science B.V. All rights reserved.

  10. A new dipole-free sum-over-states expression for the second hyperpolarizability

    NASA Astrophysics Data System (ADS)

    Pérez-Moreno, Javier; Clays, Koen; Kuzyk, Mark G.

    2008-02-01

    The generalized Thomas-Kuhn sum rules are used to eliminate the explicit dependence on dipolar terms in the traditional sum-over-states (SOS) expression for the second hyperpolarizability, yielding a new, yet equivalent, SOS expression. This new dipole-free expression may be better suited to studying the second hyperpolarizability of nondipolar systems such as quadrupolar, octupolar, and dodecapolar structures. The two expressions lead to the same fundamental limits of the off-resonance second hyperpolarizability and, when applied to a particle in a box and a clipped harmonic oscillator, have the same frequency dependence. We propose that the new dipole-free equation, when used in conjunction with the standard SOS expression, can be used to develop a three-state model of the dispersion of the third-order susceptibility that can be applied to molecules in cases where normally many more states would have been required. Furthermore, a comparison between the two expressions can be used as a convergence test of molecular orbital calculations when applied to the second hyperpolarizability.

  11. Optimization of the transmission of observable expectation values and observable statistics in continuous-variable teleportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albano Farias, L.; Stephany, J.

    2010-12-15

    We analyze the statistics of observables in continuous-variable (CV) quantum teleportation in the formalism of the characteristic function. We derive expressions for average values of output-state observables, in particular, cumulants, which are additive in terms of the input state and the resource of teleportation. Working with a general class of teleportation resources, the squeezed-bell-like states, which may be optimized in a free parameter for better teleportation performance, we discuss the relation between resources optimal for fidelity and those optimal for different observable averages. We obtain the values of the free parameter of the squeezed-bell-like states which optimize the central momenta and cumulants up to fourth order. For the cumulants, the distortion between in and out states due to teleportation depends only on the resource. We obtain optimal parameters Δ_(2)^opt and Δ_(4)^opt for the second- and fourth-order cumulants, which do not depend on the squeezing of the resource. The second-order central momenta, which are equal to the second-order cumulants, and the photon number average are also optimized by the resource with Δ_(2)^opt. We show that the optimal fidelity resource, which has been found previously to depend on the characteristics of the input, approaches for high squeezing the resource that optimizes the second-order momenta. A similar behavior is obtained for the resource that optimizes the photon statistics, which is treated here using the sum of the squared differences in photon probabilities of input versus output states as the distortion measure. This is interpreted naturally to mean that the distortions associated with second-order momenta dominate the behavior of the output state for large squeezing of the resource.
    Optimal fidelity resources and optimal photon statistics resources are compared, and it is shown that for mixtures of Fock states both resources are equivalent.

  12. Nonlocal homogenization theory in metamaterials: Effective electromagnetic spatial dispersion and artificial chirality

    NASA Astrophysics Data System (ADS)

    Ciattoni, Alessandro; Rizza, Carlo

    2015-05-01

    We develop, from first principles, a general and compact formalism for predicting the electromagnetic response of a metamaterial with nonmagnetic inclusions in the long-wavelength limit, including spatial dispersion up to the second order. Specifically, by resorting to a suitable multiscale technique, we show that the effective medium permittivity tensor and the first- and second-order tensors describing spatial dispersion can be evaluated by averaging suitable spatially rapidly varying fields, each satisfying electrostatic-like equations within the metamaterial unit cell. For metamaterials with negligible second-order spatial dispersion, we exploit the equivalence of first-order spatial dispersion and reciprocal bianisotropic electromagnetic response to deduce a simple expression for the metamaterial chirality tensor. Such an expression allows us to systematically analyze the effect of the composite spatial symmetry properties on electromagnetic chirality. We find that even if a metamaterial is geometrically achiral, i.e., it is indistinguishable from its mirror image, it shows pseudo-chiral-omega electromagnetic chirality if the rotation needed to restore the dielectric profile after the reflection is either a 0° or 90° rotation around an axis orthogonal to the reflection plane. These two symmetric situations encompass two-dimensional and one-dimensional metamaterials with chiral response. As an example admitting full analytical description, we discuss one-dimensional metamaterials whose single chirality parameter is shown to be directly related to the metamaterial dielectric profile by quadratures.

  13. Estimation of outer-middle ear transmission using DPOAEs and fractional-order modeling of human middle ear

    NASA Astrophysics Data System (ADS)

    Naghibolhosseini, Maryam

Our ability to hear depends primarily on sound waves traveling through the outer and middle ear toward the inner ear. Hence, the characteristics of the outer and middle ear affect sound transmission to/from the inner ear. The role of the middle and outer ear in sound transmission is particularly important for otoacoustic emissions (OAEs), which are sound signals generated in a healthy cochlea, and recorded by a sensitive microphone placed in the ear canal. OAEs are used to evaluate the health and function of the cochlea; however, they are also affected by outer and middle ear characteristics. To better assess cochlear health using OAEs, it is critical to quantify the impact of the outer and middle ear on sound transmission. The reported research introduces a noninvasive approach to estimate outer-middle ear transmission using distortion product otoacoustic emissions (DPOAEs). In addition, the role of the outer and middle ear in sound transmission was investigated by developing a physical/mathematical model, which employed fractional-order lumped elements to include the viscoelastic characteristics of biological tissues. Impedance estimations from wideband reflectance measurements were used for parameter fitting of the model. The model was validated by comparing its estimates of the outer-middle ear sound transmission with those given by DPOAEs. The outer-middle ear transmission in the model was defined as the sum of forward and reverse outer-middle ear transmissions. To estimate the reverse transmission with the model, the probe-microphone impedance was calculated by estimating the Thevenin-equivalent circuit of the probe-microphone. The Thevenin-equivalent circuit was calculated using measurements in a number of test cavities. Such modeling enhances our understanding of the roles of different parts of the outer and middle ear and how they work together to determine their function. In addition, the model could potentially be helpful in diagnosing pathologies of cochlear or middle ear origin.

  14. Stochastic Estimation via Polynomial Chaos

    DTIC Science & Technology

    2015-10-01

AFRL-RW-EG-TR-2015-108, Stochastic Estimation via Polynomial Chaos; Douglas V. Nance, Air Force Research Laboratory; reporting period 20-04-2015 to 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second-order stochastic processes.
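The polynomial chaos idea summarized above can be illustrated with a minimal sketch (not taken from the report): a second-order random variable is expanded in probabilists' Hermite polynomials of a standard normal germ ξ. For the lognormal case Y = exp(ξ) the coefficients have the well-known closed form c_k = e^{1/2}/k!, so a truncated expansion can be checked against the exact lognormal moments.

```python
import math

def pc_coeffs_lognormal(order):
    """Hermite (probabilists') polynomial-chaos coefficients of Y = exp(xi),
    xi ~ N(0,1). Known closed form: c_k = e^{1/2} / k!."""
    return [math.exp(0.5) / math.factorial(k) for k in range(order + 1)]

def pc_mean_var(coeffs):
    """Mean and variance implied by a truncated PC expansion.
    E[He_k^2] = k! for the probabilists' Hermite polynomials."""
    mean = coeffs[0]
    var = sum(c * c * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)
    return mean, var

coeffs = pc_coeffs_lognormal(order=8)
mean, var = pc_mean_var(coeffs)
exact_mean = math.exp(0.5)           # exact lognormal mean
exact_var = math.e * (math.e - 1.0)  # exact lognormal variance
print(mean, exact_mean)
print(var, exact_var)
```

An order-8 truncation already reproduces the variance to a few parts in 10^6; the truncated variance always under-estimates the exact one because every omitted term is positive.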

  15. The origin of spurious solutions in computational electromagnetics

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Wu, Jie; Povinelli, L. A.

    1995-01-01

The origin of spurious solutions in computational electromagnetics, which violate the divergence equations, is deeply rooted in a misconception about the first-order Maxwell's equations and in an incorrect derivation and use of the curl-curl equations. The divergence equations must always be included in the first-order Maxwell's equations to maintain the ellipticity of the system in the space domain and to guarantee the uniqueness of the solution and/or the accuracy of the numerical solutions. The div-curl method and the least-squares method provide a rigorous derivation of the equivalent second-order Maxwell's equations and their boundary conditions. The node-based least-squares finite element method (LSFEM) is recommended for solving the first-order full Maxwell equations directly. Examples of numerical solutions by LSFEM for time-harmonic problems are given to demonstrate that the LSFEM is free of spurious solutions.

  16. Second cancer risk after 3D-CRT, IMRT and VMAT for breast cancer.

    PubMed

    Abo-Madyan, Yasser; Aziz, Muhammad Hammad; Aly, Moamen M O M; Schneider, Frank; Sperk, Elena; Clausen, Sven; Giordano, Frank A; Herskind, Carsten; Steil, Volker; Wenz, Frederik; Glatting, Gerhard

    2014-03-01

Second cancer risk after breast-conserving therapy is becoming more important due to improved long-term survival rates. In this study, we estimate the risks of developing a solid second cancer after radiotherapy of breast cancer using the concept of organ equivalent dose (OED). Computed tomography scans of 10 representative breast cancer patients were selected for this study. Three-dimensional conformal radiotherapy (3D-CRT), tangential intensity modulated radiotherapy (t-IMRT), multibeam intensity modulated radiotherapy (m-IMRT), and volumetric modulated arc therapy (VMAT) were planned to deliver a total dose of 50 Gy in 2 Gy fractions. Differential dose volume histograms (dDVHs) were created and the OEDs calculated. Second cancer risks of the ipsilateral lung, contralateral lung and contralateral breast were estimated using linear, linear-exponential and plateau models for second cancer risk. Compared to 3D-CRT, cumulative excess absolute risks (EAR) for t-IMRT, m-IMRT and VMAT were increased by 2 ± 15%, 131 ± 85%, 123 ± 66% for the linear-exponential risk model, 9 ± 22%, 82 ± 96%, 71 ± 82% for the linear model and 3 ± 14%, 123 ± 78%, 113 ± 61% for the plateau model, respectively. Second cancer risk after 3D-CRT or t-IMRT is lower than after m-IMRT or VMAT by about 34% for the linear model and about 50% for the linear-exponential and plateau models.
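The OED calculation described above can be sketched in a few lines. The snippet below is an illustration, not the authors' code: it applies the linear-exponential dose-response RED(D) = D·exp(-αD) to a differential DVH, with the DVH bins and the α value being made-up numbers. A useful sanity check is that with α = 0 the OED reduces to the mean organ dose.

```python
import math

def oed_linear_exponential(dvh, alpha):
    """Organ equivalent dose from a differential DVH.
    dvh: list of (dose_Gy, fractional_volume) bins summing to 1.
    Linear-exponential risk model: RED(D) = D * exp(-alpha * D)."""
    assert abs(sum(v for _, v in dvh) - 1.0) < 1e-9
    return sum(v * d * math.exp(-alpha * d) for d, v in dvh)

# Hypothetical 3-bin DVH for an organ at risk (illustrative numbers only).
dvh = [(2.0, 0.5), (10.0, 0.3), (30.0, 0.2)]
alpha = 0.085  # organ-specific model parameter; this value is an assumption
print(oed_linear_exponential(dvh, alpha))
print(oed_linear_exponential(dvh, 0.0))  # alpha = 0 gives the mean dose
```

The excess absolute risk for an organ is then proportional to its OED, which is how the different treatment plans above can be ranked from their dose distributions alone.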

  17. Modified physiologically equivalent temperature—basics and applications for western European climate

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Chang; Matzarakis, Andreas

    2018-05-01

A new thermal index, the modified physiologically equivalent temperature (mPET), has been developed for universal application in different climate zones. The mPET improves on the weaknesses of the original physiologically equivalent temperature (PET) by refining the evaluation of humidity and clothing variability. The principles of mPET and the differences between the original PET and mPET are introduced and discussed in this study. Furthermore, this study demonstrates the usability of mPET with climatic data from Freiburg, which is located in Western Europe. Comparisons of PET, mPET, and the Universal Thermal Climate Index (UTCI) have shown that mPET gives a more realistic estimation of human thermal sensation than the other two thermal indices (PET, UTCI) for the thermal conditions in Freiburg. Additionally, a comparison of physiological parameters between the mPET model and the PET model (the Munich Energy Balance Model for Individuals, MEMI) is presented. During cold stress, the core and skin temperatures of the PET model drop to low values more sharply than those of the mPET model, which suggests that the mPET model gives more realistic core and mean skin temperatures than the PET model. A statistical regression analysis of mPET based on air temperature, mean radiant temperature, vapor pressure, and wind speed was carried out. The R-squared value (0.995) indicates a strong relationship between the human-biometeorological factors and mPET, and the regression coefficient of each factor represents that factor's influence on mPET (e.g., ±1 °C of T a = ±0.54 °C of mPET). The first-order regression is considered to predict mPET at Freiburg during 2003 more realistically than higher-order regression models, because the mPET predicted from the first-order regression differs less from the mPET calculated from measurement data. Statistical tests confirm that mPET can effectively evaluate the influence of all human-biometeorological factors on thermal environments; moreover, a first-order regression function can predict the thermal evaluations of mPET from human-biometeorological factors in Freiburg.

  18. The use of hospital waste as a fuel. Part one.

    PubMed

    Dagnall, S

    1989-05-01

    The total quantity of hospital waste produced in the UK has been estimated to be 430kte/yr, having a combustible content equivalent to about 190kte of coal; its average gross calorific value (GCV) depends on the type of hospital, but has been estimated to be about 14GJ/te for the teaching and general hospitals which were examined. Hospitals are obliged to incinerate some of these wastes in order to destroy any pathogens which may be present, and although several hospitals have been involved in recovering the energy from this process, a number of such projects have proved to be unsuccessful. The Glenfield General Hospital (GGH) is burning combustible hospital waste on a Corsair (Erithglen) 0.5MWt (2MBtu/h) hot water boiler, the second such installation involving a new design of plant which accepts bagged, unprepared material. Although the plant suffered inevitable commissioning and teething problems, which have led to further design improvements, it has nevertheless demonstrated its ability to dispose of hospital waste reliably, safely and efficiently; it is felt, however, that it could have performed better with improved project organisation. In the light of likely future legislation to tighten control over emissions from the combustion of hospital wastes, it is anticipated that large scale plant might prove economically and environmentally attractive under certain circumstances; such plant will, in all probability, involve power generation or combined heat and power (CHP).

  19. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    NASA Astrophysics Data System (ADS)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
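The single-plane unbalance parameters described above amount to the amplitude and phase of the synchronous (1x) component of the measured response. As a simplified stand-in for the Kalman-filter/recursive-least-squares input-estimation scheme of the paper, the sketch below fits a·cos(ωt) + b·sin(ωt) to noisy simulated response data by batch least squares; all signal values are invented for the demonstration.

```python
import math
import random

def estimate_unbalance(times, resp, omega):
    """Least-squares fit of the synchronous component
    r(t) = a*cos(omega t) + b*sin(omega t); returns (amplitude, phase)
    of r(t) = U*cos(omega t + phi)."""
    scc = sum(math.cos(omega * t) ** 2 for t in times)
    sss = sum(math.sin(omega * t) ** 2 for t in times)
    scs = sum(math.sin(omega * t) * math.cos(omega * t) for t in times)
    syc = sum(y * math.cos(omega * t) for t, y in zip(times, resp))
    sys_ = sum(y * math.sin(omega * t) for t, y in zip(times, resp))
    det = scc * sss - scs * scs
    a = (syc * sss - sys_ * scs) / det
    b = (sys_ * scc - syc * scs) / det
    return math.hypot(a, b), math.atan2(-b, a)

# Synthetic test: amplitude 3.0, phase 0.7 rad, with measurement noise.
random.seed(1)
omega = 2 * math.pi * 25.0  # 25 Hz rotor speed (assumed value)
times = [i / 2000.0 for i in range(2000)]
resp = [3.0 * math.cos(omega * t + 0.7) + random.gauss(0, 0.2) for t in times]
amp, ph = estimate_unbalance(times, resp, omega)
print(amp, ph)
```

The full method in the paper additionally propagates a state-space model of the reduced-order rotor through a Kalman filter so the force can be tracked recursively; the batch fit above only captures the final amplitude/phase extraction step.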

  20. THE EQUIVALENCE OF AGE IN ANIMALS

    PubMed Central

    Brody, Samuel; Ragsdale, Arthur C.

    1922-01-01

1. A method of plotting growth curves is presented which is considered more useful than the usual method in bringing out a number of important phenomena such as the equivalence of age in different animals and differences in the shape and duration of corresponding growth cycles in different animals, and also in determining the age of maxima without resorting to complicated mathematical computations. 2. It is suggested that after the third cycle is past, the conceptional age of the maximum of the third cycle may be taken as the age of reference for estimating the equivalent physiological ages in different animals. Before the age of the third cycle, the maxima of the second and first cycles are most conveniently used as points of reference. 3. It is shown that the product of the conceptional age of the maximum of the third cycle by 13 gives a value which is, with the possible exception of man, very near to the normal duration of life of animals under the most favorable conditions of life. In other words, the equivalent physiological ages in different animals bear an approximately constant linear relation to the duration of their growth periods. 4. Attention is called to certain differences in the shape and duration of the corresponding growth cycles in different animals and to the effect of sex on these cycles. PMID:19871989

  1. An Equivalent Fracture Modeling Method

    NASA Astrophysics Data System (ADS)

    Li, Shaohua; Zhang, Shujuan; Yu, Gaoming; Xu, Aiyun

    2017-12-01

A 3D fracture network model is built from discrete fracture surfaces, which are simulated based on fracture length, dip, aperture, height, and so on. The area of interest in the Wumishan Formation of the Renqiu buried-hill reservoir is about 57 square kilometers, and the target strata are more than 2000 meters thick. Combined with the high fracture density, the fracture simulation and upscaling of the discrete fracture network model of the Wumishan Formation are computationally very intensive. To solve this problem, an equivalent fracture modeling method is proposed. First, taking the fracture interpretation data obtained from imaging logging and conventional logging as the basic data, a reservoir level model is established; then, under the constraint of the reservoir level model and with a fault-distance analysis model as the second variable, a fracture density model is established by the Sequential Gaussian Simulation method. The width, height and length of the fractures are increased while their density is decreased, so as to preserve similar porosity and permeability after upscaling the discrete fracture network model. In this way, the fracture model of the whole area of interest can be built within an acceptable time.

  2. Depositional architecture and sequence stratigraphy of the Upper Jurassic Hanifa Formation, central Saudi Arabia

    NASA Astrophysics Data System (ADS)

    El-Sorogy, Abdelbaset; Al-Kahtany, Khaled; Almadani, Sattam; Tawfik, Mohamed

    2018-03-01

To document the depositional architecture and sequence stratigraphy of the Upper Jurassic Hanifa Formation in central Saudi Arabia, three composite sections at Al-Abakkayn, Sadous and Maashabah mountains were examined, measured and analysed in thin section. Fourteen microfacies types were identified, from wackestones to boundstones, permitting the recognition of five lithofacies associations in a carbonate platform. The lithofacies associations range from low-energy, sponge-, foraminifer- and bioclast-bearing burrowed off-shoal deposits to moderate-energy lithoclastic, peloidal and bioclastic fore-shoal deposits in the lower part of the Hanifa, while the upper part is dominated by high-energy coral, ooidal and peloidal shoal deposits grading to moderate- to low-energy peloidal, stromatoporoid and other bioclastic back-shoal deposits. The studied Hanifa Formation exhibits an obvious cyclicity, recognizable from the vertical variations in lithofacies types. These microfacies types are arranged in two third-order sequences: the first is equivalent to the lower part of the Hanifa Formation (Hawtah member), while the second is equivalent to the upper part (Ulayyah member). Within these two sequences there are three to six fourth-order high-frequency sequences, respectively, in the studied sections.

  3. On methods of estimating cosmological bulk flows

    NASA Astrophysics Data System (ADS)

    Nusser, Adi

    2016-01-01

    We explore similarities and differences between several estimators of the cosmological bulk flow, B, from the observed radial peculiar velocities of galaxies. A distinction is made between two theoretical definitions of B as a dipole moment of the velocity field weighted by a radial window function. One definition involves the three-dimensional (3D) peculiar velocity, while the other is based on its radial component alone. Different methods attempt at inferring B for either of these definitions which coincide only for the case of a velocity field which is constant in space. We focus on the Wiener Filtering (WF) and the Constrained Minimum Variance (CMV) methodologies. Both methodologies require a prior expressed in terms of the radial velocity correlation function. Hoffman et al. compute B in Top-Hat windows from a WF realization of the 3D peculiar velocity field. Feldman et al. infer B directly from the observed velocities for the second definition of B. The WF methodology could easily be adapted to the second definition, in which case it will be equivalent to the CMV with the exception of the imposed constraint. For a prior with vanishing correlations or very noisy data, CMV reproduces the standard Maximum Likelihood estimation for B of the entire sample independent of the radial weighting function. Therefore, this estimator is likely more susceptible to observational biases that could be present in measurements of distant galaxies. Finally, two additional estimators are proposed.
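The standard maximum-likelihood estimator mentioned above has a simple closed form: with radial unit vectors r̂_i and measured radial velocities u_i = B·r̂_i + noise, B solves the 3x3 normal equations (Σ w_i r̂_i r̂_iᵀ) B = Σ w_i u_i r̂_i with w_i = 1/σ_i². A sketch with synthetic data follows; all numbers (the true flow, the survey size, the noise level) are invented for the illustration.

```python
import math
import random

def ml_bulk_flow(rhats, us, sigmas):
    """Maximum-likelihood bulk flow from radial peculiar velocities:
    solve (sum w rhat rhat^T) B = sum w u rhat, with w = 1/sigma^2."""
    A = [[0.0] * 3 for _ in range(3)]
    c = [0.0] * 3
    for rhat, u, s in zip(rhats, us, sigmas):
        w = 1.0 / (s * s)
        for a in range(3):
            c[a] += w * u * rhat[a]
            for b in range(3):
                A[a][b] += w * rhat[a] * rhat[b]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    M = [row[:] + [ci] for row, ci in zip(A, c)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for k in range(i, 4):
                M[r][k] -= f * M[i][k]
    B = [0.0] * 3
    for i in (2, 1, 0):
        B[i] = (M[i][3] - sum(M[i][k] * B[k] for k in range(i + 1, 3))) / M[i][i]
    return B

# Synthetic survey: true bulk flow (300, -150, 80) km/s, isotropic sky.
random.seed(11)
B_true = (300.0, -150.0, 80.0)
rhats, us, sigmas = [], [], []
for _ in range(2000):
    cos_t = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    rhat = (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
    sigma = 100.0  # assumed per-galaxy velocity error, km/s
    u = sum(bb * rr for bb, rr in zip(B_true, rhat)) + random.gauss(0, sigma)
    rhats.append(rhat); us.append(u); sigmas.append(sigma)

B_hat = ml_bulk_flow(rhats, us, sigmas)
print(B_hat)
```

As the abstract notes, this estimator ignores the velocity correlations that the WF and CMV approaches build into their priors, which is precisely why it is more exposed to observational biases in distant samples.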

  4. Estimation of two ordered mean residual lifetime functions.

    PubMed

    Ebrahimi, N

    1993-06-01

    In many statistical studies involving failure data, biometric mortality data, and actuarial data, mean residual lifetime (MRL) function is of prime importance. In this paper we introduce the problem of nonparametric estimation of a MRL function on an interval when this function is bounded from below by another such function (known or unknown) on that interval, and derive the corresponding two functional estimators. The first is to be used when there is a known bound, and the second when the bound is another MRL function to be estimated independently. Both estimators are obtained by truncating the empirical estimator discussed by Yang (1978, Annals of Statistics 6, 112-117). In the first case, it is truncated at a known bound; in the second, at a point somewhere between the two empirical estimates. Consistency of both estimators is proved, and a pointwise large-sample distribution theory of the first estimator is derived.
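The first (known-bound) case above can be sketched directly: compute Yang's empirical MRL estimate, then truncate it at the known lower bound. The sketch below is an illustration under assumed data, using an exponential sample whose true MRL is constant at 1/λ by memorylessness.

```python
import random

def empirical_mrl(sample, t):
    """Empirical mean residual life at t: average of (x - t) over x > t."""
    tail = [x - t for x in sample if x > t]
    return sum(tail) / len(tail) if tail else 0.0

def bounded_mrl(sample, t, lower_bound):
    """Truncated estimator for the known-bound case: never report a value
    below the bound."""
    return max(empirical_mrl(sample, t), lower_bound)

random.seed(7)
# Exponential(rate=0.5) lifetimes: true MRL is 2 at every t.
sample = [random.expovariate(0.5) for _ in range(20000)]
print(empirical_mrl(sample, 1.0))     # close to 2
print(bounded_mrl(sample, 1.0, 2.5))  # truncated up to the bound
```

In the second case of the paper, where the bound is itself an MRL function, the known constant would be replaced by an independently estimated empirical MRL and the truncation point chosen between the two empirical estimates.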

  5. A screening level risk assessment of the indirect impacts from the Columbus Waste to Energy facility in Columbus, Ohio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lorber, M.; Cleverly, D.; Schaum, J.

    1996-12-31

Testing for emissions of dioxins from the stack of the Columbus, Ohio Waste to Energy (WTE) municipal solid waste combustion facility in 1992 implied that dioxin emissions could approach 1,000 grams of dioxin toxic equivalents (TEQs) per year. The incinerator has been in operation since the early 1980s. Several activities to further evaluate or curtail emissions were conducted by local, state and federal agencies in 1994. Also in that year, US EPA's Region 5 issued an emergency order under Section 7003 of RCRA requiring the facility to install Maximum Achievable Control Technology (MACT). As part of their justification for this emergency order, Region 5 used a screening level risk assessment of potential indirect impacts. This paper describes this assessment. The exposure setting is a hypothetical dairy farm where individuals on the farm obtain their beef, milk, and vegetables from home sources. A 70-year exposure scenario is considered, which includes 45 years of facility operation at the pre- and post-MACT emission rates, followed by 25 years of impact due to residual soil concentrations. Soil dermal contact, inhalation, and breast milk exposures were also considered for this assessment. The source term, or dioxin loadings to this setting, were derived from air dispersion modeling of emissions from the Columbus WTE. A key finding of the assessment was that exposures to dioxin in beef and milk dominated the estimated risks, with excess cancer risk from these two pathways estimated at 2.8 × 10⁻⁴. A second key finding was that over 90% of a lifetime of impact from these two pathways, and the inhalation and vegetable ingestion pathways, has already occurred due to pre-MACT emissions.

  6. Image processing techniques revealing the relationship between the field-measured ambient gamma dose equivalent rate and geological conditions at a granitic area, Velence Mountains, Hungary

    NASA Astrophysics Data System (ADS)

    Beltran Torres, Silvana; Petrik, Attila; Zsuzsanna Szabó, Katalin; Jordan, Gyozo; Szabó, Csaba

    2017-04-01

In order to estimate the annual dose that the public receives from natural radioactivity, the identification of potential risk areas is required, which, in turn, necessitates understanding the relationship between the spatial distribution of natural radioactivity and the geogenic risk factors (e.g., rock types, dykes, faults, soil conditions, etc.). A detailed spatial analysis of the ambient gamma dose equivalent rate was performed on the western side of the Velence Mountains, the largest outcropped granitic area in Hungary. In order to assess the role of local geology in the spatial distribution of ambient gamma dose rates, field measurements were carried out at ground level at 300 sites along a 250 m × 250 m regular grid over a total surface of 14.7 km². Digital image processing methods were applied to identify anomalies, heterogeneities and spatial patterns in the measured gamma dose rates, including local maxima and minima determination, digital cross sections, gradient magnitude and gradient direction, second derivative profile curvature, local variability, lineament density, 2D autocorrelation and directional variogram analyses. Statistical inference showed that different gamma dose rate levels are associated with the rock types (i.e., Carboniferous granite; Pleistocene colluvial, proluvial and deluvial sediments and talus; and Pannonian sand and pebble), with the highest level, including outlying values, on the Carboniferous granite. Moreover, digital image processing revealed that linear gamma dose rate spatial features are parallel to the SW-NE dyke system and possibly to the NW-SE main fractures. The results of this study underline the importance of understanding the role of geogenic risk factors influencing the ambient gamma dose rate received by the public. The study also demonstrates the power of image processing techniques for the identification of spatial patterns in field-measured geogenic radiation.

  7. Linear and non-linear regression analysis for the sorption kinetics of methylene blue onto activated carbon.

    PubMed

    Kumar, K Vasanth

    2006-10-11

Batch kinetic experiments were carried out for the sorption of methylene blue onto activated carbon. The experimental kinetics were fitted to pseudo first-order and pseudo second-order models by linear and non-linear methods. Five different types of the Ho pseudo second-order expression are discussed. A comparison was made between the linear least-squares method and a trial-and-error non-linear method for estimating the pseudo second-order rate parameters. The sorption process was found to follow both pseudo first-order and pseudo second-order kinetics. The present investigation showed that it is inappropriate to use the type 1 expression proposed by Ho and the expression proposed by Blanchard et al. for predicting the kinetic rate constants and the initial sorption rate for the studied system. Three possible alternative linear expressions (type 2 to type 4) were proposed to better predict the initial sorption rate and kinetic rate constants for the studied system (methylene blue/activated carbon). The linear method was found only to check the hypothesis rather than to verify the kinetic model; non-linear regression was found to be the more appropriate method for determining the rate parameters.
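The mechanics of the type 1 linearization discussed above can be sketched as follows: the pseudo second-order model q_t = k·q_e²·t/(1 + k·q_e·t) rearranges to t/q_t = 1/(k·q_e²) + t/q_e, a straight line in t whose slope and intercept give q_e and k. The data below are generated exactly from the model with invented parameter values, so the fit recovers them; with real, noisy data the linearization distorts the error structure, which is the paper's point.

```python
def pso_qt(t, k, qe):
    """Pseudo second-order uptake: q_t = k qe^2 t / (1 + k qe t)."""
    return k * qe * qe * t / (1.0 + k * qe * t)

def fit_pso_type1(ts, qs):
    """Ho's type 1 linearization: t/q = 1/(k qe^2) + t/qe.
    The least-squares line of y = t/q versus t has slope 1/qe and
    intercept 1/(k qe^2)."""
    ys = [t / q for t, q in zip(ts, qs)]
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    intercept = my - slope * mt
    qe = 1.0 / slope
    k = slope * slope / intercept  # = 1 / (intercept * qe^2)
    return k, qe

# Exact synthetic data (k and qe values are assumptions for illustration).
ts = [1.0 * i for i in range(1, 31)]
qs = [pso_qt(t, k=0.02, qe=40.0) for t in ts]
k_hat, qe_hat = fit_pso_type1(ts, qs)
print(k_hat, qe_hat)
```

A non-linear fit would instead minimize the sum of squared residuals of q_t directly, which weights all data points equally rather than amplifying errors at small q_t.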

  8. Tuning the tetrahedrality of the hydrogen-bonded network of water: Comparison of the effects of pressure and added salts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, Saurav, E-mail: saurav7188@gmail.com, E-mail: cyz118212@chemistry.iitd.ac.in; Chakravarty, Charusita

Experiments and simulations demonstrate some intriguing equivalences in the effect of pressure and electrolytes on the hydrogen-bonded network of water. Here, we examine the extent and nature of equivalence effects between pressure and salt concentration using relationships between structure, entropy, and transport properties based on two key ideas: first, the approximation of the excess entropy of the fluid by the contribution due to the atom-atom pair correlation functions and second, Rosenfeld-type excess entropy scaling relations for transport properties. We perform molecular dynamics simulations of LiCl–H2O and bulk SPC/E water spanning the concentration range 0.025–0.300 mole fraction of LiCl at 1 atm and pressure range from 0 to 7 GPa, respectively. The temperature range considered was from 225 to 350 K for both the systems. To establish that the time-temperature-transformation behaviour of electrolyte solutions and water is equivalent, we use the additional observation based on our simulations that the pair entropy behaves as a near-linear function of pressure in bulk water and of composition in LiCl–H2O. This allows for the alignment of pair entropy isotherms and allows for a simple mapping of pressure onto composition. Rosenfeld-scaling implies that pair entropy is semiquantitatively related to the transport properties. At a given temperature, equivalent state points in bulk H2O and LiCl–H2O (at 1 atm) are defined as those for which the pair entropy, diffusivity, and viscosity are nearly identical. The microscopic basis for this equivalence lies in the ability of both pressure and ions to convert the liquid phase into a pair-dominated fluid, as demonstrated by the O–O–O angular distribution within the first coordination shell of a water molecule. There are, however, sharp differences in local order and mechanisms for the breakdown of tetrahedral order by pressure and electrolytes. Increasing pressure increases orientational disorder within the first neighbour shell while addition of ions shifts local orientational order from tetrahedral to close-packed as water molecules get incorporated in ionic hydration shells. The variations in local order within the first hydration shell may underlie ion-specific effects, such as the Hofmeister series.

  9. Tuning the tetrahedrality of the hydrogen-bonded network of water: Comparison of the effects of pressure and added salts

    NASA Astrophysics Data System (ADS)

    Prasad, Saurav; Chakravarty, Charusita

    2016-06-01

Experiments and simulations demonstrate some intriguing equivalences in the effect of pressure and electrolytes on the hydrogen-bonded network of water. Here, we examine the extent and nature of equivalence effects between pressure and salt concentration using relationships between structure, entropy, and transport properties based on two key ideas: first, the approximation of the excess entropy of the fluid by the contribution due to the atom-atom pair correlation functions and second, Rosenfeld-type excess entropy scaling relations for transport properties. We perform molecular dynamics simulations of LiCl-H2O and bulk SPC/E water spanning the concentration range 0.025-0.300 mole fraction of LiCl at 1 atm and pressure range from 0 to 7 GPa, respectively. The temperature range considered was from 225 to 350 K for both the systems. To establish that the time-temperature-transformation behaviour of electrolyte solutions and water is equivalent, we use the additional observation based on our simulations that the pair entropy behaves as a near-linear function of pressure in bulk water and of composition in LiCl-H2O. This allows for the alignment of pair entropy isotherms and allows for a simple mapping of pressure onto composition. Rosenfeld-scaling implies that pair entropy is semiquantitatively related to the transport properties. At a given temperature, equivalent state points in bulk H2O and LiCl-H2O (at 1 atm) are defined as those for which the pair entropy, diffusivity, and viscosity are nearly identical. The microscopic basis for this equivalence lies in the ability of both pressure and ions to convert the liquid phase into a pair-dominated fluid, as demonstrated by the O-O-O angular distribution within the first coordination shell of a water molecule. There are, however, sharp differences in local order and mechanisms for the breakdown of tetrahedral order by pressure and electrolytes. Increasing pressure increases orientational disorder within the first neighbour shell while addition of ions shifts local orientational order from tetrahedral to close-packed as water molecules get incorporated in ionic hydration shells. The variations in local order within the first hydration shell may underlie ion-specific effects, such as the Hofmeister series.

  10. Blind channel estimation and deconvolution in colored noise using higher-order cumulants

    NASA Astrophysics Data System (ADS)

    Tugnait, Jitendra K.; Gummadavelli, Uma

    1994-10-01

Existing approaches to blind channel estimation and deconvolution (equalization) focus exclusively on channel or inverse-channel impulse response estimation. It is well known that the quality of the deconvolved output depends crucially upon the noise statistics as well. Typically it is assumed that the noise is white and the signal-to-noise ratio is known. In this paper we remove these restrictions. Both the channel impulse response and the noise model are estimated from the higher-order (e.g., fourth-order) cumulant function and the (second-order) correlation function of the received data via a least-squares cumulant/correlation matching criterion. It is assumed that the noise higher-order cumulant function vanishes (e.g., Gaussian noise, as is the case for digital communications). Consistency of the proposed approach is established under certain mild sufficient conditions. The approach is illustrated via simulation examples involving blind equalization of digital communications signals.

  11. Practical theories for service life prediction of critical aerospace structural components

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Monaghan, Richard C.; Jackson, Raymond H.

    1992-01-01

    A new second-order theory was developed for predicting the service lives of aerospace structural components. The predictions based on this new theory were compared with those based on the Ko first-order theory and the classical theory of service life predictions. The new theory gives very accurate service life predictions. An equivalent constant-amplitude stress cycle method was proposed for representing the random load spectrum for crack growth calculations. This method predicts the most conservative service life. The proposed use of minimum detectable crack size, instead of proof load established crack size as an initial crack size for crack growth calculations, could give a more realistic service life.

  12. A modified Lorentz theory as a test theory of special relativity

    NASA Technical Reports Server (NTRS)

    Chang, T.; Torr, D. G.; Gagnon, D. R.

    1988-01-01

    Attention has been given recently to a modified Lorentz theory (MLT) that is based on the generalized Galilean transformation. Some explicit formulas within the framework of MLT, dealing with the one-way velocity of light, slow-clock transport, and the Doppler effect are derived. A number of typical experiments are analyzed on this basis. Results indicate that the empirical equivalence between MLT and special relativity is still maintained to second order terms. The results of previous works that predict that the MLT might be distinguished from special relativity at the third order by Doppler centrifuge tests capable of a fractional frequency detection threshold of 10 to the -15th are confirmed.

  13. Effects of Optical Blur Reduction on Equivalent Intrinsic Blur

    PubMed Central

    Valeshabad, Ali Kord; Wanek, Justin; McAnany, J. Jason; Shahidi, Mahnaz

    2015-01-01

    Purpose To determine the effect of optical blur reduction on equivalent intrinsic blur, an estimate of the blur within the visual system, by comparing optical and equivalent intrinsic blur before and after adaptive optics (AO) correction of wavefront error. Methods Twelve visually normal individuals (age; 31 ± 12 years) participated in this study. Equivalent intrinsic blur (σint) was derived using a previously described model. Optical blur (σopt) due to high-order aberrations was quantified by Shack-Hartmann aberrometry and minimized using AO correction of wavefront error. Results σopt and σint were significantly reduced and visual acuity (VA) was significantly improved after AO correction (P ≤ 0.004). Reductions in σopt and σint were linearly dependent on the values before AO correction (r ≥ 0.94, P ≤ 0.002). The reduction in σint was greater than the reduction in σopt, although it was marginally significant (P = 0.05). σint after AO correlated significantly with σint before AO (r = 0.92, P < 0.001) and the two parameters were related linearly with a slope of 0.46. Conclusions Reduction in equivalent intrinsic blur was greater than the reduction in optical blur due to AO correction of wavefront error. This finding implies that VA in subjects with high equivalent intrinsic blur can be improved beyond that expected from the reduction in optical blur alone. PMID:25785538

  14. Effects of optical blur reduction on equivalent intrinsic blur.

    PubMed

    Kord Valeshabad, Ali; Wanek, Justin; McAnany, J Jason; Shahidi, Mahnaz

    2015-04-01

    To determine the effect of optical blur reduction on equivalent intrinsic blur, an estimate of the blur within the visual system, by comparing optical and equivalent intrinsic blur before and after adaptive optics (AO) correction of wavefront error. Twelve visually normal subjects (mean [±SD] age, 31 [±12] years) participated in this study. Equivalent intrinsic blur (σint) was derived using a previously described model. Optical blur (σopt) caused by high-order aberrations was quantified by Shack-Hartmann aberrometry and minimized using AO correction of wavefront error. σopt and σint were significantly reduced and visual acuity was significantly improved after AO correction (p ≤ 0.004). Reductions in σopt and σint were linearly dependent on the values before AO correction (r ≥ 0.94, p ≤ 0.002). The reduction in σint was greater than the reduction in σopt, although the difference was only marginally significant (p = 0.05). σint after AO correlated significantly with σint before AO (r = 0.92, p < 0.001), and the two parameters were related linearly with a slope of 0.46. The reduction in equivalent intrinsic blur was greater than the reduction in optical blur after AO correction of wavefront error. This finding implies that visual acuity in subjects with high equivalent intrinsic blur can be improved beyond that expected from the reduction in optical blur alone.
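    As a purely illustrative sketch of the bookkeeping involved, assume the common quadrature model in which measured total blur combines optical and intrinsic blur as σ_total² = σ_opt² + σ_int²; whether this matches the authors' previously described model is an assumption, and the numbers below are hypothetical:

```python
# Sketch: backing out intrinsic blur from total and optical blur under an
# assumed quadrature model sigma_total^2 = sigma_opt^2 + sigma_int^2.
# All values (in arcmin) are illustrative, not data from the study.
import math

def intrinsic_blur(sigma_total, sigma_opt):
    """Back out intrinsic blur from total and optical blur."""
    return math.sqrt(sigma_total ** 2 - sigma_opt ** 2)

before = intrinsic_blur(sigma_total=1.30, sigma_opt=0.50)  # pre-AO
after = intrinsic_blur(sigma_total=0.70, sigma_opt=0.20)   # post-AO
print(before, after)
```

Under this model, a reduction in measured total blur that outpaces the reduction in optical blur shows up directly as a drop in the intrinsic term.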

  15. High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media: 2. Transport results

    USGS Publications Warehouse

    Naff, R.L.; Haley, D.F.; Sudicky, E.A.

    1998-01-01

    In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. The transport simulations assume a finite pulse input of conservative tracer, and the numerical technique endeavors to simulate tracer spreading realistically as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimating large-time macrodispersivities from cloud second-moment data, and for approximating the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possibly ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are also reported.
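    The method-of-moments step described above can be sketched in a few lines: the cloud centroid is the first spatial moment, the spread is the second central moment, and an apparent longitudinal macrodispersivity follows from their growth rates. The particle coordinates and times below are hypothetical, not simulation output:

```python
# Sketch: cloud moments and an apparent longitudinal macrodispersivity
# A_L = D_L / v, with D_L = (1/2) d(sigma^2)/dt and v = d(centroid)/dt,
# estimated by finite differences between two hypothetical snapshots.
from statistics import mean, pvariance

def cloud_moments(xs):
    """First moment (centroid) and second central moment (spread)."""
    return mean(xs), pvariance(xs)

def macrodispersivity(xs_t1, xs_t2, t1, t2):
    m1, s1 = cloud_moments(xs_t1)
    m2, s2 = cloud_moments(xs_t2)
    v = (m2 - m1) / (t2 - t1)          # mean cloud velocity
    dl = 0.5 * (s2 - s1) / (t2 - t1)   # growth rate of the spread
    return dl / v

# Hypothetical particle x-coordinates at t = 10 and t = 20 (arbitrary units)
xs_a = [1.0, 2.0, 3.0, 4.0, 5.0]
xs_b = [11.5, 12.0, 13.0, 14.0, 15.5]
print(macrodispersivity(xs_a, xs_b, 10.0, 20.0))
```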

  16. Improvements to Fidelity, Generation and Implementation of Physics-Based Lithium-Ion Reduced-Order Models

    NASA Astrophysics Data System (ADS)

    Rodriguez Marco, Albert

    Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but researchers are increasingly focusing their attention on physics-based models because of their greater predictive capability. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible; we favor methods based on a transfer-function approach to battery-cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption, which is a fundamental necessity in order to make transfer functions, and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to render an approach to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing the nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics when operated near the setpoint at which they were generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature, and C-rate) in order to extend the cell operating range. This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) across the cell's state-of-charge, temperature, and C-rate range.

  17. Pedigree-based estimation of covariance between dominance deviations and additive genetic effects in closed rabbit lines considering inbreeding and using a computationally simpler equivalent model.

    PubMed

    Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M

    2017-06-01

    Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied a model, which we call "full dominance," for estimating dominance and additive genetic variances and their covariance from pedigree and phenotypic data. Published estimates of this kind are very scarce in both livestock and wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using standard algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number of young weaned at the Polytechnic University of Valencia. Pedigrees and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 in the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large, increases with inbreeding, and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and with theoretical considerations. © 2017 Blackwell Verlag GmbH.
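    The pedigree recursion mentioned in the first step can be illustrated with the classical kinship (coancestry) recursion, from which an individual's inbreeding coefficient is the kinship of its parents. This is only the simple additive-relationship recursion, not the condensed identity coefficients of the paper, and the pedigree is hypothetical:

```python
# Sketch: recursive pedigree-based kinship and inbreeding coefficients.
# F_x equals the kinship of x's sire and dam; founders are assumed
# unrelated and non-inbred. Pedigree below is a made-up example.
from functools import lru_cache

# individual -> (sire, dam); None means unknown (base population)
PED = {
    "A": (None, None), "B": (None, None),
    "C": ("A", "B"), "D": ("A", "B"),
    "E": ("C", "D"),  # E's parents are full sibs
}

@lru_cache(maxsize=None)
def kinship(x, y):
    if x is None or y is None:
        return 0.0
    if x == y:
        s, d = PED[x]
        return 0.5 * (1.0 + kinship(s, d))
    # expand the younger individual's parents (dict order = age order here)
    order = list(PED)
    if order.index(x) < order.index(y):
        x, y = y, x
    s, d = PED[x]
    return 0.5 * (kinship(s, y) + kinship(d, y))

def inbreeding(x):
    s, d = PED[x]
    return kinship(s, d)

print(inbreeding("E"))  # full-sib mating gives F = 0.25
```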

  18. Spline Laplacian estimate of EEG potentials over a realistic magnetic resonance-constructed scalp surface model.

    PubMed

    Babiloni, F; Babiloni, C; Carducci, F; Fattorini, L; Onorati, P; Urbano, A

    1996-04-01

    This paper presents a realistic Laplacian (RL) estimator based on a tensorial formulation of the surface Laplacian (SL) that uses the 2-D thin plate spline function to obtain a mathematical description of a realistic scalp surface. Because of this tensorial formulation, the RL does not need an orthogonal reference frame placed on the realistic scalp surface. In simulation experiments the RL was estimated with an increasing number of "electrodes" (up to 256) on a mathematical scalp model, the analytic Laplacian being used as a reference. Second- and third-order spherical spline Laplacian estimates were examined for comparison. Noise of increasing magnitude and spatial frequency was added to the simulated potential distributions. Movement-related potentials and somatosensory evoked potentials sampled with 128 electrodes were used to estimate the RL on a realistically shaped, MR-constructed model of the subject's scalp surface. The RL was also estimated on a mathematical spherical scalp model computed from the real scalp surface. Simulation experiments showed that the performances of the RL estimator were similar to those of the second- and third-order spherical spline Laplacians. Furthermore, the information content of scalp-recorded potentials was clearly better when the RL estimator computed the SL of the potential on an MR-constructed scalp surface model.

  19. Testing the Equivalence Principle in an Einstein Elevator: Detector Dynamics and Gravity Perturbations

    NASA Technical Reports Server (NTRS)

    Hubbard, Dorthy (Technical Monitor); Lorenzini, E. C.; Shapiro, I. I.; Cosmo, M. L.; Ashenberg, J.; Parzianello, G.; Iafolla, V.; Nozzoli, S.

    2003-01-01

    We discuss specific, recent advances in the analysis of an experiment to test the Equivalence Principle (EP) in free fall. A differential accelerometer detector with two proof masses of different materials free falls inside an evacuated capsule previously released from a stratospheric balloon. The detector spins slowly about its horizontal axis during the fall. An EP violation signal (if present) will manifest itself at the rotational frequency of the detector. The detector operates in a quiet environment as it slowly moves with respect to the co-moving capsule. There are, however, gravitational and dynamical noise contributions that need to be evaluated in order to define key requirements for this experiment. Specifically, higher-order mass moments of the capsule contribute errors to the differential acceleration output with components at the spin frequency which need to be minimized. The dynamics of the free falling detector (in its present design) has been simulated in order to estimate the tolerable errors at release which, in turn, define the release mechanism requirements. Moreover, the study of the higher-order mass moments for a worst-case position of the detector package relative to the cryostat has led to the definition of requirements on the shape and size of the proof masses.

  20. A comparison of foveated acquisition and tracking performance relative to uniform resolution approaches

    NASA Astrophysics Data System (ADS)

    Dubuque, Shaun; Coffman, Thayne; McCarley, Paul; Bovik, A. C.; Thomas, C. William

    2009-05-01

    Foveated imaging has been explored for compression and tele-presence, but gaps exist in the study of foveated imaging applied to acquisition and tracking systems. Results are presented from two sets of experiments comparing simple foveated and uniform resolution targeting (acquisition and tracking) algorithms. The first experiments measure acquisition performance when locating Gabor wavelet targets in noise, with fovea placement driven by a mutual information measure. The foveated approach is shown to have lower detection delay than a notional uniform resolution approach when using video that consumes equivalent bandwidth. The second experiments compare the accuracy of target position estimates from foveated and uniform resolution tracking algorithms. A technique is developed to select foveation parameters that minimize error in Kalman filter state estimates. Foveated tracking is shown to consistently outperform uniform resolution tracking on an abstract multiple target task when using video that consumes equivalent bandwidth. Performance is also compared to uniform resolution processing without bandwidth limitations. In both experiments, superior performance is achieved at a given bandwidth by foveated processing because limited resources are allocated intelligently to maximize operational performance. These findings indicate the potential for operational performance improvements over uniform resolution systems in both acquisition and tracking tasks.

  1. Algorithms for computing solvents of unilateral second-order matrix polynomials over prime finite fields using lambda-matrices

    NASA Astrophysics Data System (ADS)

    Burtyka, Filipp

    2018-01-01

    The paper considers algorithms for finding diagonalizable and non-diagonalizable roots (so-called solvents) of an arbitrary monic unilateral second-order matrix polynomial over a prime finite field. These algorithms are based on polynomial matrices (lambda-matrices), and extend existing general methods for computing solvents of matrix polynomials over the field of complex numbers. We analyze how the techniques for complex numbers can be adapted to finite fields and estimate the asymptotic complexity of the resulting algorithms.
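    To make the notion of a solvent concrete: a right solvent X satisfies X² + A1·X + A0 = 0 (mod p). The brute-force search below is not the paper's lambda-matrix algorithm (avoiding exhaustive search is the point of the paper); it is only feasible for tiny p and matrix size, and the example coefficients are hypothetical:

```python
# Sketch: brute-force enumeration of 2x2 right solvents of the monic
# second-order matrix polynomial X^2 + A1*X + A0 over GF(p).
from itertools import product

def mat_mul(a, b, p):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def mat_add(a, b, p):
    n = len(a)
    return [[(a[i][j] + b[i][j]) % p for j in range(n)] for i in range(n)]

def solvents(a1, a0, p):
    """All 2x2 right solvents of X^2 + A1 X + A0 over GF(p), by search."""
    zero = [[0, 0], [0, 0]]
    out = []
    for entries in product(range(p), repeat=4):
        x = [list(entries[:2]), list(entries[2:])]
        val = mat_add(mat_add(mat_mul(x, x, p), mat_mul(a1, x, p), p), a0, p)
        if val == zero:
            out.append(x)
    return out

# Hypothetical example over GF(2): A1 = I, A0 = 0, i.e. X^2 + X = 0,
# whose solvents are exactly the idempotent matrices over GF(2).
a1 = [[1, 0], [0, 1]]
a0 = [[0, 0], [0, 0]]
print(solvents(a1, a0, 2))
```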

  2. Space Object Maneuver Detection Algorithms Using TLE Data

    NASA Astrophysics Data System (ADS)

    Pittelkau, M.

    2016-09-01

    An important aspect of Space Situational Awareness (SSA) is detection of deliberate and accidental orbit changes of space objects. Although space surveillance systems detect orbit maneuvers within their tracking algorithms, maneuver data are not readily disseminated for general use. However, two-line element (TLE) data are available and can be used to detect maneuvers of space objects. This work attempts to improve upon existing TLE-based maneuver detection algorithms. Three adaptive maneuver detection algorithms are developed and evaluated. The first is a fading-memory Kalman filter, which is equivalent to a sliding-window least-squares polynomial fit but computationally more efficient and adaptive to the noise in the TLE data. The second algorithm is based on a sample cumulative distribution function (CDF) computed from a histogram of the magnitude-squared |ΔV|^2 of change-in-velocity vectors (ΔV), which are computed from the TLE data. A maneuver detection threshold is computed from the median estimated from the CDF, or from the CDF and a specified probability of false alarm. The third algorithm is a median filter, the simplest of a class of nonlinear filters called order-statistics filters within the theory of robust statistics. The output of the median filter is practically insensitive to outliers, or large maneuvers. The median of the |ΔV|^2 data is proportional to the variance of the ΔV, so the variance is estimated from the output of the median filter. A maneuver is detected when the input data exceed a constant times the estimated variance.
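    The third (median-filter) detector lends itself to a short sketch: the running median of the |ΔV|^2 series gives a robust scale estimate, and a sample is flagged when it exceeds a constant times that estimate. The window length, threshold constant k, and data below are hypothetical choices, not values from the paper:

```python
# Sketch of a median-filter maneuver detector on a |dV|^2 series.
# The running median is insensitive to the outliers (maneuvers) we
# are trying to detect, so it provides a stable detection scale.
from statistics import median

def detect_maneuvers(dv2, window=11, k=25.0):
    """Flag indices where |dV|^2 exceeds k times the running median."""
    flags = []
    half = window // 2
    for i, v in enumerate(dv2):
        lo, hi = max(0, i - half), min(len(dv2), i + half + 1)
        scale = median(dv2[lo:hi])
        if scale > 0 and v > k * scale:
            flags.append(i)
    return flags

# Hypothetical |dV|^2 series: noise near 1 with one maneuver at index 6
series = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 60.0, 1.1, 0.9, 1.0, 1.2]
print(detect_maneuvers(series))  # the single large jump is flagged
```

Note that a mean-based scale estimate would be dragged upward by the maneuver itself; the median is not, which is the robustness property the abstract highlights.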

  3. Performance of Axial-Flow Supersonic Compressor on XJ-55-FF-1 Turbojet Engine. I - Preliminary Performance of Compressor

    NASA Technical Reports Server (NTRS)

    Hartmann, Melvin J.; Graham, Robert C.

    1949-01-01

    An investigation was conducted to determine the performance characteristics of the axial-flow supersonic compressor of the XJ-55-FF-1 turbojet engine. The test unit consisted of a row of inlet guide vanes and a supersonic rotor; the stator vanes after the rotor were omitted. The maximum pressure ratio produced in the single stage was 2.28 at an equivalent tip speed of 1814 feet per second, with an adiabatic efficiency of approximately 0.61 and an equivalent weight flow of 13.4 pounds per second. The maximum efficiency of 0.79 was obtained at an equivalent tip speed of 801 feet per second.

  4. Calculated organ doses for Mayak production association central hall using ICRP and MCNP.

    PubMed

    Choe, Dong-Ok; Shelkey, Brenda N; Wilde, Justin L; Walk, Heidi A; Slaughter, David M

    2003-03-01

    As part of an ongoing dose reconstruction project, equivalent organ dose rates from photons and neutrons were estimated using the energy spectra measured in the central hall above the graphite reactor core located in the Russian Mayak Production Association facility. Reconstruction of the work environment was necessary due to the lack of personal dosimeter data for neutrons in the time period prior to 1987. A typical worker scenario for the central hall was developed for the Monte Carlo Neutron Photon-4B (MCNP) code. The resultant equivalent dose rates for neutrons and photons were compared with the equivalent dose rates derived from calculations using the conversion coefficients in the International Commission on Radiological Protection Publications 51 and 74 in order to validate the model scenario for this Russian facility. The MCNP results were in good agreement with the results of the ICRP publications indicating the modeling scenario was consistent with actual work conditions given the spectra provided. The MCNP code will allow for additional orientations to accurately reflect source locations.

  5. New method for estimating bacterial cell abundances in natural samples by use of sublimation

    NASA Technical Reports Server (NTRS)

    Glavin, Daniel P.; Cleaves, H. James; Schubert, Michael; Aubrey, Andrew; Bada, Jeffrey L.

    2004-01-01

    We have developed a new method based on the sublimation of adenine from Escherichia coli to estimate bacterial cell counts in natural samples. To demonstrate this technique, several types of natural samples, including beach sand, seawater, deep-sea sediment, and two soil samples from the Atacama Desert, were heated to a temperature of 500 degrees C for several seconds under reduced pressure. The sublimate was collected on a cold finger, and the amount of adenine released from the samples was then determined by high-performance liquid chromatography with UV absorbance detection. Based on the total amount of adenine recovered from DNA and RNA in these samples, we estimated bacterial cell counts ranging from approximately 10^5 to 10^9 E. coli cell equivalents per gram. For most of these samples, the sublimation-based cell counts were in agreement with total bacterial counts obtained by traditional DAPI (4′,6-diamidino-2-phenylindole) staining.

  6. Estimating Isometric Tension of Finger Muscle Using Needle EMG Signals and the Twitch Contraction Model

    NASA Astrophysics Data System (ADS)

    Tachibana, Hideyuki; Suzuki, Takafumi; Mabuchi, Kunihiko

    We address a method for estimating the isometric muscle tension of fingers, as fundamental research toward a neural-signal-based finger prosthesis. We utilize needle electromyogram (EMG) signals, which carry approximately equivalent information to peripheral neural signals. The estimation algorithm comprises two convolution operations. The first convolution, between a normal distribution and a spike array detected from the needle EMG signals, estimates the probability density of spike timing in the muscle; here we hypothesize that each motor unit in a muscle fires independently according to the same probability density function. The second convolution is between the result of the first and the isometric twitch, viz., the impulse response of the motor unit. The result of the calculation is the sum of the estimated tensions of all muscle fibers, i.e., the muscle tension. We confirmed good correlation between the estimated and actual muscle tension, with correlation coefficients exceeding 0.9 in 59% of all trials and exceeding 0.8 in 89%.
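    A minimal sketch of the two-convolution pipeline described above, with hypothetical kernel widths, twitch time constant, and spike times (the abstract does not give the actual parameters):

```python
# Sketch: spike train -> (convolve with normal density) -> firing
# probability -> (convolve with twitch impulse response) -> tension.
import math

def convolve(x, h):
    """Plain discrete linear convolution."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def estimate_tension(spikes, dt=0.001, sigma=0.005, twitch_tc=0.030):
    # 1st convolution: spike train * normal density -> spike-time density
    gauss = [math.exp(-(t * dt) ** 2 / (2 * sigma ** 2))
             for t in range(-50, 51)]
    s = sum(gauss)
    gauss = [g / s for g in gauss]
    rate = convolve(spikes, gauss)
    # 2nd convolution with an assumed twitch shape (t/tc)*exp(1 - t/tc),
    # a common Hill-type impulse response peaking at t = tc
    twitch = [(t * dt / twitch_tc) * math.exp(1 - t * dt / twitch_tc)
              for t in range(200)]
    return convolve(rate, twitch)

# Hypothetical spike train: three spikes in a 300-sample (0.3 s) window
spikes = [0.0] * 300
for i in (50, 120, 190):
    spikes[i] = 1.0
tension = estimate_tension(spikes)
print(max(tension))
```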

  7. Two conditions for equivalence of 0-norm solution and 1-norm solution in sparse representation.

    PubMed

    Li, Yuanqing; Amari, Shun-Ichi

    2010-07-01

    In sparse representation, two important sparse solutions, the 0-norm and 1-norm solutions, have received much attention. The 0-norm solution is the sparsest; however, it is not easy to obtain. Although the 1-norm solution may not be the sparsest, it can be obtained easily by linear programming, and in many cases the 0-norm solution can be obtained by finding the 1-norm solution. Many discussions exist on the equivalence of the two sparse solutions. This paper analyzes two conditions for their equivalence. The first condition is necessary and sufficient, but difficult to verify. The second is necessary but not sufficient, yet easy to verify. We analyze the second condition within a stochastic framework and propose a variant, and then prove that the equivalence of the two sparse solutions holds with high probability under this variant. Furthermore, in the limit case where the 0-norm solution is extremely sparse, the second condition is also a sufficient condition with probability 1.

  8. Second-Order Vibrational Lineshapes from the Air/Water Interface.

    PubMed

    Ohno, Paul E; Wang, Hong-Fei; Paesani, Francesco; Skinner, James L; Geiger, Franz M

    2018-05-10

    We explore by means of modeling how absorptive-dispersive mixing between the second- and third-order terms modifies the imaginary χ_total^(2) responses from air/water interfaces under conditions of varying charge densities and ionic strength. To do so, we use published Im(χ^(2)) and χ^(3) spectra of the neat air/water interface that were obtained either from computations or experiments. We find that the χ_total^(2) spectral lineshapes corresponding to experimentally measured spectra contain significant contributions from both interfacial χ^(2) and bulk χ^(3) terms at interfacial charge densities equivalent to less than 0.005% of a monolayer of water molecules, especially in the 3100 to 3300 cm^-1 frequency region. Additionally, the role of short-range static dipole potentials is examined under conditions mimicking brine. Our results indicate that surface potentials, if indeed present at the air/water interface, manifest themselves spectroscopically in the tightly bonded H-bond network observable in the 3200 cm^-1 frequency range.

  9. Expanding the Nomological Net of the Pathological Narcissism Inventory: German Validation and Extension in a Clinical Inpatient Sample.

    PubMed

    Morf, Carolyn C; Schürch, Eva; Küfner, Albrecht; Siegrist, Philip; Vater, Aline; Back, Mitja; Mestel, Robert; Schröder-Abé, Michela

    2017-06-01

    The Pathological Narcissism Inventory (PNI) is a multidimensional measure for assessing grandiose and vulnerable features in narcissistic pathology. The aim of the present research was to construct and validate a German translation of the PNI and to provide further information on the PNI's nomological net. Findings from a first study confirm the psychometric soundness of the PNI and replicate its seven-factor first-order structure. A second-order structure was also supported but with several equivalent models. A second study investigating associations with a broad range of measures (DSM Axis I and II constructs, emotions, personality traits, interpersonal and dysfunctional behaviors, and well-being) supported the concurrent validity of the PNI. Discriminant validity with the Narcissistic Personality Inventory was also shown. Finally, in a third study an extension in a clinical inpatient sample provided further evidence that the PNI is a useful tool to assess the more pathological end of narcissism.

  10. SABRE-Relay: A Versatile Route to Hyperpolarization.

    PubMed

    Roy, Soumya S; Appleby, Kate M; Fear, Elizabeth J; Duckett, Simon B

    2018-03-01

    Signal Amplification by Reversible Exchange (SABRE) is used to switch on the latent singlet spin order of para-hydrogen (p-H2) so that it can hyperpolarize a substrate (sub = nicotinamide, nicotinate, niacin, pyrimidine, and pyrazine). The substrate then reacts reversibly with [Pt(OTf)2(bis-diphenylphosphinopropane)] by displacing OTf– to form [Pt(OTf)(sub)(bis-diphenylphosphinopropane)]OTf. The 31P NMR signals of these metal complexes prove to be enhanced when the substrate possesses an accessible singlet state or long-lived Zeeman polarization. In the case of pyrazine, the corresponding 31P signal was 105 ± 8 times larger than expected, which equated to an 8 h reduction in total scan time for an equivalent signal-to-noise ratio under normal acquisition conditions. Hence, p-H2 derived spin order is successfully relayed into a second metal complex via a suitable polarization carrier (sub). When fully developed, we expect this route involving a second catalyst to successfully hyperpolarize many classes of substrates that are not amenable to the original SABRE method.

  11. SABRE-Relay: A Versatile Route to Hyperpolarization

    PubMed Central

    2018-01-01

    Signal Amplification by Reversible Exchange (SABRE) is used to switch on the latent singlet spin order of para-hydrogen (p-H2) so that it can hyperpolarize a substrate (sub = nicotinamide, nicotinate, niacin, pyrimidine, and pyrazine). The substrate then reacts reversibly with [Pt(OTf)2(bis-diphenylphosphinopropane)] by displacing OTf– to form [Pt(OTf)(sub)(bis-diphenylphosphinopropane)]OTf. The 31P NMR signals of these metal complexes prove to be enhanced when the substrate possesses an accessible singlet state or long-lived Zeeman polarization. In the case of pyrazine, the corresponding 31P signal was 105 ± 8 times larger than expected, which equated to an 8 h reduction in total scan time for an equivalent signal-to-noise ratio under normal acquisition conditions. Hence, p-H2 derived spin order is successfully relayed into a second metal complex via a suitable polarization carrier (sub). When fully developed, we expect this route involving a second catalyst to successfully hyperpolarize many classes of substrates that are not amenable to the original SABRE method. PMID:29432020

  12. Building unbiased estimators from non-gaussian likelihoods with application to shear estimation

    DOE PAGES

    Madhavacheril, Mathew S.; McDonald, Patrick; Sehgal, Neelima; ...

    2015-01-15

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming the choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first-order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g| = 0.2.
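    A one-dimensional toy version of the construction may help: expand the likelihood around a fiducial value and correct it with the inverse Fisher information times the score. For a Gaussian mean with known variance, the score is Σ(x − θ)/σ² and the Fisher information is n/σ², so the first-order estimator lands exactly on the sample mean regardless of the fiducial choice, which is the unbiasedness property at work. The data below are hypothetical:

```python
# Sketch: first-order estimator theta_hat = theta_fid + F^{-1} * score
# for a Gaussian mean with known variance. The result is independent
# of the fiducial value in this special case.
def first_order_estimate(data, theta_fid, sigma=1.0):
    n = len(data)
    score = sum(x - theta_fid for x in data) / sigma ** 2  # d(log L)/d(theta)
    fisher = n / sigma ** 2                                # Fisher information
    return theta_fid + score / fisher

data = [0.8, 1.1, 1.3, 0.9, 1.4]
print(first_order_estimate(data, theta_fid=0.0))  # sample mean
print(first_order_estimate(data, theta_fid=5.0))  # same value
```

For non-Gaussian likelihoods this fiducial independence no longer holds exactly, which is why the paper pushes the expansion to higher order.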

  13. Building unbiased estimators from non-Gaussian likelihoods with application to shear estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madhavacheril, Mathew S.; Sehgal, Neelima; McDonald, Patrick

    2015-01-01

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming the choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first-order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g| = 0.2.

  14. 'Constraint consistency' at all orders in cosmological perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in

    2015-08-01

    We study the equivalence of two approaches to cosmological perturbation theory at all orders, order-by-order Einstein's equations and the reduced action, for different models of inflation. We point out a crucial consistency check, which we refer to as the 'constraint consistency' condition, that needs to be satisfied in order for the two approaches to lead to an identical single-variable equation of motion. The method we propose here is a quick and efficient consistency check for any model, including modified gravity models. Our analysis points out an important feature which is crucial for inflationary model building: all 'constraint'-inconsistent models have higher-order Ostrogradsky instabilities, but the reverse is not true. In other words, a model with a constrained Lapse function and Shift vector may still have Ostrogradsky instabilities. We also obtain the single-variable equation for a non-canonical scalar field in the limit of power-law inflation for the second-order perturbed variables.

  15. A behavior analytic analogue of learning to use synonyms, syntax, and parts of speech.

    PubMed

    Chase, Philip N; Ellenwood, David W; Madden, Gregory

    2008-01-01

    Matching-to-sample and sequence training procedures were used to develop responding to stimulus classes that were considered analogous to 3 aspects of verbal behavior: identifying synonyms and parts of speech, and using syntax. Matching-to-sample procedures were used to train 12 paired associates from among 24 stimuli. These pairs were analogous to synonyms. Then, sequence characteristics were trained to 6 of the stimuli. The result was the formation of 3 classes of 4 stimuli, with the classes controlling a sequence response analogous to a simple ordering syntax: first, second, and third. Matching-to-sample procedures were then used to add 4 stimuli to each class. These stimuli, without explicit sequence training, also began to control the same sequence responding as the other members of their class. Thus, three 8-member functionally equivalent sequence classes were formed. These classes were considered to be analogous to parts of speech. Further testing revealed three 8-member equivalence classes and 512 different sequences of first, second, and third. The study indicated that behavior analytic procedures may be used to produce some generative aspects of verbal behavior related to simple syntax and semantics.

  16. Multilayer Relaxation and Surface Energies of Metallic Surfaces

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Rodriguez, Agustin M.; Ferrante, John

    1994-01-01

    The perpendicular and parallel multilayer relaxations of fcc (210) surfaces are studied using equivalent crystal theory (ECT). A comparison with experimental and theoretical results is made for AI(210). The effect of uncertainties in the input parameters on the magnitudes and ordering of surface relaxations for this semiempirical method is estimated. A new measure of surface roughness is proposed. Predictions for the multilayer relaxations and surface energies of the (210) face of Cu and Ni are also included.

  17. Stationary variational estimates for the effective response and field fluctuations in nonlinear composites

    NASA Astrophysics Data System (ADS)

    Ponte Castañeda, Pedro

    2016-11-01

    This paper presents a variational method for estimating the effective constitutive response of composite materials with nonlinear constitutive behavior. The method is based on a stationary variational principle for the macroscopic potential in terms of the corresponding potential of a linear comparison composite (LCC) whose properties are the trial fields in the variational principle. When used in combination with estimates for the LCC that are exact to second order in the heterogeneity contrast, the resulting estimates for the nonlinear composite are also guaranteed to be exact to second order in the contrast. In addition, the new method allows full optimization with respect to the properties of the LCC, leading to estimates that are fully stationary and exhibit no duality gaps. As a result, the effective response and field statistics of the nonlinear composite can be estimated directly from the appropriately optimized linear comparison composite. By way of illustration, the method is applied to a porous, isotropic, power-law material, and the results are found to compare favorably with earlier bounds and estimates. However, the basic ideas of the method are expected to work for broad classes of composite materials, whose effective response can be given appropriate variational representations, including more general elasto-plastic and soft hyperelastic composites and polycrystals.

  18. Surface Features Parameterization and Equivalent Roughness Height Estimation of a Real Subglacial Conduit in the Arctic

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Liu, X.; Mankoff, K. D.; Gulley, J. D.

    2016-12-01

    The surfaces of subglacial conduits are very complex, coupling multi-scale roughness, large sinuosity, and cross-sectional variations. These features significantly affect the friction law and drainage efficiency inside the conduit by altering velocity and pressure distributions, and thus considerably influence the dynamic development of the conduit. Parameterizing these surface features is a first step towards understanding their hydraulic influences. A Matlab package is developed to extract the roughness field, the conduit centerline, and associated area and curvature data from the conduit surface, acquired from 3D scanning. Using these data, the characteristic vertical and horizontal roughness scales are estimated based on structure functions. The centerline sinuosities, defined through three concepts, i.e., the traditional definition for a fluvial river, entropy-based sinuosity, and curvature-based sinuosity, are also calculated and compared, as are the cross-sectional area and equivalent circular diameter along the centerline. Among these features, the roughness is especially important due to its pivotal role in determining the wall friction, so an estimate of the equivalent roughness height is of great importance. To this end, the original conduit is first simplified into a straight smooth pipe with the same volume and centerline length, and the roughness field obtained above is reconstructed onto the simplified pipe. An OpenFOAM-based large-eddy simulation (LES) is then performed on the reconstructed pipe. Considering that the Reynolds number is of the order of 10⁶, and that the relative roughness exceeds 5% for 60% of the conduit, we test the validity of the resistance law for completely rough pipes. The friction factor is calculated from the pressure drop and mean velocity in the simulation, from which the equivalent roughness height follows. However, whether this assumption is applicable to the current case of high relative roughness remains an open question. Two other roughness heights, i.e., the vertical roughness scale based on structure functions and the viscous sublayer thickness determined from the wall boundary layer, are also calculated and compared with the equivalent roughness height.
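    The last step described above, recovering an equivalent roughness height from the simulated friction factor, can be sketched as follows. This is a minimal illustration assuming the Darcy definition of the friction factor and Nikuradse's fully-rough resistance law; all numerical inputs are hypothetical, not values from the study.

```python
import math

def darcy_friction_factor(dp, length, diameter, density, mean_velocity):
    """Darcy friction factor from the simulated pressure drop:
    f = dp * D / (0.5 * rho * U^2 * L)."""
    return dp * diameter / (0.5 * density * mean_velocity**2 * length)

def equivalent_roughness(f, diameter):
    """Invert the fully-rough resistance law (Nikuradse):
    1/sqrt(f) = 2 * log10(3.7 * D / k)  =>  k = 3.7 * D / 10**(1 / (2*sqrt(f)))."""
    return 3.7 * diameter / 10 ** (1.0 / (2.0 * math.sqrt(f)))

# Hypothetical LES outputs: 1 m diameter pipe, 10 m long, water-like fluid.
f = darcy_friction_factor(dp=500.0, length=10.0, diameter=1.0,
                          density=1000.0, mean_velocity=1.0)
k = equivalent_roughness(f, diameter=1.0)  # equivalent roughness height [m]
```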

  19. Size matters: Perceived depth magnitude varies with stimulus height.

    PubMed

    Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S

    2016-06-01

    Both the upper and lower disparity limits for stereopsis vary with the size of the targets. Recently, Tsirlin, Wilcox, and Allison (2012) suggested that perceived depth magnitude from stereopsis might also depend on the vertical extent of a stimulus. To test this hypothesis we compared apparent depth in small discs to depth in long bars with equivalent width and disparity. We used three estimation techniques: a virtual ruler, a touch-sensor (for haptic estimates) and a disparity probe. We found that depth estimates were significantly larger for the bar stimuli than for the disc stimuli for all methods of estimation and different configurations. In a second experiment, we measured perceived depth as a function of the height of the bar and the radius of the disc. Perceived depth increased with increasing bar height and disc radius suggesting that disparity is integrated along the vertical edges. We discuss size-disparity correlation and inter-neural excitatory connections as potential mechanisms that could account for these results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Techniques to improve the accuracy of noise power spectrum measurements in digital x-ray imaging based on background trends removal.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2011-03-01

    Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. Extensive work has been conducted to achieve accurate and precise measurement of the NPS. One approach to improving the accuracy of the NPS measurement is to reduce the statistical variance of the NPS results by involving more data samples. However, this method is based on the assumption that the noise in a radiographic image arises from stochastic processes. In practical data, artifacts superimpose on the stochastic noise as low-frequency background trends and prevent accurate NPS estimation. The purpose of this study was to investigate an appropriate background detrending technique to improve the accuracy of NPS estimation for digital x-ray systems. To identify the optimal background detrending technique, four methods for artifact removal were quantitatively studied and compared: (1) subtraction of a low-pass-filtered version of the image, (2) subtraction of a 2-D first-order fit to the image, (3) subtraction of a 2-D second-order polynomial fit to the image, and (4) subtraction of two uniform exposure images. In addition, background trend removal was applied separately within the original region of interest or its partitioned sub-blocks for all four methods. The performance of the background detrending techniques was compared according to the statistical variance of the NPS results and the suppression of the low-frequency systematic rise. Among the four methods, subtraction of a 2-D second-order polynomial fit to the image was most effective in low-frequency systematic rise suppression and variance reduction for the NPS estimate on the authors' digital x-ray system. Subtraction of a low-pass-filtered version of the image increased the NPS variance near low-frequency components because of side-lobe effects in the frequency response of the boxcar filtering function. Subtraction of two uniform exposure images produced the worst result in terms of NPS curve smoothness, although it was effective in low-frequency systematic rise suppression. Subtraction of a 2-D first-order fit to the image was also effective for background detrending, but less so than subtraction of a 2-D second-order polynomial fit on the authors' system. As a result of this study, the authors verified that it is necessary and feasible to obtain better NPS estimates through appropriate background trend removal. Subtraction of a 2-D second-order polynomial fit to the image was the most appropriate technique for background detrending, without consideration of processing time.
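    Method (3), subtraction of a 2-D second-order polynomial fit before computing the NPS, can be sketched as below. This is a generic illustration: the six-term quadratic basis and the unnormalized NPS expression are standard choices, not details taken from the paper.

```python
import numpy as np

def detrend_second_order(roi):
    """Subtract a least-squares 2-D second-order polynomial fit
    p(x, y) = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 from a uniform-exposure ROI."""
    ny, nx = roi.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    A = np.column_stack([np.ones(roi.size), x.ravel(), y.ravel(),
                         (x**2).ravel(), (x*y).ravel(), (y**2).ravel()])
    coeffs, *_ = np.linalg.lstsq(A, roi.ravel(), rcond=None)
    trend = (A @ coeffs).reshape(roi.shape)
    return roi - trend

def nps_2d(roi, pixel_pitch=1.0):
    """Unnormalized 2-D noise power spectrum of a detrended ROI."""
    detrended = detrend_second_order(roi)
    ft = np.fft.fftshift(np.fft.fft2(detrended))
    return (np.abs(ft) ** 2) * pixel_pitch**2 / roi.size
```

    A pure quadratic background is removed to numerical precision, so only the stochastic noise contributes to the spectrum.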

  1. Dimensional Structure and Measurement Invariance of the Schizotypal Personality Questionnaire - Brief Revised (SPQ-BR) Scores Across American and Spanish Samples.

    PubMed

    Fonseca-Pedrero, Eduardo; Cohen, Alex; Ortuño-Sierra, Javier; de Álbeniz, Alicia Pérez; Muñiz, José

    2017-08-01

    The main goal of the present study was to test the measurement equivalence of the Schizotypal Personality Questionnaire - Brief Revised (SPQ-BR) scores in a large sample of Spanish and American non-clinical young adults. The sample was made up of 5,625 young adults (M = 19.65 years; SD = 2.53; 38.5% males). Study of the internal structure, using confirmatory factor analysis (CFA), revealed that SPQ-BR items were grouped in a theoretical internal structure of nine first-order factors. Moreover, three- and four-factor second-order models and bifactor models showed adequate goodness-of-fit indices. Multigroup CFA showed that the nine lower-order factor model of the SPQ-BR had configural and weak measurement invariance and partial strong measurement invariance across countries. The reliability of the SPQ-BR scores, estimated with omega, ranged from 0.67 to 0.91. Using the item response theory framework, the SPQ-BR provides more accurate information at the medium and high end of the latent trait. Statistically significant differences were found in the raw scores of the SPQ-BR subscales and dimensions across samples. The American group scored higher than the Spanish group in all SPQ-BR domains except Ideas of Reference and Suspiciousness. The finding of a comparable factor structure in cross-cultural samples lends further support to the continuum model of psychosis spectrum disorders. In addition, these results provide new information about the factor structure of schizotypal traits and support the validity and utility of this measure in cross-cultural research.
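    Reliability "estimated with omega" refers to a composite reliability (McDonald's omega) computed from CFA parameters rather than coefficient alpha. A minimal sketch, with illustrative standardized loadings rather than SPQ-BR estimates:

```python
def mcdonald_omega(loadings, error_variances):
    """McDonald's omega from standardized factor loadings and error variances:
    omega = (sum(lambda))^2 / ((sum(lambda))^2 + sum(theta))."""
    s = sum(loadings)
    return s * s / (s * s + sum(error_variances))

# Illustrative 4-item factor; with standardized loadings, theta = 1 - lambda^2.
lam = [0.7, 0.6, 0.8, 0.5]
omega = mcdonald_omega(lam, [1.0 - l * l for l in lam])
```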

  2. Averaging principle for second-order approximation of heterogeneous models with homogeneous models.

    PubMed

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-11-27

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing).
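    The O(ε²) equivalence is easy to verify numerically for a toy outcome. The reciprocal-mean outcome below is an illustrative stand-in for the queuing examples, not the authors' model: if the gap between heterogeneous and homogeneous outcomes is O(ε²), doubling ε should roughly quadruple it.

```python
def outcome(values):
    """A smooth, symmetric model outcome: the mean of 1/v over the population
    (a stand-in for, e.g., an average waiting time with heterogeneous rates)."""
    return sum(1.0 / v for v in values) / len(values)

MU = 2.0  # average value of the heterogeneous property

def gap(eps):
    """Heterogeneous outcome (values MU - eps and MU + eps) minus the
    homogeneous outcome evaluated at the average value MU."""
    return outcome([MU - eps, MU + eps]) - outcome([MU])

ratio = gap(0.2) / gap(0.1)  # approximately 4, confirming O(eps^2) scaling
```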

  3. Averaging principle for second-order approximation of heterogeneous models with homogeneous models

    PubMed Central

    Fibich, Gadi; Gavious, Arieh; Solan, Eilon

    2012-01-01

    Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ɛ²) equivalent to the outcome of the corresponding homogeneous model, where ɛ is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing). PMID:23150569

  4. Computational methods for estimation of parameters in hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.; Murphy, K. A.

    1983-01-01

    Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined and numerical findings for use of the resulting schemes in model "one dimensional seismic inversion' problems are summarized.

  5. 37 CFR 256.2 - Royalty fee for compulsory license for secondary transmission by cable systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... receipts for the first distant signal equivalent; (3) .668 of 1 per centum of such gross receipts for each of the second, third and fourth distant signal equivalents; and (4) .314 of 1 per centum of such gross receipts for the fifth distant signal equivalent and each additional distant signal equivalent...

  6. 37 CFR 256.2 - Royalty fee for compulsory license for secondary transmission by cable systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... receipts for the first distant signal equivalent; (3) .668 of 1 per centum of such gross receipts for each of the second, third and fourth distant signal equivalents; and (4) .314 of 1 per centum of such gross receipts for the fifth distant signal equivalent and each additional distant signal equivalent...

  7. 37 CFR 256.2 - Royalty fee for compulsory license for secondary transmission by cable systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... receipts for the first distant signal equivalent; (3) .668 of 1 per centum of such gross receipts for each of the second, third and fourth distant signal equivalents; and (4) .314 of 1 per centum of such gross receipts for the fifth distant signal equivalent and each additional distant signal equivalent...

  8. 37 CFR 256.2 - Royalty fee for compulsory license for secondary transmission by cable systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... receipts for the first distant signal equivalent; (3) .668 of 1 per centum of such gross receipts for each of the second, third and fourth distant signal equivalents; and (4) .314 of 1 per centum of such gross receipts for the fifth distant signal equivalent and each additional distant signal equivalent...

  9. Regularized two-step brain activity reconstruction from spatiotemporal EEG data

    NASA Astrophysics Data System (ADS)

    Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry

    2004-10-01

    We are aiming at using EEG source localization in the framework of a Brain Computer Interface project. We propose here a new reconstruction procedure, targeting source (or equivalently mental task) differentiation. EEG data can be thought of as a collection of time continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in all the brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only on the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum energy and directional consistency constraints.

  10. Fiber Bragg grating temperature sensors in a 6.5-MW generator exciter bridge and the development and simulation of its thermal model.

    PubMed

    de Morais Sousa, Kleiton; Probst, Werner; Bortolotti, Fernando; Martelli, Cicero; da Silva, Jean Carlos Cardozo

    2014-09-05

    This work reports the thermal modeling and characterization of a thyristor. The thyristor is used in a 6.5-MW generator excitation bridge. Temperature measurements are performed using fiber Bragg grating (FBG) sensors. These sensors have the benefits of being totally passive and immune to electromagnetic interference and also multiplexed in a single fiber. The thyristor thermal model consists of a second order equivalent electric circuit, and its power losses lead to an increase in temperature, while the losses are calculated on the basis of the excitation current in the generator. Six multiplexed FBGs are used to measure temperature and are embedded to avoid the effect of the strain sensitivity. The presented results show a relationship between field current and temperature oscillation and prove that this current can be used to determine the thermal model of a thyristor. The thermal model simulation presents an error of 1.5 °C, while the FBG used allows for the determination of the thermal behavior and the field current dependence. Since the temperature is a function of the field current, the corresponding simulation can be used to estimate the temperature in the thyristors.
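    A second-order equivalent circuit of the kind described above is commonly written as a two-element Foster network driven by current-dependent losses. The sketch below illustrates that structure; all parameter values (thermal resistances, time constants, device constants) are illustrative placeholders, not the constants identified in the paper.

```python
import math

def conduction_loss(i_field, v_t0=1.0, r_on=0.5e-3):
    """Approximate thyristor conduction loss from the field (load) current:
    P = V_T0 * I + r_on * I^2. Device constants are hypothetical."""
    return v_t0 * i_field + r_on * i_field**2

def thyristor_temperature(p_loss, t, t_amb=40.0,
                          r1=0.02, tau1=1.5, r2=0.08, tau2=60.0):
    """Step response of a second-order Foster RC thermal network
    (two R-tau pairs) to a constant power loss p_loss [W]:
    T(t) = T_amb + P * sum_i R_i * (1 - exp(-t / tau_i))."""
    rise = p_loss * (r1 * (1 - math.exp(-t / tau1))
                     + r2 * (1 - math.exp(-t / tau2)))
    return t_amb + rise

# Example: a hypothetical 500 A field current gives 625 W of losses and a
# steady-state temperature of t_amb + P * (r1 + r2).
p = conduction_loss(500.0)
t_steady = thyristor_temperature(p, 1e6)
```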

  11. Fiber Bragg Grating Temperature Sensors in a 6.5-MW Generator Exciter Bridge and the Development and Simulation of Its Thermal Model

    PubMed Central

    de Morais Sousa, Kleiton; Probst, Werner; Bortolotti, Fernando; Martelli, Cicero; da Silva, Jean Carlos Cardozo

    2014-01-01

    This work reports the thermal modeling and characterization of a thyristor. The thyristor is used in a 6.5-MW generator excitation bridge. Temperature measurements are performed using fiber Bragg grating (FBG) sensors. These sensors have the benefits of being totally passive and immune to electromagnetic interference and also multiplexed in a single fiber. The thyristor thermal model consists of a second order equivalent electric circuit, and its power losses lead to an increase in temperature, while the losses are calculated on the basis of the excitation current in the generator. Six multiplexed FBGs are used to measure temperature and are embedded to avoid the effect of the strain sensitivity. The presented results show a relationship between field current and temperature oscillation and prove that this current can be used to determine the thermal model of a thyristor. The thermal model simulation presents an error of 1.5 °C, while the FBG used allows for the determination of the thermal behavior and the field current dependence. Since the temperature is a function of the field current, the corresponding simulation can be used to estimate the temperature in the thyristors. PMID:25198007

  12. Prostate cancer risk prediction based on complete prostate cancer family history.

    PubMed

    Albright, Frederick; Stephenson, Robert A; Agarwal, Neeraj; Teerlink, Craig C; Lowrance, William T; Farnham, James M; Albright, Lisa A Cannon

    2015-03-01

    Prostate cancer (PC) relative risks (RRs) are typically estimated based on status of close relatives or presence of any affected relatives. This study provides RR estimates using extensive and specific PC family history. A retrospective population-based study was undertaken to estimate RRs for PC based on complete family history of PC. A total of 635,443 males, all with ancestral genealogy data, were analyzed. RRs for PC were determined based upon PC rates estimated from males with no PC family history (without PC in first, second, or third degree relatives). RRs were determined for a variety of constellations, for example, number of first through third degree relatives; named (grandfather, father, uncle, cousins, brothers); maternal, paternal relationships, and age of onset. In the 635,443 males analyzed, 18,105 had PC. First-degree RRs ranged from 2.46 (=1 first-degree relative affected, CI = 2.39-2.53) to 7.65 (=4 first-degree relatives affected, CI = 6.28-9.23). Second-degree RRs for probands with 0 affected first-degree relatives ranged from 1.51 (≥1 second-degree relative affected, CI = 1.47-1.56) to 3.09 (≥5 second-degree relatives affected, CI = 2.32-4.03). Third-degree RRs with 0 affected first- and 0 affected second-degree relatives ranged from 1.15 (≥1 affected third-degree relative, CI = 1.12-1.19) to 1.50 (≥5 affected third-degree relatives, CI = 1.35-1.66). RRs based on age at diagnosis were higher for earlier age at diagnoses; for example, RR = 5.54 for ≥1 first-degree relative diagnosed before age 50 years (CI = 1.12-1.19) and RR = 1.78 for >1 second-degree relative diagnosed before age 50 years, CI = 1.33, 2.33. RRs for equivalent maternal versus paternal family history were not significantly different. A more complete PC family history using close and distant relatives and age at diagnosis results in a wider range of estimates of individual RR that are potentially more accurate than RRs estimated from summary family history. 
The presence of PC in second- and even third-degree relatives contributes significantly to risk. Maternal family history is just as significant as paternal family history. PC RRs based on a proband's complete constellation of affected relatives will allow patients and care providers to make more informed screening, monitoring, and treatment decisions. © 2014 The Authors. The Prostate Published by Wiley Periodicals, Inc.

  13. Statistical mechanics of self-driven Carnot cycles.

    PubMed

    Smith, E

    1999-10-01

    The spontaneous generation and finite-amplitude saturation of sound, in a traveling-wave thermoacoustic engine, are derived as properties of a second-order phase transition. It has previously been argued that this dynamical phase transition, called "onset," has an equivalent equilibrium representation, but the saturation mechanism and scaling were not computed. In this work, the sound modes implementing the engine cycle are coarse-grained and statistically averaged, in a partition function derived from microscopic dynamics on criteria of scale invariance. Self-amplification performed by the engine cycle is introduced through higher-order modal interactions. Stationary points and fluctuations of the resulting phenomenological Lagrangian are analyzed and related to background dynamical currents. The scaling of the stable sound amplitude near the critical point is derived and shown to arise universally from the interaction of finite-temperature disorder, with the order induced by self-amplification.

  14. Applying constraints on model-based methods: Estimation of rate constants in a second order consecutive reaction

    NASA Astrophysics Data System (ADS)

    Kompany-Zareh, Mohsen; Khoshkam, Maryam

    2013-02-01

    This paper describes the estimation of reaction rate constants and pure ultraviolet/visible (UV-vis) spectra of the components involved in a second-order consecutive reaction between ortho-aminobenzoic acid (o-ABA) and diazonium ions (DIAZO), with one intermediate. In the described system, o-ABA does not absorb in the visible region of interest, so no closure rank-deficiency problem exists. Concentration profiles were determined by solving the differential equations of the corresponding kinetic model. Three types of model-based procedures were applied to estimate the rate constants of the kinetic system, based on the Newton-Gauss-Levenberg/Marquardt (NGL/M) algorithm. Original-data-based, score-based, and concentration-based objective functions were included in these nonlinear fitting procedures. Results showed that when there is error in the initial concentrations, the accuracy of the estimated rate constants strongly depends on the type of objective function applied in the fitting procedure. Moreover, flexibility in the application of different constraints and optimization of the initial concentration estimates during the fitting procedure were investigated. Results showed a considerable decrease in the ambiguity of the obtained parameters when appropriate constraints and adjustable initial reagent concentrations were applied.
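    The forward model and a concentration-based objective function for this kind of fit can be sketched as below. The rate constants, initial concentrations, and fixed-step RK4 integrator are illustrative choices, not the paper's values; the residual vector is what a Levenberg/Marquardt routine would minimize.

```python
import numpy as np

def simulate(k1, k2, c0, t_grid):
    """Integrate the second-order consecutive kinetics
        A + B -> I   (rate k1 * [A][B])
        I     -> P   (rate k2 * [I])
    with fixed-step RK4. c0 = [A0, B0, I0, P0]; returns concentrations
    at every time in t_grid."""
    def rhs(c):
        a, b, i, p = c
        r1 = k1 * a * b
        r2 = k2 * i
        return np.array([-r1, -r1, r1 - r2, r2])
    c = np.array(c0, dtype=float)
    out = [c.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = t1 - t0
        s1 = rhs(c)
        s2 = rhs(c + h / 2 * s1)
        s3 = rhs(c + h / 2 * s2)
        s4 = rhs(c + h * s3)
        c = c + h / 6 * (s1 + 2 * s2 + 2 * s3 + s4)
        out.append(c.copy())
    return np.array(out)

def residuals(params, c0, t_grid, measured_i):
    """Concentration-based objective: simulated minus measured intermediate
    profile, to be passed to a Levenberg/Marquardt optimizer."""
    k1, k2 = params
    return simulate(k1, k2, c0, t_grid)[:, 2] - measured_i
```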

  15. Neural network disturbance observer-based distributed finite-time formation tracking control for multiple unmanned helicopters.

    PubMed

    Wang, Dandan; Zong, Qun; Tian, Bailing; Shao, Shikai; Zhang, Xiuyun; Zhao, Xinyi

    2018-02-01

    The distributed finite-time formation tracking control problem for multiple unmanned helicopters is investigated in this paper. The control objective is to maintain the positions of follower helicopters in formation in the presence of external disturbances. The helicopter model is divided into a second-order outer-loop subsystem and a second-order inner-loop subsystem based on multiple-time-scale features. Using the radial basis function neural network (RBFNN) technique, we first propose a novel finite-time multivariable neural network disturbance observer (FMNNDO) to estimate the external disturbance and model uncertainty, where the neural network (NN) approximation errors can be dynamically compensated by an adaptive law. Next, based on the FMNNDO, a distributed finite-time formation tracking controller and a finite-time attitude tracking controller are designed using the nonsingular fast terminal sliding mode (NFTSM) method. In order to estimate the second derivative of the virtual desired attitude signal, a novel finite-time sliding mode integral filter is designed. Finally, Lyapunov analysis and the multiple-time-scale principle ensure that the control goal is achieved in finite time. The effectiveness of the proposed FMNNDO and controllers is then verified by numerical simulations. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images.

    PubMed

    He, Lifeng; Chao, Yuyan; Suzuki, Kenji

    2011-08-01

    Whenever one wants to distinguish, recognize, and/or measure objects (connected components) in binary images, labeling is required. This paper presents two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images. One is voxel based and the other is run based. For the voxel-based one, we present an efficient method of deciding the order for checking voxels in the mask. For the run-based one, instead of assigning each foreground voxel, we assign each run a provisional label. Moreover, we use run data to label foreground voxels without scanning any background voxel in the second scan. Experimental results have demonstrated that our voxel-based algorithm is efficient for 3-D binary images with complicated connected components, that our run-based one is efficient for those with simple connected components, and that both are much more efficient than conventional 3-D labeling algorithms.
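    A 2-D, 4-connectivity analogue of the voxel-based two-scan algorithm, using union-find to record label equivalences, might look like the sketch below; this is a generic illustration, not the authors' optimized implementation.

```python
import numpy as np

def label_components_2pass(binary):
    """Two-scan label-equivalence connected-component labeling (4-connectivity).
    First scan assigns provisional labels and records equivalences via
    union-find; second scan relabels each pixel with its class representative."""
    parent = [0]  # parent[label] for union-find; label 0 is background

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 1
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y > 0 else 0
            left = labels[y, x - 1] if x > 0 else 0
            if up == 0 and left == 0:
                parent.append(next_label)          # new provisional label
                labels[y, x] = next_label
                next_label += 1
            else:
                labels[y, x] = min(l for l in (up, left) if l)
                if up and left:
                    union(up, left)                # record equivalence
    # Second scan: resolve provisional labels to representatives.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```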

  17. Towards a new approach to model guidance laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borne, P.; Duflos, E.; Vanheeghe, P.

    1994-12-31

    Proportional navigation laws have been widely used and studied. Nevertheless, very few publications rigorously explain the origin of all these laws. For researchers who are starting to work on guidance laws, a feeling of confusion can result. For others, this lack of explanation can be, for example, the source of difficulties in making the true proportional navigation become equivalent to the pure proportional navigation. The authors propose here a way to model guidance laws in order to fill this gap. The first consequence is a better exploration of the kinematic behaviors arising during the guidance process. The second consequence is the definition of a new 3D guidance law which can be seen as a generalization of the true proportional navigation. Moreover, this generalization allows this law to become equivalent to the pure proportional navigation in terms of the initial conditions which allow the object to reach its target.

  18. Human Capital Background and the Educational Attainment of Second-Generation Immigrants in France

    ERIC Educational Resources Information Center

    Dos Santos, Manon Domingues; Wolff, Francois-Charles

    2011-01-01

    In this paper, we study the impact of parental human capital background on ethnic educational gaps between second-generation immigrants using a large data set conducted in France in 2003. Estimates from censored random effect ordered Probit regressions show that the skills of immigrants explain in the most part, the ethnic educational gap between…

  19. Modelling second malignancy risks from low dose rate and high dose rate brachytherapy as monotherapy for localised prostate cancer.

    PubMed

    Murray, Louise; Mason, Joshua; Henry, Ann M; Hoskin, Peter; Siebert, Frank-Andre; Venselaar, Jack; Bownes, Peter

    2016-08-01

    To estimate the risks of radiation-induced rectal and bladder cancers following low dose rate (LDR) and high dose rate (HDR) brachytherapy as monotherapy for localised prostate cancer and compare to external beam radiotherapy techniques. LDR and HDR brachytherapy monotherapy plans were generated for three prostate CT datasets. Second cancer risks were assessed using Schneider's concept of organ equivalent dose. LDR risks were assessed according to a mechanistic model and a bell-shaped model. HDR risks were assessed according to a bell-shaped model. Relative risks and excess absolute risks were estimated and compared to external beam techniques. Excess absolute risks of second rectal or bladder cancer were low for both LDR (irrespective of the model used for calculation) and HDR techniques. Average excess absolute risks of rectal cancer for LDR brachytherapy according to the mechanistic model were 0.71 per 10,000 person-years (PY) and 0.84 per 10,000 PY respectively, and according to the bell-shaped model, were 0.47 and 0.78 per 10,000 PY respectively. For HDR, the average excess absolute risks for second rectal and bladder cancers were 0.74 and 1.62 per 10,000 PY respectively. The absolute differences between techniques were very low and clinically irrelevant. Compared to external beam prostate radiotherapy techniques, LDR and HDR brachytherapy resulted in the lowest risks of second rectal and bladder cancer. This study shows both LDR and HDR brachytherapy monotherapy result in low estimated risks of radiation-induced rectal and bladder cancer. LDR resulted in lower bladder cancer risks than HDR, and lower or similar risks of rectal cancer. In absolute terms these differences between techniques were very small. Compared to external beam techniques, second rectal and bladder cancer risks were lowest for brachytherapy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Equivalent Mass versus Life Cycle Cost for Life Support Technology Selection

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2003-01-01

    The decision to develop a particular life support technology or to select it for flight usually depends on the cost to develop and fly it. Other criteria - performance, safety, reliability, crew time, and risk - are considered, but cost is always an important factor. Because launch cost accounts for most of the cost of planetary missions, and because launch cost is directly proportional to the mass launched, equivalent mass has been used instead of cost to select life support technology. The equivalent mass of a life support system includes the estimated masses of the hardware and of the pressurized volume, power supply, and cooling system that the hardware requires. The equivalent mass is defined as the total payload launch mass needed to provide and support the system. An extension of equivalent mass, Equivalent System Mass (ESM), has been established for use in Advanced Life Support. A crew time mass-equivalent and sometimes other non-mass factors are added to equivalent mass to create ESM. Equivalent mass is an estimate of the launch cost only. For earth orbit rather than planetary missions, the launch cost is usually exceeded by the cost of Design, Development, Test, and Evaluation (DDT&E). Equivalent mass is used only in life support analysis. Life Cycle Cost (LCC) is much more commonly used. LCC includes DDT&E, launch, and operations costs. Since LCC includes launch cost, it is always a more accurate cost estimator than equivalent mass. The relative costs of development, launch, and operations vary depending on the mission design, destination, and duration. Since DDT&E or operations may cost more than launch, LCC may give a more accurate cost ranking than equivalent mass. To be sure of identifying the lowest cost technology for a particular mission, we should use LCC rather than equivalent mass.
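    The equivalent-mass bookkeeping described above amounts to a weighted sum of the resources a technology requires. The sketch below illustrates the structure; the equivalency factors (kg per m³ of pressurized volume, per kW of power or cooling, per crew-hour) are illustrative placeholders, not official Advanced Life Support values.

```python
def equivalent_system_mass(hardware_kg, volume_m3, power_kw, cooling_kw,
                           crew_hr_per_yr, duration_yr,
                           v_eq=66.7, p_eq=87.0, c_eq=60.0, ct_eq=0.5):
    """Equivalent System Mass (ESM): hardware mass plus mass-equivalents of
    the pressurized volume, power, cooling, and crew time the hardware needs.
    Equivalency factors here are hypothetical, mission-dependent inputs."""
    return (hardware_kg
            + volume_m3 * v_eq          # pressurized volume mass-equivalent
            + power_kw * p_eq           # power supply mass-equivalent
            + cooling_kw * c_eq         # cooling system mass-equivalent
            + crew_hr_per_yr * duration_yr * ct_eq)  # crew time equivalent

# Hypothetical subsystem: 100 kg of hardware, 2 m^3, 1.5 kW power and cooling,
# 10 crew-hours/year over a 3-year mission.
esm = equivalent_system_mass(100.0, 2.0, 1.5, 1.5, 10.0, 3.0)
```

    Because ESM captures only launch-related cost, comparing this figure with a Life Cycle Cost estimate (adding DDT&E and operations) is what the abstract recommends for technology selection.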

  1. "Galileo Airborne Test Of Equivalence"-Gate

    NASA Astrophysics Data System (ADS)

    Nobili, A. M.; Unnikrishnan, C. S.; Suresh, D.

    A differential Galileo-type mass dropping experiment named GAL was proposed at the University of Pisa in 1986 and completed at CERN in 1992 (Carusotto et al., PRL 69, 1722) in order to test the Equivalence Principle by testing the Universality of Free Fall. The free falling mass was a disk made of two half disks of different composition; a violation of equivalence would produce an angular acceleration of the disk around its symmetry axis, which was measured with a modified Michelson interferometer. GATE ("Galileo Airborne Test of Equivalence") is a variant of that experiment to be performed in parabolic flight on board the "Airbus A300 Zero-g" aircraft of the European Space Agency (ESA). The main advantages of GATE with respect to GAL are the longer time of free fall and the absence of weight in the final stage of unlocking. The longer time of fall makes the signal stronger (the signal grows quadratically with the time of fall); unlocking at zero-g can significantly reduce spurious angular accelerations of the disk due to inevitable imperfections in the locking/unlocking mechanism, which turned out to be the limiting factor in GAL. A preliminary estimate indicates that GATE should be able to achieve a sensitivity η ≡ Δg/g ≃ 10^-13, an improvement by about 3 orders of magnitude with respect to GAL and by about 1 order of magnitude with respect to the best result obtained with a slowly rotating torsion balance by the "Eöt-Wash" group at the University of Washington. Ground tests of the read-out and of the locking/unlocking disturbances can be carried out prior to the aircraft experiment. Locking/unlocking tests, retrieval tests, as well as tests of the aircraft environment can be performed on board the Airbus A-300 in preparation for the actual experiment. The GATE experiment can be viewed as an Equivalence Principle test of intermediate sensitivity between torsion-balance ground tests (10^-12), balloon or micro-satellite (150 kg) tests (GREAT and μSCOPE: ≃ 10^-15), small-satellite (300 kg) room-temperature tests (GG: ≃ 10^-17), and large-satellite (1 ton) cryogenic tests (STEP: ≃ 10^-18).

  2. Effective potentials in nonlinear polycrystals and quadrature formulae

    NASA Astrophysics Data System (ADS)

    Michel, Jean-Claude; Suquet, Pierre

    2017-08-01

    This study presents a family of estimates for effective potentials in nonlinear polycrystals. Noting that these potentials are given as averages, several quadrature formulae are investigated to express these integrals of nonlinear functions of local fields in terms of the moments of these fields. Two of these quadrature formulae reduce to known schemes, including a recent proposition (Ponte Castañeda 2015 Proc. R. Soc. A 471, 20150665 (doi:10.1098/rspa.2015.0665)) obtained by completely different means. Other formulae are also reviewed that make use of statistical information on the fields beyond their first and second moments. These quadrature formulae are applied to the estimation of effective potentials in polycrystals governed by two potentials, by means of a reduced-order model proposed by the authors (non-uniform transformation field analysis). It is shown how the quadrature formulae improve on the tangent second-order approximation in porous crystals at high stress triaxiality. It is found that, in order to retrieve a satisfactory accuracy for highly nonlinear porous crystals under high stress triaxiality, a quadrature formula of higher order is required.

  3. Effective potentials in nonlinear polycrystals and quadrature formulae.

    PubMed

    Michel, Jean-Claude; Suquet, Pierre

    2017-08-01

    This study presents a family of estimates for effective potentials in nonlinear polycrystals. Noting that these potentials are given as averages, several quadrature formulae are investigated to express these integrals of nonlinear functions of local fields in terms of the moments of these fields. Two of these quadrature formulae reduce to known schemes, including a recent proposition (Ponte Castañeda 2015 Proc. R. Soc. A 471 , 20150665 (doi:10.1098/rspa.2015.0665)) obtained by completely different means. Other formulae are also reviewed that make use of statistical information on the fields beyond their first and second moments. These quadrature formulae are applied to the estimation of effective potentials in polycrystals governed by two potentials, by means of a reduced-order model proposed by the authors (non-uniform transformation field analysis). It is shown how the quadrature formulae improve on the tangent second-order approximation in porous crystals at high stress triaxiality. It is found that, in order to retrieve a satisfactory accuracy for highly nonlinear porous crystals under high stress triaxiality, a quadrature formula of higher order is required.

  4. Information-geometric measures as robust estimators of connection strengths and external inputs.

    PubMed

    Tatsuno, Masami; Fellous, Jean-Marc; Amari, Shun-Ichi

    2009-08-01

    Information geometry has been suggested to provide a powerful tool for analyzing multineuronal spike trains. Among several advantages of this approach, a significant property is the close link between information-geometric measures and neural network architectures. Previous modeling studies established that the first- and second-order information-geometric measures corresponded to the number of external inputs and the connection strengths of the network, respectively. This relationship was, however, limited to a symmetrically connected network, and the number of neurons used in the parameter estimation of the log-linear model needed to be known. Recently, simulation studies of biophysical model neurons have suggested that information geometry can estimate the relative change of connection strengths and external inputs even with asymmetric connections. Inspired by these studies, we analytically investigated the link between the information-geometric measures and the neural network structure with asymmetrically connected networks of N neurons. We focused on the information-geometric measures of orders one and two, which can be derived from the two-neuron log-linear model, because unlike higher-order measures, they can be easily estimated experimentally. Considering the equilibrium state of a network of binary model neurons that obey stochastic dynamics, we analytically showed that the corrected first- and second-order information-geometric measures provided robust and consistent approximation of the external inputs and connection strengths, respectively. These results suggest that information-geometric measures provide useful insights into the neural network architecture and that they will contribute to the study of system-level neuroscience.
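
    The first- and second-order measures the record refers to can be written down directly for the two-neuron case. A minimal sketch of the standard two-neuron log-linear parameterization (the probabilities below are made up for illustration):

```python
import math

def log_linear_thetas(p):
    """Two-neuron log-linear model: log p(x1,x2) = th1*x1 + th2*x2
    + th12*x1*x2 - psi, with x1, x2 in {0,1}. The second-order measure
    th12 is the log odds-ratio of the joint firing pattern; it vanishes
    when the two neurons fire independently."""
    th1 = math.log(p[1, 0] / p[0, 0])
    th2 = math.log(p[0, 1] / p[0, 0])
    th12 = math.log(p[1, 1] * p[0, 0] / (p[1, 0] * p[0, 1]))
    return th1, th2, th12

# Independent neurons with firing rates 0.4 and 0.3: th12 is zero,
# matching "no connection strength" in the record's interpretation.
p = {(0, 0): 0.42, (1, 0): 0.28, (0, 1): 0.18, (1, 1): 0.12}
print(log_linear_thetas(p))
```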

  5. A second-order frequency-aided digital phase-locked loop for Doppler rate tracking

    NASA Astrophysics Data System (ADS)

    Chie, C. M.

    1980-08-01

    A second-order digital phase-locked loop (DPLL) has a finite lock range which is a function of the frequency of the incoming signal to be tracked. For this reason, it is not capable of tracking an input with Doppler rate for an indefinite period of time. In this correspondence, an analytical expression for the hold-in time is derived. In addition, an all-digital scheme to alleviate this problem is proposed based on the information obtained from estimating the input signal frequency.
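
    The tracking behaviour can be illustrated with a toy loop. A hedged sketch, assuming an idealized infinite-precision discrete-time loop with made-up gains, not the correspondence's actual loop equations; in a real DPLL it is the finite frequency register that bounds the hold-in time:

```python
def dpll_phase_error(rate, k1=0.2, k2=0.01, n=2000):
    """Second-order (type-2) DPLL tracking theta[k] = 0.5*rate*k**2,
    i.e. an input with constant Doppler rate. Returns the final phase
    error, which settles at rate/k2 for this idealized loop."""
    phase_est, freq_est, err = 0.0, 0.0, 0.0
    for k in range(n):
        theta = 0.5 * rate * k * k        # input phase with Doppler rate
        err = theta - phase_est           # phase detector (no wrapping)
        freq_est += k2 * err              # integral (frequency) branch
        phase_est += freq_est + k1 * err  # NCO update
    return err

print(dpll_phase_error(1e-4))  # settles near rate/k2 = 0.01
```

    The type-2 loop tracks the quadratic phase ramp with a constant residual error of rate/k2; once quantization limits the frequency word this error grows and the loop eventually drops lock, which is the hold-in time the correspondence derives.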

  6. A 3D Kinematic Measurement of Knee Prosthesis Using X-ray Projection Images

    NASA Astrophysics Data System (ADS)

    Hirokawa, Shunji; Ariyoshi, Shogo; Hossain, Mohammad Abrar

    We have developed a technique for estimating the 3D motion of a knee prosthesis from its 2D perspective projections. Because Fourier descriptors were used for compact representation of the library templates and of the contours extracted from the prosthetic X-ray images, the entire silhouette contour of each prosthetic component was required; consequently, the algorithm failed when the silhouettes of the tibial and femoral components overlapped. Here we propose a novel two-step method to overcome this limitation. First, the part of the silhouette contour missing due to overlap is interpolated with a free-form curve such as a Bezier curve, and a first position/orientation estimate is computed. Next, a clipping window is set in the projective coordinate frame to separate the overlapped silhouette drawn using the first-step estimates; a localized library whose templates are clipped in the same way is then prepared, and a second-step estimation is performed. Computer simulation demonstrated sufficient position/orientation accuracy even for overlapping silhouettes, equivalent to that obtained without overlap.

  7. Comparison of sound reproduction using higher order loudspeakers and equivalent line arrays in free-field conditions.

    PubMed

    Poletti, Mark A; Betlehem, Terence; Abhayapala, Thushara D

    2014-07-01

    Higher order sound sources of Nth order can radiate sound with 2N + 1 orthogonal radiation patterns, which can be represented as phase modes or, equivalently, amplitude modes. This paper shows that each phase mode response produces a spiral wave front with a different spiral rate, and therefore a different direction of arrival of sound. Hence, for a given receiver position a higher order source is equivalent to a linear array of 2N + 1 monopole sources. This interpretation suggests that performance similar to that of a circular array of higher order sources can be produced by an array of sources, each of which consists of a line array having monopoles at the apparent source locations of the corresponding phase modes. Simulations of higher order arrays and arrays of equivalent line sources are presented. It is shown that the interior fields produced by the two arrays are essentially the same, but that the exterior fields differ because the higher order sources produce different equivalent source locations for field positions outside the array. This work provides an explanation of the fact that an array of L Nth order sources can reproduce sound fields whose accuracy approaches the performance of (2N + 1)L monopoles.

  8. Cloud computing and cloud security in China

    NASA Astrophysics Data System (ADS)

    Zhang, Shaohe; Jiang, Cuenyun; Wang, Ruxin

    2018-04-01

    We live in the data age. It is not easy to measure the total volume of data stored electronically, but an IDC estimate put the size of the "digital universe" at 0.18 zettabytes in 2006 and forecast a tenfold growth to 1.8 zettabytes by 2011. A zettabyte is 10^21 bytes, or equivalently one thousand exabytes, one million petabytes, or one billion terabytes. That is roughly the same order of magnitude as one disk drive for every person in the world.
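
    The unit arithmetic quoted above is easy to verify. A quick sketch (the 2011 world-population figure is an assumption used only for the per-person estimate):

```python
# Decimal SI byte prefixes, as used in the "digital universe" estimate.
PREFIX = {"TB": 1e12, "PB": 1e15, "EB": 1e18, "ZB": 1e21}

assert PREFIX["ZB"] == 1e3 * PREFIX["EB"]  # one thousand exabytes
assert PREFIX["ZB"] == 1e6 * PREFIX["PB"]  # one million petabytes
assert PREFIX["ZB"] == 1e9 * PREFIX["TB"]  # one billion terabytes

# 1.8 ZB shared across ~6.9e9 people (assumed 2011 world population):
per_person_gb = 1.8 * PREFIX["ZB"] / 6.9e9 / 1e9
print(round(per_person_gb))  # roughly one consumer disk drive per person
```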

  9. Interference Rejection and Management

    DTIC Science & Technology

    2009-07-01

    performance of a DS CDMA receiver. And it was shown in [34] that in order to successfully have a CDMA system overlay narrowband users, i.e., to deploy... CDMA transmitters and the CDMA receivers. 9.2.1.2 Multicarrier Direct Sequence. In a multicarrier DS system, multiple narrowband DS waveforms, each at... [Fig. 9.1: Low-pass equivalent of the DS/CDMA channel estimator and data detector; equation fragments for detection of the (i-1)th path omitted.]

  10. What Do Contrast Threshold Equivalent Noise Studies Actually Measure? Noise vs. Nonlinearity in Different Masking Paradigms

    PubMed Central

    Baldwin, Alex S.; Baker, Daniel H.; Hess, Robert F.

    2016-01-01

    The internal noise present in a linear system can be quantified by the equivalent noise method. By measuring the effect that applying external noise to the system’s input has on its output one can estimate the variance of this internal noise. By applying this simple “linear amplifier” model to the human visual system, one can entirely explain an observer’s detection performance by a combination of the internal noise variance and their efficiency relative to an ideal observer. Studies using this method rely on two crucial factors: firstly that the external noise in their stimuli behaves like the visual system’s internal noise in the dimension of interest, and secondly that the assumptions underlying their model are correct (e.g. linearity). Here we explore the effects of these two factors while applying the equivalent noise method to investigate the contrast sensitivity function (CSF). We compare the results at 0.5 and 6 c/deg from the equivalent noise method against those we would expect based on pedestal masking data collected from the same observers. We find that the loss of sensitivity with increasing spatial frequency results from changes in the saturation constant of the gain control nonlinearity, and that this only masquerades as a change in internal noise under the equivalent noise method. Part of the effect we find can be attributed to the optical transfer function of the eye. The remainder can be explained by either changes in effective input gain, divisive suppression, or a combination of the two. Given these effects the efficiency of our observers approaches the ideal level. We show the importance of considering these factors in equivalent noise studies. PMID:26953796

  11. What Do Contrast Threshold Equivalent Noise Studies Actually Measure? Noise vs. Nonlinearity in Different Masking Paradigms.

    PubMed

    Baldwin, Alex S; Baker, Daniel H; Hess, Robert F

    2016-01-01

    The internal noise present in a linear system can be quantified by the equivalent noise method. By measuring the effect that applying external noise to the system's input has on its output one can estimate the variance of this internal noise. By applying this simple "linear amplifier" model to the human visual system, one can entirely explain an observer's detection performance by a combination of the internal noise variance and their efficiency relative to an ideal observer. Studies using this method rely on two crucial factors: firstly that the external noise in their stimuli behaves like the visual system's internal noise in the dimension of interest, and secondly that the assumptions underlying their model are correct (e.g. linearity). Here we explore the effects of these two factors while applying the equivalent noise method to investigate the contrast sensitivity function (CSF). We compare the results at 0.5 and 6 c/deg from the equivalent noise method against those we would expect based on pedestal masking data collected from the same observers. We find that the loss of sensitivity with increasing spatial frequency results from changes in the saturation constant of the gain control nonlinearity, and that this only masquerades as a change in internal noise under the equivalent noise method. Part of the effect we find can be attributed to the optical transfer function of the eye. The remainder can be explained by either changes in effective input gain, divisive suppression, or a combination of the two. Given these effects the efficiency of our observers approaches the ideal level. We show the importance of considering these factors in equivalent noise studies.
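
    The linear-amplifier model described above reduces to a straight-line fit. A hedged sketch with synthetic data (the constants are invented, and this is the generic equivalent-noise recipe, not these authors' exact fitting procedure):

```python
def fit_equivalent_noise(n_ext, thresh_sq):
    """Linear amplifier model: squared contrast threshold is proportional
    to total noise, c^2 = k * (N_ext + N_eq). An ordinary least-squares
    line through (N_ext, c^2) has slope k and intercept k*N_eq, so the
    internal equivalent noise is N_eq = intercept / slope."""
    n = len(n_ext)
    mx = sum(n_ext) / n
    my = sum(thresh_sq) / n
    sxx = sum((x - mx) ** 2 for x in n_ext)
    sxy = sum((x - mx) * (y - my) for x, y in zip(n_ext, thresh_sq))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept / slope

# Synthetic observer: k = 4, internal noise N_eq = 0.02.
n_ext = [0.0, 0.01, 0.02, 0.04, 0.08]
c_sq = [4.0 * (x + 0.02) for x in n_ext]
print(fit_equivalent_noise(n_ext, c_sq))  # recovers N_eq = 0.02
```

    The record's point is precisely that when the true system contains a gain-control nonlinearity, the N_eq recovered by this fit changes across spatial frequency even though the internal noise itself does not.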

  12. Neutron equivalent doses and associated lifetime cancer incidence risks for head & neck and spinal proton therapy

    NASA Astrophysics Data System (ADS)

    Athar, Basit S.; Paganetti, Harald

    2009-08-01

    In this work we have simulated the absorbed equivalent doses to various organs distant to the field edge assuming proton therapy treatments of brain or spine lesions. We have used computational whole-body (gender-specific and age-dependent) voxel phantoms and considered six treatment fields with varying treatment volumes and depths. The maximum neutron equivalent dose to organs near the field edge was found to be approximately 8 mSv Gy^-1. We were able to clearly demonstrate that organ-specific neutron equivalent doses are age (stature) dependent. For example, assuming an 8-year-old patient, the dose to brain from the spinal fields ranged from 0.04 to 0.10 mSv Gy^-1, whereas the dose to the brain assuming a 9-month-old patient ranged from 0.5 to 1.0 mSv Gy^-1. Further, as the field aperture opening increases, the secondary neutron equivalent dose caused by the treatment head decreases, while the secondary neutron equivalent dose caused by the patient itself increases. To interpret the dosimetric data, we analyzed second cancer incidence risks for various organs as a function of patient age and field size based on two risk models. The results show that, for example, in an 8-year-old female patient treated with a spinal proton therapy field, breasts, lungs and rectum have the highest radiation-induced lifetime cancer incidence risks. These are estimated to be 0.71%, 1.05% and 0.60%, respectively. For an 11-year-old male patient treated with a spinal field, bronchi and rectum show the highest risks of 0.32% and 0.43%, respectively. Risks for male and female patients increase as their age at treatment time decreases.

  13. Limitations to Teaching Children 2 + 2 = 4: Typical Arithmetic Problems Can Hinder Learning of Mathematical Equivalence

    ERIC Educational Resources Information Center

    McNeil, Nicole M.

    2008-01-01

    Do typical arithmetic problems hinder learning of mathematical equivalence? Second and third graders (7-9 years old; N= 80) received lessons on mathematical equivalence either with or without typical arithmetic problems (e.g., 15 + 13 = 28 vs. 28 = 28, respectively). Children then solved math equivalence problems (e.g., 3 + 9 + 5 = 6 + __),…

  14. Double dissociation between first- and second-order processing.

    PubMed

    Allard, Rémy; Faubert, Jocelyn

    2007-04-01

    To study the difference of sensitivity to luminance- (LM) and contrast-modulated (CM) stimuli, we compared LM and CM detection thresholds in LM- and CM-noise conditions. The results showed a double dissociation (no or little inter-attribute interaction) between the processing of these stimuli, which implies that both stimuli must be processed, at least at some point, by separate mechanisms and that both stimuli are not merged after a rectification process. A second experiment showed that the internal equivalent noise limiting the CM sensitivity was greater than the one limiting the carrier sensitivity, which suggests that the internal noise occurring before the rectification process is not limiting the CM sensitivity. These results support the hypothesis that a suboptimal rectification process partially explains the difference of LM and CM sensitivity.

  15. An adjoint-based simultaneous estimation method of the asthenosphere's viscosity and afterslip using a fast and scalable finite-element adjoint solver

    NASA Astrophysics Data System (ADS)

    Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo

    2018-04-01

    The simultaneous estimation of the asthenosphere's viscosity and of coseismic slip/afterslip is expected to largely improve the consistency of the estimation results with crustal-deformation observation data collected at widely spread observation points, compared to estimating slips only. Such an estimate can be formulated as a non-linear inverse problem for a material property (viscosity) and an input force equivalent to fault slip, based on large-scale finite-element (FE) modeling of crustal deformation in which the number of degrees of freedom is of the order of 10^9. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enables the targeted estimation to be completed with a moderate amount of computational resources.
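
    The adjoint idea behind such gradient-based inversion can be sketched on a toy linear problem. The record's forward model is a large nonlinear FE solver; the small matrix below is purely hypothetical and only illustrates the cost structure:

```python
def adjoint_gradient_descent(A, d, m0, step=0.1, iters=500):
    """Minimize J(m) = 0.5*||A m - d||^2. The gradient A^T(A m - d) is
    obtained from one forward application (A m) and one adjoint
    application (A^T r) per iteration, independent of the number of
    parameters - the property that lets adjoint methods scale to
    FE models with ~1e9 degrees of freedom."""
    n, p = len(A), len(A[0])
    m = list(m0)
    for _ in range(iters):
        r = [sum(A[i][j] * m[j] for j in range(p)) - d[i] for i in range(n)]  # forward residual
        g = [sum(A[i][j] * r[i] for i in range(n)) for j in range(p)]         # adjoint: A^T r
        m = [m[j] - step * g[j] for j in range(p)]
    return m

# Recover m_true = [1.0, 2.0] from d = A m_true.
A = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
d = [1.0, 4.0, 3.0]
print(adjoint_gradient_descent(A, d, [0.0, 0.0]))  # converges to ~[1.0, 2.0]
```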

  16. Quenching rate for a nonlocal problem arising in the micro-electro mechanical system

    NASA Astrophysics Data System (ADS)

    Guo, Jong-Shenq; Hu, Bei

    2018-03-01

    In this paper, we study the quenching rate of the solution for a nonlocal parabolic problem which arises in the study of the micro-electro mechanical system. This question is equivalent to the stabilization of the solution to the transformed problem in self-similar variables. First, some a priori estimates are provided. In order to construct a Lyapunov function, due to the lack of time monotonicity property, we then derive some very useful and challenging estimates by a delicate analysis. Finally, with this Lyapunov function, we prove that the quenching rate is self-similar which is the same as the problem without the nonlocal term, except the constant limit depends on the solution itself.

  17. Inferring river properties with SWOT like data

    NASA Astrophysics Data System (ADS)

    Garambois, Pierre-André; Monnier, Jérôme; Roux, Hélène

    2014-05-01

    Inverse problems in hydraulics, such as the estimation of river discharge, remain open questions. Remotely sensed measurements of hydrosystems can provide valuable information, but adequate methods are still required to exploit it. The future Surface Water and Ocean Topography (SWOT) mission will provide new cartographic measurements of inland water surfaces. The highlight of SWOT will be its almost global coverage and temporal revisits on the order of 1 to 4 times per 22-day repeat cycle [1]. Many studies have shown the possibility of retrieving discharge given the river bathymetry or roughness and/or in situ time series. The new challenge is to use SWOT-type data to invert the triplet formed by the roughness, the bathymetry and the discharge. The method presented here is composed of two steps: following an inverse formulation from [2], the first step consists in retrieving an equivalent bathymetry profile of a river given one in situ depth measurement and SWOT-like data of the water surface, that is to say water elevation, free-surface slope and width. From this equivalent bathymetry, the second step consists in solving the mass and Manning equations in the least-squares sense [3]. Nevertheless, for cases where no in situ measurement of water depth is available, it is still possible to solve a system formed by the mass and Manning equations in the least-squares sense (or with other methods such as Bayesian ones, see e.g. [4]). We show that good a priori knowledge of bathymetry and roughness is essential for such methods. Depending on this a priori knowledge, the inversion of the triplet (roughness, bathymetry, discharge) in the SWOT context was evaluated on the Garonne River [5, 6]. The results are presented for 80 km of the Garonne River downstream of Toulouse in France [7]. An equivalent bathymetry is retrieved with less than 10% relative error with SWOT-like observations.
After that, encouraging results are obtained with less than 10% relative error on the identified discharge. References [1] E. Rodriguez, SWOT science requirements document, JPL document, JPL, 2012. [2] A. Gessese, K. Wa, and M. Sellier, Bathymetry reconstruction based on the zero-inertia shallow water approximation, Theoretical and Computational Fluid Dynamics, vol. 27, no. 5, pp. 721-732, 2013. [3] P. A. Garambois and J. Monnier, Inference of river properties from remotly sensed observations of water surface, under final redaction for HESS, 2014. [4] M. Durand, Sacramento river airswot discharge estimation scenario. http://swotdawg.wordpress.com/2013/04/18/sacramento-river-airswot-discharge-estimation-scenario/, 2013. [5] P. A. Garambois and H. Roux, Garonne River discharge estimation. http://swotdawg.wordpress.com/2013/07/01/garonne-river-discharge-estimation/, 2013. [6] P. A. Garambois and H. Roux, Sensitivity of discharge uncertainty to measurement errors, case of the Garonne River. http://swotdawg.wordpress.com/2013/07/01/sensitivity-of-discharge-uncertainty-to-measurement-errors-case-of-the-garonne-river/, 2013. [7] H. Roux and P. A. Garambois, Tests of reach averaging and manning equation on the Garonne River. http://swotdawg.wordpress.com/2013/07/01/tests-of-reach-averaging-and-manning-equation-on-the-garonne-river/, 2013.
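
    The Manning-plus-mass-conservation step can be sketched compactly. A hedged illustration using the wide-channel approximation and made-up reach values, not the authors' actual solver:

```python
def manning_discharge(n, width, depth, slope):
    """Manning's equation for a wide rectangular channel, with the
    hydraulic radius approximated by the flow depth h (valid for
    width >> depth): Q = (1/n) * W * h**(5/3) * sqrt(S)."""
    return width * depth ** (5.0 / 3.0) * slope ** 0.5 / n

def least_squares_discharge(n, reaches):
    """Mass conservation imposes a single steady discharge Q along the
    segment; minimizing sum_i (Q - Q_i)^2 over the per-reach Manning
    estimates Q_i gives simply their mean."""
    qs = [manning_discharge(n, w, h, s) for (w, h, s) in reaches]
    return sum(qs) / len(qs)

# Hypothetical SWOT-like reach observations: (width m, depth m, slope).
reaches = [(100.0, 2.0, 1.0e-4), (90.0, 2.2, 1.1e-4), (110.0, 1.9, 0.9e-4)]
print(least_squares_discharge(0.03, reaches))  # discharge in m^3/s
```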

  18. Assessing Measurement Equivalence in Ordered-Categorical Data

    ERIC Educational Resources Information Center

    Elosua, Paula

    2011-01-01

    Assessing measurement equivalence in the framework of the common factor linear models (CFL) is known as factorial invariance. This methodology is used to evaluate the equivalence among the parameters of a measurement model among different groups. However, when dichotomous, Likert, or ordered responses are used, one of the assumptions of the CFL is…

  19. Steady-State Modeling of Modular Multilevel Converter Under Unbalanced Grid Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Xiaojie M.; Wang, Zhiqiang; Liu, Bo

    This paper presents a steady-state model of MMC for the second-order phase voltage ripple prediction under unbalanced conditions, taking the impact of negative-sequence current control into account. From the steady-state model, a circular relationship is found among current and voltage quantities, which can be used to evaluate the magnitudes and initial phase angles of different circulating current components. Moreover, in order to calculate the circulating current in a point-to-point MMC-based HVdc system under unbalanced grid conditions, the derivation of equivalent dc impedance of an MMC is discussed as well. According to the dc impedance model, an MMC inverter can be represented as a series connected R-L-C branch, with its equivalent resistance and capacitance directly related to the circulating current control parameters. Experimental results from a scaled-down three-phase MMC system under an emulated single-line-to-ground fault are provided to support the theoretical analysis and derived model. In conclusion, this new model provides insight into the impact of different control schemes on the fault characteristics and improves the understanding of the operation of MMC under unbalanced conditions.

  20. Steady-State Modeling of Modular Multilevel Converter Under Unbalanced Grid Conditions

    DOE PAGES

    Shi, Xiaojie M.; Wang, Zhiqiang; Liu, Bo; ...

    2016-11-16

    This paper presents a steady-state model of MMC for the second-order phase voltage ripple prediction under unbalanced conditions, taking the impact of negative-sequence current control into account. From the steady-state model, a circular relationship is found among current and voltage quantities, which can be used to evaluate the magnitudes and initial phase angles of different circulating current components. Moreover, in order to calculate the circulating current in a point-to-point MMC-based HVdc system under unbalanced grid conditions, the derivation of equivalent dc impedance of an MMC is discussed as well. According to the dc impedance model, an MMC inverter can be represented as a series connected R-L-C branch, with its equivalent resistance and capacitance directly related to the circulating current control parameters. Experimental results from a scaled-down three-phase MMC system under an emulated single-line-to-ground fault are provided to support the theoretical analysis and derived model. In conclusion, this new model provides insight into the impact of different control schemes on the fault characteristics and improves the understanding of the operation of MMC under unbalanced conditions.
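
    The series R-L-C representation mentioned above is easy to sketch. The component values below are arbitrary placeholders, not the paper's identified parameters:

```python
import math

def series_rlc_impedance(r, l, c, f_hz):
    """Impedance of a series R-L-C branch at frequency f:
    Z = R + j*(w*L - 1/(w*C)), with w = 2*pi*f."""
    w = 2.0 * math.pi * f_hz
    return complex(r, w * l - 1.0 / (w * c))

# At the resonant frequency f0 = 1/(2*pi*sqrt(L*C)) the inductive and
# capacitive reactances cancel and |Z| reduces to R.
r, l, c = 0.5, 10e-3, 100e-6
f0 = 1.0 / (2.0 * math.pi * math.sqrt(l * c))
z = series_rlc_impedance(r, l, c, f0)
print(z.real, abs(z.imag) < 1e-6)
```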

  1. Tuning algorithms for fractional order internal model controllers for time delay processes

    NASA Astrophysics Data System (ADS)

    Muresan, Cristina I.; Dutta, Abhishek; Dulf, Eva H.; Pinar, Zehra; Maxim, Anca; Ionescu, Clara M.

    2016-03-01

    This paper presents two tuning algorithms for fractional-order internal model control (IMC) controllers for time delay processes. The two tuning algorithms are based on two specific closed-loop control configurations: the IMC control structure and the Smith predictor structure. In the latter, the equivalency between IMC and Smith predictor control structures is used to tune a fractional-order IMC controller as the primary controller of the Smith predictor structure. Fractional-order IMC controllers are designed in both cases in order to enhance the closed-loop performance and robustness of classical integer order IMC controllers. The tuning procedures are exemplified for both single-input-single-output as well as multivariable processes, described by first-order and second-order transfer functions with time delays. Different numerical examples are provided, including a general multivariable time delay process. Integer order IMC controllers are designed in each case, as well as fractional-order IMC controllers. The simulation results show that the proposed fractional-order IMC controller ensures an increased robustness to modelling uncertainties. Experimental results are also provided, for the design of a multivariable fractional-order IMC controller in a Smith predictor structure for a quadruple-tank system.

  2. Estimation of coefficients and boundary parameters in hyperbolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Murphy, K. A.

    1984-01-01

    Semi-discrete Galerkin approximation schemes are considered in connection with inverse problems for the estimation of spatially varying coefficients and boundary condition parameters in second order hyperbolic systems typical of those arising in 1-D surface seismic problems. Spline based algorithms are proposed for which theoretical convergence results along with a representative sample of numerical findings are given.

  3. Benefits estimation framework for automated vehicle operations.

    DOT National Transportation Integrated Search

    2015-08-01

    Automated vehicles have the potential to bring about transformative safety, mobility, energy, and environmental benefits to the surface transportation system. They are also being introduced into a complex transportation system, where second-order imp...

  4. Structural and molecular conformation of myosin in intact muscle fibers by second harmonic generation

    NASA Astrophysics Data System (ADS)

    Nucciotti, V.; Stringari, C.; Sacconi, L.; Vanzi, F.; Linari, M.; Piazzesi, G.; Lombardi, V.; Pavone, F. S.

    2009-02-01

    Recently, the use of Second Harmonic Generation (SHG) for imaging biological samples has been explored with regard to intrinsic SHG in highly ordered biological samples. As shown by fractional extraction of proteins, myosin is the source of SHG signal in skeletal muscle. SHG is highly dependent on symmetries and provides selective information on the structural order and orientation of the emitting proteins and the dynamics of myosin molecules responsible for the mechano-chemical transduction during contraction. We characterise the polarization-dependence of SHG intensity in three different physiological states: resting, rigor and isometric tetanic contraction in a sarcomere length range between 2.0 μm and 4.0 μm. The orientation of motor domains of the myosin molecules is dependent on their physiological states and modulate the SHG signal. We can discriminate the orientation of the emitting dipoles in four different molecular conformations of myosin heads in intact fibers during isometric contraction, in resting and rigor. We estimate the contribution of the myosin motor domain to the total second order bulk susceptibility from its molecular structure and its functional conformation. We demonstrate that SHG is sensitive to the fraction of ordered myosin heads by disrupting the order of myosin heads in rigor with an ATP analog. We estimate the fraction of myosin motors generating the isometric force in the active muscle fiber from the dependence of the SHG modulation on the degree of overlap between actin and myosin filaments during an isometric contraction.

  5. Study of iron-borate materials systems processed in space

    NASA Technical Reports Server (NTRS)

    Neilson, G. F.

    1978-01-01

    It was calculated that an FeBO3-B2O3 glass-ceramic containing only 1 mole% FeBO3 would be equivalent for magnetooptic application to a YIG crystal of equal thickness. An Fe2O3-B2O3 composition containing 2 mole% FeBO3 equivalent (98B) could be converted largely to a dense green, though opaque, FeBO3 glass-ceramic through suitable heat treatments. However, phase separation (and segregation) and Fe3+ reduction could not be entirely avoided with the various procedures that were employed. From light scattering calculations, it was estimated that crystallite sizes of about 100 A would be required to allow 90% light transmission through a 1 cm thick sample. However, the actual FeBO3 crystallite sizes obtained in 98B were of the order of 1 micron or greater.

  6. Estimating continuous floodplain and major river bed topography mixing ordinal contour lines and topographic points

    NASA Astrophysics Data System (ADS)

    Bailly, J. S.; Dartevelle, M.; Delenne, C.; Rousseau, A.

    2017-12-01

    Floodplain and major river bed topography govern many river biophysical processes during floods. Despite the growth of direct topographic measurement from LiDAR on riverine systems, there is still room to develop methods for large (e.g. deltas) or very local (e.g. ponds) riverine systems that take advantage of information coming from simple SAR or optical image processing on the floodplain, resulting from waterbody delineation during flood rise or recession and producing ordered contour lines. The challenge is then to exploit such data in order to estimate continuous topography on the floodplain by combining heterogeneous data: a topographic point dataset and a located but unvalued, ordered contour-line dataset. This article compares two methods designed to estimate continuous floodplain topography by mixing ordinal contour lines and topographic points. For both methods, a first estimation step is to assign an elevation value to each contour line, and a second step is to estimate the continuous field from both the topographic points and the valued contour lines. The first proposed method is stochastic, based on multi-Gaussian random fields and conditional simulation. The second is deterministic, based on radial (thin-plate) spline functions used for approximate bivariate surface construction. Results are first shown and discussed for a set of synthetic case studies with varying topographic point density and topographic smoothness. Results are then shown and discussed for an actual case study in the Montagua laguna, north of Valparaiso, Chile.
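
    The deterministic branch described above relies on radial spline surface fitting. A minimal thin-plate-spline sketch is given below; the exact "thin layer" formulation of the paper may differ, and the sample points and elevations are synthetic.

```python
# Thin-plate spline (TPS) interpolation of scattered topographic points:
# a minimal sketch of radial-spline bivariate surface construction.
import numpy as np

def tps_fit(xy, z, smooth=0.0):
    """Fit a 2-D thin-plate spline z = f(x, y) to scattered points."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)   # TPS kernel r^2 log r
    K += smooth * np.eye(n)
    P = np.hstack([np.ones((n, 1)), xy])             # affine part [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.concatenate([z, np.zeros(3)])
    coefs = np.linalg.solve(A, b)
    return coefs[:n], coefs[n:]

def tps_eval(xy_new, xy, w, a):
    d = np.linalg.norm(xy_new[:, None, :] - xy[None, :, :], axis=2)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)
    return K @ w + a[0] + xy_new @ a[1:]

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, (30, 2))                     # synthetic survey points
elev = np.sin(3 * pts[:, 0]) + 0.5 * pts[:, 1]       # synthetic topography
w, a = tps_fit(pts, elev)
print(np.max(np.abs(tps_eval(pts, pts, w, a) - elev)))  # ~0: exact interpolation
```

    With `smooth=0` the spline interpolates the points exactly; a positive `smooth` trades fidelity for surface smoothness.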

  8. MPN estimation of qPCR target sequence recoveries from whole cell calibrator samples.

    PubMed

    Sivaganesan, Mano; Siefring, Shawn; Varma, Manju; Haugland, Richard A

    2011-12-01

    DNA extracts from enumerated target organism cells (calibrator samples) have been used for estimating Enterococcus cell equivalent densities in surface waters by a comparative cycle threshold (Ct) qPCR analysis method. To compare surface water Enterococcus density estimates from different studies by this approach, either a consistent source of calibrator cells must be used or the estimates must account for any differences in target sequence recoveries from different sources of calibrator cells. In this report we describe two methods for estimating target sequence recoveries from whole cell calibrator samples based on qPCR analyses of their serially diluted DNA extracts and most probable number (MPN) calculation. The first method employed a traditional MPN calculation approach. The second method employed a Bayesian hierarchical statistical modeling approach and a Markov chain Monte Carlo (MCMC) simulation method to account for the uncertainty in these estimates associated with different individual samples of the cell preparations, different dilutions of the DNA extracts and different qPCR analytical runs. The two methods were applied to estimate mean target sequence recoveries per cell from two different lots of a commercially available source of enumerated Enterococcus cell preparations. The mean target sequence recovery estimates (and standard errors) per cell from Lot A and B cell preparations by the Bayesian method were 22.73 (3.4) and 11.76 (2.4), respectively, when the data were adjusted for potential false positive results. Means were similar for the traditional MPN approach which cannot comparably assess uncertainty in the estimates. Cell numbers and estimates of recoverable target sequences in calibrator samples prepared from the two cell sources were also used to estimate cell equivalent and target sequence quantities recovered from surface water samples in a comparative Ct method.
Our results illustrate the utility of the Bayesian method in accounting for uncertainty, the high degree of precision attainable by the MPN approach and the need to account for the differences in target sequence recoveries from different calibrator sample cell sources when they are used in the comparative Ct method. Published by Elsevier B.V.
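
    The traditional MPN step mentioned above is a maximum-likelihood calculation over a dilution series. A bare-bones sketch follows; the tube design and counts are hypothetical, not from this study, and the Bayesian hierarchical version is not reproduced here.

```python
# Most probable number (MPN) by maximum likelihood over a dilution series.
# Each tube at volume v is positive with probability 1 - exp(-lambda * v),
# where lambda is the organism density; we maximize the binomial likelihood
# on a log-spaced grid (binomial coefficients drop out of the argmax).
import numpy as np

def mpn(volumes, n_tubes, n_positive):
    """ML estimate of target density (per unit volume)."""
    v = np.asarray(volumes, float)
    x = np.asarray(n_positive, float)
    n = np.asarray(n_tubes, float)
    lam = np.logspace(-3, 4, 20000)                  # candidate densities
    p = 1.0 - np.exp(-np.outer(lam, v))              # P(tube positive)
    with np.errstate(divide="ignore", invalid="ignore"):
        loglik = (x * np.log(p) + (n - x) * np.log(1.0 - p)).sum(axis=1)
    return lam[np.nanargmax(loglik)]

# Classic 5-tube design with 10, 1 and 0.1 mL aliquots; pattern 5-3-0
est = mpn([10.0, 1.0, 0.1], [5, 5, 5], [5, 3, 0])
print(round(est, 2))   # density per mL
```

    For the 5-3-0 pattern this lands near the standard table value of 79 per 100 mL.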

  9. A dual-input nonlinear system analysis of autonomic modulation of heart rate

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Mullen, T. J.; Cohen, R. J.

    1996-01-01

    Linear analyses of fluctuations in heart rate and other hemodynamic variables have been used to elucidate cardiovascular regulatory mechanisms. The role of nonlinear contributions to fluctuations in hemodynamic variables has not been fully explored. This paper presents a nonlinear system analysis of the effect of fluctuations in instantaneous lung volume (ILV) and arterial blood pressure (ABP) on heart rate (HR) fluctuations. To successfully employ a nonlinear analysis based on the Laguerre expansion technique (LET), we introduce an efficient procedure for broadening the spectral content of the ILV and ABP inputs to the model by adding white noise. Results from computer simulations demonstrate the effectiveness of broadening the spectral band of input signals to obtain consistent and stable kernel estimates with the use of the LET. Without broadening the band of the ILV and ABP inputs, the LET did not provide stable kernel estimates. Moreover, we extend the LET to the case of multiple inputs in order to accommodate the analysis of the combined effect of ILV and ABP on heart rate. Analyses of data based on the second-order Volterra-Wiener model reveal an important contribution of the second-order kernels to the description of the effect of lung volume and arterial blood pressure on heart rate. Furthermore, physiological effects of the autonomic blocking agents propranolol and atropine on changes in the first- and second-order kernels are also discussed.
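
    As a toy illustration of second-order Volterra identification (not the LET itself, and with an entirely made-up system), direct least squares on lagged inputs and their pairwise products works when the memory length is short and the input is broadband, as the abstract recommends:

```python
# Toy second-order Volterra identification by least squares on lagged inputs
# and their pairwise products, for a short-memory simulated system driven by
# a white (broadband) input.
import numpy as np

rng = np.random.default_rng(1)
N, L = 4000, 3                       # samples, memory length (lags)
u = rng.standard_normal(N)           # broadband (white) input

# Simulated system: linear kernel h1 plus one second-order kernel term
h1 = np.array([0.5, -0.3, 0.1])
y = np.convolve(u, h1)[:N] + 0.4 * np.roll(u, 1) * np.roll(u, 2)

# Regression matrix: all lags and all products of lag pairs
lags = np.stack([np.roll(u, k) for k in range(L)], axis=1)
pairs = [(i, j) for i in range(L) for j in range(i, L)]
X = np.hstack([lags] + [(lags[:, i] * lags[:, j])[:, None] for i, j in pairs])
coef, *_ = np.linalg.lstsq(X[L:], y[L:], rcond=None)   # drop wrapped rows

print(np.round(coef[:L], 2))                      # ≈ [0.5, -0.3, 0.1]
print(np.round(coef[L + pairs.index((1, 2))], 2))  # ≈ 0.4, the 2nd-order term
```

    The number of regressors grows quadratically with memory length, which is exactly why the LET's compact Laguerre basis is needed for realistic memory lengths.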

  10. Numerical scheme approximating solution and parameters in a beam equation

    NASA Astrophysics Data System (ADS)

    Ferdinand, Robert R.

    2003-12-01

    We present a mathematical model which describes vibration in a metallic beam about its equilibrium position. This model takes the form of a nonlinear second-order (in time) and fourth-order (in space) partial differential equation with boundary and initial conditions. A finite-element Galerkin approximation scheme is used to estimate model solution. Infinite-dimensional model parameters are then estimated numerically using an inverse method procedure which involves the minimization of a least-squares cost functional. Numerical results are presented and future work to be done is discussed.
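
    The inverse step above minimizes a least-squares cost between model output and data. A minimal sketch of that idea follows, with a damped scalar oscillator standing in for the beam PDE (the model, parameter value and data are all illustrative):

```python
# Minimal inverse-method sketch: recover a model parameter by minimizing a
# least-squares cost between simulated and "observed" vibration data.
import numpy as np

t = np.linspace(0.0, 5.0, 400)

def model(omega, zeta=0.05):
    """Displacement of a lightly damped oscillator released from rest."""
    wd = omega * np.sqrt(1 - zeta**2)
    return np.exp(-zeta * omega * t) * np.cos(wd * t)

obs = model(6.0)                            # synthetic observations, omega = 6

def cost(omega):
    return np.sum((model(omega) - obs) ** 2)

# Coarse grid search over the least-squares cost functional
grid = np.linspace(1.0, 10.0, 2000)
omega_hat = grid[np.argmin([cost(w) for w in grid])]
print(round(omega_hat, 2))                  # ≈ 6.0
```

    In the paper the unknown is infinite-dimensional and the forward model is a Galerkin finite-element solve, but the estimation loop has the same shape: simulate, compare, minimize.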

  11. Assessing Knowledge of Mathematical Equivalence: A Construct-Modeling Approach

    ERIC Educational Resources Information Center

    Rittle-Johnson, Bethany; Matthews, Percival G.; Taylor, Roger S.; McEldoon, Katherine L.

    2011-01-01

    Knowledge of mathematical equivalence, the principle that 2 sides of an equation represent the same value, is a foundational concept in algebra, and this knowledge develops throughout elementary and middle school. Using a construct-modeling approach, we developed an assessment of equivalence knowledge. Second through sixth graders (N = 175)…

  12. Radiotherapy for stage I seminoma of the testis: Organ equivalent dose to partially in-field structures and second cancer risk estimates on the basis of a mechanistic, bell-shaped, and plateau model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazonakis, Michalis, E-mail: mazonak@med.uoc.gr; Damilakis, John; Varveris, Charalambos

    Purpose: The aim of the current study was to (a) calculate the organ equivalent dose (OED) and (b) estimate the associated second cancer risk to partially in-field critical structures from adjuvant radiotherapy for stage I seminoma of the testis on the basis of three different nonlinear risk models. Methods: Three-dimensional plans were created for twelve patients who underwent a treatment planning computed tomography of the abdomen. The plans for irradiation of seminoma consisted of para-aortic anteroposterior and posteroanterior fields giving 20 Gy to the target site with 6 MV photons. The OED of stomach, colon, liver, pancreas, and kidneys, which were partially included in the treatment volume, was calculated using differential dose–volume histograms. The mechanistic, bell-shaped, and plateau models were employed for these calculations provided that organ-specific parameters were available for the subsequent assessment of the excess absolute risk (EAR) for second cancer development. The estimated organ-specific lifetime risks were compared with the respective nominal intrinsic probabilities for cancer induction. Results: The mean OED, which was calculated from the patients’ treatment plans, varied from 0.54 to 6.61 Gy depending upon the partially in-field organ of interest and the model used for dosimetric calculations. The difference between the OED of liver derived from the mechanistic model and those from the bell-shaped and plateau models was less than 1.8%. An even smaller deviation of 1.0% was observed for colon. For the remaining organs of interest, the differences between the OED values obtained by the examined models varied from 8.6% to 50.0%. The EAR for stomach, colon, liver, pancreas, and kidney cancer induction at an age of 70 yr because of treatment of a typical 39-yr-old individual was up to 4.24, 11.39, 0.91, 3.04, and 0.14 per 10,000 person-years, respectively.
    Patient irradiation was found to elevate the lifetime intrinsic risks by 8.3%–63.0% depending upon the organ of interest and the model employed for risk analysis. Conclusions: Radiotherapy for stage I seminoma of the testis may result in an excess risk for the appearance of secondary malignancies in partially in-field organs. The organ- and model-dependent second cancer risk assessments of this study may be of value for patient counseling and follow-up.
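
    An OED is a volume-weighted mean of a risk-equivalent dose (RED) over the dose-volume histogram bins. The sketch below uses one common plateau-type form, RED(D) = (1 − exp(−δD))/δ; the bin doses, volumes and δ are illustrative, not the paper's values.

```python
# Organ equivalent dose (OED) from a differential dose-volume histogram,
# using a plateau-type risk-equivalent dose RED(D) = (1 - exp(-delta*D))/delta.
import numpy as np

def oed_plateau(doses_gy, volumes, delta=0.1):
    """Volume-weighted mean risk-equivalent dose over DVH bins."""
    d = np.asarray(doses_gy, float)
    v = np.asarray(volumes, float)
    red = (1.0 - np.exp(-delta * d)) / delta
    return np.sum(v * red) / np.sum(v)

# Hypothetical partially in-field organ: most volume at low dose
doses = [0.5, 2.0, 5.0, 15.0]       # bin doses, Gy
vols = [0.6, 0.2, 0.15, 0.05]       # fractional volumes
print(round(oed_plateau(doses, vols), 2))   # OED in Gy
```

    A sanity check on the definition: for a uniform dose D across the organ, the OED reduces to RED(D) itself.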

  13. Exploiting the dynamics of S-phase tracers in developing brain: interkinetic nuclear migration for cells entering versus leaving the S-phase

    NASA Technical Reports Server (NTRS)

    Hayes, N. L.; Nowakowski, R. S.

    2000-01-01

    Two S-phase markers for in vivo studies of cell proliferation in the developing central nervous system, tritiated thymidine ((3)H-TdR) and bromodeoxyuridine (BUdR), were compared using double-labeling techniques in the developing mouse cortex at embryonic day 14 (E14). The labeling efficiencies and detectability of the two tracers were approximately equivalent, and there was no evidence of significant tracer interactions that depend on order of administration. For both tracers, the loading time needed to label an S-phase cell to detectability is estimated at <0.2 h shortly after the injection of the label, but, as the concentration of the label falls, it increases to approximately 0.65 h after about 30 min. Thereafter, cells that enter the S-phase continue to become detectably labeled for approximately 5-6 h. The approximate equivalence of these two tracers was exploited to observe directly the numbers and positions of nuclei entering (labeled with the second tracer only) and leaving (labeled with the first tracer only) the S-phase. As expected, the numbers of nuclei entering and leaving the S-phase both increased as the interval between the two injections lengthened. Also, nuclei leaving the S-phase rapidly move towards the ventricular surface during G2, but, unexpectedly, the distribution of the entering nuclei does not differ significantly from the distribution of the nuclei in the S-phase. This indicates that: (1) the extent and rate of abventricular nuclear movement during G1 is variable, such that not all nuclei traverse the entire width of the ventricular zone, and (2) interkinetic nuclear movements are minimal during S-phase. Copyright 2000 S. Karger AG, Basel.

  14. Evaluation of the removal of antiestrogens and antiandrogens via ozone and granular activated carbon using bioassay and fluorescent spectroscopy.

    PubMed

    Ma, Dehua; Chen, Lujun; Wu, Yuchao; Liu, Rui

    2016-06-01

    Antiestrogens and antiandrogens are relatively rarely studied endocrine disrupting chemicals which can be found in un/treated wastewaters. Antiestrogens and antiandrogens in the wastewater treatment effluents could contribute to sexual disruption of organisms. In this study, to assess the removal of non-specific antiestrogens and antiandrogens by advanced treatment processes, ozonation and adsorption to granular activated carbon (GAC), the biological activities and excitation emission matrix fluorescence spectroscopy of wastewater were evaluated. As the applied ozone dose increased to 12 mg/L, the antiestrogenic activity dramatically decreased to 3.2 μg 4-hydroxytamoxifen equivalent (4HEQ)/L, with a removal efficiency of 84.8%, while the antiandrogenic activity was 23.1 μg flutamide equivalent (FEQ)/L, with a removal efficiency of 75.5%. The removal of antiestrogenic/antiandrogenic activity has high correlation with the removal of fulvic acid-like materials and humic acid-like organics, suggesting that they can be used as surrogates for antiestrogenic/antiandrogenic activity during ozonation. The adsorption kinetics of antiestrogenic activity and antiandrogenic activity were well described by pseudo-second-order kinetics models. The estimated equilibrium concentration of antiestrogenic activity is 7.9 μg 4HEQ/L with an effective removal efficiency of 70.5%, while the equilibrium concentration of antiandrogenic activity is 33.7 μg FEQ/L with a removal efficiency of 67.0%. Biological activity evaluation of wastewater effluents is an attractive way to assess the removal of endocrine disrupting chemicals by different treatment processes. Fluorescence spectroscopy can be used as a surrogate measure of bioassays during ozonation. Copyright © 2016 Elsevier Ltd. All rights reserved.
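
    The pseudo-second-order kinetics mentioned above are commonly fitted via the linearization t/q_t = 1/(k·qe²) + t/qe. A sketch with synthetic uptake data (the paper's activity units are only the inspiration; all numbers are made up):

```python
# Pseudo-second-order adsorption kinetics: q_t = k*qe^2*t / (1 + k*qe*t),
# fitted through the standard linearization t/q_t = 1/(k*qe^2) + t/qe.
import numpy as np

k_true, qe_true = 0.05, 30.0          # rate constant, equilibrium uptake
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0])
q = k_true * qe_true**2 * t / (1.0 + k_true * qe_true * t)  # synthetic data

# Linear regression of t/q on t: slope = 1/qe, intercept = 1/(k*qe^2)
slope, intercept = np.polyfit(t, t / q, 1)
qe_fit = 1.0 / slope
k_fit = 1.0 / (intercept * qe_fit**2)
print(round(qe_fit, 2), round(k_fit, 3))   # recovers 30.0 and 0.05
```

    The fitted qe plays the role of the estimated equilibrium activity removal quoted in the abstract.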

  15. Liouvillian integrability of gravitating static isothermal fluid spheres

    NASA Astrophysics Data System (ADS)

    Iacono, Roberto; Llibre, Jaume

    2014-10-01

    We examine the integrability properties of the Einstein field equations for static, spherically symmetric fluid spheres, complemented with an isothermal equation of state, ρ = np. In this case, Einstein's equations can be reduced to a nonlinear, autonomous second order ordinary differential equation (ODE) for m/R (m is the mass inside the radius R) that has been solved analytically only for n = -1 and n = -3, yielding the cosmological solutions by De Sitter and Einstein, respectively, and for n = -5, case for which the solution can be derived from the De Sitter's one using a symmetry of Einstein's equations. The solutions for these three cases are of Liouvillian type, since they can be expressed in terms of elementary functions. Here, we address the question of whether Liouvillian solutions can be obtained for other values of n. To do so, we transform the second order equation into an equivalent autonomous Lotka-Volterra quadratic polynomial differential system in {R}^2, and characterize the Liouvillian integrability of this system using Darboux theory. We find that the Lotka-Volterra system possesses Liouvillian first integrals for n = -1, -3, -5, which descend from the existence of invariant algebraic curves of degree one, and for n = -6, a new solvable case, associated to an invariant algebraic curve of higher degree (second). For any other value of n, eventual first integrals of the Lotka-Volterra system, and consequently of the second order ODE for the mass function must be non-Liouvillian. This makes the existence of other solutions of the isothermal fluid sphere problem with a Liouvillian metric quite unlikely.

  16. Space radiation dosimetry in low-Earth orbit and beyond.

    PubMed

    Benton, E R; Benton, E V

    2001-09-01

    Space radiation dosimetry presents one of the greatest challenges in the discipline of radiation protection. This is a result of both the highly complex nature of the radiation fields encountered in low-Earth orbit (LEO) and interplanetary space and of the constraints imposed by spaceflight on instrument design. This paper reviews the sources and composition of the space radiation environment in LEO as well as beyond the Earth's magnetosphere. A review of much of the dosimetric data that have been gathered over the last four decades of human space flight is presented. The different factors affecting the radiation exposures of astronauts and cosmonauts aboard the International Space Station (ISS) are emphasized. Measurements made aboard the Mir Orbital Station have highlighted the importance of both secondary particle production within the structure of spacecraft and the effect of shielding on both crew dose and dose equivalent. Roughly half the dose on ISS is expected to come from trapped protons and half from galactic cosmic rays (GCRs). The dearth of neutron measurements aboard LEO spacecraft and the difficulty inherent in making such measurements have led to large uncertainties in estimates of the neutron contribution to total dose equivalent. Except for a limited number of measurements made aboard the Apollo lunar missions, no crew dosimetry has been conducted beyond the Earth's magnetosphere. At the present time we are forced to rely on model-based estimates of crew dose and dose equivalent when planning for interplanetary missions, such as a mission to Mars. While space crews in LEO are unlikely to exceed the exposure limits recommended by such groups as the NCRP, dose equivalents of the same order as the recommended limits are likely over the course of a human mission to Mars. © 2001 Elsevier Science B.V. All rights reserved.

  17. Modeling an alkaline electrolysis cell through reduced-order and loss-estimate approaches

    NASA Astrophysics Data System (ADS)

    Milewski, Jaroslaw; Guandalini, Giulio; Campanari, Stefano

    2014-12-01

    The paper presents two approaches to the mathematical modeling of an Alkaline Electrolyzer Cell. The presented models were compared and validated against available experimental results taken from a laboratory test and against literature data. The first modeling approach is based on the analysis of estimated losses due to the different phenomena occurring inside the electrolytic cell, and requires careful calibration of several specific parameters (e.g. those related to the electrochemical behavior of the electrodes) some of which could be hard to define. An alternative approach is based on a reduced-order equivalent circuit, resulting in only two fitting parameters (electrodes specific resistance and parasitic losses) and calculation of the internal electric resistance of the electrolyte. Both models yield satisfactory results with an average error limited below 3% vs. the considered experimental data and show the capability to describe with sufficient accuracy the different operating conditions of the electrolyzer; the reduced-order model could be preferred thanks to its simplicity for implementation within plant simulation tools dealing with complex systems, such as electrolyzers coupled with storage facilities and intermittent renewable energy sources.
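
    A minimal sketch of the reduced-order, equivalent-circuit idea: in the ohmic-dominated regime the polarization curve is V = V_rev + r·j, and the specific resistance is recovered by linear regression. The paper's second fitted parameter (parasitic losses) is omitted, and all numbers are illustrative.

```python
# Reduced-order equivalent-circuit sketch of an alkaline electrolysis cell:
# fit the specific resistance r from synthetic polarization data, then
# compute a voltage efficiency at an operating current density.
import numpy as np

V_REV = 1.23                       # reversible voltage, V (approximate)

def cell_voltage(j, r):
    """Linearized polarization curve, ohmic-dominated regime."""
    return V_REV + r * j           # j in A/cm^2, r in ohm*cm^2

# Synthetic polarization data from an assumed resistance, then re-fit
j = np.linspace(0.05, 0.4, 10)
v_meas = cell_voltage(j, r=0.35)
r_fit = np.polyfit(j, v_meas - V_REV, 1)[0]    # slope of the ohmic model

eff = V_REV / cell_voltage(0.3, r_fit)         # voltage efficiency at 0.3 A/cm^2
print(round(r_fit, 3), round(eff, 3))
```

    The appeal noted in the abstract holds even at this scale: a model with so few parameters drops easily into a larger plant-simulation loop.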

  18. A second-order Budyko-type parameterization of land-surface hydrology

    NASA Technical Reports Server (NTRS)

    Andreou, S. A.; Eagleson, P. S.

    1982-01-01

    A simple, second-order parameterization of the water fluxes at a land surface for use as the appropriate boundary condition in general circulation models of the global atmosphere was developed. The derived parameterization incorporates the high nonlinearities in the relationship between the near-surface soil moisture and the evaporation, runoff and percolation fluxes. Based on the one-dimensional statistical-dynamic derivation of the annual water balance, it makes the transition to short-term prediction of the moisture fluxes through a Taylor expansion around the average annual soil moisture. The suggested parameterization is compared with other existing techniques and available measurements. A thermodynamic coupling is applied in order to obtain estimates of the surface ground temperature.
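
    The core trick is the second-order Taylor expansion of a nonlinear flux about the mean soil-moisture state. A numeric sketch with a made-up flux shape (E ∝ s⁴ is a stand-in, not the paper's evaporation function):

```python
# Second-order Taylor expansion of a nonlinear soil-moisture flux about the
# mean state, compared against the exact flux at a perturbed state.
import numpy as np

def flux(s):
    return s**4                      # strongly nonlinear stand-in flux

s_bar, ds = 0.5, 0.1                 # mean moisture and a perturbation

# Finite-difference first and second derivatives at the mean state
h = 1e-5
d1 = (flux(s_bar + h) - flux(s_bar - h)) / (2 * h)
d2 = (flux(s_bar + h) - 2 * flux(s_bar) + flux(s_bar - h)) / h**2

taylor2 = flux(s_bar) + d1 * ds + 0.5 * d2 * ds**2
exact = flux(s_bar + ds)
print(round(taylor2, 5), round(exact, 5))   # 0.1275 vs 0.1296
```

    The second-order term carries a substantial share of the correction here, which is the motivation for going beyond a first-order (linearized) flux parameterization.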

  19. The E-Step of the MGROUP EM Algorithm. Program Statistics Research Technical Report No. 93-37.

    ERIC Educational Resources Information Center

    Thomas, Neal

    Mislevy (1984, 1985) introduced an EM algorithm for estimating the parameters of a latent distribution model that is used extensively by the National Assessment of Educational Progress. Second order asymptotic corrections are derived and applied along with more common first order asymptotic corrections to approximate the expectations required by…

  20. Identification of transmissivity fields using a Bayesian strategy and perturbative approach

    NASA Astrophysics Data System (ADS)

    Zanini, Andrea; Tanda, Maria Giovanna; Woodbury, Allan D.

    2017-10-01

    The paper deals with the crucial problem of groundwater parameter estimation, which is the basis for efficient modeling and reclamation activities. A hierarchical Bayesian approach is developed: it uses Akaike's Bayesian Information Criterion in order to estimate the hyperparameters (related to the covariance model chosen) and to quantify the unknown noise variance. The transmissivity identification proceeds in two steps: the first, called empirical Bayesian interpolation, uses Y* (Y = lnT) observations to interpolate Y values on a specified grid; the second, called empirical Bayesian update, improves the previous Y estimate through the addition of hydraulic head observations. The relationship between the head and lnT has been linearized through a perturbative solution of the flow equation. In order to test the proposed approach, synthetic aquifers from the literature have been considered. The aquifers in question contain a variety of boundary conditions (both Dirichlet and Neumann type) and scales of heterogeneity (σY2 = 1.0 and σY2 = 5.3). The estimated transmissivity fields were compared to the true ones. The joint use of Y* and head measurements improves the estimation of Y for both degrees of heterogeneity. Even if the variance of the strongly heterogeneous transmissivity field can be considered high for the application of the perturbative approach, the results show the same order of approximation as the nonlinear methods proposed in the literature. The procedure allows computing the posterior probability distribution of the target quantities and quantifying the uncertainty in the model prediction. Bayesian updating has advantages relative to both Monte Carlo (MC) and non-MC approaches: as with MC methods, it allows computing the posterior probability distribution of the target quantities directly, and as with non-MC methods it has computational times on the order of seconds.
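
    The interpolation step is, at heart, Gaussian-process conditioning of Y = lnT on point observations. A 1-D sketch is below; the covariance model and hyperparameters are simply assumed here, not estimated via ABIC as in the paper, and the observations are hypothetical.

```python
# Empirical-Bayes-style interpolation sketch: Gaussian-process conditioning
# of log-transmissivity Y = ln T on point observations along a transect.
import numpy as np

def gauss_cov(x1, x2, sill=1.0, length=2.0):
    """Gaussian covariance model between two sets of 1-D locations."""
    d = np.abs(x1[:, None] - x2[None, :])
    return sill * np.exp(-(d / length) ** 2)

x_obs = np.array([0.0, 3.0, 7.0, 10.0])
y_obs = np.array([-1.0, 0.5, 1.2, 0.3])        # hypothetical ln T values
noise = 1e-4                                    # observation noise variance

x_grid = np.linspace(0.0, 10.0, 101)
K = gauss_cov(x_obs, x_obs) + noise * np.eye(len(x_obs))
k_star = gauss_cov(x_grid, x_obs)
y_post = k_star @ np.linalg.solve(K, y_obs)     # posterior mean on the grid

# Posterior mean passes (almost) through the observations
print(round(y_post[0], 2), round(y_post[30], 2))   # ≈ -1.0 and 0.5
```

    The paper's second step then updates this field with head data through the linearized (perturbative) flow equation, which this sketch does not attempt.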

  1. The spectral sensitivity of the human short-wavelength sensitive cones derived from thresholds and color matches.

    PubMed

    Stockman, A; Sharpe, L T; Fach, C

    1999-08-01

    We used two methods to estimate short-wave (S) cone spectral sensitivity. Firstly, we measured S-cone thresholds centrally and peripherally in five trichromats, and in three blue-cone monochromats, who lack functioning middle-wave (M) and long-wave (L) cones. Secondly, we analyzed standard color-matching data. Both methods yielded equivalent results, on the basis of which we propose new S-cone spectral sensitivity functions. At short and middle-wavelengths, our measurements are consistent with the color matching data of Stiles and Burch (1955, Optica Acta, 2, 168-181; 1959, Optica Acta, 6, 1-26), and other psychophysically measured functions, such as pi 3 (Stiles, 1953, Coloquio sobre problemas opticos de la vision, 1, 65-103). At longer wavelengths, S-cone sensitivity has previously been over-estimated.

  2. Surface changes of enamel after brushing with charcoal toothpaste

    NASA Astrophysics Data System (ADS)

    Pertiwi, U. I.; Eriwati, Y. K.; Irawan, B.

    2017-08-01

    The aim of this study was to determine the surface roughness changes of tooth enamel after brushing with charcoal toothpaste. Thirty specimens were brushed using distilled water (the first group), Strong® Formula toothpaste (the second group), and Charcoal® Formula toothpaste for four minutes and 40 seconds (equivalent to one month) and for 14 minutes (equivalent to three months), using a soft toothbrush under a 150 g load. The roughness was measured using a surface roughness tester, and the results were analyzed with a repeated-measures ANOVA and a one-way ANOVA. The surface roughness of tooth enamel was significantly different (p<0.05) after brushing for an equivalent of one month and an equivalent of three months. Using toothpaste containing charcoal can increase the surface roughness of tooth enamel.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alvarez, Enrique; Anero, Jesus; Gonzalez-Martin, Sergio, E-mail: enrique.alvarez@uam.es, E-mail: jesusanero@gmail.com, E-mail: sergio.gonzalez.martin@uam.es

    We consider the most general action for gravity which is quadratic in curvature. In this case the first order and second order formalisms are not equivalent. This framework is a good candidate for a unitary and renormalizable theory of the gravitational field; in particular, there are no propagators falling off faster than 1/p^2. The drawback is of course that the parameter space of the theory is too big, so that in many cases it will be far away from a theory of gravity alone. In order to analyze this issue, the interaction between external sources was examined in some detail. We find that this interaction is conveyed mainly by propagation of the three-index connection field. At any rate, the theory as it stands is in the conformally invariant phase; only when Weyl invariance is broken through the coupling to matter can an Einstein-Hilbert term (and its corresponding Planck mass scale) be generated by quantum corrections.

  4. Comparison of a field-based test to estimate functional threshold power and power output at lactate threshold.

    PubMed

    Gavin, Timothy P; Van Meter, Jessica B; Brophy, Patricia M; Dubis, Gabriel S; Potts, Katlin N; Hickner, Robert C

    2012-02-01

    It has been proposed that field-based tests (FT) used to estimate functional threshold power (FTP) result in power output (PO) equivalent to PO at lactate threshold (LT). However, anecdotal evidence from regional cycling teams tested for LT in our laboratory suggested that PO at LT underestimated FTP. It was hypothesized that estimated FTP is not equivalent to PO at LT. The LT and estimated FTP were measured in 7 trained male competitive cyclists (VO2max = 65.3 ± 1.6 ml O2·kg(-1)·min(-1)). The FTP was estimated from an 8-minute FT and compared with PO at LT using two methods: LT(Δ1), a 1 mmol·L(-1) or greater rise in blood lactate in response to an increase in workload, and LT(4.0), a blood lactate of 4.0 mmol·L(-1). The estimated FTP was equivalent to PO at LT(4.0) and greater than PO at LT(Δ1). VO2max explained 93% of the variance in individual PO during the 8-minute FT. When the 8-minute FT PO was expressed relative to maximal PO from the VO2max test (individual exercise performance), VO2max explained 64% of the variance in individual exercise performance. The PO at LT was not related to 8-minute FT PO. In conclusion, FTP estimated from an 8-minute FT is equivalent to PO at LT if LT(4.0) is used but is not equivalent for all methods of LT determination including LT(Δ1).

  5. Molecular extended thermodynamics of rarefied polyatomic gases and wave velocities for increasing number of moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arima, Takashi, E-mail: tks@stat.nitech.ac.jp; Mentrelli, Andrea, E-mail: andrea.mentrelli@unibo.it; Ruggeri, Tommaso, E-mail: tommaso.ruggeri@unibo.it

    Molecular extended thermodynamics of rarefied polyatomic gases is characterized by two hierarchies of equations for moments of a suitable distribution function in which the internal degrees of freedom of a molecule are taken into account. On the basis of physical relevance, the truncation orders of the two hierarchies are proven not to be independent of each other, and the closure procedures based on the maximum entropy principle (MEP) and on the entropy principle (EP) are proven to be equivalent. The characteristic velocities of the emerging hyperbolic system of differential equations are compared to those obtained for monatomic gases, and the lower bound estimate for the maximum equilibrium characteristic velocity established for monatomic gases (characterized by only one hierarchy of moments with truncation order N) by Boillat and Ruggeri (1997), λ(N)^{E,max}/c0 ≥ √(6/5 (N − 1/2)) with c0 = √(5/3 kT/m), is proven to hold also for rarefied polyatomic gases, independently of the degrees of freedom of a molecule. Highlights: • Molecular extended thermodynamics of rarefied polyatomic gases is studied. • The relation between two hierarchies of equations for moments is derived. • The equivalence of the maximum entropy principle and the entropy principle is proven. • The characteristic velocities are compared to those of monatomic gases. • The lower bound of the maximum characteristic velocity is estimated.
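
    The quoted Boillat-Ruggeri bound is easy to evaluate; a few values of the truncation order N show how the dimensionless bound on the maximum characteristic velocity grows without limit as moments are added:

```python
# Lower bound for the maximum equilibrium characteristic velocity,
# lambda_max / c0 >= sqrt(6/5 * (N - 1/2)), for truncation orders N.
import math

def lower_bound(N):
    return math.sqrt(6.0 / 5.0 * (N - 0.5))

for N in (2, 3, 5, 10):
    print(N, round(lower_bound(N), 3))
# N = 3 gives exactly sqrt(3); the bound is unbounded in N
```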

  6. Time series of low-degree geopotential coefficients from SLR data: estimation of Earth's figure axis and LOD variations

    NASA Astrophysics Data System (ADS)

    Luceri, V.; Sciarretta, C.; Bianco, G.

    2012-12-01

    The redistribution of the mass within the earth system induces changes in the Earth's gravity field. In particular, the second-degree geopotential coefficients reflect the behaviour of the Earth's inertia tensor of order 2, describing the main mass variations of our planet impacting the EOPs. Thanks to the long record of accurate and continuous laser ranging observations to Lageos and other geodetic satellites, SLR is the only current space technique capable of monitoring the long-term variability of the Earth's gravity field with adequate accuracy. Time series of low-degree geopotential coefficients are estimated with our analysis of SLR data (spanning more than 25 years) from several geodetic satellites in order to detect trends and periodic variations related to tidal effects and atmospheric/oceanic mass variations. This study is focused on the variations of the second-degree Stokes coefficients related to the Earth's principal figure axis and oblateness: C21, S21 and C20. On the other hand, surface mass load variations induce excitations in the EOPs that are proportional to the same second-degree coefficients. The time series of direct estimates of low-degree geopotential coefficients and those derived from the EOP excitation functions are compared and presented together with their time and frequency analysis.

  7. Efficient Analysis of Complex Structures

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.

    2000-01-01

    The various accomplishments achieved during this project are: (1) A survey of Neural Network (NN) applications using the MATLAB NN Toolbox in structural engineering, especially on equivalent continuum models (Appendix A). (2) Application of NN and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B). (3) Development of an equivalent plate-model analysis method (EPA) for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars and ribs. Calculation of a range of test cases and comparison with measurements or FEA results (Appendix C). (4) Basic work on using second-order sensitivities to simulate wing modal response, discussion of sensitivity evaluation approaches, and some results (Appendix D). (5) Establishing a general methodology for simulating the modal responses by direct application of NN and by sensitivity techniques, in a design space composed of a number of design points. Comparison is made through examples using these two methods (Appendix E). (6) Establishing a general methodology for efficient analysis of complex wing structures by indirect application of NN: the NN-aided Equivalent Plate Analysis. Training of the Neural Networks for this purpose in several cases of design spaces, applicable to actual design of complex wings (Appendix F).

  8. Assessment of doses caused by electrons in thin layers of tissue-equivalent materials, using MCNP.

    PubMed

    Heide, Bernd

    2013-10-01

    Absorbed doses caused by electron irradiation were calculated with the Monte Carlo N-Particle transport code (MCNP) for thin layers of tissue-equivalent materials. The layers were so thin that the calculation of energy deposition was on the border of the scope of MCNP. Therefore, in this article the application of three different methods of calculating energy deposition is discussed. This was done by means of two scenarios: in the first one, electrons were emitted from the centre of a sphere of water and also recorded in that sphere; in the second, an irradiation with the PTB Secondary Standard BSS2 was modelled, where electrons were emitted from an (90)Sr/(90)Y area source and recorded inside a cuboid phantom made of tissue-equivalent material. The speed and accuracy of the different methods were of interest. While a significant difference in accuracy was visible for one method in the first scenario, the difference in accuracy of the three methods was insignificant for the second one. Considerable differences in speed were found for both scenarios. In order to demonstrate the need for calculating the dose in thin small zones, a third scenario was constructed and simulated as well. The third scenario was nearly equal to the second one, but in addition a pike of lead was assumed to be inside the phantom. A dose enhancement (caused by the pike of lead) of ∼113 % was recorded for a thin hollow cylinder at a depth of 0.007 cm, which corresponds to the reference depth of the basal skin layer. Dose enhancements between 68 and 88 % were found for a slab with a radius of 0.09 cm at all depths. All dose enhancements were hardly noticeable for a slab with a cross-sectional area of 1 cm(2), which is usually applied in operational radiation protection.

  9. Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.

    PubMed

    Zhang, Xiangjun; Wu, Xiaolin

    2008-06-01

    The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.
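    The windowed parameter-estimation step described above can be sketched as an ordinary least-squares fit of autoregressive coefficients over a local window. This is a simplified illustration under assumed conventions (the offset set, window handling, and function name are ours), not the authors' full soft-decision algorithm:

```python
import numpy as np

def fit_2d_ar_params(window, offsets):
    """Least-squares fit of a 2-D autoregressive model in a local window:
    x[i, j] ~ sum_k a_k * x[i + di_k, j + dj_k].
    `offsets` is a list of (di, dj) neighbor displacements."""
    h, w = window.shape
    m = max(max(abs(di), abs(dj)) for di, dj in offsets)
    rows, targets = [], []
    for i in range(m, h - m):
        for j in range(m, w - m):
            # Each interior pixel contributes one linear equation.
            rows.append([window[i + di, j + dj] for di, dj in offsets])
            targets.append(window[i, j])
    A, b = np.array(rows), np.array(targets)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params
```

    In the paper's setting the fitted coefficients then define the pixel structure that the soft-decision step enforces jointly on a block of missing pixels; here only the fitting step is shown.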

  10. Temporal analysis of the October 1989 proton flare using computerized anatomical models

    NASA Technical Reports Server (NTRS)

    Simonsen, L. C.; Cucinotta, F. A.; Atwell, W.; Nealy, J. E.

    1993-01-01

    The GOES-7 time history data of hourly averaged integral proton fluxes at various particle kinetic energies are analyzed for the solar proton event that occurred between October 19 and 29, 1989. By analyzing the time history data, the dose rates which may vary over many orders of magnitude in the early phases of the flare can be estimated as well as the cumulative dose as a function of time. Basic transport calculations are coupled with detailed body organ thickness distributions from computerized anatomical models to estimate dose rates and cumulative doses to 20 critical body organs. For a 5-cm-thick water shield, cumulative skin, eye, and blood-forming-organ dose equivalents of 1.27, 1.23, and 0.41 Sv, respectively, are estimated. These results are approximately 40-50 percent less than the widely used 0- and 5-cm slab dose estimates. The risk of cancer incidence and mortality are also estimated for astronauts protected by various water shield thicknesses.

  11. Simulating the Snow Water Equivalent and its changing pattern over Nepal

    NASA Astrophysics Data System (ADS)

    Niroula, S.; Joseph, J.; Ghosh, S.

    2016-12-01

    Snow fall in the Himalayan region is one of the primary sources of fresh water and accounts for around 10% of the total precipitation of Nepal. Snow water is difficult to estimate at global and regional scales because of the strong spatial variability associated with rugged topography. This study is primarily focused on simulation of Snow Water Equivalent (SWE) using a macroscale hydrologic model, the Variable Infiltration Capacity (VIC) model. As the whole of Nepal, including its Himalayas, lies within the catchment of the Ganga River in India, contributing at least 40% of the annual discharge of the Ganges, the model was run over the entire watershed, which also covers parts of Tibet and Bangladesh. Meteorological inputs for 29 years (1979-2007) are drawn from the ERA-INTERIM and APHRODITE datasets at a horizontal resolution of 0.25 degrees. The analysis was performed to study the temporal variability of SWE in the Himalayan region of Nepal. The model was calibrated against observed stream flows of the tributaries of the Gandaki River in Nepal, which ultimately feeds the river Ganga. Further, the simulated SWE is used to estimate stream flow in this river basin. Since Nepal has greater snow cover accumulation in the monsoon season than in winter at high altitudes, seasonal fluctuations in SWE have a known effect on the stream flows. Statistical analysis indicated that the model provided fair estimates of SWE and stream flow. Stream flows are sensitive to changes in snow water, which can negatively impact power generation in a country with huge hydroelectric potential. In addition, our results on simulated SWE in the second largest snow-fed catchment of the country will be helpful for reservoir management, flood forecasting and other water resource management issues. Keywords: Hydrology, Snow Water Equivalent, Variable Infiltration Capacity, Gandaki River Basin, Stream Flow

  12. Fish assemblage production estimates in Appalachian streams across a latitudinal and temperature gradient

    Treesearch

    Bonnie J.E. Myers; C. Andrew Dolloff; Jackson R. Webster; Keith H. Nislow; Brandon Fair; Andrew L. Rypel

    2017-01-01

    Production of biomass is central to the ecology and sustainability of fish assemblages. The goal of this study was to empirically estimate and compare fish assemblage production, production-to-biomass (P/B) ratios and species composition for 25 second- to third-order streams spanning the Appalachian Mountains (from Vermont to North Carolina) that vary in their...

  13. Non-Data Aided Doppler Shift Estimation for Underwater Acoustic Communication

    DTIC Science & Technology

    2014-05-01

    in underwater acoustic wireless sensor networks. We analyzed the data collected from our experiments using non-data aided (blind) techniques such as... investigated different methods for blind Doppler shift estimation and compensation for a single carrier in underwater acoustic wireless sensor... distributed underwater sensor networks. Detailed experimental and simulated results based on second order cyclostationary features of the received signals

  14. Genetic analysis of partial egg production records in Japanese quail using random regression models.

    PubMed

    Abou Khadiga, G; Mahmoud, B Y F; Farahat, G S; Emam, A M; El-Full, E A

    2017-08-01

    The main objectives of this study were to detect the most appropriate random regression model (RRM) to fit the data of monthly egg production in 2 lines (selected and control) of Japanese quail and to test the consistency of different criteria of model choice. Data from 1,200 female Japanese quails for the first 5 months of egg production from 4 consecutive generations of an egg line selected for egg production in the first month (EP1) were analyzed. Eight RRMs with different orders of Legendre polynomials were compared to determine the proper model for analysis. All criteria of model choice suggested that the adequate model included the second-order Legendre polynomials for fixed effects, and the third-order for additive genetic effects and permanent environmental effects. Predictive ability of the best model was the highest among all models (ρ = 0.987). According to the best model fitted to the data, estimates of heritability were relatively low to moderate (0.10 to 0.17) and showed a descending pattern from the first to the fifth month of production. A similar pattern was observed for permanent environmental effects, with greater estimates in the first (0.36) and second (0.23) months of production than the heritability estimates. Genetic correlations between separate production periods were higher (0.18 to 0.93) than their phenotypic counterparts (0.15 to 0.87). The superiority of the selected line over the control was observed through significant (P < 0.05) linear contrast estimates. Significant (P < 0.05) estimates of the covariate effect (age at sexual maturity) showed a decreasing pattern, with greater impact on egg production at earlier ages (first and second months) than later ones. A methodology based on random regression animal models can be recommended for genetic evaluation of egg production in Japanese quail. © 2017 Poultry Science Association Inc.
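    The Legendre-polynomial covariates used in such random regression models can be generated directly with NumPy. The standardization of age to [-1, 1] shown here is the usual RRM convention and an assumption on our part, since the paper's exact scaling is not given:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(age, age_min, age_max, order):
    """Values of the Legendre polynomials P_0 .. P_order at `age`,
    with age standardized to [-1, 1] as is conventional in RRMs."""
    t = 2.0 * (age - age_min) / (age_max - age_min) - 1.0
    # legval with coefficient vector e_k evaluates the k-th polynomial.
    return [legendre.legval(t, [0] * k + [1]) for k in range(order + 1)]
```

    For a third-order model (as selected for the genetic effects here), each animal-month record would carry the four covariates P_0 .. P_3 evaluated at the standardized month of production.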

  15. Biological equivalence between LDR and PDR in cervical cancer: multifactor analysis using the linear-quadratic model.

    PubMed

    Couto, José Guilherme; Bravo, Isabel; Pirraco, Rui

    2011-09-01

    The purpose of this work was the biological comparison between Low Dose Rate (LDR) and Pulsed Dose Rate (PDR) in cervical cancer regarding the discontinuation of the afterloading system used for the LDR treatments at our Institution since December 2009. In the first phase we studied the influence of the pulse dose and the pulse time in the biological equivalence between LDR and PDR treatments using the Linear Quadratic Model (LQM). In the second phase, the equivalent dose in 2 Gy/fraction (EQD(2)) for the tumor, rectum and bladder in treatments performed with both techniques was evaluated and statistically compared. All evaluated patients had stage IIB cervical cancer and were treated with External Beam Radiotherapy (EBRT) plus two Brachytherapy (BT) applications. Data were collected from 48 patients (26 patients treated with LDR and 22 patients with PDR). In the analyses of the influence of PDR parameters in the biological equivalence between LDR and PDR treatments (Phase 1), it was calculated that if the pulse dose in PDR was kept equal to the LDR dose rate, a small therapeutic loss was expected. If the pulse dose was decreased, the therapeutic window became larger, but a correction in the prescribed dose was necessary. In PDR schemes with 1 hour interval between pulses, the pulse time did not influence significantly the equivalent dose. In the comparison between the groups treated with LDR and PDR (Phase 2) we concluded that they were not equivalent, because in the PDR group the total EQD(2) for the tumor, rectum and bladder was smaller than in the LDR group; the LQM estimated that a correction in the prescribed dose of 6% to 10% was necessary to avoid therapeutic loss. A correction in the prescribed dose was necessary; this correction should be achieved by calculating the PDR dose equivalent to the desired LDR total dose.
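    The EQD2 comparison rests on the standard linear-quadratic conversion, which can be sketched in one function. The formula is the textbook LQM expression; the example α/β values are the commonly assumed defaults (~10 Gy for tumor, ~3 Gy for late-reacting tissue such as rectum and bladder), not necessarily the values used in the paper:

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions from the linear-quadratic model:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta),
    where D is total dose (Gy) and d is dose per fraction (Gy)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Example: 30 Gy delivered in 5 Gy fractions to a late-reacting
# tissue (alpha/beta ~ 3 Gy) is biologically "hotter" than 30 Gy in
# 2 Gy fractions.
late_tissue_eqd2 = eqd2(30.0, 5.0, 3.0)
```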

  16. Biological equivalence between LDR and PDR in cervical cancer: multifactor analysis using the linear-quadratic model

    PubMed Central

    Bravo, Isabel; Pirraco, Rui

    2011-01-01

    Purpose The purpose of this work was the biological comparison between Low Dose Rate (LDR) and Pulsed Dose Rate (PDR) in cervical cancer regarding the discontinuation of the afterloading system used for the LDR treatments at our Institution since December 2009. Material and methods In the first phase we studied the influence of the pulse dose and the pulse time in the biological equivalence between LDR and PDR treatments using the Linear Quadratic Model (LQM). In the second phase, the equivalent dose in 2 Gy/fraction (EQD2) for the tumor, rectum and bladder in treatments performed with both techniques was evaluated and statistically compared. All evaluated patients had stage IIB cervical cancer and were treated with External Beam Radiotherapy (EBRT) plus two Brachytherapy (BT) applications. Data were collected from 48 patients (26 patients treated with LDR and 22 patients with PDR). Results In the analyses of the influence of PDR parameters in the biological equivalence between LDR and PDR treatments (Phase 1), it was calculated that if the pulse dose in PDR was kept equal to the LDR dose rate, a small therapeutic loss was expected. If the pulse dose was decreased, the therapeutic window became larger, but a correction in the prescribed dose was necessary. In PDR schemes with 1 hour interval between pulses, the pulse time did not influence significantly the equivalent dose. In the comparison between the groups treated with LDR and PDR (Phase 2) we concluded that they were not equivalent, because in the PDR group the total EQD2 for the tumor, rectum and bladder was smaller than in the LDR group; the LQM estimated that a correction in the prescribed dose of 6% to 10% was necessary to avoid therapeutic loss. Conclusions A correction in the prescribed dose was necessary; this correction should be achieved by calculating the PDR dose equivalent to the desired LDR total dose. PMID:23346123

  17. Attenuation analysis of real GPR wavelets: The equivalent amplitude spectrum (EAS)

    NASA Astrophysics Data System (ADS)

    Economou, Nikos; Kritikakis, George

    2016-03-01

    Absorption of a Ground Penetrating Radar (GPR) pulse is a frequency dependent attenuation mechanism which causes a spectral shift of the dominant frequency of GPR data. Both the energy variation of the GPR amplitude spectrum and the spectral shift have been used for the estimation of the Quality Factor (Q*) and subsequently the characterization of subsurface material properties. The variation of the amplitude spectrum energy has been studied by the Spectral Ratio (SR) method, and the frequency shift by the Frequency Centroid Shift (FCS) or the Frequency Peak Shift (FPS) methods. The FPS method is more automatic but less robust. This work aims to increase the robustness of the FPS method by fitting a part of the amplitude spectrum of GPR data with Ricker, Gaussian, Sigmoid-Gaussian or Ricker-Gaussian functions. These functions fit different parts of the spectrum of a GPR reference wavelet, and the Equivalent Amplitude Spectrum (EAS) is selected as the one reproducing the Q* values used in forward Q* modeling analysis. Then, only the peak frequencies and the time differences between the reference wavelet and the subsequent reflected wavelets are used to estimate Q*. Once the EAS is estimated, it is used for Q* evaluation over the whole GPR section, under the assumption that the selected reference wavelet is representative. De-phasing and constant phase shift, applied to obtain symmetrical wavelets, proved useful for reliable picking of the horizons. Synthetic, experimental and real GPR data were examined in order to demonstrate the effectiveness of the proposed methodology.
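    For contrast with the FPS approach, the classical spectral-ratio (SR) estimate of Q* mentioned above fits the linear model ln(A2/A1) = const − πfΔt/Q* against frequency. The sketch below implements that standard relation; it is a generic illustration (function name and inputs are ours), not the paper's EAS-based procedure:

```python
import numpy as np

def q_spectral_ratio(freqs, amp_ref, amp_refl, dt):
    """Estimate Q* from the log spectral ratio of a reflected wavelet
    to a reference wavelet separated by travel-time difference dt:
    ln(A_refl/A_ref) = const - (pi * f * dt) / Q*.
    Q* follows from the slope of a straight-line fit versus frequency."""
    log_ratio = np.log(amp_refl / amp_ref)
    slope, _ = np.polyfit(freqs, log_ratio, 1)
    return -np.pi * dt / slope
```

    The SR method uses the full spectral band, which is why it is considered more robust than peak-shift methods but harder to automate when the spectra are noisy.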

  18. Constraints on the rupture process of the 17 August 1999 Izmit earthquake

    NASA Astrophysics Data System (ADS)

    Bouin, M.-P.; Clévédé, E.; Bukchin, B.; Mostinski, A.; Patau, G.

    2003-04-01

    Kinematic and static models of the 17 August 1999 Izmit earthquake published in the literature are quite different from one another. In order to extract the characteristic features of this event, we determine integral estimates of its geometry, source duration and rupture propagation. Those estimates are obtained from the stress glut moments of total degree 2 by inverting long-period surface wave (LPSW) amplitude spectra (Bukchin, 1995). We draw comparisons with the integral estimates deduced from kinematic models obtained by inversion of strong motion data sets and/or teleseismic body waves (Bouchon et al., 2002; Delouis et al., 2000; Yagi and Kikuchi, 2000; Sekiguchi and Iwata, 2002). While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strong unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. Using a simple equivalent kinematic model, we reproduce the integral estimates of the rupture process by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the LPSW solution strongly suggests that: - There was significant moment released on the eastern segment of the activated fault system during the Izmit earthquake; - The rupture velocity decreases on this segment. We discuss how these results help explain the scatter among the source models published for this earthquake.

  19. Dynamic RSA: Examining parasympathetic regulatory dynamics via vector-autoregressive modeling of time-varying RSA and heart period.

    PubMed

    Fisher, Aaron J; Reeves, Jonathan W; Chi, Cyrus

    2016-07-01

    Expanding on recently published methods, the current study presents an approach to estimating the dynamic, regulatory effect of the parasympathetic nervous system on heart period on a moment-to-moment basis. We estimated second-to-second variation in respiratory sinus arrhythmia (RSA) in order to estimate the contemporaneous and time-lagged relationships among RSA, interbeat interval (IBI), and respiration rate via vector autoregression. Moreover, we modeled these relationships at lags of 1 s to 10 s, in order to evaluate the optimal latency for estimating dynamic RSA effects. The IBI (t) on RSA (t-n) regression parameter was extracted from individual models as an operationalization of the regulatory effect of RSA on IBI-referred to as dynamic RSA (dRSA). Dynamic RSA positively correlated with standard averages of heart rate and negatively correlated with standard averages of RSA. We propose that dRSA reflects the active downregulation of heart period by the parasympathetic nervous system and thus represents a novel metric that provides incremental validity in the measurement of autonomic cardiac control-specifically, a method by which parasympathetic regulatory effects can be measured in process. © 2016 Society for Psychophysiological Research.
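    The core of the dRSA operationalization above is a lagged regression coefficient from a vector-autoregressive model. The sketch below shows a stripped-down, univariate version of that step (regressing IBI(t) on RSA(t−n) plus an autoregressive term); the function name, lag structure, and omission of the respiration covariate are simplifying assumptions, not the authors' full model:

```python
import numpy as np

def var_lag_coefficient(y, x, lag):
    """OLS coefficient of x(t - lag) in the regression
    y(t) ~ x(t - lag) + y(t - 1) + intercept.
    In the dRSA setting, y would be IBI and x would be RSA."""
    start = max(lag, 1)
    yt = y[start:]
    X = np.column_stack([
        x[start - lag: len(x) - lag],  # lagged predictor x(t - lag)
        y[start - 1: len(y) - 1],      # autoregressive term y(t - 1)
        np.ones_like(yt),              # intercept
    ])
    beta, *_ = np.linalg.lstsq(X, yt, rcond=None)
    return beta[0]
```

    In the study, this coefficient was estimated per participant at lags of 1 s to 10 s, and the lag giving the strongest regulatory signal was taken as the dRSA latency.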

  20. Measurements of the neutron dose equivalent for various radiation qualities, treatment machines and delivery techniques in radiation therapy

    NASA Astrophysics Data System (ADS)

    Hälg, R. A.; Besserer, J.; Boschung, M.; Mayer, S.; Lomax, A. J.; Schneider, U.

    2014-05-01

    In radiation therapy, high energy photon and proton beams cause the production of secondary neutrons. This leads to an unwanted dose contribution, which can be considerable for tissues outside of the target volume regarding the long term health of cancer patients. Due to the high biological effectiveness of neutrons in regards to cancer induction, small neutron doses can be important. This study quantified the neutron doses for different radiation therapy modalities. Most of the reports in the literature used neutron dose measurements free in air or on the surface of phantoms to estimate the amount of neutron dose to the patient. In this study, dose measurements were performed in terms of neutron dose equivalent inside an anthropomorphic phantom. The neutron dose equivalent was determined using track etch detectors as a function of the distance to the isocenter, as well as for radiation sensitive organs. The dose distributions were compared with respect to treatment techniques (3D-conformal, volumetric modulated arc therapy and intensity-modulated radiation therapy for photons; spot scanning and passive scattering for protons), therapy machines (Varian, Elekta and Siemens linear accelerators) and radiation quality (photons and protons). The neutron dose equivalent varied between 0.002 and 3 mSv per treatment gray over all measurements. Only small differences were found when comparing treatment techniques, but substantial differences were observed between the linear accelerator models. The neutron dose equivalent for proton therapy was higher than for photons in general and in particular for double-scattered protons. The overall neutron dose equivalent measured in this study was an order of magnitude lower than the stray dose of a treatment using 6 MV photons, suggesting that the contribution of the secondary neutron dose equivalent to the integral dose of a radiotherapy patient is small.

  1. Measurements of the neutron dose equivalent for various radiation qualities, treatment machines and delivery techniques in radiation therapy.

    PubMed

    Hälg, R A; Besserer, J; Boschung, M; Mayer, S; Lomax, A J; Schneider, U

    2014-05-21

    In radiation therapy, high energy photon and proton beams cause the production of secondary neutrons. This leads to an unwanted dose contribution, which can be considerable for tissues outside of the target volume regarding the long term health of cancer patients. Due to the high biological effectiveness of neutrons in regards to cancer induction, small neutron doses can be important. This study quantified the neutron doses for different radiation therapy modalities. Most of the reports in the literature used neutron dose measurements free in air or on the surface of phantoms to estimate the amount of neutron dose to the patient. In this study, dose measurements were performed in terms of neutron dose equivalent inside an anthropomorphic phantom. The neutron dose equivalent was determined using track etch detectors as a function of the distance to the isocenter, as well as for radiation sensitive organs. The dose distributions were compared with respect to treatment techniques (3D-conformal, volumetric modulated arc therapy and intensity-modulated radiation therapy for photons; spot scanning and passive scattering for protons), therapy machines (Varian, Elekta and Siemens linear accelerators) and radiation quality (photons and protons). The neutron dose equivalent varied between 0.002 and 3 mSv per treatment gray over all measurements. Only small differences were found when comparing treatment techniques, but substantial differences were observed between the linear accelerator models. The neutron dose equivalent for proton therapy was higher than for photons in general and in particular for double-scattered protons. The overall neutron dose equivalent measured in this study was an order of magnitude lower than the stray dose of a treatment using 6 MV photons, suggesting that the contribution of the secondary neutron dose equivalent to the integral dose of a radiotherapy patient is small.

  2. The influence of planetary-wave transience on horizontal air motions in the stratosphere

    NASA Technical Reports Server (NTRS)

    Salby, Murry L.

    1992-01-01

    The influence of transience of the planetary-wave field on the horizontal air motions and tracer distributions in the stratosphere was investigated in equivalent barotropic calculations. Two classes of transience are considered: a monochromatic traveling wave, representative of discrete components such as the 5- and 16-day waves, and a second-order stochastic process representative of broadband variability. The response to each of these forms of unsteady forcing is investigated in terms of the characteristic time scale of the transience. Results are presented, and the implications these results have on stratospheric behavior are discussed.

  3. Two-mode elliptical-core weighted fiber sensors for vibration analysis

    NASA Technical Reports Server (NTRS)

    Vengsarkar, Ashish M.; Murphy, Kent A.; Fogg, Brian R.; Miller, William V.; Greene, Jonathan A.; Claus, Richard O.

    1992-01-01

    Two-mode, elliptical-core optical fibers are demonstrated in weighted, distributed and selective vibration-mode-filtering applications. We show how appropriate placement of optical fibers on a vibrating structure can lead to vibration mode filtering. Selective vibration-mode suppression on the order of 10 dB has been obtained using tapered two-mode, circular-core fibers with tapering functions that match the second derivatives of the modes of vibration to be enhanced. We also demonstrate the use of chirped, two-mode gratings in fibers as spatial modal sensors that are equivalents of shaped piezoelectric sensors.

  4. Gauge fixing in higher-derivative gravity

    NASA Astrophysics Data System (ADS)

    Bartoli, A.; Julve, J.; Sánchez, E. J.

    1999-07-01

    Linearized 4-derivative gravity with a general gauge-fixing term is considered. By a Legendre transform and a suitable diagonalization procedure it is cast into a second-order equivalent form where the nature of the physical degrees of freedom, the gauge ghosts, the Weyl ghosts and the intriguing `third ghosts', characteristic of higher-derivative theories, is made explicit. The symmetries of the theory and the structure of the compensating Faddeev-Popov ghost sector exhibit non-trivial peculiarities. The unitarity-breaking negative-norm Weyl ghosts, already present in the diff-invariant theory, are out of the reach of the ghost cancellation BRST mechanism.

  5. A solver for General Unilateral Polynomial Matrix Equation with Second-Order Matrices Over Prime Finite Fields

    NASA Astrophysics Data System (ADS)

    Burtyka, Filipp

    2018-03-01

    The paper first considers, from a practical point of view, the problem of finding solvents for arbitrary unilateral polynomial matrix equations with second-order matrices over prime finite fields: we implement a solver for this problem. The solver's algorithm has two steps: the first finds solvents having Jordan Normal Form (JNF); the second finds solvents among the remaining matrices. The first step reduces to finding the roots of ordinary polynomials over finite fields; the second is essentially an exhaustive search. The first step's algorithms make essential use of the theory of polynomial matrices. Using our software implementation, we estimate the practical duration of the computations, show for example that one cannot construct a unilateral matrix polynomial over a finite field with an arbitrary predefined number of solvents, and answer some questions of theoretical interest.
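    The exhaustive-search step is feasible for 2×2 matrices over a small prime field, since there are only p^4 candidates. The sketch below brute-forces solvents of a monic unilateral equation X^n + A_{n-1}X^{n-1} + … + A_0 = 0 over GF(p); it is an illustrative search (names and conventions are ours), not the authors' implementation:

```python
import itertools
import numpy as np

def solvents_2x2(coeffs, p):
    """All 2x2 solvents X over GF(p) of the monic unilateral equation
    X^n + A_{n-1} X^{n-1} + ... + A_1 X + A_0 = 0,
    where coeffs = [A_0, A_1, ..., A_{n-1}] are 2x2 integer arrays mod p.
    Plain brute force over all p**4 candidate matrices."""
    sols = []
    identity = np.eye(2, dtype=int)
    for entries in itertools.product(range(p), repeat=4):
        X = np.array(entries, dtype=int).reshape(2, 2)
        acc = np.zeros((2, 2), dtype=int)
        power = identity                  # running X^k, starting at X^0
        for A in coeffs:
            acc = (acc + A @ power) % p   # accumulate A_k * X^k
            power = (power @ X) % p
        acc = (acc + power) % p           # add the monic leading term X^n
        if not acc.any():
            sols.append(X)
    return sols
```

    For example, the equation X^2 − I = 0 over GF(3) (coeffs = [−I mod 3, 0]) has exactly 14 solvents: ±I plus the 12 conjugates of diag(1, −1).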

  6. Beam-plasma instability in inhomogeneous magnetic field and second order cyclotron resonance effects

    NASA Astrophysics Data System (ADS)

    Trakhtengerts, V. Y.; Hobara, Y.; Demekhov, A. G.; Hayakawa, M.

    1999-03-01

    A new analytical approach to cyclotron instability of electron beams with sharp gradients in velocity space (step-like distribution function) is developed taking into account magnetic field inhomogeneity and nonstationary behavior of the electron beam velocity. Under these conditions, the conventional hydrodynamic instability of such beams is drastically modified and second order resonance effects become important. It is shown that the optimal conditions for the instability occur for nonstationary quasimonochromatic wavelets whose frequency changes in time. The theory developed permits one to estimate the wave amplification and spatio-temporal characteristics of these wavelets.

  7. Equivalence principle implications of modified gravity models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hui, Lam; Nicolis, Alberto; Stubbs, Christopher W.

    2009-11-15

    Theories that attempt to explain the observed cosmic acceleration by modifying general relativity all introduce a new scalar degree of freedom that is active on large scales, but is screened on small scales to match experiments. We demonstrate that if such screening occurs via the chameleon mechanism, such as in f(R) theory, it is possible to have order unity violation of the equivalence principle, despite the absence of explicit violation in the microscopic action. Namely, extended objects such as galaxies or constituents thereof do not all fall at the same rate. The chameleon mechanism can screen the scalar charge for large objects but not for small ones (large/small is defined by the depth of the gravitational potential and is controlled by the scalar coupling). This leads to order one fluctuations in the ratio of the inertial mass to gravitational mass. We provide derivations in both Einstein and Jordan frames. In Jordan frame, it is no longer true that all objects move on geodesics; only unscreened ones, such as test particles, do. In contrast, if the scalar screening occurs via strong coupling, such as in the Dvali-Gabadadze-Porrati braneworld model, equivalence principle violation occurs at a much reduced level. We propose several observational tests of the chameleon mechanism: 1. small galaxies should accelerate faster than large galaxies, even in environments where dynamical friction is negligible; 2. voids defined by small galaxies would appear larger compared to standard expectations; 3. stars and diffuse gas in small galaxies should have different velocities, even if they are on the same orbits; 4. lensing and dynamical mass estimates should agree for large galaxies but disagree for small ones. We discuss possible pitfalls in some of these tests. The cleanest is the third one, where the mass estimate from HI rotational velocity could exceed that from stars by 30% or more. To avoid blanket screening of all objects, the most promising place to look is in voids.

  8. Intensity-intensity correlations as a probe of interferences under conditions of noninterference in the intensity

    NASA Astrophysics Data System (ADS)

    Agarwal, G. S.; von Zanthier, J.; Skornia, C.; Walther, H.

    2002-05-01

The different behavior of first-order interferences and second-order correlations is investigated for the case of two coherently excited atoms. For intensity measurements this problem is in many respects equivalent to Young's double-slit experiment and was investigated in an experiment by Eichmann et al. [Phys. Rev. Lett. 70, 2359 (1993)] and later analyzed in detail by Itano et al. [Phys. Rev. A 57, 4176 (1998)]. Our results show that in cases where the intensity interferences disappear the intensity-intensity correlations can display an interference pattern with a visibility of up to 100%. The contrast depends on the polarization selected for the detection and is independent of the strength of the driving field. The nonclassical nature of the calculated intensity-intensity correlations is also discussed.

  9. Geochemistry of Precambrian carbonates. V - Late Paleoproterozoic seawater

    NASA Technical Reports Server (NTRS)

    Veizer, Jan; Plumb, K. A.; Clayton, R. N.; Hinton, R. W.; Grotzinger, J. P.

    1992-01-01

A study of mineralogy, chemistry, and isotopic composition of the Coronation Supergroup (about 1.9 Ga, NWT), Canada, and the McArthur Group (about 1.65 Ga, NT), Australia, is reported in order to obtain better constrained data for the first- and second-order variations in the isotopic composition of late Paleoproterozoic (1.9 +/- 0.2 Ga) seawater. Petrologically, both carbonate sequences are mostly dolostones. The McArthur population contains more abundant textural features that attest to the former presence of sulfates and halite, and the facies investigated represent ancient equivalents of modern evaporitic sabkhas and lacustrine playa lakes. It is suggested that dolomitization was an early diagenetic event and that the O-18 depletion of the Archean to late Paleoproterozoic carbonates is not an artifact of postdepositional alteration.

  10. Magnetocaloric effect in itinerant magnets around a metamagnetic transition

    NASA Astrophysics Data System (ADS)

    Bernhard, B. H.; Steinbach, J.

    2017-11-01

The phase diagram and magnetocaloric effect in itinerant magnets are explored within the Stoner theory, which yields a reasonable description of the metamagnetic transition observed in various compounds. We obtain the phase diagram as a function of temperature and magnetic field, identifying the region of metastability around the first-order ferromagnetic transition. The impact on the magnetocaloric properties has been verified through the calculation of the isothermal entropy change ΔS, which is computed from two alternative methods based on specific heat or magnetization data. From the direct comparison between the two methods, we observe that the second one is strongly dependent on the process, and we explain under what conditions they become equivalent by using the Clausius-Clapeyron equation. We also discuss the effect of metastable states on the curves of ΔS. The evolution of the transition from first to second order is in good agreement with the phenomenological approach based on the Landau expansion. The results can be applied to different magnetic compounds such as RCo2, MnAs1-xSbx, and La(FexSi1-x)13.

  11. Benefits Estimation Model for Automated Vehicle Operations: Phase 2 Final Report

    DOT National Transportation Integrated Search

    2018-01-01

    Automated vehicles have the potential to bring about transformative safety, mobility, energy, and environmental benefits to the surface transportation system. They are also being introduced into a complex transportation system, where second-order imp...

  12. Estimation of groundwater recharge parameters by time series analysis

    USGS Publications Warehouse

    Naff, Richard L.; Gutjahr, Allan L.

    1983-01-01

A model is proposed that relates water level fluctuations in a Dupuit aquifer to effective precipitation at the top of the unsaturated zone. Effective precipitation, defined herein as that portion of precipitation which becomes recharge, is related to precipitation measured in a nearby gage by a two-parameter function. A second-order stationary assumption is used to connect the spectra of effective precipitation and water level fluctuations. Measured precipitation is assumed to be Gaussian, in order to develop a transfer function that relates the spectra of measured and effective precipitation. A nonlinear least squares technique is proposed for estimating parameters of the effective-precipitation function. Although sensitivity analyses indicate difficulties that may be encountered in the estimation procedure, the methods developed did yield convergent estimates for two case studies.
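The estimation step can be sketched with a hypothetical two-parameter effective-precipitation function (the paper's exact functional form and spectral machinery are not reproduced here): a fraction `a` of the precipitation exceeding a threshold `b` becomes recharge, and (a, b) are recovered by least squares from synthetic data.

```python
import numpy as np

def effective_precip(p, a, b):
    # hypothetical form: fraction a of precipitation above threshold b
    return a * np.clip(p - b, 0.0, None)

rng = np.random.default_rng(0)
p = rng.gamma(2.0, 10.0, size=500)              # synthetic gage record
true_a, true_b = 0.35, 5.0
r = effective_precip(p, true_a, true_b) + rng.normal(0.0, 0.2, p.size)

# brute-force least squares over a parameter grid (a simple stand-in for
# the iterative nonlinear least squares used in the paper)
a_grid = np.linspace(0.05, 1.0, 96)
b_grid = np.linspace(0.0, 15.0, 151)
sse = np.array([[np.sum((r - effective_precip(p, a, b)) ** 2)
                 for b in b_grid] for a in a_grid])
ia, ib = np.unravel_index(np.argmin(sse), sse.shape)
print(a_grid[ia], b_grid[ib])   # close to (0.35, 5.0)
```

The sensitivity issues the abstract notes would show up here as a flat or multi-modal SSE surface near the minimum.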

  13. Fast and accurate predictions of covalent bonds in chemical space.

    PubMed

    Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole

    2016-05-07

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2 (+). 
Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSiP, HSiAs, HGeN, HGeP, HGeAs); and (v) H2 (+) single bond with 1 electron.

  14. Quantum criticality of a spin-1 XY model with easy-plane single-ion anisotropy via a two-time Green function approach avoiding the Anderson-Callen decoupling

    NASA Astrophysics Data System (ADS)

    Mercaldo, M. T.; Rabuffo, I.; De Cesare, L.; Caramico D'Auria, A.

    2016-04-01

In this work we study the quantum phase transition, the phase diagram and the quantum criticality induced by the easy-plane single-ion anisotropy in a d-dimensional quantum spin-1 XY model in absence of an external longitudinal magnetic field. We employ the two-time Green function method by avoiding the Anderson-Callen decoupling of spin operators at the same sites which is of doubtful accuracy. Following the original Devlin procedure we treat exactly the higher order single-site anisotropy Green functions and use Tyablikov-like decouplings for the exchange higher order ones. The related self-consistent equations appear suitable for an analysis of the thermodynamic properties at and around second order phase transition points. Remarkably, the equivalence between the microscopic spin model and the continuous O(2) -vector model with transverse-Ising model (TIM)-like dynamics, characterized by a dynamic critical exponent z=1, emerges at low temperatures close to the quantum critical point with the single-ion anisotropy parameter D as the non-thermal control parameter. The zero-temperature critical anisotropy parameter Dc is obtained for dimensionalities d > 1 as a function of the microscopic exchange coupling parameter and the related numerical data for different lattices are found to be in reasonable agreement with those obtained by means of alternative analytical and numerical methods. For d > 2, and in particular for d=3, we determine the finite-temperature critical line ending in the quantum critical point and the related TIM-like shift exponent, consistently with recent renormalization group predictions. The main crossover lines between different asymptotic regimes around the quantum critical point are also estimated providing a global phase diagram and a quantum criticality very similar to the conventional ones.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y.; Bank, J.; Wan, Y. H.

The total inertia stored in all rotating masses that are connected to power systems, such as synchronous generators and induction motors, is an essential force that keeps the system stable after disturbances. To ensure bulk power system stability, there is a need to estimate the equivalent inertia available from a renewable generation plant. An equivalent inertia constant analogous to that of conventional rotating machines can be used to provide a readily understandable metric. This paper explores a method that utilizes synchrophasor measurements to estimate the equivalent inertia that a wind plant provides to the system.
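An illustrative swing-equation back-solve (not the paper's full synchrophasor method): for a per-unit power imbalance dP, the frequency obeys df/dt = f0·dP/(2H), so an equivalent inertia constant H can be recovered from a measured rate of change of frequency. All numbers below are invented for the example.

```python
# Hypothetical measurement: a 5% generation deficit and the resulting
# rate of change of frequency (ROCOF) seen by a PMU.
f0 = 60.0        # nominal frequency, Hz
dP = -0.05       # per-unit power imbalance
rocof = -0.375   # measured df/dt, Hz/s (hypothetical)

# swing equation df/dt = f0 * dP / (2 * H), solved for H
H = f0 * dP / (2.0 * rocof)   # equivalent inertia constant, seconds
print(H)                      # -> 4.0 s
```

In practice the ROCOF would be estimated from synchrophasor frequency streams during a disturbance rather than taken as a single clean number.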

  16. Prostate cancer risk prediction based on complete prostate cancer family history

    PubMed Central

    Albright, Frederick; Stephenson, Robert A; Agarwal, Neeraj; Teerlink, Craig C; Lowrance, William T; Farnham, James M; Albright, Lisa A Cannon

    2015-01-01

    Background Prostate cancer (PC) relative risks (RRs) are typically estimated based on status of close relatives or presence of any affected relatives. This study provides RR estimates using extensive and specific PC family history. Methods A retrospective population-based study was undertaken to estimate RRs for PC based on complete family history of PC. A total of 635,443 males, all with ancestral genealogy data, were analyzed. RRs for PC were determined based upon PC rates estimated from males with no PC family history (without PC in first, second, or third degree relatives). RRs were determined for a variety of constellations, for example, number of first through third degree relatives; named (grandfather, father, uncle, cousins, brothers); maternal, paternal relationships, and age of onset. Results In the 635,443 males analyzed, 18,105 had PC. First-degree RRs ranged from 2.46 (=1 first-degree relative affected, CI = 2.39–2.53) to 7.65 (=4 first-degree relatives affected, CI = 6.28–9.23). Second-degree RRs for probands with 0 affected first-degree relatives ranged from 1.51 (≥1 second-degree relative affected, CI = 1.47–1.56) to 3.09 (≥5 second-degree relatives affected, CI = 2.32–4.03). Third-degree RRs with 0 affected first- and 0 affected second-degree relatives ranged from 1.15 (≥1 affected third-degree relative, CI = 1.12–1.19) to 1.50 (≥5 affected third-degree relatives, CI = 1.35–1.66). RRs based on age at diagnosis were higher for earlier age at diagnoses; for example, RR = 5.54 for ≥1 first-degree relative diagnosed before age 50 years (CI = 1.12–1.19) and RR = 1.78 for >1 second-degree relative diagnosed before age 50 years, CI = 1.33, 2.33. RRs for equivalent maternal versus paternal family history were not significantly different. 
Conclusions A more complete PC family history using close and distant relatives and age at diagnosis results in a wider range of estimates of individual RR that are potentially more accurate than RRs estimated from summary family history. The presence of PC in second- and even third-degree relatives contributes significantly to risk. Maternal family history is just as significant as paternal family history. PC RRs based on a proband's complete constellation of affected relatives will allow patients and care providers to make more informed screening, monitoring, and treatment decisions. Prostate 75:390–398, 2015. © 2014 Wiley Periodicals, Inc. PMID:25408531

  17. The Equivalence between (AB)[dagger] = B[dagger]A[dagger] and Other Mixed-Type Reverse-Order Laws

    ERIC Educational Resources Information Center

    Tian, Yongge

    2006-01-01

    The standard reverse-order law for the Moore-Penrose inverse of a matrix product is (AB)[dagger] = B[dagger]A[dagger]. The purpose of this article is to give a set of equivalences of this reverse-order law and other mixed-type reverse-order laws for the Moore-Penrose inverse of matrix products.
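A quick numerical companion to the statement above: the reverse-order law (AB)† = B†A† holds for some matrix pairs (e.g. both invertible) but fails in general, which is why characterizing its equivalent conditions is of interest. The matrices below are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Case 1: invertible square A, B -- here pinv coincides with inv and
# (AB)^(-1) = B^(-1) A^(-1), so the law holds.
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))
holds_square = np.allclose(np.linalg.pinv(A @ B),
                           np.linalg.pinv(B) @ np.linalg.pinv(A))

# Case 2: a rank-deficient counterexample where the law fails.
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[1.0], [1.0]])
holds_general = np.allclose(np.linalg.pinv(A @ B),
                            np.linalg.pinv(B) @ np.linalg.pinv(A))

print(holds_square, holds_general)   # True False
```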

  18. Probability based remaining capacity estimation using data-driven and neural network model

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai

    2016-05-01

Since large numbers of lithium-ion batteries are composed in pack and the batteries are complex electrochemical devices, their monitoring and safety concerns are key issues for the applications of battery technology. An accurate estimation of battery remaining capacity is crucial for optimization of the vehicle control, preventing battery from over-charging and over-discharging and ensuring the safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent circuit model is employed in combination with an electrochemical model to obtain more accurate voltage prediction results. For the SOE estimation, a sliding window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operation current profiles are performed on the commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
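A minimal first-order Thevenin (single RC pair) sketch of the terminal-voltage prediction such equivalent-circuit models provide; the paper's model is higher order and coupled to an electrochemical model, and every parameter below is invented.

```python
import numpy as np

R0, R1, C1 = 0.01, 0.02, 2000.0   # ohmic resistance and RC pair (hypothetical)
dt = 1.0                          # time step, s
Q = 2.0 * 3600                    # capacity in coulombs (a 2 Ah cell)

def ocv(soc):
    # toy open-circuit-voltage curve (real cells use a lookup table)
    return 3.0 + 1.2 * soc

soc, u1 = 1.0, 0.0                # start full and relaxed
tau = R1 * C1

voltages = []
for i_load in np.full(600, 1.0):  # 1 A discharge for 10 minutes
    soc -= i_load * dt / Q        # coulomb counting
    # exact zero-order-hold update of the RC branch voltage
    u1 = u1 * np.exp(-dt / tau) + R1 * (1 - np.exp(-dt / tau)) * i_load
    voltages.append(ocv(soc) - u1 - R0 * i_load)

print(voltages[-1], soc)
```

In an adaptive estimator this forward model would sit inside a filter that corrects `soc` from the measured terminal voltage; only the open-loop prediction is shown here.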

  19. Music Therapy: A Career in Music Therapy

    MedlinePlus

    ... combination with doctoral study in related areas. Degree Equivalent Training in Music Therapy P ersonal qualifications include ... the student completes only the coursework necessary for equivalent music therapy training without necessarily earning a second ...

  20. A Comparison between Oceanographic Parameters and Seafloor Pressures; Measured, Theoretical and Modelled, and Terrestrial Seismic Data

    NASA Astrophysics Data System (ADS)

    Donne, Sarah; Bean, Christopher; Craig, David; Dias, Frederic; Christodoulides, Paul

    2016-04-01

Microseisms are continuous seismic vibrations which propagate mainly as surface Rayleigh and Love waves. They are generated by the Earth's oceans and there are two main types: primary and secondary microseisms. Primary microseisms are generated through the interaction of travelling surface gravity ocean waves with the seafloor in waters that are shallow relative to the wavelength of the ocean wave. Secondary microseisms, on the other hand, are generated when two opposing wave trains interact and a non-linear second-order effect produces a pressure fluctuation which is depth independent. The conditions necessary to produce secondary microseisms are derived in Longuet-Higgins (1950): two travelling waves with the same wave period interacting at an angle of 180 degrees. Equivalent surface pressure density (p2l) is modelled using the numerical ocean wave model Wavewatch III, and this term is considered as the microseism source term. This work presents an investigation of the theoretical second-order pressures generated through the interaction of travelling waves with varying wave amplitude, period and angle of incidence. Predicted seafloor pressures calculated off the southwest coast of Ireland are compared with terrestrially recorded microseism records, measured seafloor pressures and oceanographic parameters. The work presented in this study suggests that a broad set of sea states can generate second-order seafloor pressures that are consistent with seafloor pressure measurements. Local seismic arrays throughout Ireland allow us to investigate the temporal covariance of these seafloor pressures with microseism source locations.

  1. Time reversibility and nonequilibrium thermodynamics of second-order stochastic processes.

    PubMed

    Ge, Hao

    2014-02-01

    Nonequilibrium thermodynamics of a general second-order stochastic system is investigated. We prove that at steady state, under inversion of velocities, the condition of time reversibility over the phase space is equivalent to the antisymmetry of spatial flux and the symmetry of velocity flux. Then we show that the condition of time reversibility alone cannot always guarantee the Maxwell-Boltzmann distribution. Comparing the two conditions together, we find that the frictional force naturally emerges as the unique odd term of the total force at thermodynamic equilibrium, and is followed by the Einstein relation. The two conditions respectively correspond to two previously reported different entropy production rates. In the case where the external force is only position dependent, the two entropy production rates become one. We prove that such an entropy production rate can be decomposed into two non-negative terms, expressed respectively by the conditional mean and variance of the thermodynamic force associated with the irreversible velocity flux at any given spatial coordinate. In the small inertia limit, the former term becomes the entropy production rate of the corresponding overdamped dynamics, while the anomalous entropy production rate originates from the latter term. Furthermore, regarding the connection between the first law and second law, we find that in the steady state of such a limit, the anomalous entropy production rate is also the leading order of the Boltzmann-factor weighted difference between the spatial heat dissipation densities of the underdamped and overdamped dynamics, while their unweighted difference always tends to vanish.

  2. High-rate real-time GPS network at Parkfield: Utility for detecting fault slip and seismic displacements

    USGS Publications Warehouse

    Langbein, J.; Bock, Y.

    2004-01-01

    A network of 13 continuous GPS stations near Parkfield, California has been converted from 30 second to 1 second sampling with positions of the stations estimated in real-time relative to a master station. Most stations are near the trace of the San Andreas fault, which exhibits creep. The noise spectra of the instantaneous 1 Hz positions show flicker noise at high frequencies and change to frequency independence at low frequencies; the change in character occurs between 6 to 8 hours. Our analysis indicates that 1-second sampled GPS can estimate horizontal displacements of order 6 mm at the 99% confidence level from a few seconds to a few hours. High frequency GPS can augment existing measurements in capturing large creep events and postseismic slip that would exceed the range of existing creepmeters, and can detect large seismic displacements. Copyright 2004 by the American Geophysical Union.

  3. Proton exchange membrane fuel cell model for aging predictions: Simulated equivalent active surface area loss and comparisons with durability tests

    NASA Astrophysics Data System (ADS)

    Robin, C.; Gérard, M.; Quinaud, M.; d'Arbigny, J.; Bultel, Y.

    2016-09-01

The prediction of Proton Exchange Membrane Fuel Cell (PEMFC) lifetime is one of the major challenges to optimize both material properties and dynamic control of the fuel cell system. In this study, by a multiscale modeling approach, a mechanistic catalyst dissolution model is coupled to a dynamic PEMFC cell model to predict the performance loss of the PEMFC. Results are compared to two 2000-h experimental aging tests. More precisely, an original approach is introduced to estimate the loss of an equivalent active surface area during an aging test. Indeed, when the computed Electrochemical Catalyst Surface Area profile is fitted to the experimental measurements from Cyclic Voltammetry, the computed performance loss of the PEMFC is underestimated. To be able to predict the performance loss measured by polarization curves during the aging test, an equivalent active surface area is obtained by a model inversion. This methodology successfully recovers the experimental cell voltage decay over time. The model parameters are fitted from the polarization curves so that they include the global degradation. Moreover, the model captures the aging heterogeneities along the surface of the cell observed experimentally. Finally, a second 2000-h durability test in dynamic operating conditions validates the approach.

  4. Spectral combination of spherical gravitational curvature boundary-value problems

    NASA Astrophysics Data System (ADS)

Pitoňák, Martin; Eshagh, Mehdi; Šprlák, Michal; Tenzer, Robert; Novák, Pavel

    2018-04-01

    Four solutions of the spherical gravitational curvature boundary-value problems can be exploited for the determination of the Earth's gravitational potential. In this article we discuss the combination of simulated satellite gravitational curvatures, i.e., components of the third-order gravitational tensor, by merging these solutions using the spectral combination method. For this purpose, integral estimators of biased- and unbiased-types are derived. In numerical studies, we investigate the performance of the developed mathematical models for the gravitational field modelling in the area of Central Europe based on simulated satellite measurements. Firstly, we verify the correctness of the integral estimators for the spectral downward continuation by a closed-loop test. Estimated errors of the combined solution are about eight orders smaller than those from the individual solutions. Secondly, we perform a numerical experiment by considering the Gaussian noise with the standard deviation of 6.5× 10-17 m-1s-2 in the input data at the satellite altitude of 250 km above the mean Earth sphere. This value of standard deviation is equivalent to a signal-to-noise ratio of 10. Superior results with respect to the global geopotential model TIM-r5 are obtained by the spectral downward continuation of the vertical-vertical-vertical component with the standard deviation of 2.104 m2s-2, but the root mean square error is the largest and reaches 9.734 m2s-2. Using the spectral combination of all gravitational curvatures the root mean square error is more than 400 times smaller but the standard deviation reaches 17.234 m2s-2. The combination of more components decreases the root mean square error of the corresponding solutions while the standard deviations of the combined solutions do not improve as compared to the solution from the vertical-vertical-vertical component. 
The presented method represents a weighted mean in the spectral domain that minimizes the root mean square error of the combined solutions and improves the standard deviation of the solution based only on the least accurate components.
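The weighted-mean idea in miniature: combining several noisy estimates of the same spectral coefficient with inverse-variance weights minimizes the variance (and hence RMSE) of the combination. The numbers below are synthetic, not the satellite simulation above.

```python
import numpy as np

rng = np.random.default_rng(3)
truth = 2.5
sigmas = np.array([0.2, 0.5, 1.0])                  # per-solution std devs
obs = truth + sigmas * rng.normal(size=(10000, 3))  # repeated noisy solutions

# inverse-variance weights, normalized to sum to one
w = 1.0 / sigmas ** 2
w /= w.sum()
combined = obs @ w

# empirical RMSE of the combination vs. the best single solution
rmse_comb = np.sqrt(np.mean((combined - truth) ** 2))
rmse_best = np.sqrt(np.mean((obs[:, 0] - truth) ** 2))
print(rmse_comb < rmse_best)   # True: the weighted mean beats any one input
```

In the article the weights are spectral (degree-dependent) rather than scalars, but the minimization principle is the same.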

  5. Levels and congener pattern of polychlorinated biphenyls in the blubber of the Mediterranean bottlenose dolphins Tursiops truncatus.

    PubMed

    Storelli, M M; Marcotrigiano, G O

    2003-01-01

    Isomer specific concentrations of individual polychlorinated biphenyls (PCBs) including toxic non-ortho (IUPAC 77, 126, 169) and mono-ortho (105, 118, 156) coplanar congeners were determined in the blubber of nine bottlenose dolphins (Tursiops truncatus) stranded along the Eastern Italian coast. The total PCB concentrations ranged from 3534 to 24375 ng/g wet wt. The PCB profile was dominated by congeners 138 and 153 collectively accounting for 55% of the total PCB concentrations. Among the most toxic congeners the order of abundance was 126>169>77. The mean total 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) equivalent of six coplanar PCBs in the blubber of bottlenose dolphins was 45596 pg/g. Non-ortho congeners contributed greater to the 2,3,7,8-TCDD toxic equivalents than mono-ortho members. Particularly, PCB 126 was the major contributor to the estimated toxic potency of PCBs in dolphins.
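The toxic-equivalent figure quoted above is, computationally, just a TEF-weighted sum over congeners, TEQ = Σᵢ Cᵢ·TEFᵢ. A sketch with illustrative concentrations (not the study's values) and WHO 2005 TEFs:

```python
# Concentrations are made-up placeholders in pg/g; TEFs are the WHO 2005
# values for the three non-ortho congeners named in the abstract.
concentrations_pg_g = {"PCB77": 120.0, "PCB126": 410.0, "PCB169": 250.0}
tef = {"PCB77": 0.0001, "PCB126": 0.1, "PCB169": 0.03}

teq = sum(concentrations_pg_g[c] * tef[c] for c in concentrations_pg_g)
print(teq)   # pg TEQ/g
```

The dominance of PCB 126 in the study follows directly from its TEF being orders of magnitude larger than those of the other congeners.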

  6. Estimating linear-nonlinear models using Rényi divergences

    PubMed Central

    Kouh, Minjoon; Sharpee, Tatyana O.

    2009-01-01

    This paper compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramér-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data. PMID:19568981

  7. Estimating linear-nonlinear models using Renyi divergences.

    PubMed

    Kouh, Minjoon; Sharpee, Tatyana O

    2009-01-01

    This article compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramer-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data.

  8. Behavioral inference of diving metabolic rate in free-ranging leatherback turtles.

    PubMed

    Bradshaw, Corey J A; McMahon, Clive R; Hays, Graeme C

    2007-01-01

    Good estimates of metabolic rate in free-ranging animals are essential for understanding behavior, distribution, and abundance. For the critically endangered leatherback turtle (Dermochelys coriacea), one of the world's largest reptiles, there has been a long-standing debate over whether this species demonstrates any metabolic endothermy. In short, do leatherbacks have a purely ectothermic reptilian metabolic rate or one that is elevated as a result of regional endothermy? Recent measurements have provided the first estimates of field metabolic rate (FMR) in leatherback turtles using doubly labeled water; however, the technique is prohibitively expensive and logistically difficult and produces estimates that are highly variable across individuals in this species. We therefore examined dive duration and depth data collected for nine free-swimming leatherback turtles over long periods (up to 431 d) to infer aerobic dive limits (ADLs) based on the asymptotic increase in maximum dive duration with depth. From this index of ADL and the known mass-specific oxygen storage capacity (To(2)) of leatherbacks, we inferred diving metabolic rate (DMR) as To2/ADL. We predicted that if leatherbacks conform to the purely ectothermic reptilian model of oxygen consumption, these inferred estimates of DMR should fall between predicted and measured values of reptilian resting and field metabolic rates, as well as being substantially lower than the FMR predicted for an endotherm of equivalent mass. Indeed, our behaviorally derived DMR estimates (mean=0.73+/-0.11 mL O(2) min(-1) kg(-1)) were 3.00+/-0.54 times the resting metabolic rate measured in unrestrained leatherbacks and 0.50+/-0.08 times the average FMR for a reptile of equivalent mass. These DMRs were also nearly one order of magnitude lower than the FMR predicted for an endotherm of equivalent mass. 
Thus, our findings lend support to the notion that diving leatherback turtles are indeed ectothermic and do not demonstrate elevated metabolic rates that might be expected due to regional endothermy. Their capacity to have a warm body core even in cold water therefore seems to derive from their large size, heat exchangers, thermal inertia, and insulating fat layers and not from an elevated metabolic rate.
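The behavioral inference above reduces to simple arithmetic: DMR = To2/ADL, the mass-specific oxygen store divided by the aerobic dive limit read off the dive-duration asymptote. The inputs below are illustrative round numbers chosen to land near the reported mean of 0.73 mL O2 min(-1) kg(-1); they are not the study's per-animal data.

```python
# Hypothetical inputs for one animal
to2 = 27.0       # mass-specific oxygen store, mL O2 per kg
adl_min = 37.0   # inferred aerobic dive limit, minutes

dmr = to2 / adl_min   # diving metabolic rate, mL O2 min^-1 kg^-1
print(round(dmr, 2))  # -> 0.73
```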

  9. Classes of Split-Plot Response Surface Designs for Equivalent Estimation

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey

    2006-01-01

    When planning an experimental investigation, we are frequently faced with factors that are difficult or time consuming to manipulate, thereby making complete randomization impractical. A split-plot structure differentiates between the experimental units associated with these hard-to-change factors and others that are relatively easy-to-change and provides an efficient strategy that integrates the restrictions imposed by the experimental apparatus. Several industrial and scientific examples are presented to illustrate design considerations encountered in the restricted randomization context. In this paper, we propose classes of split-plot response designs that provide an intuitive and natural extension from the completely randomized context. For these designs, the ordinary least squares estimates of the model are equivalent to the generalized least squares estimates. This property provides best linear unbiased estimators and simplifies model estimation. The design conditions that allow for equivalent estimation are presented enabling design construction strategies to transform completely randomized Box-Behnken, equiradial, and small composite designs into a split-plot structure.
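The equivalence property can be checked numerically: OLS equals GLS whenever the error covariance V maps the column space of X into itself (Zyskind's condition, VX = XF). A covariance of the form V = I + d·XXᵀ, loosely analogous to a whole-plot variance component, satisfies it by construction. The toy X below is not one of the paper's designs.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(12, 3))   # arbitrary full-rank design matrix
y = rng.normal(size=12)

# V X = X (I + 0.7 X^T X), so span(X) is V-invariant
V = np.eye(12) + 0.7 * X @ X.T

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
Vi = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

print(np.allclose(beta_ols, beta_gls))   # True
```

For a design lacking the equivalence property, the GLS estimate would depend on the (usually unknown) variance components, which is exactly what these designs avoid.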

  10. Architectures and economics for pervasive broadband satellite networks

    NASA Technical Reports Server (NTRS)

    Staelin, D. H.; Harvey, R. L.

    1979-01-01

    The size of a satellite network necessary to provide pervasive high-data-rate business communications is estimated, and one possible configuration is described which could interconnect most organizations in the United States. Within an order of magnitude, such a network might reasonably have a capacity equivalent to 10,000 simultaneous 3-Mbps channels, and rely primarily upon a cluster of approximately 3-5 satellites in a single orbital slot. Nominal prices for 3-6 Mbps video conference services might then be approximately $2000 monthly lease charge plus perhaps 70 cents per minute one way.

  11. 37 CFR 256.2 - Royalty fee for compulsory license for secondary transmission by cable systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... gross receipts for the first distant signal equivalent; (3) .668 of 1 per centum of such gross receipts for each of the second, third and fourth distant signal equivalents; and (4) .314 of 1 per centum of such gross receipts for the fifth distant signal equivalent and each additional distant signal...

  12. The Spanish version of the Emotional Labour Scale (ELS): a validation study.

    PubMed

    Picardo, Juan M; López-Fernández, Consuelo; Hervás, María José Abellán

    2013-10-01

To validate the Spanish version of the Emotional Labour Scale (ELS), an instrument widely used to understand how professionals working with people face emotional labour in their daily job. An observational, cross-sectional and multicenter survey was used. Nursing students and their clinical tutors (n=211) completed the self-reported ELS when the clinical practice period was over. First-order and second-order confirmatory factor analyses (CFA) were estimated to test the factor structure of the scale. The results of the CFA confirm a factor structure with six first-order factors (duration, frequency, intensity, variety, surface acting and deep acting) and two larger second-order factors named Demands (duration, frequency, intensity and variety) and Acting (surface acting and deep acting), establishing the validity of the Spanish version of the ELS.

  13. Relaxation approximations to second-order traffic flow models by high-resolution schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.

    2015-03-10

A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed into a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.

  14. Attitude Representations for Kalman Filtering

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Bauer, Frank H. (Technical Monitor)

    2001-01-01

The four-component quaternion has the lowest dimensionality possible for a globally nonsingular attitude representation: it represents the attitude matrix as a homogeneous quadratic function, and its dynamic propagation equation is bilinear in the quaternion and the angular velocity. The quaternion is required to obey a unit norm constraint, though, so Kalman filters often employ a quaternion for the global attitude estimate and a three-component representation for small errors about the estimate. We consider these mixed attitude representations for both a first-order extended Kalman filter and a second-order filter, as well as for quaternion-norm-preserving attitude propagation.

  15. Quantifying the individual-level association between income and mortality risk in the United States using the National Longitudinal Mortality Study.

    PubMed

    Brodish, Paul Henry; Hakes, Jahn K

    2016-12-01

Policy makers would benefit from being able to estimate the likely impact of potential interventions to reverse the effects of rapidly rising income inequality on mortality rates. Using multiple cohorts of the National Longitudinal Mortality Study (NLMS), we estimate the absolute income effect on premature mortality in the United States. A multivariate Poisson regression using the natural logarithm of equivalized household income establishes the magnitude of the absolute income effect on mortality. We calculate mortality rates for each income decile of the study sample and mortality rate ratios relative to the decile containing mean income. We then apply the estimated income effect to two kinds of hypothetical interventions that would redistribute income. The first lifts everyone with an equivalized household income at or below the U.S. poverty line (in 2000$) out of poverty, to the income category just above the poverty line. The second shifts each family's equivalized income by, in turn, 10%, 20%, 30%, or 40% toward the mean household income, equivalent to reducing the Gini coefficient by the same percentage in each scenario. We also assess mortality disparities of the hypothetical interventions using ratios of mortality rates of the ninth and second income deciles, and test sensitivity to the assumption of causality of income on mortality by halving the mortality effect per unit of equivalized household income. The estimated absolute income effect would produce a three to four percent reduction in mortality for a 10% reduction in the Gini coefficient. Larger mortality reductions result from larger reductions in the Gini, but with diminishing returns. Inequalities in estimated mortality rates are reduced by a larger percentage than overall estimated mortality rates under the same hypothetical redistributions.
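
    The second hypothetical intervention can be sketched on synthetic data (illustrative, not NLMS microdata): shifting every equivalized income a fraction s toward the mean leaves the mean unchanged and shrinks all pairwise income differences by (1 - s), so the Gini coefficient falls by exactly that fraction.

```python
import numpy as np

def gini(x):
    # Gini = mean absolute difference / (2 * mean), via the sorted-index formula
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

rng = np.random.default_rng(1)
income = rng.lognormal(mean=10.5, sigma=0.8, size=10_000)   # skewed toy incomes

s = 0.10                                     # shift 10% toward the mean
shifted = (1 - s) * income + s * income.mean()
print(gini(income), gini(shifted))           # second value is 10% smaller
```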

  16. MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method

    USGS Publications Warehouse

    Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.

    2003-01-01

    A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
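
    The FOSM propagation step can be sketched generically (a toy stand-in model, not the MODFLOW 2000 implementation): the output covariance follows from a first-order Taylor series as Cov(h) ≈ J Cov(p) Jᵀ, where J holds the sensitivities of head to each input parameter.

```python
import numpy as np

def model(p):
    # stand-in for a groundwater model: heads as a nonlinear function of
    # two uncertain input parameters (e.g. log-transmissivity, recharge)
    return np.array([np.exp(-p[0]) + p[1], 2 * np.exp(-p[0]) - 0.5 * p[1]])

p0 = np.array([0.3, 1.2])                    # parameter estimates
Cp = np.array([[0.04, 0.01],
               [0.01, 0.09]])                # input variance-covariance matrix

# finite-difference sensitivities (MODFLOW 2000 computes these internally)
eps = 1e-6
J = np.column_stack([(model(p0 + eps * e) - model(p0 - eps * e)) / (2 * eps)
                     for e in np.eye(2)])

Ch = J @ Cp @ J.T                            # FOSM output covariance
head_sd = np.sqrt(np.diag(Ch))               # standard deviation of each head
print(head_sd)
```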

  17. Second-Order Active NLO Chromophores for DNA Based Electro-Optics Materials

    DTIC Science & Technology

    2010-09-21


  18. A unified model for transfer alignment at random misalignment angles based on second-order EKF

    NASA Astrophysics Data System (ADS)

    Cui, Xiao; Mei, Chunbo; Qin, Yongyuan; Yan, Gongmin; Liu, Zhenbo

    2017-04-01

In the transfer alignment process of inertial navigation systems (INSs), the conventional linear error model based on the small misalignment angle assumption cannot be applied to large misalignment situations. Furthermore, the nonlinear model based on the large misalignment angle suffers from redundant computation with nonlinear filters. This paper presents a unified model for transfer alignment suitable for arbitrary misalignment angles. The alignment problem is transformed into an estimation of the relative attitude between the master INS (MINS) and the slave INS (SINS) by decomposing the attitude matrix of the latter. Based on the Rodrigues parameters, a unified alignment model in the inertial frame, with a linear state-space equation and a second-order nonlinear measurement equation, is established without making any assumptions about the misalignment angles. Furthermore, we employ Taylor series expansions of the second-order nonlinear measurement equation to implement the second-order extended Kalman filter (EKF2). Monte Carlo simulations demonstrate that the initial alignment can be fulfilled within 10 s, with higher accuracy and much smaller computational cost compared with the traditional unscented Kalman filter (UKF) at large misalignment angles.
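
    The attitude parameterization behind such a unified model can be sketched (our own illustration, not the paper's code): Rodrigues parameters g = e·tan(θ/2) map to a direction-cosine matrix with no small-angle assumption, which is what lets the same model cover arbitrary misalignment angles.

```python
import numpy as np

def skew(v):
    # cross-product matrix [v x]
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues_to_dcm(g):
    # direction-cosine matrix from the Gibbs/Rodrigues vector g
    g = np.asarray(g, dtype=float)
    gg = g @ g
    return ((1 - gg) * np.eye(3) + 2 * np.outer(g, g) - 2 * skew(g)) / (1 + gg)

theta = 1.2                                  # a large misalignment angle (rad)
g = np.array([0.0, 0.0, np.tan(theta / 2)])  # rotation about the z axis
A = rodrigues_to_dcm(g)
print(np.allclose(A @ A.T, np.eye(3)))       # True: proper orthogonal
```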

  19. Temporal correlations in the Vicsek model with vectorial noise

    NASA Astrophysics Data System (ADS)

    Gulich, Damián; Baglietto, Gabriel; Rozenfeld, Alejandro F.

    2018-07-01

We study the temporal correlations in the evolution of the order parameter ϕ(t) for the Vicsek model with vectorial noise by estimating its Hurst exponent H with detrended fluctuation analysis (DFA). We present results on this parameter as a function of the noise amplitude η introduced in the simulations, and compare them with the well-known order-disorder phase transition over the same noise range. We find that, regardless of the detrending degree, H spikes at the known coexistence noise of the phase transition, and that this is due to nonstationarities introduced by the transit of the system between two well-defined states with lower exponents. We statistically support this claim by successfully synthesizing equivalent cases derived from a transformed fractional Brownian motion (TfBm).
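
    The DFA estimator itself can be sketched compactly (first-order detrending only; the study also varied the detrending degree): the slope of log F(n) versus log n over window sizes n estimates the Hurst exponent H of a series such as ϕ(t). Uncorrelated noise should give H near 0.5.

```python
import numpy as np

def dfa(x, window_sizes):
    # detrended fluctuation analysis with linear (order-1) detrending
    y = np.cumsum(x - np.mean(x))            # the profile
    F = []
    for n in window_sizes:
        m = len(y) // n
        segs = y[:m * n].reshape(m, n)
        t = np.arange(n)
        # least-squares linear detrend of every segment at once
        coef = np.polynomial.polynomial.polyfit(t, segs.T, 1)
        resid = segs - np.polynomial.polynomial.polyval(t, coef)
        F.append(np.sqrt(np.mean(resid ** 2)))
    return np.polyfit(np.log(window_sizes), np.log(F), 1)[0]

rng = np.random.default_rng(2)
white = rng.normal(size=4096)                # uncorrelated series
H = dfa(white, [16, 32, 64, 128, 256])
print(round(H, 2))                           # close to 0.5 for white noise
```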

  20. On the Relation between the Linear Factor Model and the Latent Profile Model

    ERIC Educational Resources Information Center

    Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul

    2011-01-01

    The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…

  1. Toward quantitative estimation of material properties with dynamic mode atomic force microscopy: a comparative study.

    PubMed

    Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti

    2017-08-11

In this article, we explore methods that enable estimation of material properties with dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising a flexure probe interacting with the sample, as an equivalent cantilever system, and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement, but slower than the recursive technique. The parameters of the equivalent system are utilized to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided toward the quantitative estimation of material properties.

  2. Adaptive approach for on-board impedance parameters and voltage estimation of lithium-ion batteries in electric vehicles

    NASA Astrophysics Data System (ADS)

    Farmann, Alexander; Waag, Wladislaw; Sauer, Dirk Uwe

    2015-12-01

Robust algorithms using a reduced-order equivalent circuit model (ECM) for accurate and reliable estimation of battery states in various applications are becoming more popular. In this study, a novel adaptive, self-learning heuristic algorithm for on-board impedance parameters and voltage estimation of lithium-ion batteries (LIBs) in electric vehicles is introduced. The presented approach is verified using LIBs with different chemistries (NMC/C, NMC/LTO, LFP/C) at different aging states. An impedance-based reduced-order ECM incorporating an ohmic resistance and a combination of a constant phase element and a resistance (a so-called ZARC element) is employed. Existing algorithms in vehicles are much more limited in the complexity of the ECMs. The algorithm is validated using seven days of real vehicle data with high temperature variation, including very low temperatures (from -20 °C to +30 °C), at different depths-of-discharge (DoDs). Two possibilities to approximate the ZARC elements with a finite number of RC elements on board are shown, and the results of the voltage estimation are compared. Moreover, the current dependence of the charge-transfer resistance is considered by employing the Butler-Volmer equation. The achieved results indicate that both models yield almost the same grade of accuracy.

  3. Equivalent linearization for fatigue life estimates of a nonlinear structure

    NASA Technical Reports Server (NTRS)

    Miles, R. N.

    1989-01-01

An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other more accurate methods. The excitation of the plate is assumed to be Gaussian white noise, and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization which avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.
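
    Conventional Gaussian equivalent linearization can be sketched for a generic single-mode Duffing restoring force k(x + εx³) (a textbook illustration, not the plate model of the paper): the nonlinearity is replaced by an equivalent stiffness k_eq = k(1 + 3εσ²) under the Gaussian response assumption, and the coupled equations for k_eq and the mean-square response σ² are solved by fixed-point iteration.

```python
import numpy as np

# illustrative parameters: stiffness, damping, nonlinearity, excitation PSD
k, c, eps, S0 = 1.0, 0.05, 0.5, 0.01

k_eq = k
for _ in range(100):
    sigma2 = np.pi * S0 / (c * k_eq)         # mean-square response of the
                                             # equivalent linear oscillator
    k_eq_new = k * (1 + 3 * eps * sigma2)    # Gaussian closure: E[dg/dx]
    if abs(k_eq_new - k_eq) < 1e-12:
        break
    k_eq = k_eq_new

print(k_eq, sigma2)                          # k_eq > k: stiffening response
```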

  4. Design optimization for a wearable, gamma-ray and neutron sensitive, detector array with directionality estimation

    NASA Astrophysics Data System (ADS)

    Ayaz-Maierhafer, Birsen; Britt, Carl G.; August, Andrew J.; Qi, Hairong; Seifert, Carolyn E.; Hayward, Jason P.

    2017-10-01

In this study, we report on a constrained optimization and tradeoff study of a hybrid, wearable detector array having directional sensing based upon gamma-ray occlusion. One resulting design uses CLYC detectors, while the second feasibility design involves the coupling of gamma-ray-sensitive CsI scintillators and a rubber LiCaAlF6 (LiCAF) neutron detector. The detector systems' responses were investigated through simulation as a function of angle in a two-dimensional plane. The expected total counts, peak-to-total ratio, directionality performance, and detection of ⁴⁰K for accurate gain stabilization were considered in the optimization. Source directionality estimation was investigated using Bayesian algorithms. Gamma-ray energies of 122 keV, 662 keV, and 1332 keV were considered. The equivalent neutron capture response compared with ³He was also investigated for both designs.

  5. Turbulent vertical diffusivity in the sub-tropical stratosphere

    NASA Astrophysics Data System (ADS)

    Pisso, I.; Legras, B.

    2008-02-01

Vertical (cross-isentropic) mixing is produced by small-scale turbulent processes which are still poorly understood and parameterized in numerical models. In this work we provide estimates of local equivalent diffusion in the lower stratosphere by comparing balloon-borne high-resolution measurements of chemical tracers with mixing ratios reconstructed from large ensembles of random Lagrangian backward trajectories, using European Centre for Medium-range Weather Forecasts analysed winds and a chemistry-transport model (REPROBUS). We focus on a case study in subtropical latitudes using data from the HIBISCUS campaign. An upper bound on the vertical diffusivity in this case study is found to be of the order of 0.5 m² s⁻¹ in the subtropical region, which is larger than the estimates at higher latitudes. The relation between diffusion and dispersion is studied by estimating Lyapunov exponents and studying their variation according to the presence of active dynamical structures.

  6. Generic Schemes for Single-Molecule Kinetics. 2: Information Content of the Poisson Indicator.

    PubMed

    Avila, Thomas R; Piephoff, D Evan; Cao, Jianshu

    2017-08-24

    Recently, we described a pathway analysis technique (paper 1) for analyzing generic schemes for single-molecule kinetics based upon the first-passage time distribution. Here, we employ this method to derive expressions for the Poisson indicator, a normalized measure of stochastic variation (essentially equivalent to the Fano factor and Mandel's Q parameter), for various renewal (i.e., memoryless) enzymatic reactions. We examine its dependence on substrate concentration, without assuming all steps follow Poissonian kinetics. Based upon fitting to the functional forms of the first two waiting time moments, we show that, to second order, the non-Poissonian kinetics are generally underdetermined but can be specified in certain scenarios. For an enzymatic reaction with an arbitrary intermediate topology, we identify a generic minimum of the Poisson indicator as a function of substrate concentration, which can be used to tune substrate concentration to the stochastic fluctuations and to estimate the largest number of underlying consecutive links in a turnover cycle. We identify a local maximum of the Poisson indicator (with respect to substrate concentration) for a renewal process as a signature of competitive binding, either between a substrate and an inhibitor or between multiple substrates. Our analysis explores the rich connections between Poisson indicator measurements and microscopic kinetic mechanisms.
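
    The Poisson indicator (a Mandel-Q-style normalization) can be checked by simulation for a renewal process built from n identical exponential links (our illustration of the framework, not the paper's derivation). From the first two waiting-time moments, P = (⟨t²⟩ - 2⟨t⟩²)/⟨t⟩²; analytically P = 1/n - 1 for such a chain, so P = 0 recovers Poissonian kinetics at n = 1, and the minimum of P bounds the number of consecutive links.

```python
import numpy as np

def poisson_indicator(samples):
    # normalized stochastic-variation measure from waiting-time moments
    t = np.asarray(samples, dtype=float)
    return (np.mean(t ** 2) - 2 * np.mean(t) ** 2) / np.mean(t) ** 2

rng = np.random.default_rng(3)
n = 4                                        # consecutive links in the cycle
waits = rng.exponential(scale=1.0, size=(200_000, n)).sum(axis=1)

print(poisson_indicator(waits))              # close to 1/4 - 1 = -0.75
```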

  7. A research on snow distribution in mountainous area using airborne laser scanning

    NASA Astrophysics Data System (ADS)

    Nishihara, T.; Tanise, A.

    2015-12-01

In snowy cold regions, the snowmelt water stored in dams in early spring meets the water demand for the summer season. Thus, snowmelt water serves as an important water resource; however, it can also cause snowmelt floods. It is therefore necessary to estimate the snow water equivalent in a dam basin as accurately as possible. For this reason, the dam operation offices in Hokkaido, Japan conduct snow surveys every March to estimate the snow water equivalent in each dam basin. In estimating, we generally apply a relationship between elevation and snow water equivalent. Above the forest line, however, snow surveys are generally conducted along ridges due to the risk of avalanches and other hazards, so the snow water equivalent there is significantly underestimated. In this study, we conducted airborne laser scanning to measure snow depth in the high-elevation area, including above the forest line, twice in the same target area (in 2012 and 2015), and analyzed the relationships between snow depth above the forest line and several terrain indicators. Our target area was the Chubetsu dam basin, a high-elevation mountainous area in central Hokkaido, the cold and snowy northernmost island of Japan. The target range for airborne laser scanning was 10 km², of which about 60% was above the forest line. First, we analyzed the relationship between elevation and snow depth. Below the forest line, snow depth increased linearly with elevation; above the forest line, snow depth varied greatly. Second, we analyzed the relationship between overground-openness and snow depth above the forest line. Overground-openness is an indicator quantifying how far a target point is above or below the surrounding surface. A simple relationship was clarified: snow depth decreased linearly as overground-openness increased. This means that areas with heavy snow cover are distributed in valleys and those with light cover are on ridges. Lastly, we compared the results of 2012 and 2015. The same characteristic of snow depth, mentioned above, was found; however, the regression coefficients of the linear equations differed according to the weather conditions of each year.

  8. Higher-order gravity and the classical equivalence principle

    NASA Astrophysics Data System (ADS)

    Accioly, Antonio; Herdy, Wallace

    2017-11-01

As is well known, the deflection of any particle by a gravitational field within the context of Einstein's general relativity, which is a geometrical theory, is of course nondispersive. Nevertheless, as we shall show in this paper, this result changes totally if the bending is analyzed, at the tree level, in the framework of higher-order gravity. Indeed, to first order, the deflection angle corresponding to the scattering of different quantum particles by the gravitational field mentioned above is not only spin-dependent but also dispersive (energy-dependent). Consequently, it violates the classical equivalence principle (universality of free fall, or equality of inertial and gravitational masses), which is a nonlocal principle. However, contrary to popular belief, it is in agreement with the weak equivalence principle, which is a statement about purely local effects. It is worthy of note that the weak equivalence principle encompasses the classical equivalence principle locally. We also show that the claim that there exists an incompatibility between quantum mechanics and the weak equivalence principle is incorrect.

  9. 40 CFR 403.6 - National pretreatment standards: Categorical standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Control Authority calculating equivalent mass-per-day limitations under paragraph (c)(2) of this section... production shall be estimated using projected production. (4) A Control Authority calculating equivalent... Control Authority convert the limits to equivalent mass limits. The determination to convert concentration...

  10. 40 CFR 403.6 - National pretreatment standards: Categorical standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Control Authority calculating equivalent mass-per-day limitations under paragraph (c)(2) of this section... production shall be estimated using projected production. (4) A Control Authority calculating equivalent... Control Authority convert the limits to equivalent mass limits. The determination to convert concentration...

  11. 40 CFR 403.6 - National pretreatment standards: Categorical standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Control Authority calculating equivalent mass-per-day limitations under paragraph (c)(2) of this section... production shall be estimated using projected production. (4) A Control Authority calculating equivalent... Control Authority convert the limits to equivalent mass limits. The determination to convert concentration...

  12. 40 CFR 403.6 - National pretreatment standards: Categorical standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Control Authority calculating equivalent mass-per-day limitations under paragraph (c)(2) of this section... production shall be estimated using projected production. (4) A Control Authority calculating equivalent... Control Authority convert the limits to equivalent mass limits. The determination to convert concentration...

  13. 40 CFR 403.6 - National pretreatment standards: Categorical standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Control Authority calculating equivalent mass-per-day limitations under paragraph (c)(2) of this section... production shall be estimated using projected production. (4) A Control Authority calculating equivalent... Control Authority convert the limits to equivalent mass limits. The determination to convert concentration...

  14. Method to control residual stress in a film structure and a system thereof

    DOEpatents

    Parthum, Sr., Michael J.

    2008-12-30

    A method for controlling residual stress in a structure in a MEMS device and a structure thereof includes selecting a total thickness and an overall equivalent stress for the structure. A thickness for each of at least one set of alternating first and second layers is determined to control an internal stress with respect to a neutral axis for each of the at least alternating first and second layers and to form the structure based on the selected total thickness and the selected overall equivalent stress. Each of the at least alternating first and second layers is deposited to the determined thickness for each of the at least alternating first and second layers to form the structure.
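
    One simple reading of the thickness selection (hedged: the patent's stress model is more detailed than this) treats the overall equivalent stress of an alternating layer pair as the thickness-weighted average of the layer stresses, so the thickness split that hits a target stress follows from one linear equation.

```python
def layer_thicknesses(total, sigma_target, sigma1, sigma2):
    # solve sigma_target = (sigma1*t1 + sigma2*t2) / (t1 + t2) with t1 + t2 = total
    t1 = total * (sigma_target - sigma2) / (sigma1 - sigma2)
    return t1, total - t1

# illustrative values: compressive first layer (-80 MPa), tensile second
# layer (+40 MPa), 2.0 um total, target -20 MPa overall equivalent stress
t1, t2 = layer_thicknesses(2.0, -20.0, -80.0, 40.0)
print(t1, t2)                                # 1.0 and 1.0 (um)
```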

  15. Estimations of expectedness and potential surprise in possibility theory

    NASA Technical Reports Server (NTRS)

    Prade, Henri; Yager, Ronald R.

    1992-01-01

    This note investigates how various ideas of 'expectedness' can be captured in the framework of possibility theory. Particularly, we are interested in trying to introduce estimates of the kind of lack of surprise expressed by people when saying 'I would not be surprised that...' before an event takes place, or by saying 'I knew it' after its realization. In possibility theory, a possibility distribution is supposed to model the relative levels of mutually exclusive alternatives in a set, or equivalently, the alternatives are assumed to be rank-ordered according to their level of possibility to take place. Four basic set-functions associated with a possibility distribution, including standard possibility and necessity measures, are discussed from the point of view of what they estimate when applied to potential events. Extensions of these estimates based on the notions of Q-projection or OWA operators are proposed when only significant parts of the possibility distribution are retained in the evaluation. The case of partially-known possibility distributions is also considered. Some potential applications are outlined.
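
    The core pair among these set-functions can be sketched with the standard definitions (not the paper's Q-projection or OWA extensions): given a possibility distribution π over mutually exclusive alternatives, Π(A) = max of π over A grades how unsurprising A would be, and N(A) = 1 - Π(complement of A) grades how expected it is.

```python
def possibility(pi, event):
    # Pi(A): possibility of the most plausible alternative in A
    return max(pi[w] for w in event)

def necessity(pi, event):
    # N(A) = 1 - Pi(not A): certainty that A occurs
    complement = set(pi) - set(event)
    return 1 - possibility(pi, complement) if complement else 1.0

# illustrative rank-ordered alternatives (values chosen for exact arithmetic)
pi = {"sun": 1.0, "cloud": 0.75, "rain": 0.25}
event = {"sun", "cloud"}
print(possibility(pi, event), necessity(pi, event))   # 1.0 and 0.75
```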

  16. Convex Banding of the Covariance Matrix

    PubMed Central

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189

  17. Convex Banding of the Covariance Matrix.

    PubMed

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
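
    The tapered-banding idea can be sketched with a fixed bandwidth (illustrative only; the paper's estimator chooses the taper by solving a convex optimization problem rather than fixing it): multiply the sample covariance elementwise by a banded Toeplitz weight matrix that decays with distance from the diagonal.

```python
import numpy as np

def banded_taper(S, bandwidth):
    # elementwise taper of S by a Toeplitz, sparsely-banded weight matrix
    p = S.shape[0]
    dist = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    weights = np.clip(1 - dist / (bandwidth + 1), 0, None)   # linear decay to 0
    return S * weights

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 6))                # 200 samples of 6 ordered variables
S = np.cov(X, rowvar=False)
Sig = banded_taper(S, bandwidth=2)
print(Sig[0, -1])                            # entries beyond the band are zero
```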

  18. New constraints on the rupture process of the 1999 August 17 Izmit earthquake deduced from estimates of stress glut rate moments

    NASA Astrophysics Data System (ADS)

    Clévédé, E.; Bouin, M.-P.; Bukchin, B.; Mostinskiy, A.; Patau, G.

    2004-12-01

This paper illustrates the use of integral estimates given by the stress glut rate moments of total degree 2 for constraining the rupture scenario of a large earthquake in the particular case of the 1999 Izmit mainshock. We determine the integral estimates of the geometry, source duration and rupture propagation given by the stress glut rate moments of total degree 2 by inverting long-period surface wave (LPSW) amplitude spectra. Kinematic and static models of the Izmit earthquake published in the literature are quite different from one another. In order to extract the characteristic features of this event, we calculate the same integral estimates directly from those models and compare them with those deduced from our inversion. While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strong unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. With the aim of understanding this discrepancy, we use simple equivalent kinematic models to reproduce the integral estimates of the considered rupture processes (including ours) by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the joint analysis of the LPSW solution and source tomographies allows us to elucidate the scatter among the source processes published for this earthquake and to discriminate between the models. Our results strongly suggest that (1) there was significant moment release on the eastern segment of the activated fault system during the Izmit earthquake; and (2) the apparent rupture velocity decreases on this segment.

  19. Quantifying soil carbon loss and uncertainty from a peatland wildfire using multi-temporal LiDAR

    USGS Publications Warehouse

    Reddy, Ashwan D.; Hawbaker, Todd J.; Wurster, F.; Zhu, Zhiliang; Ward, S.; Newcomb, Doug; Murray, R.

    2015-01-01

    Peatlands are a major reservoir of global soil carbon, yet account for just 3% of global land cover. Human impacts like draining can hinder the ability of peatlands to sequester carbon and expose their soils to fire under dry conditions. Estimating soil carbon loss from peat fires can be challenging due to uncertainty about pre-fire surface elevations. This study uses multi-temporal LiDAR to obtain pre- and post-fire elevations and estimate soil carbon loss caused by the 2011 Lateral West fire in the Great Dismal Swamp National Wildlife Refuge, VA, USA. We also determine how LiDAR elevation error affects uncertainty in our carbon loss estimate by randomly perturbing the LiDAR point elevations and recalculating elevation change and carbon loss, iterating this process 1000 times. We calculated a total loss using LiDAR of 1.10 Tg C across the 25 km2 burned area. The fire burned an average of 47 cm deep, equivalent to 44 kg C/m2, a value larger than the 1997 Indonesian peat fires (29 kg C/m2). Carbon loss via the First-Order Fire Effects Model (FOFEM) was estimated to be 0.06 Tg C. Propagating the LiDAR elevation error to the carbon loss estimates, we calculated a standard deviation of 0.00009 Tg C, equivalent to 0.008% of total carbon loss. We conclude that LiDAR elevation error is not a significant contributor to uncertainty in soil carbon loss under severe fire conditions with substantial peat consumption. However, uncertainties may be more substantial when soil elevation loss is of a similar or smaller magnitude than the reported LiDAR error.
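The uncertainty-propagation step described above (perturb the LiDAR elevations, recompute elevation change and carbon loss, repeat many times) can be sketched as follows. The grid size, per-point error, and peat carbon conversion are illustrative assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre- and post-fire elevation grids (m); values are illustrative.
pre = rng.uniform(2.0, 3.0, size=(100, 100))
post = pre - 0.47               # assume the ~47 cm mean burn depth reported above

CELL_AREA = 1.0                 # m^2 per grid cell (assumed)
C_PER_M3 = 93.6                 # kg C per m^3 of consumed peat (44 kg C/m^2 over 0.47 m)
LIDAR_SIGMA = 0.10              # assumed 1-sigma per-point elevation error (m)

def carbon_loss(pre_z, post_z):
    """Total soil carbon loss (kg C) implied by the elevation change."""
    return float(np.sum((pre_z - post_z) * CELL_AREA * C_PER_M3))

base = carbon_loss(pre, post)

# Monte Carlo: perturb both surfaces with Gaussian elevation error and
# recompute the loss; the spread of the results is the error-driven uncertainty.
losses = np.array([
    carbon_loss(pre + rng.normal(0.0, LIDAR_SIGMA, pre.shape),
                post + rng.normal(0.0, LIDAR_SIGMA, post.shape))
    for _ in range(200)
])
uncertainty = losses.std()
```

As in the record's conclusion, the relative uncertainty comes out tiny because independent per-point errors average out over a large burned area.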

  20. Time-varying higher order spectra

    NASA Astrophysics Data System (ADS)

    Boashash, Boualem; O'Shea, Peter

    1991-12-01

    A general solution for the problem of time-frequency signal representation of nonlinear FM signals is provided, based on a generalization of the Wigner-Ville distribution. The Wigner-Ville distribution (WVD) is a second order time-frequency representation. That is, it is able to give ideal energy concentration for quadratic phase signals and its ensemble average is a second order time-varying spectrum. The same holds for Cohen's class of time-frequency distributions, which are smoothed versions of the WVD. The WVD may be extended so as to achieve ideal energy concentration for higher order phase laws, and such that the expectation is a time-varying higher order spectrum. The usefulness of these generalized Wigner-Ville distributions (GWVD) is twofold. First, because they achieve ideal energy concentration for polynomial phase signals, they may be used for optimal instantaneous frequency estimation. Second, they are useful for discriminating between nonstationary processes of differing higher order moments. In the same way that the WVD is generalized, we generalize Cohen's class of TFDs by defining a class of generalized time-frequency distributions (GTFDs) obtained by a two-dimensional smoothing of the GWVD. Another result derived from this approach is a method based on higher order spectra which allows the separation of cross-terms and auto-terms in the WVD.

  1. Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario

    NASA Astrophysics Data System (ADS)

    Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.

    1997-06-01

    In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets. The signals are contaminated by spatially correlated noise. The differential MUSIC method has been proposed to estimate the DOAs in such a scenario. This method estimates the 'noise subspace' in order to estimate the DOAs. However, the 'noise subspace' estimate has to be updated as and when new data become available. In order to save computational costs, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) Noise subspace estimation is done by QR decomposition of the difference matrix which is formed from the data covariance matrix. Thus, as compared to standard eigendecomposition-based methods which require O(N³) computations, the proposed method requires only O(N²) computations. (2) The noise subspace is updated by updating the QR decomposition. (3) The proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. To achieve this, the proposed extended Kalman filter, with a nonlinear system model and linear measurements, is applied. Computer simulation results are also presented to support the theory.
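The noise-subspace idea the record builds on can be illustrated with a plain eigendecomposition-based MUSIC sketch; the paper's actual contribution, an O(N²) QR update of a difference matrix under correlated noise, is not reproduced here. Array geometry, source angles, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                                   # sensors, uniform linear array
d_sources = 2                           # number of sources
true_doas = np.deg2rad([-20.0, 25.0])
snapshots = 500

def steering(theta):
    # half-wavelength element spacing assumed
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

A = np.column_stack([steering(t) for t in true_doas])
S = rng.normal(size=(d_sources, snapshots)) + 1j * rng.normal(size=(d_sources, snapshots))
noise = 0.1 * (rng.normal(size=(N, snapshots)) + 1j * rng.normal(size=(N, snapshots)))
X = A @ S + noise

R = X @ X.conj().T / snapshots          # sample covariance
w, V = np.linalg.eigh(R)                # eigenvalues in ascending order
En = V[:, : N - d_sources]              # noise subspace spans the smallest ones

grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                     for t in grid])

# pick the two largest local maxima of the pseudo-spectrum
loc = np.where((spectrum[1:-1] > spectrum[:-2]) &
               (spectrum[1:-1] > spectrum[2:]))[0] + 1
top = loc[np.argsort(spectrum[loc])[-d_sources:]]
est = np.sort(np.rad2deg(grid[top]))
```

Steering vectors orthogonal to the noise subspace produce sharp peaks in the pseudo-spectrum at the true bearings.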

  2. Mode-based equivalent multi-degree-of-freedom system for one-dimensional viscoelastic response analysis of layered soil deposit

    NASA Astrophysics Data System (ADS)

    Li, Chong; Yuan, Juyun; Yu, Haitao; Yuan, Yong

    2018-01-01

    Discrete models such as the lumped parameter model and the finite element model are widely used in the solution of soil amplification of earthquakes. However, neither model accurately estimates the natural frequencies of a soil deposit or simulates frequency-independent damping. This research develops a new discrete model for one-dimensional viscoelastic response analysis of layered soil deposit based on the mode equivalence method. The new discrete model is a one-dimensional equivalent multi-degree-of-freedom (MDOF) system characterized by a series of concentrated masses, springs and dashpots with a special configuration. The dynamic response of the equivalent MDOF system is analytically derived and the physical parameters are formulated in terms of modal properties. The equivalent MDOF system is verified through a comparison of amplification functions with the available theoretical solutions. The appropriate number of degrees of freedom (DOFs) in the equivalent MDOF system is estimated. A comparative study of the equivalent MDOF system with the existing discrete models is performed. It is shown that the proposed equivalent MDOF system can exactly reproduce the natural frequencies and the hysteretic damping of soil deposits and provide more accurate results with fewer DOFs.
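The behavior the authors criticize can be checked directly: a plain lumped-mass discretization of a uniform column only approximates the closed-form natural frequencies f_n = (2n-1)·V_s/(4H). The sketch below, with illustrative soil parameters (not from the paper), computes the fundamental frequency of such a chain and compares it against the continuum value.

```python
import numpy as np

# Uniform soil column, fixed base and free surface (illustrative parameters).
H, Vs, rho = 30.0, 200.0, 1800.0    # depth (m), shear-wave velocity (m/s), density (kg/m^3)
G = rho * Vs**2                     # shear modulus (Pa)
N = 50                              # number of lumped masses
h = H / N
m = rho * h                         # mass per unit plan area
k = G / h                           # interlayer shear stiffness per unit area

# Tridiagonal stiffness matrix: the surface mass sees one spring, the rest two.
K = 2 * k * np.eye(N) - k * (np.eye(N, k=1) + np.eye(N, k=-1))
K[0, 0] = k                         # free-surface node

# M = m * I, so the generalized eigenproblem reduces to eigvalsh(K / m).
omega2 = np.sort(np.linalg.eigvalsh(K / m))
f1 = np.sqrt(omega2[0]) / (2 * np.pi)

f1_theory = Vs / (4 * H)            # continuum fundamental frequency (Hz)
rel_err = abs(f1 - f1_theory) / f1_theory
```

The discretization error shrinks only as the number of DOFs grows, which is the gap the mode-equivalence construction in the record is designed to close.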

  3. Widespread Amazon forest tree mortality from a single cross-basin squall line event

    NASA Astrophysics Data System (ADS)

    Negrón-Juárez, Robinson I.; Chambers, Jeffrey Q.; Guimaraes, Giuliano; Zeng, Hongcheng; Raupp, Carlos F. M.; Marra, Daniel M.; Ribeiro, Gabriel H. P. M.; Saatchi, Sassan S.; Nelson, Bruce W.; Higuchi, Niro

    2010-08-01

    Climate change is expected to increase the intensity of extreme precipitation events in Amazonia that in turn might produce more forest blowdowns associated with convective storms. Yet quantitative tree mortality associated with convective storms has never been reported across Amazonia, representing an important additional source of carbon to the atmosphere. Here we demonstrate that a single squall line (aligned cluster of convective storm cells) propagating across Amazonia in January 2005 caused widespread forest tree mortality and may have contributed to the elevated mortality observed that year. Forest plot data demonstrated that the same year represented the second highest mortality rate over a 15-year annual monitoring interval. Over the Manaus region, disturbed forest patches generated by the squall followed a power-law distribution (scaling exponent α = 1.48) and produced a mortality of 0.3-0.5 million trees, equivalent to 30% of the observed annual deforestation reported in 2005 over the same area. Basin-wide, potential tree mortality from this one event was estimated at 542 ± 121 million trees, equivalent to 23% of the mean annual biomass accumulation estimated for these forests. Our results highlight the vulnerability of Amazon trees to wind-driven mortality associated with convective storms. Storm intensity is expected to increase with a warming climate, which would result in additional tree mortality and carbon release to the atmosphere, with the potential to further warm the climate system.
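A scaling exponent like the α = 1.48 quoted above is typically obtained by maximum likelihood. A minimal sketch on synthetic patch areas (inverse-transform sampling plus the Hill estimator; this is a generic recipe, not the study's fitting procedure):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic patch areas from a continuous power law p(x) ∝ x^(-alpha), x >= xmin,
# drawn by inverse-transform sampling; alpha = 1.48 mimics the exponent reported above.
alpha, xmin, n = 1.48, 1.0, 50_000
u = rng.uniform(size=n)
areas = xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))

# Maximum-likelihood (Hill) estimator of the scaling exponent.
alpha_hat = 1.0 + n / np.sum(np.log(areas / xmin))
```

The estimator's standard error is (α − 1)/√n, so with tens of thousands of patches the exponent is pinned down to a few thousandths.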

  4. Review of Recent Development of Dynamic Wind Farm Equivalent Models Based on Big Data Mining

    NASA Astrophysics Data System (ADS)

    Wang, Chenggen; Zhou, Qian; Han, Mingzhe; Lv, Zhan’ao; Hou, Xiao; Zhao, Haoran; Bu, Jing

    2018-04-01

    Recently, the big data mining method has been applied to dynamic wind farm equivalent modeling. In this paper, its recent development in both domestic and overseas research is reviewed. Firstly, studies of wind speed prediction, equivalence, and distribution within the wind farm are summarized. Secondly, two typical approaches used in the big data mining method are introduced. For single wind turbine equivalent modeling, the focus is on how to choose and identify equivalent parameters. For multiple wind turbine equivalent modeling, three aspects are concentrated on: aggregation of different wind turbine clusters, identification of parameters within the same cluster, and equivalence of the collector system. Thirdly, an outlook on the future development of dynamic wind farm equivalent models is discussed.

  5. Scanning Tunneling Microscopic Characterization of an Engineered Organic Molecule

    DTIC Science & Technology

    2011-08-01

    attachment and wide-band MCT detector , was used. Figure 3 shows the spectra obtained for SAM of PMNBT (top), which was compared to raw crystal PMNBT...averaged in order to reduce random noise , especially in the high bias region. Figure 4d shows the average second-order STM I-V curves of each molecule...done to avoid the low signal-to- noise ratio regime of the STM (18). Our estimated value of go for dDT is about two orders of magnitude smaller than

  6. Delay-and-sum beamforming for direction of arrival estimation applied to gunshot acoustics

    NASA Astrophysics Data System (ADS)

    Ramos, António L. L.; Holm, Sverre; Gudvangen, Sigmund; Otterlei, Ragnvald

    2011-06-01

    Sniper positioning systems described in the literature use a two-step algorithm to estimate the sniper's location. First, the shockwave and the muzzle blast acoustic signatures must be detected and recognized, followed by an estimation of their respective direction-of-arrival (DOA). Second, the actual sniper's position is calculated based on the estimated DOA via an iterative algorithm that varies from system to system. The overall performance of such a system, however, is highly compromised when the first step is not carried out successfully. Currently available systems rely on a simple calculation of differences of time-of-arrival to estimate angles-of-arrival. This approach, however, lacks robustness by not taking full advantage of the array of sensors. This paper shows how the delay-and-sum beamforming technique can be applied to estimate the DOA for both the shockwave and the muzzle blast. The method has the twofold advantage of 1) adding an array gain of 10 log M, i.e., an increased SNR of 6 dB for a 4-microphone array, which is equivalent to doubling the detection range assuming free-field propagation; and 2) offering improved robustness in handling single- and multi-shot events as well as reflections by taking advantage of the spatial filtering capability.
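A minimal time-domain delay-and-sum DOA sketch for a broadband pulse, under an assumed geometry (4-microphone linear array, 0.1 m spacing) and a hypothetical far-field plane wave; the record's shockwave/muzzle-blast processing chain is not reproduced. The 10 log M array gain quoted above is also computed.

```python
import numpy as np

rng = np.random.default_rng(3)
c = 343.0                      # speed of sound (m/s)
fs = 96_000                    # sample rate (Hz)
M, d = 4, 0.1                  # 4 microphones, 0.1 m spacing (assumed geometry)
theta_true = np.deg2rad(35.0)  # hypothetical arrival angle

t = np.arange(2048) / fs
pulse = np.exp(-0.5 * ((t - 0.005) / 1e-4) ** 2)   # short broadband pulse

def delayed(signal, tau):
    # fractional delay by linear interpolation
    return np.interp(t - tau, t, signal, left=0.0, right=0.0)

# simulate a far-field plane wave hitting the array, plus sensor noise
x = np.stack([delayed(pulse, m_i * d * np.sin(theta_true) / c)
              + 0.01 * rng.normal(size=t.size) for m_i in range(M)])

def das_power(theta):
    # advance each channel by its hypothesized delay, then sum coherently
    y = sum(delayed(x[m_i], -m_i * d * np.sin(theta) / c) for m_i in range(M))
    return float(np.sum(y ** 2))

grid = np.deg2rad(np.linspace(-90.0, 90.0, 361))
doa_deg = np.rad2deg(grid[np.argmax([das_power(g) for g in grid])])
gain_db = 10.0 * np.log10(M)   # array gain: ~6 dB for M = 4
```

Steering to the true angle aligns the pulses so they add coherently while the noise adds incoherently, which is exactly where the 10 log M gain comes from.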

  7. 7 CFR 1001.54 - Equivalent price.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Equivalent price. 1001.54 Section 1001.54 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Milk), DEPARTMENT OF AGRICULTURE MILK IN THE NORTHEAST MARKETING AREA Order Regulating...

  8. Equivalence of internal and external mixture schemes of single scattering properties in vector radiative transfer

    PubMed Central

    Mukherjee, Lipi; Zhai, Peng-Wang; Hu, Yongxiang; Winker, David M.

    2018-01-01

    Polarized radiation fields in a turbid medium are influenced by single-scattering properties of scatterers. It is common that media contain two or more types of scatterers, which makes it essential to properly mix single-scattering properties of different types of scatterers in the vector radiative transfer theory. The vector radiative transfer solvers can be divided into two basic categories: the stochastic and deterministic methods. The stochastic method is basically the Monte Carlo method, which can handle scatterers with different scattering properties explicitly. This mixture scheme is called the external mixture scheme in this paper. The deterministic methods, however, can only deal with a single set of scattering properties in the smallest discretized spatial volume. The single-scattering properties of different types of scatterers have to be averaged before they are input to deterministic solvers. This second scheme is called the internal mixture scheme. The equivalence of these two different mixture schemes of scattering properties has not been demonstrated so far. In this paper, polarized radiation fields for several scattering media are solved using the Monte Carlo and successive order of scattering (SOS) methods and scattering media contain two types of scatterers: Rayleigh scatterers (molecules) and Mie scatterers (aerosols). The Monte Carlo and SOS methods employ external and internal mixture schemes of scatterers, respectively. It is found that the percentage differences between radiances solved by these two methods with different mixture schemes are of the order of 0.1%. The differences of Q/I, U/I, and V/I are of the order of 10−5 ~ 10−4, where I, Q, U, and V are the Stokes parameters. Therefore, the equivalence between these two mixture schemes is confirmed to the accuracy level of the radiative transfer numerical benchmarks. 
This result provides important guidelines for many radiative transfer applications that involve the mixture of different scattering and absorptive particles. PMID:29047543
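The internal mixture scheme described above amounts to averaging single-scattering properties, weighted by each species' scattering coefficient, before they reach the deterministic solver. A sketch with a Rayleigh phase function and a Henyey-Greenstein stand-in for the aerosol (the coefficients and asymmetry parameter are illustrative assumptions):

```python
import numpy as np

# Scattering angles and the corresponding cosine grid.
theta = np.linspace(0.0, np.pi, 181)
mu = np.cos(theta)

p_ray = 0.75 * (1.0 + mu**2)                               # Rayleigh phase function
g = 0.7                                                    # assumed aerosol asymmetry parameter
p_hg = (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5   # Henyey-Greenstein stand-in

# Internal mixture: average weighted by each species' scattering coefficient.
beta_ray, beta_aer = 0.02, 0.08                            # illustrative coefficients (1/km)
p_mix = (beta_ray * p_ray + beta_aer * p_hg) / (beta_ray + beta_aer)

# Normalization check: integral of p over the sphere's polar angle should stay 2.
integrand = p_mix * np.sin(theta)
norm = float(np.sum((integrand[1:] + integrand[:-1]) / 2.0 * np.diff(theta)))
```

Because the mixture is a convex combination of normalized phase functions, the averaged phase function stays normalized, which is what makes the internal scheme a legitimate input to a deterministic solver.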

  9. Ultra-wideband microwave photonic filter with a high Q-factor using a semiconductor optical amplifier.

    PubMed

    Chen, Han

    2017-04-01

    An ultra-wideband microwave photonic filter (MPF) with a high quality (Q)-factor based on the birefringence effects in a semiconductor optical amplifier (SOA) is presented, and the theoretical fundamentals of the design are explained. The proposed MPF along orthogonal polarization in an active loop operates at up to a Ku-band and provides a tunable free spectral range from 15.44 to 19.44 GHz by controlling the SOA injection current. A prototype of the equivalent second-order infinite impulse response filter with a Q-factor over 6300 and a rejection ratio exceeding 41 dB is experimentally demonstrated.

  10. Visual enhancements in pick-and-place tasks: Human operators controlling a simulated cylindrical manipulator

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Tendick, Frank; Stark, Lawrence

    1989-01-01

    A teleoperation simulator was constructed with a vector display system, joysticks, and a simulated cylindrical manipulator, in order to quantitatively evaluate various display conditions. The first of two experiments investigated the effects of perspective parameter variations on human operators' pick-and-place performance, using a monoscopic perspective display. The second experiment compared visual enhancements of the monoscopic perspective display, made by adding a grid and reference lines, with visual enhancements of a stereoscopic display; results indicate that stereoscopy generally permits superior pick-and-place performance, but that monoscopy nevertheless allows equivalent performance when configured with appropriate perspective parameter values and adequate visual enhancements.

  11. Comparison between two methodologies for urban drainage decision aid.

    PubMed

    Moura, P M; Baptista, M B; Barraud, S

    2006-01-01

    The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis, the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results prove that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.

  12. Experimental verification of low sonic boom configuration

    NASA Technical Reports Server (NTRS)

    Ferri, A.; Wang, H. H.; Sorensen, H.

    1972-01-01

    A configuration designed to produce a near-field signature has been tested at M = 2.71 and the results are analyzed, taking into account three-dimensional and second order effects. The configuration has an equivalent total area distribution that corresponds to an airplane flying at 60,000 ft, weighing 460,000 lb, and 300 ft in length. A maximum overpressure of 0.95 lb/square foot has been obtained experimentally. The experimental results agree well with the analysis. The investigation indicates that the three-dimensional effects are very important when the measurements in wind tunnels are taken at small distances from the airplane.

  13. MICROBIAL TRANSFORMATION RATE CONSTANTS OF STRUCTURALLY DIVERSE MAN-MADE CHEMICALS

    EPA Science Inventory

    To assist in estimating microbially mediated transformation rates of man-made chemicals from their chemical structures, all second order rate constants that have been measured under conditions that make the values comparable have been extracted from the literature and combined wi...

  14. ENGINEERING ECONOMIC ANALYSIS OF A PROGRAM FOR ARTIFICIAL GROUNDWATER RECHARGE.

    USGS Publications Warehouse

    Reichard, Eric G.; Bredehoeft, John D.

    1984-01-01

    This study describes and demonstrates two alternate methods for evaluating the relative costs and benefits of artificial groundwater recharge using percolation ponds. The first analysis considers the benefits to be the reduction of pumping lifts and land subsidence; the second considers benefits as the alternative costs of a comparable surface delivery system. Example computations are carried out for an existing artificial recharge program in Santa Clara Valley in California. A computer groundwater model is used to estimate both the average long term and the drought period effects of artificial recharge in the study area. Results indicate that the costs of artificial recharge are considerably smaller than the alternative costs of an equivalent surface system. Refs.

  15. Regional Recovery of the Disturbing Gravitational Potential from Satellite Observations of First-, Second- and Third-order Radial Derivatives of the Disturbing Gravitational Potential

    NASA Astrophysics Data System (ADS)

    Novak, P.; Pitonak, M.; Sprlak, M.

    2015-12-01

    Recently realized gravity-dedicated satellite missions allow for measuring values of scalar, vectorial (Gravity Recovery And Climate Experiment - GRACE) and second-order tensorial (Gravity field and steady-state Ocean Circulation Explorer - GOCE) parameters of the Earth's gravitational potential. Theoretical aspects related to using moving sensors for measuring elements of a third-order gravitational tensor are currently under investigation, e.g. the gravity-dedicated satellite mission OPTIMA (OPTical Interferometry for global Mass change detection from space) should measure third-order derivatives of the Earth's gravitational potential. This contribution investigates regional recovery of the disturbing gravitational potential on the Earth's surface from satellite observations of first-, second- and third-order radial derivatives of the disturbing gravitational potential. Synthetic measurements along a satellite orbit at the altitude of 250 km are synthesized from the global gravitational model EGM2008 and polluted by Gaussian noise. The process of downward continuation is stabilized by the Tikhonov regularization. Estimated values of the disturbing gravitational potential are compared with the same quantity synthesized directly from EGM2008. Finally, this contribution also discusses merging a regional solution into a global field as a patchwork.

  16. Possibilities of the regional gravity field recovery from first-, second- and third-order radial derivatives of the disturbing gravitational potential measured on moving platforms

    NASA Astrophysics Data System (ADS)

    Pitonak, Martin; Sprlak, Michal; Novak, Pavel; Tenzer, Robert

    2016-04-01

    Recently realized gravity-dedicated satellite missions allow for measuring values of scalar, vectorial (Gravity Recovery And Climate Experiment - GRACE) and second-order tensorial (Gravity field and steady-state Ocean Circulation Explorer - GOCE) parameters of the Earth's gravitational potential. Theoretical aspects related to using moving sensors for measuring elements of the third-order gravitational tensor are currently under investigation, e.g., the gravity field-dedicated satellite mission OPTIMA (OPTical Interferometry for global Mass change detection from space) should measure third-order derivatives of the Earth's gravitational potential. This contribution investigates regional recovery of the disturbing gravitational potential on the Earth's surface from satellite and aerial observations of the first-, second- and third-order radial derivatives of the disturbing gravitational potential. Synthetic measurements along a satellite orbit at the altitude of 250 km and along an aircraft track at the altitude of 10 km are synthesized from the global gravitational model EGM2008 and polluted by Gaussian noise. The process of downward continuation is stabilized by the Tikhonov regularization. Estimated values of the disturbing gravitational potential are compared with the same quantity synthesized directly from EGM2008.
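The Tikhonov-stabilized downward continuation described above is, at heart, a regularized linear inverse problem. The sketch below uses a generic smoothing kernel as a stand-in for the upward-continuation operator (the real geodetic operator and the EGM2008 synthesis are not reproduced) and compares the regularized solution against a naive inversion of the noisy data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200

# Stand-in for the upward-continuation operator: a row-normalized Gaussian
# smoothing kernel, severely ill-conditioned like the true downward continuation.
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 5.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

x_true = np.sin(2 * np.pi * idx / 50)          # "surface" signal to recover
y = A @ x_true + 0.01 * rng.normal(size=n)     # noisy "satellite" observations

lam = 1e-3                                     # regularization parameter (hand-tuned)
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
x_naive = np.linalg.solve(A, y)

err_tik = np.linalg.norm(x_tik - x_true) / np.linalg.norm(x_true)
err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
```

The naive inverse amplifies the noise along the operator's tiny singular values and is useless, while the Tikhonov term damps exactly those directions at the cost of a small bias.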

  17. Estimating raw material equivalents on a macro-level: comparison of multi-regional input-output analysis and hybrid LCI-IO.

    PubMed

    Schoer, Karl; Wood, Richard; Arto, Iñaki; Weinzettel, Jan

    2013-12-17

    The mass of material consumed by a population has become a useful proxy for measuring environmental pressure. The "raw material equivalents" (RME) metric of material consumption addresses the issue of including the full supply chain (including imports) when calculating national or product level material impacts. The RME calculation suffers from data availability, however, as quantitative data on production practices along the full supply chain (in different regions) is required. Hence, the RME is currently being estimated by three main approaches: (1) assuming domestic technology in foreign economies, (2) utilizing region-specific life-cycle inventories (in a hybrid framework), and (3) utilizing multi-regional input-output (MRIO) analysis to explicitly cover all regions of the supply chain. While the first approach has been shown to give inaccurate results, this paper focuses on the benefits and costs of the latter two approaches. We analyze results from two key (MRIO and hybrid) projects modeling raw material equivalents, adjusting the models in a stepwise manner in order to quantify the effects of individual conceptual elements. We attempt to isolate the MRIO gap, which denotes the quantitative impact of calculating the RME of imports by an MRIO approach instead of the hybrid model, focusing on the RME of EU external trade imports. While the models give quantitatively similar results, differences become more pronounced when tracking more detailed material flows. We assess the advantages and disadvantages of the two approaches and look forward to ways to further harmonize data and approaches.

  18. Probability techniques for reliability analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Wetherhold, Robert C.; Ucci, Anthony M.

    1994-01-01

    Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
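The contrast between an FPI-style estimate and simple Monte Carlo can be shown on the textbook limit state g = R − S with normal strength and stress, where the first-order, second-moment result happens to be exact. Distribution parameters below are illustrative, not from the report.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)

# Limit state g = R - S: failure when the stress S exceeds the strength R.
muR, sdR = 500.0, 50.0      # strength distribution (illustrative)
muS, sdS = 300.0, 60.0      # stress distribution (illustrative)

# First-order, second-moment reliability index; exact for independent normal R, S.
beta = (muR - muS) / sqrt(sdR**2 + sdS**2)
pf_fosm = 0.5 * (1.0 - erf(beta / sqrt(2.0)))    # Phi(-beta)

# Simple Monte Carlo estimate of the same failure probability.
n = 1_000_000
R = rng.normal(muR, sdR, n)
S = rng.normal(muS, sdS, n)
pf_mc = float(np.mean(R < S))
```

For non-normal variables or nonlinear failure surfaces the two estimates diverge, which is precisely the comparison the report carries out; importance sampling then concentrates the Monte Carlo samples near the failure surface to cut the variance.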

  19. Space Radiation Organ Doses for Astronauts on Past and Future Missions

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.

    2007-01-01

    We review methods and data used for determining astronaut organ dose equivalents on past space missions including Apollo, Skylab, Space Shuttle, NASA-Mir, and International Space Station (ISS). Expectations for future lunar missions are also described. Physical measurements of space radiation include the absorbed dose, dose equivalent, and linear energy transfer (LET) spectra, or a related quantity, the lineal energy (y) spectra, measured by a tissue-equivalent proportional counter (TEPC). These data are used in conjunction with space radiation transport models to project organ specific doses used in cancer and other risk projection models. Biodosimetry data from Mir, STS, and ISS missions provide an alternative estimate of organ dose equivalents based on chromosome aberrations. The physical environments inside spacecraft are currently well understood, with errors in organ dose projections estimated as less than plus or minus 15%; however, understanding the biological risks from space radiation remains a difficult problem because of the many radiation types, including protons, heavy ions, and secondary neutrons, for which there are no human data to estimate risks. The accuracy of projections of organ dose equivalents described here must be supplemented with research on the health risks of space exposure to properly assess crew safety for exploration missions.

  20. Effect of Intensity-Modulated Pelvic Radiotherapy on Second Cancer Risk in the Postoperative Treatment of Endometrial and Cervical Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zwahlen, Daniel R.; Department of Radiation Oncology, University Hospital Zurich, Zurich; Ruben, Jeremy D.

    2009-06-01

    Purpose: To estimate and compare intensity-modulated radiotherapy (IMRT) with three-dimensional conformal radiotherapy (3DCRT) in terms of second cancer risk (SCR) for postoperative treatment of endometrial and cervical cancer. Methods and Materials: To estimate SCR, the organ equivalent dose concept with a linear-exponential, a plateau, and a linear dose-response model was applied to dose distributions, calculated in a planning computed tomography scan of a 68-year-old woman. Three plans were computed: four-field 18-MV 3DCRT and nine-field IMRT with 6- and 18-MV photons. SCR was estimated as a function of target dose (50.4 Gy/28 fractions) in organs of interest according to the International Commission on Radiological Protection. Results: Cumulative SCR relative to 3DCRT was +6% (3% for a plateau model, -4% for a linear model) for 6-MV IMRT and +26% (25%, 4%) for the 18-MV IMRT plan. For an organ within the primary beam, SCR was +12% (0%, -12%) for 6-MV and +5% (-2%, -7%) for 18-MV IMRT. 18-MV IMRT increased SCR 6-7 times for organs away from the primary beam relative to 3DCRT and 6-MV IMRT. Skin SCR increased by 22-37% for 6-MV and 50-69% for 18-MV IMRT inasmuch as a larger volume of skin was exposed. Conclusion: Cancer risk after IMRT for cervical and endometrial cancer is dependent on treatment energy. 6-MV pelvic IMRT represents a safe alternative with respect to SCR relative to 3DCRT, independently of the dose-response model. 18-MV IMRT produces secondary neutrons that modestly increase the SCR.
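The organ equivalent dose (OED) concept referenced above averages a dose-response weighting over the organ's dose distribution; the three response models mentioned (linear, plateau, linear-exponential) differ only in that weighting. A sketch with hypothetical voxel doses and illustrative model parameters:

```python
import numpy as np

# Hypothetical voxel doses (Gy) for one organ, e.g. exported from a DVH.
doses = np.array([0.1, 0.5, 2.0, 10.0, 30.0, 50.4])

def oed_linear(d):
    # linear response: OED reduces to the mean organ dose
    return float(np.mean(d))

def oed_plateau(d, delta=0.33):
    # plateau response; delta (1/Gy) is an illustrative organ parameter
    return float(np.mean((1.0 - np.exp(-delta * d)) / delta))

def oed_linear_exponential(d, alpha=0.08):
    # linear-exponential response; alpha (1/Gy) models cell sterilization at high dose
    return float(np.mean(d * np.exp(-alpha * d)))
```

Both nonlinear models down-weight high-dose voxels, which is why the IMRT-vs-3DCRT ranking reported in the record changes with the chosen dose-response model.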

  1. Estimating the Depth of the Navy Recruiting Market

    DTIC Science & Technology

    2016-09-01

    recommend that NRC make use of the Poisson regression model in order to determine high-yield ZIP codes for market depth. 14. SUBJECT...DEPTH OF THE NAVY RECRUITING MARKET by Emilie M. Monaghan September 2016 Thesis Advisor: Lyn R. Whitaker Second Reader: Jonathan K. Alt

  2. Glycaemic and satiating properties of potato products.

    PubMed

    Leeman, M; Ostman, E; Björck, I

    2008-01-01

    To investigate glycaemic and satiating properties of potato products in healthy subjects using energy-equivalent or carbohydrate-equivalent test meals, respectively. Thirteen healthy subjects volunteered for the first study, and 14 for the second. The tests were performed at Applied Nutrition and Food Chemistry, Lund University, Sweden. EXPERIMENTAL DESIGN AND TEST MEALS: All meals were served as breakfast in random order after an overnight fast. Study 1 included four energy-equivalent (1000 kJ) meals of boiled potatoes, french fries, or mashed potatoes; the latter varying in portion size by use of different amounts of water. The available carbohydrate content varied between 32.5 and 50.3 g/portion. Capillary blood samples were collected during 240 min for analysis of glucose, and satiety was measured with a subjective rating scale. Study 2 included four carbohydrate-equivalent meals (50 g available carbohydrates) of french fries, boiled potatoes served with and without addition of oil, and white wheat bread (reference). The energy content varied between 963 and 1534 kJ/portion. Capillary blood samples were collected during 180 min for analysis of glucose, and satiety was measured using a subjective rating scale. Study 1: boiled potatoes induced higher subjective satiety than french fries when compared on an energy-equivalent basis. The french fries elicited the lowest early glycaemic response and was less satiating in the early postprandial phase (area under the curve (AUC) 0-45 min). No differences were found in glycaemic or satiety response between boiled or mashed potatoes. Study 2: french fries resulted in a significantly lower glycaemic response (glycaemic index (GI)=77) than boiled potatoes either with or without addition of oil (GI=131 and 111, respectively). No differences were found in subjective satiety response between the products served on carbohydrate equivalence. 
Boiled potatoes were more satiating than french fries on an energy-equivalent basis, the effect being most prominent in the early postprandial phase, whereas no difference in satiety could be seen on a carbohydrate-equivalent basis. The lowered GI for french fries, showing a typical prolonged low-GI profile, could not be explained by the fat content per se.
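Glycaemic index values like those quoted above (e.g. GI = 77 for french fries against a reference) come from incremental area-under-the-curve ratios. A simplified sketch with hypothetical glucose profiles (the standard FAO/WHO iAUC handles sub-baseline segments geometrically; here they are simply clipped):

```python
import numpy as np

def iauc(t, glucose, baseline=None):
    """Incremental AUC by the trapezoid rule, ignoring area below baseline
    (a simplification of the standard FAO/WHO procedure)."""
    g = np.asarray(glucose, dtype=float)
    base = g[0] if baseline is None else baseline
    inc = np.clip(g - base, 0.0, None)
    return float(np.sum((inc[1:] + inc[:-1]) / 2.0 * np.diff(t)))

# Hypothetical capillary glucose profiles (mmol/L) over 0-120 min.
t = np.array([0, 15, 30, 45, 60, 90, 120], dtype=float)
ref = np.array([4.8, 6.5, 7.8, 7.2, 6.4, 5.5, 4.9])    # reference food
test = np.array([4.8, 6.0, 7.0, 6.8, 6.2, 5.4, 4.9])   # test food

gi = 100.0 * iauc(t, test) / iauc(t, ref)
```

Because the GI is a ratio of areas for equal carbohydrate loads, it says nothing about portion energy, which is why the record's energy-equivalent and carbohydrate-equivalent comparisons give different satiety rankings.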

  3. The effect of a paraffin screen on the neutron dose at the maze door of a 15 MV linear accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krmar, M.; Kuzmanović, A.; Nikolić, D.

    2013-08-15

    Purpose: The purpose of this study was to explore the effects of a paraffin screen located at various positions in the maze on the neutron dose equivalent at the maze door. Methods: The neutron dose equivalent was measured at the maze door of a room containing a 15 MV linear accelerator for x-ray therapy. Measurements were performed for several positions of the paraffin screen covering only 27.5% of the cross-sectional area of the maze. The neutron dose equivalent was also measured at all screen positions. Two simple models of the neutron source were considered: the first assumed that the source was the cross-sectional area at the inner entrance of the maze, radiating neutrons in an isotropic manner. In the second model, the reduction in the neutron dose equivalent at the maze door due to the paraffin screen was considered to be a function of the mean values of the neutron fluence and energy at the screen. Results: The results of this study indicate that the equivalent dose at the maze door was reduced by a factor of 3 through the use of a paraffin screen placed inside the maze. It was also determined that the contributions to the dose from areas not covered by the paraffin screen, as viewed from the dosimeter, were 2.5 times higher than the contributions from the covered areas. This study also concluded that the contributions of the maze walls, ceiling, and floor to the total neutron dose equivalent were an order of magnitude lower than those from the surface at the far end of the maze. Conclusions: This study demonstrated that a paraffin screen could be used to reduce the neutron dose equivalent at the maze door by a factor of 3. This paper also found that the reduction of the neutron dose equivalent was a linear function of the area covered by the maze screen and that the decrease in the dose at the maze door could be modeled as an exponential function of the product φ·E at the screen.
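The two reported findings, a reduction linear in the covered area and a dose falling off exponentially with φ·E at the screen, can be combined into a toy transmission model. This is a sketch under assumptions: the attenuation coefficient `mu` and the additive split between covered and uncovered contributions are illustrative, not fitted to the paper's measurements.

```python
import math

def dose_at_door(d0: float, covered_fraction: float,
                 mu: float, phi_e: float) -> float:
    """Toy model of the maze-door neutron dose equivalent.
    d0: dose with no screen; covered_fraction: fraction of the maze
    cross-section covered by the paraffin screen; mu: assumed
    empirical attenuation coefficient; phi_e: fluence-energy product
    at the screen. The uncovered part passes unattenuated, the
    covered part is attenuated exponentially in mu*phi_e."""
    uncovered = d0 * (1.0 - covered_fraction)
    covered = d0 * covered_fraction * math.exp(-mu * phi_e)
    return uncovered + covered
```

With zero coverage the model returns the unshielded dose, and with full coverage and strong attenuation the dose approaches zero, bracketing the factor-of-3 reduction reported for partial coverage.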

  4. Characterization of measurements in quantum communication. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chan, V. W. S.

    1975-01-01

    A characterization of quantum measurements by operator-valued measures is presented. The generalized measurements include simultaneous approximate measurement of noncommuting observables. This characterization is suitable for solving problems in quantum communication. Two realizations of such measurements are discussed. The first is by adjoining an apparatus to the system under observation and performing a measurement corresponding to a self-adjoint operator in the tensor-product Hilbert space of the system and apparatus spaces. The second realization is by performing, on the system alone, sequential measurements that correspond to self-adjoint operators, basing the choice of each measurement on the outcomes of previous measurements. Simultaneous generalized measurements are found to be equivalent to a single finer-grain generalized measurement, and hence it is sufficient to consider the set of single measurements. An alternative characterization of generalized measurements is proposed. It is shown to be equivalent to the characterization by operator-valued measures, but it is potentially more suitable for the treatment of estimation problems. Finally, a study of the interaction between the information-carrying system and a measurement apparatus provides clues for the physical realizations of abstractly characterized quantum measurements.
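In finite dimensions, the operator-valued measures described here reduce to POVMs: sets of positive semidefinite operators that sum to the identity. A small numerical check of those two defining properties (the qubit "trine" elements used in the usage example are a standard textbook POVM, not taken from the thesis):

```python
import numpy as np

def is_povm(elements, tol=1e-10):
    """Return True iff the given matrices form a POVM: each element
    is positive semidefinite (up to tol) and they sum to the
    identity on the space they act on."""
    d = elements[0].shape[0]
    if not np.allclose(sum(elements), np.eye(d), atol=tol):
        return False
    # symmetrize before eigvalsh to guard against tiny asymmetries
    return all(np.min(np.linalg.eigvalsh(0.5 * (e + e.conj().T))) > -tol
               for e in elements)
```

The trine POVM, (2/3)|ψk⟩⟨ψk| for three real unit vectors spaced 120° apart, passes the check; a single element 0.5·I does not, since it fails completeness.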

  5. Ignition in an Atomistic Model of Hydrogen Oxidation.

    PubMed

    Alaghemandi, Mohammad; Newcomb, Lucas B; Green, Jason R

    2017-03-02

    Hydrogen is a potential substitute for fossil fuels that would reduce the combustive emission of carbon dioxide. However, the low ignition energy needed to initiate oxidation imposes constraints on the efficiency and safety of hydrogen-based technologies. Microscopic details of the combustion processes, ephemeral transient species, and complex reaction networks are necessary to control and optimize the use of hydrogen as a commercial fuel. Here, we report estimates of the ignition time of hydrogen-oxygen mixtures over a wide range of equivalence ratios from extensive reactive molecular dynamics simulations. These data show that the shortest ignition time corresponds to a fuel-lean mixture with an equivalence ratio of 0.5, where the numbers of hydrogen and oxygen molecules in the initial mixture are identical, in good agreement with a recent chemical kinetic model. We find two signatures in the simulation data that precede ignition at pressures above 200 MPa. First, there is a peak in hydrogen peroxide that signals ignition is imminent in about 100 ps. Second, we find a strong anticorrelation between the ignition time and the rate of energy dissipation, suggesting the role of thermal feedback in stimulating ignition.
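The equivalence ratio quoted above is the fuel-to-oxidizer mole ratio normalized by its stoichiometric value. For 2H2 + O2 → 2H2O the stoichiometric ratio is 2, so equal numbers of H2 and O2 molecules indeed give φ = 0.5. A one-line sketch:

```python
def equivalence_ratio(n_fuel: float, n_ox: float,
                      stoich_fuel_to_ox: float = 2.0) -> float:
    """phi = (fuel/oxidizer) / (fuel/oxidizer)_stoichiometric.
    The default stoichiometric ratio of 2 is for 2 H2 + O2 -> 2 H2O,
    so equal H2 and O2 mole counts give phi = 0.5 (fuel-lean)."""
    return (n_fuel / n_ox) / stoich_fuel_to_ox
```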

  6. Optimal guidance law development for an advanced launch system

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Hodges, Dewey H.

    1990-01-01

    A regular perturbation analysis is presented. Closed-loop simulations were performed with a first order correction including all of the atmospheric terms. In addition, a method was developed for independently checking the accuracy of the analysis and the rather extensive programming required to implement the complete first order correction with all of the aerodynamic effects included. This amounted to developing an equivalent Hamiltonian computed from the first order analysis. A second order correction was also completed for the neglected spherical Earth and back-pressure effects. Finally, an analysis was begun on a method for dealing with control inequality constraints. The results on including higher order corrections do show some improvement for this application; however, it is not known at this stage if significant improvement will result when the aerodynamic forces are included. The weak formulation for solving optimal problems was extended in order to account for state inequality constraints. The formulation was tested on three example problems and numerical results were compared to the exact solutions. Development of a general purpose computational environment for the solution of a large class of optimal control problems is under way. An example, along with the necessary input and the output, is given.

  7. 78 FR 255 - Resumption of the Population Estimates Challenge Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-03

    ... governmental unit. In those instances where a non-functioning county-level government or statistical equivalent...) A non-functioning county or statistical equivalent means a sub- state entity that does not function... represents a non-functioning county or statistical equivalent, the governor will serve as the chief executive...

  8. Estimation of Supersonic Stage Separation Aerodynamics of Winged-Body Launch Vehicles Using Response Surface Methods

    NASA Technical Reports Server (NTRS)

    Erickson, Gary E.

    2010-01-01

    Response surface methodology was used to estimate the longitudinal stage separation aerodynamic characteristics of a generic, bimese, winged multi-stage launch vehicle configuration at supersonic speeds in the NASA LaRC Unitary Plan Wind Tunnel. The Mach 3 staging was dominated by shock wave interactions between the orbiter and booster vehicles throughout the relative spatial locations of interest. The inference space was partitioned into several contiguous regions within which the separation aerodynamics were presumed to be well-behaved and estimable using central composite designs capable of fitting full second-order response functions. The underlying aerodynamic response surfaces of the booster vehicle in belly-to-belly proximity to the orbiter vehicle were estimated using piecewise-continuous lower-order polynomial functions. The quality of fit and prediction capabilities of the empirical models were assessed in detail, and the issue of subspace boundary discontinuities was addressed. Augmenting the central composite designs to full third-order using computer-generated D-optimality criteria was evaluated. The usefulness of central composite designs, the subspace sizing, and the practicality of fitting lower-order response functions over a partitioned inference space dominated by highly nonlinear and possibly discontinuous shock-induced aerodynamics are discussed.
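A central composite design is sized precisely to support a full second-order polynomial in the factors. As a minimal illustration of that model form (synthetic data, not the wind-tunnel measurements), such a surface can be fit by ordinary least squares:

```python
import numpy as np

def fit_second_order(x1, x2, y):
    """Least-squares fit of the full second-order response surface
    y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2,
    the model a central composite design is built to estimate.
    Returns the six coefficients in that order."""
    x = np.column_stack([np.ones_like(x1), x1, x2,
                         x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(x, y, rcond=None)
    return beta
```

On noise-free data generated from a known quadratic over a 3x3 factorial grid, the fit recovers the coefficients exactly.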

  9. Digital scale converter

    DOEpatents

    Upton, Richard G.

    1978-01-01

    A digital scale converter is provided for binary coded decimal (BCD) conversion. The converter may be programmed to convert a BCD value of a first scale to the equivalent value of a second scale according to a known ratio. The value to be converted is loaded into a first BCD counter and counted down to zero while a second BCD counter registers counts from zero or an offset value depending upon the conversion. Programmable rate multipliers are used to generate pulses at selected rates to the counters for the proper conversion ratio. The value present in the second counter at the time the first counter is counted to the zero count is the equivalent value of the second scale. This value may be read out and displayed on a conventional seven-segment digital display.
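The counter scheme described in the patent can be mimicked in software: count the first register down to zero while rate multipliers clock the second register at a programmed ratio. This is an illustrative sketch of the bookkeeping, not the patented BCD circuit itself:

```python
def convert_scale(value: int, num: int, den: int, offset: int = 0) -> int:
    """Software analogue of the counter scheme: the first counter
    counts the input value down to zero while the rate multipliers
    clock the second counter num pulses for every den pulses to the
    first, so the second counter ends at offset + value*num/den
    (in whole counts)."""
    first = value
    second = offset
    pulses = 0
    while first > 0:
        first -= 1        # one count-down pulse to the first counter
        pulses += num     # rate-multiplier pulses accumulate
        while pulses >= den:
            pulses -= den
            second += 1   # second counter registers a count
    return second
```

For example, with a 9/5 ratio and an offset of 32 (Celsius to Fahrenheit), converting 100 yields 212.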

  10. Investigation of PDMS based bi-layer elasticity via interpretation of apparent Young's modulus.

    PubMed

    Sarrazin, Baptiste; Brossard, Rémy; Guenoun, Patrick; Malloggi, Florent

    2016-02-21

    As the need for new methods for the investigation of thin films on various kinds of substrates grows, a novel approach based on AFM nanoindentation is explored. Substrates of polydimethylsiloxane (PDMS) coated by a layer of hard material are probed with an AFM tip in order to obtain the force profile as a function of the indentation. The equivalent elasticity of these composite systems is interpreted using a new numerical approach, the Coated Half-Space Indentation Model of Elastic Response (CHIMER), in order to extract the thickness of the upper layer. Two kinds of coating are investigated. First, chitosan films of known thicknesses between 30 and 200 nm were probed in order to test the model. A second type of sample is produced by oxygen plasma oxidation of the PDMS substrate, which results in the growth of a relatively homogeneous oxide layer. The local nature of this protocol enables measurements at long oxidation times, where the appearance of cracks prevents other kinds of measurements.

  11. Computational Study of Near-limit Propagation of Detonation in Hydrogen-air Mixtures

    NASA Technical Reports Server (NTRS)

    Yungster, S.; Radhakrishnan, K.

    2002-01-01

    A computational investigation of the near-limit propagation of detonation in lean and rich hydrogen-air mixtures is presented. The calculations were carried out over an equivalence ratio range of 0.4 to 5.0, pressures ranging from 0.2 bar to 1.0 bar and ambient initial temperature. The computations involved solution of the one-dimensional Euler equations with detailed finite-rate chemistry. The numerical method is based on a second-order spatially accurate total-variation-diminishing (TVD) scheme, and a point implicit, first-order-accurate, time marching algorithm. The hydrogen-air combustion was modeled with a 9-species, 19-step reaction mechanism. A multi-level, dynamically adaptive grid was utilized in order to resolve the structure of the detonation. The results of the computations indicate that when hydrogen concentrations are reduced below certain levels, the detonation wave switches from a high-frequency, low amplitude oscillation mode to a low frequency mode exhibiting large fluctuations in the detonation wave speed; that is, a 'galloping' propagation mode is established.

  12. Analyzing a stochastic time series obeying a second-order differential equation.

    PubMed

    Lehle, B; Peinke, J

    2015-06-01

    The stochastic properties of a Langevin-type Markov process can be extracted from a given time series by a Markov analysis. Processes that obey a stochastically forced second-order differential equation can also be analyzed this way by employing a particular embedding approach: to obtain a Markovian process in 2N dimensions from a non-Markovian signal in N dimensions, the system is described in a phase space that is extended by the temporal derivative of the signal. For a discrete time series, however, this derivative can only be calculated by a differencing scheme, which introduces an error. If the effects of this error are not accounted for, this leads to systematic errors in the estimation of the drift and diffusion functions of the process. In this paper we analyze these errors and propose an approach that correctly accounts for them. This approach allows an accurate parameter estimation and, additionally, is able to cope with weak measurement noise, which may be superimposed on a given time series.
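The embedding the paper analyzes augments the signal with a differenced estimate of its derivative. A minimal sketch of that naive construction follows; the paper's actual contribution, correcting for the error this differencing introduces, is not reproduced here.

```python
import numpy as np

def embed(x, dt):
    """Embed a scalar series x(t_i) into phase space (x, v) using a
    central-difference estimate of the derivative. The differencing
    itself carries an O(dt^2) error, which is exactly the source of
    the systematic drift/diffusion bias the paper corrects for."""
    v = (x[2:] - x[:-2]) / (2.0 * dt)  # central difference
    return x[1:-1], v                  # aligned with the interior points
```

For a smooth test signal like sin(t) the differenced velocity tracks cos(t) to O(dt^2), but for a stochastic path the same scheme biases the estimated diffusion, which is the paper's point.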

  13. Community pharmacy and mail order cost and utilization for 90-day maintenance medication prescriptions.

    PubMed

    Khandelwal, Nikhil; Duncan, Ian; Rubinstein, Elan; Ahmed, Tamim; Pegus, Cheryl

    2012-04-01

    Pharmacy benefit management (PBM) companies promote mail order programs that typically dispense 90-day quantities of maintenance medications, marketing this feature as a key cost containment strategy to address plan sponsors' rising prescription drug expenditures. In recent years, community pharmacies have introduced 90-day programs that provide similar cost advantages, while allowing these prescriptions to be dispensed at the same pharmacies that patients frequent for 30-day quantities. To compare utilization rates and corresponding costs associated with obtaining 90-day prescriptions at community and mail order pharmacies for payers that offer equivalent benefits in different 90-day dispensing channels. We performed a retrospective, cross-sectional investigation using pharmacy claims and eligibility data from employer group clients of a large PBM between January 2008 and September 2010. We excluded the following client types: government, third-party administrators, schools, hospitals, 340B (federal drug pricing), employers in Puerto Rico, and miscellaneous clients for which the PBM provided billing services (e.g., the pharmacy's loyalty card program members). All employer groups in the sample offered 90-day community pharmacy and mail order dispensing and received benefits management services, such as formulary management and mail order pharmacy, from the PBM. We further limited the sample to employer groups that offered equivalent benefits for community pharmacy and mail order, defined as groups in which the mean and median copayments per claim for community and mail order pharmacy, by tier, differed by no more than 5%. Enrollees in the sample were required to have a minimum of 6 months of eligibility in each calendar year but were not required to have filled a prescription in any year. We evaluated pharmacy costs and utilization for a market basket of 14 frequently dispensed therapeutic classes of maintenance medications. 
The proportional share of claims for each therapeutic class in the mail order channel was used to weight the results for the community pharmacy channel. Using ordinary least squares regression models, we controlled for differences between channel users with respect to the following confounding factors: age, gender, presence or absence of each of the top 11 drug-inferred conditions (e.g., asthma/chronic obstructive pulmonary disease, cardiovascular disease), drug mix, and calendar year. We calculated estimated predicted means holding all covariates at their mean values. For both 90-day dispensing channels, we calculated the number of 90-day claims per member per year (PMPY) and cost per pharmacy claim, with all claim counts adjusted to 30-day equivalents (i.e., number of 90-day claims × 3). Differences were compared using t-tests for statistical significance. Of 355 PBM clients prior to exclusions, 72 unique employers covering 644,071 unique members (range of approximately 100 to more than 100,000 members per employer) were included in the analysis. On an unadjusted basis, community pharmacies represented 80.8% of 90-day market basket claims (in 30-day equivalents: 3.97 claims PMPY vs. 0.95 in mail order) and 77.2% of total allowed charges. After adjustments for therapeutic group mix and patient characteristics, predicted mean pharmacy claim counts PMPY were 4.09 for community pharmacy compared with 0.85 for mail order (P < 0.001). Predicted mean allowed charges per claim for community and mail order pharmacies did not significantly differ ($49.03 vs. $50.04, respectively, P = 0.202). When offered maintenance medications through community and mail order pharmacies on a benefit-equivalent basis, commercially insured employees and their dependents utilized the community pharmacy channel more frequently by a margin of more than 4 to 1 in terms of claims PMPY. Overall allowed charges per claim for community and mail order pharmacy did not significantly differ.
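The utilization metric above is claims per member per year with each 90-day fill counted as three 30-day equivalents. A minimal sketch of that bookkeeping:

```python
def pmpy_30day_equiv(claims_90day: float, members: float,
                     years: float = 1.0) -> float:
    """Claims per member per year in 30-day equivalents, as defined
    in the methods: each 90-day claim counts as three 30-day fills."""
    return claims_90day * 3.0 / (members * years)
```

For example, 100 ninety-day claims over 75 member-years is 4.0 thirty-day-equivalent claims PMPY, the order of magnitude reported for the community pharmacy channel.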

  14. First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity

    NASA Technical Reports Server (NTRS)

    Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H(exp 1) product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity with estimates that are uniform in the Lame constants.

  15. Nonparametric autocovariance estimation from censored time series by Gaussian imputation.

    PubMed

    Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K

    2009-02-01

    One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
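For reference, the plain (uncensored) sample autocovariance that the imputation-based method generalizes can be written directly; the Gaussian imputation step for censored observations is omitted in this sketch:

```python
import numpy as np

def sample_autocov(x, max_lag: int):
    """Biased sample autocovariance at lags 0..max_lag for a
    second-order stationary series: gamma(h) = (1/n) * sum of
    (x_t - mean)(x_{t+h} - mean). The 1/n normalization keeps the
    estimated autocovariance sequence positive semidefinite."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    return np.array([np.dot(xc[:n - h], xc[h:]) / n
                     for h in range(max_lag + 1)])
```

Lag 0 of this estimator is the (biased) sample variance; censoring at a threshold would distort exactly these moments, which motivates the paper's imputation approach.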

  16. TEPC Response Functions

    NASA Technical Reports Server (NTRS)

    Shinn, J. L.; Wilson, J. W.

    2003-01-01

    The tissue-equivalent proportional counter (TEPC) had the purpose of providing the energy absorbed from a radiation field and an estimate of the corresponding linear energy transfer (LET) for evaluation of radiation quality to convert to dose equivalent. It was the recognition of the limitations in estimating LET which led to a new approach to dosimetry, microdosimetry, and the corresponding emphasis on energy deposited in a small tissue volume as the driver of biological response, with the defined quantity of lineal energy. In many circumstances, the averages of the lineal energy and LET are closely related, which has provided a basis for estimating dose equivalent. Still, in many cases the lineal energy is poorly related to LET, bringing into question the usefulness of the TEPC as a general-purpose device. These relationships are examined in this paper.

  17. Initialization of a fractional order identification algorithm applied for Lithium-ion battery modeling in time domain

    NASA Astrophysics Data System (ADS)

    Nasser Eddine, Achraf; Huard, Benoît; Gabano, Jean-Denis; Poinot, Thierry

    2018-06-01

    This paper deals with the initialization of a nonlinear identification algorithm used to accurately estimate the physical parameters of a Lithium-ion battery. A Randles electric equivalent circuit is used to describe the internal impedance of the battery. The diffusion phenomenon related to this modeling is represented using a fractional order method. The battery model is thus reformulated into a transfer function which can be identified through the Levenberg-Marquardt algorithm to ensure the algorithm's convergence to the physical parameters. An initialization method is proposed in this paper by taking into account previously acquired information about the static and dynamic system behavior. The method is validated using a noisy voltage response, while the precision of the final identification results is evaluated using the Monte-Carlo method.

  18. Identification and feedback control in structures with piezoceramic actuators

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.; Wang, Y.

    1992-01-01

    In this lecture we give fundamental well-posedness results for a variational formulation of a class of damped second order partial differential equations with unbounded input or control coefficients. Included as special cases in this class are structures with piezoceramic actuators. We consider approximation techniques leading to computational methods in the context of both parameter estimation and feedback control problems for these systems. Rigorous convergence results for parameter estimates and feedback gains are discussed.

  19. Comparative chronic toxicity of imidacloprid, clothianidin, and thiamethoxam to Chironomus dilutus and estimation of toxic equivalency factors.

    PubMed

    Cavallaro, Michael C; Morrissey, Christy A; Headley, John V; Peru, Kerry M; Liber, Karsten

    2017-02-01

    Nontarget aquatic insects are susceptible to chronic neonicotinoid insecticide exposure during the early stages of development from repeated runoff events and prolonged persistence of these chemicals. Investigations on the chronic toxicity of neonicotinoids to aquatic invertebrates have been limited to a few species and under different laboratory conditions that often preclude direct comparisons of the relative toxicity of different compounds. In the present study, full life-cycle toxicity tests using Chironomus dilutus were performed to compare the toxicity of 3 commonly used neonicotinoids: imidacloprid, clothianidin, and thiamethoxam. Test conditions followed a static-renewal exposure protocol in which lethal and sublethal endpoints were assessed on days 14 and 40. Reduced emergence success, advanced emergence timing, and male-biased sex ratios were sensitive responses to low-level neonicotinoid exposure. The 14-d median lethal concentrations for imidacloprid, clothianidin, and thiamethoxam were 1.52 μg/L, 2.41 μg/L, and 23.60 μg/L, respectively. The 40-d median effect concentrations (emergence) for imidacloprid, clothianidin, and thiamethoxam were 0.39 μg/L, 0.28 μg/L, and 4.13 μg/L, respectively. Toxic equivalence relative to imidacloprid was estimated through a 3-point response average of equivalencies calculated at 20%, 50%, and 90% lethal and effect concentrations. Relative to imidacloprid (toxic equivalency factor [TEF] = 1.0), chronic (lethality) 14-d TEFs for clothianidin and thiamethoxam were 1.05 and 0.14, respectively, and chronic (emergence inhibition) 40-d TEFs were 1.62 and 0.11, respectively. These population-relevant endpoints and TEFs suggest that imidacloprid and clothianidin exert comparable chronic toxicity to C. dilutus, whereas thiamethoxam induced comparable effects only at concentrations an order of magnitude higher. 
However, the authors caution that under field conditions, thiamethoxam readily degrades to clothianidin, thereby likely enhancing toxicity. Environ Toxicol Chem 2017;36:372-382. © 2016 SETAC.
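The toxic equivalency factors above are described as a three-point average of reference-to-test ratios of effect concentrations at the 20%, 50%, and 90% response levels. A sketch of that averaging with illustrative concentrations, not the paper's dose-response fits:

```python
def tef(ref_ecs, test_ecs):
    """Three-point toxic equivalency factor: the mean of the ratios
    of reference (here imidacloprid) to test-compound effect
    concentrations at matched response levels (e.g., EC20, EC50,
    EC90). A compound more toxic than the reference has lower ECs
    and therefore a TEF above 1."""
    return sum(r / t for r, t in zip(ref_ecs, test_ecs)) / len(ref_ecs)
```

A compound with effect concentrations half those of the reference at every level would get TEF = 2, while one an order of magnitude less toxic would get TEF = 0.1, matching the pattern reported for clothianidin and thiamethoxam.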

  20. The Ionizing Radiation Environment on the Moon

    NASA Technical Reports Server (NTRS)

    Adams, J. H., Jr.; Bhattacharya, M.; Lin, Zi-Wei; Pendleton, G.

    2006-01-01

    The ionizing radiation environment on the moon that contributes to the radiation hazard for astronauts consists of galactic cosmic rays, solar energetic particles and albedo particles from the lunar surface. We will present calculations of the absorbed dose and the dose equivalent to various organs in this environment during quiet times and during large solar particle events. We will evaluate the contribution of solar particles other than protons and the contributions of the various forms of albedo. We will use the results to determine which particle fluxes must be known in order to estimate the radiation hazard.

  1. Two photon excitation of atomic oxygen

    NASA Technical Reports Server (NTRS)

    Pindzola, M. S.

    1977-01-01

    A standard perturbation expansion in the atom-radiation field interaction is used to calculate the two-photon excitation cross section for the 1s² 2s² 2p⁴ ³P → 1s² 2s² 2p³(⁴S)3p ³P transition in atomic oxygen. The summation over bound and continuum intermediate states is handled by solving the equivalent inhomogeneous differential equation. Exact summation results differ by a factor of 2 from a rough estimate obtained by limiting the intermediate-state summation to one bound state. Higher-order electron correlation effects are also examined.

  2. Estimation of m.w.e (meter water equivalent) depth of the salt mine of Slanic Prahova, Romania

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitrica, B.; Margineanu, R.; Stoica, S.

    2010-11-24

    A new mobile detector was developed at IFIN-HH, Romania, for measuring the muon flux at the surface and underground. The measurements were performed in the salt mine of Slanic Prahova, Romania. The muon flux was determined for two galleries of the Slanic mine at different depths. To test the stability of the method, measurements of the surface muon flux at different altitudes were also performed. Based on the results, the depths of the two galleries were established at 610 and 790 m.w.e., respectively.
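Meter water equivalent normalizes an overburden to the depth of water with the same column density. The paper calibrates depth from muon-flux measurements; the direct column-density conversion, shown here with an assumed rock-salt density rather than the paper's calibration, is simply:

```python
def meters_water_equivalent(depth_m: float, rho_kg_m3: float) -> float:
    """Convert a physical overburden depth to meters of water
    equivalent: column density (depth * density) divided by the
    density of water, 1000 kg/m^3."""
    return depth_m * rho_kg_m3 / 1000.0
```

With rock salt at roughly 2165 kg/m^3, 100 m of overburden corresponds to about 217 m.w.e., which shows why the two galleries' physical depths are much shallower than their 610 and 790 m.w.e. values suggest.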

  3. A progress report on using bolometers cooled by adiabatic demagnetization refrigeration

    NASA Technical Reports Server (NTRS)

    Lesyna, L.; Roellig, T.; Savage, M.; Werner, Michael W.

    1989-01-01

    For sensitive detection of astronomical continuum radiation in the 200 micron to 3 mm wavelength range, bolometers are presently the detectors of choice. In order to approach the limits imposed by photon noise in a cryogenically cooled telescope in space, bolometers must be operated at temperatures near 0.1 K. Researchers report progress in building and using bolometers that operate at these temperatures. The most sensitive bolometer had an estimated noise equivalent power (NEP) of 7 x 10(exp -17) W Hz(exp -1/2). Researchers also briefly discuss the durability of paramagnetic salts used to cool the bolometers.

  4. Magnet management in electric machines

    DOEpatents

    Reddy, Patel Bhageerath; El-Refaie, Ayman Mohamed Fawzi; Huh, Kum Kang

    2017-03-21

    A magnet management method of controlling a ferrite-type permanent magnet electrical machine includes receiving and/or estimating the temperature of the permanent magnets; determining if that temperature is below a predetermined temperature; and if so, then: selectively heating the magnets in order to prevent demagnetization and/or derating the machine. A similar method provides for controlling the magnetization level by analyzing flux or magnetization level. Controllers that employ various methods are disclosed. The present invention has been described in terms of specific embodiment(s), and it is recognized that equivalents, alternatives, and modifications, aside from those expressly stated, are possible and within the scope of the appended claims.

  5. Narrow Quasar Absorption Lines and the History of the Universe

    NASA Astrophysics Data System (ADS)

    Liebscher, Dierck-Ekkehard

    In order to obtain an estimate of the parameters of the cosmological model, the statistics of narrow absorption lines in quasar spectra are evaluated. To this end, a phenomenological model of the evolution of the corresponding absorbers in density, size, number, and dimension is presented and compared with the observed evolution in the spectral density of the lines and their column density as seen in the equivalent width. In spite of the wide range of possible models, the Einstein-de Sitter model is shown to be unlikely because of the implied fast evolution in mass.

  6. The alarming problems of confounding equivalence using logistic regression models in the perspective of causal diagrams.

    PubMed

    Yu, Yuanyuan; Li, Hongkai; Sun, Xiaoru; Su, Ping; Wang, Tingting; Liu, Yi; Yuan, Zhongshang; Liu, Yanxun; Xue, Fuzhong

    2017-12-28

    Confounders can produce spurious associations between exposure and outcome in observational studies. For the majority of epidemiologists, adjusting for confounders using a logistic regression model is the habitual method, though it has some problems in accuracy and precision. It is, therefore, important to highlight the problems of logistic regression and to search for alternative methods. Four causal diagram models were defined to summarize confounding equivalence. Both theoretical proofs and simulation studies were performed to verify whether conditioning on different confounding equivalence sets had the same bias-reducing potential and then to select the optimum adjusting strategy, in which the logistic regression model and the inverse probability weighting based marginal structural model (IPW-based-MSM) were compared. The "do-calculus" was used to calculate the true causal effect of exposure on outcome, and then the bias and standard error were used to evaluate the performances of the different strategies. Adjusting for different sets of confounding equivalence, as judged by identical Markov boundaries, produced different bias-reducing potential in the logistic regression model. For the sets satisfying G-admissibility, adjusting for the set including all the confounders reduced the bias equivalently to the set containing the parent nodes of the outcome, while the bias after adjusting for the parent nodes of exposure was not equivalent to them. In addition, all causal effect estimations through logistic regression were biased, although the estimation after adjusting for the parent nodes of exposure was nearest to the true causal effect. However, conditioning on different confounding equivalence sets had the same bias-reducing potential under IPW-based-MSM.
Compared with logistic regression, the IPW-based-MSM could obtain unbiased causal effect estimation when the adjusted confounders satisfied G-admissibility, and the optimal strategy was to adjust for the parent nodes of the outcome, which yielded the highest precision. All adjustment strategies through logistic regression were biased for causal effect estimation, while IPW-based-MSM could always obtain unbiased estimation when the adjusted set satisfied G-admissibility. Thus, IPW-based-MSM is recommended for adjusting for confounder sets.
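An IPW-based MSM weights each subject by the inverse probability of the exposure actually received, so that confounders are balanced in the weighted pseudo-population. A minimal sketch of the normalized (Hajek) estimator of the average treatment effect, with the propensity score taken as known rather than fitted, to keep the example short:

```python
import numpy as np

def ipw_ate(y, x, p):
    """Normalized inverse-probability-weighted estimate of the
    average treatment effect E[Y(1)] - E[Y(0)].
    y: outcomes; x: binary exposure (0/1); p: propensity
    P(X=1 | confounders). In practice p would be estimated, e.g.
    by a logistic model of exposure on the confounders."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    p = np.asarray(p, dtype=float)
    treated = np.sum(x * y / p) / np.sum(x / p)
    control = np.sum((1 - x) * y / (1 - p)) / np.sum((1 - x) / (1 - p))
    return treated - control
```

In a simulation with a single confounder driving both exposure and outcome, weighting by the true propensity recovers the causal effect that a naive group-mean difference would overstate.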

  7. Estimation of energetic efficiency of heat supply in front of the aircraft at supersonic accelerated flight. Part 1. Mathematical models

    NASA Astrophysics Data System (ADS)

    Latypov, A. F.

    2008-12-01

    Fuel economy along the boost trajectory of an aerospace plane was estimated for the case of energy supply to the free stream. Initial and final flight velocities were specified. A model of gliding flight above cold air in an infinite isobaric thermal wake was used. Fuel consumption rates were compared along the optimal trajectory. The calculations were carried out for a combined power plant consisting of a ramjet and a liquid-propellant engine. In this first part of the paper, an exergy model is built to estimate the ramjet thrust and specific impulse. A quadratic dependence on aerodynamic lift is used to estimate the aerodynamic drag of the aircraft. The energy for flow heating is obtained at the expense of an equivalent reduction of the exergy of the combustion products. Dependencies are obtained for the increase of the range coefficient of cruise flight at different Mach numbers. The second part of the paper presents a mathematical model for the boost interval of the flight trajectory and computational results for the reduction of fuel consumption along the boost trajectory for a given value of the energy supplied in front of the aircraft.

  8. Estimation of energetic efficiency of heat supply in front of the aircraft at supersonic accelerated flight. Part II. Mathematical model of the trajectory boost part and computational results

    NASA Astrophysics Data System (ADS)

    Latypov, A. F.

    2009-03-01

    Fuel economy along the boost trajectory of an aerospace plane was estimated for the case of energy supply to the free stream. Initial and final flight velocities were given. A model of gliding flight above cold air in an infinite isobaric thermal wake was used. Fuel consumption was compared along optimal trajectories. The calculations were carried out for a combined power plant consisting of a ramjet and a liquid-propellant engine. An exergy model was constructed in the first part of the paper for estimating the ramjet thrust and specific impulse. A quadratic dependence on aerodynamic lift is used to estimate the aerodynamic drag of the aircraft. The energy for flow heating is obtained at the expense of an equivalent decrease of the exergy of the combustion products. Dependencies are obtained for the increase of the range coefficient of cruise flight at different Mach numbers. In this second part of the paper, a mathematical model is presented for the boost part of the flight trajectory of the flying vehicle, together with computational results for reducing the fuel consumption along the boost trajectory at a given value of the energy supplied in front of the aircraft.

  9. Cotton growth modeling and assessment using unmanned aircraft system visual-band imagery

    NASA Astrophysics Data System (ADS)

    Chu, Tianxing; Chen, Ruizhi; Landivar, Juan A.; Maeda, Murilo M.; Yang, Chenghai; Starek, Michael J.

    2016-07-01

    This paper explores the potential of using unmanned aircraft system (UAS)-based visible-band images to assess cotton growth. By applying the structure-from-motion algorithm, cotton plant height (ph) and canopy cover (cc) information was retrieved from point cloud-based digital surface models (DSMs) and orthomosaic images. Both UAS-based ph and cc follow a sigmoid growth pattern, as confirmed by ground-based studies. By applying an empirical model that converts cotton ph to cc, the estimated cc shows strong correlation (R2 = 0.990) with the observed cc. An attempt at modeling cotton yield was carried out using the ph and cc information obtained on June 26, 2015, the date when the sigmoid growth curves for both ph and cc tended to decline in slope. In a cross-validation test, the correlations between the ground-measured yield and the estimates derived from ph and/or cc were compared. In general, combining ph and cc gives the yield estimate that agrees best with the observed yield, while the cc-based estimate produces the second strongest correlation, regardless of the complexity of the models.
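The sigmoid growth pattern described above can be fit as a three-parameter logistic curve. The sketch below uses invented height observations and a coarse grid search standing in for a proper nonlinear least-squares solver; the inflection day t0 is the analogue of the date at which the growth curve's slope begins to decline.

```python
import numpy as np

# Hypothetical day-of-season vs. plant height observations (cm).
days = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], float)
ph = np.array([5, 9, 18, 34, 55, 72, 82, 87, 89], float)

def sigmoid(t, K, r, t0):
    """Logistic growth: asymptote K, rate r, inflection day t0."""
    return K / (1 + np.exp(-r * (t - t0)))

# Coarse grid search minimising the sum of squared errors.
grid = [(K, r, t0)
        for K in np.linspace(85, 100, 16)
        for r in np.linspace(0.05, 0.2, 31)
        for t0 in np.linspace(35, 60, 51)]
K, r, t0 = min(grid, key=lambda p: np.sum((sigmoid(days, *p) - ph) ** 2))
print(f"asymptote={K:.1f} cm, inflection day={t0:.1f}")
```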

  10. Non-local Second Order Closure Scheme for Boundary Layer Turbulence and Convection

    NASA Astrophysics Data System (ADS)

    Meyer, Bettina; Schneider, Tapio

    2017-04-01

    There is scientific consensus that cloud feedback remains the largest source of uncertainty in the prediction of climate parameters such as climate sensitivity. Narrowing this uncertainty requires not only a better physical understanding of cloud and boundary layer processes, but specifically an improved representation of boundary layer processes in models. General circulation models use separate parameterisation schemes for the different boundary layer processes, such as small-scale turbulence and shallow and deep convection. Small-scale turbulence is usually modelled by local diffusive parameterisation schemes, which truncate the hierarchy of moment equations at first order and use second-order equations only to estimate closure parameters. In contrast, the representation of convection requires higher-order statistical moments to capture its more complex structure, such as narrow updrafts in a quasi-steady environment. Truncating the moment equations at second order may lead to more accurate parameterisations. At the same time, it offers an opportunity to take spatially correlated structures (e.g., plumes) into account, which are known to be important for convective dynamics. In this project, we study the potential and limits of local and non-local second-order closure schemes. A truncation of the moment equations at second order represents the same dynamics as a quasi-linear version of the equations of motion. We study the three-dimensional quasi-linear dynamics of dry and moist convection by implementing it in an LES model (PyCLES) and comparing it to a fully non-linear LES. In the quasi-linear LES, interactions among turbulent eddies are suppressed but nonlinear eddy-mean flow interactions are retained, as they are in the second-order closure.
In physical terms, suppressing eddy-eddy interactions amounts to suppressing, e.g., interactions among convective plumes, while retaining interactions between plumes and the environment (e.g., entrainment and detrainment). In a second part, we exploit the possibility of including non-local statistical correlations in a second-order closure scheme. Such non-local correlations allow us to incorporate directly the spatially coherent structures that occur in the form of convective updrafts penetrating the boundary layer. This lets us extend, in a non-local sense, earlier work that used assumed-PDF schemes to parameterise boundary layer turbulence and shallow convection.

  11. Variables separation and superintegrability of the nine-dimensional MICZ-Kepler problem

    NASA Astrophysics Data System (ADS)

    Phan, Ngoc-Hung; Le, Dai-Nam; Thoi, Tuan-Quoc N.; Le, Van-Hoang

    2018-03-01

    The nine-dimensional MICZ-Kepler problem is of recent interest. This is a system describing a charged particle moving in the Coulomb field plus the field of an SO(8) monopole in nine-dimensional space. Interestingly, this problem is equivalent to a 16-dimensional harmonic oscillator via the Hurwitz transformation. In the present paper, we report on the multiseparability, a common property of superintegrable systems, and the superintegrability of the problem. First, we show the solvability of the Schrödinger equation of the problem by the variables separation method in different coordinate systems. Second, based on the SO(10) symmetry algebra of the system, we construct explicitly a set of seventeen invariant operators, all of second order in the momentum components, satisfying the condition of superintegrability. The number 17 coincides with the prediction of the (2n - 1) law for the maximal order of superintegrability in the case n = 9. Until now, this law has been accepted to apply only to scalar Hamiltonian eigenvalue equations in n-dimensional space; our results can therefore be treated as evidence that this definition of superintegrability may also apply to some vector equations, such as the Schrödinger equation for the nine-dimensional MICZ-Kepler problem.

  12. Dose Equivalents for Second-Generation Antipsychotic Drugs: The Classical Mean Dose Method

    PubMed Central

    Leucht, Stefan; Samara, Myrto; Heres, Stephan; Patel, Maxine X.; Furukawa, Toshi; Cipriani, Andrea; Geddes, John; Davis, John M.

    2015-01-01

    Background: The concept of dose equivalence is important for many purposes. The classical approach published by Davis in 1974 subsequently dominated textbooks for several decades. It was based on the assumption that the mean doses found in flexible-dose trials reflect the average optimum dose which can be used for the calculation of dose equivalence. We are the first to apply the method to second-generation antipsychotics. Methods: We searched for randomized, double-blind, flexible-dose trials in acutely ill patients with schizophrenia that examined 13 oral second-generation antipsychotics, haloperidol, and chlorpromazine (last search June 2014). We calculated the mean doses of each drug weighted by sample size and divided them by the weighted mean olanzapine dose to obtain olanzapine equivalents. Results: We included 75 studies with 16 555 participants. The doses equivalent to 1 mg/d olanzapine were: amisulpride 38.3 mg/d, aripiprazole 1.4 mg/d, asenapine 0.9 mg/d, chlorpromazine 38.9 mg/d, clozapine 30.6 mg/d, haloperidol 0.7 mg/d, quetiapine 32.3 mg/d, risperidone 0.4 mg/d, sertindole 1.1 mg/d, ziprasidone 7.9 mg/d, zotepine 13.2 mg/d. For iloperidone, lurasidone, and paliperidone no data were available. Conclusions: The classical mean dose method is not reliant on the limited availability of fixed-dose data at the lower end of the effective dose range, which is the major limitation of “minimum effective dose methods” and “dose-response curve methods.” In contrast, the mean doses found by the current approach may have in part depended on the dose ranges chosen for the original trials. Ultimate conclusions on dose equivalence of antipsychotics will need to be based on a review of various methods. PMID:25841041
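The classical mean dose method itself is simple arithmetic: the sample-size-weighted mean dose of each drug divided by the weighted mean olanzapine dose. A minimal sketch with invented trial data (not the paper's):

```python
# Each entry: list of (mean dose in mg/d, sample size N) per flexible-dose
# trial. The numbers below are illustrative, not the review's data.
trials = {
    "olanzapine":  [(15.2, 120), (13.8, 200)],
    "risperidone": [(4.9, 150), (4.1, 180)],
}

def weighted_mean_dose(rows):
    """Sample-size-weighted mean dose across trials."""
    total_n = sum(n for _, n in rows)
    return sum(dose * n for dose, n in rows) / total_n

olz = weighted_mean_dose(trials["olanzapine"])
equivalents = {drug: weighted_mean_dose(rows) / olz
               for drug, rows in trials.items()}
print(equivalents)   # mg of each drug equivalent to 1 mg/d olanzapine
```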

  13. Bigeye Bomb: Unresolved Development Issues

    DTIC Science & Technology

    1989-08-11

    the lethal agent formed inside the Bigeye bomb) at the purity or equivalent-biotoxicity level required by the Test and Evaluation Master Plan (TEMP)...during the 5- through 30-second period after mixing starts, without specifying either when the required purity or equivalent biotoxicity level should...revision of the purity or equivalent biotoxicity requirement. However, we believe that a bomb that produces lethal agent at the required purity level for 1

  14. Disabled vs nondisabled readers: perceptual vs higher-order processing of one vs three letters.

    PubMed

    Allegretti, C L; Puglisi, J T

    1986-10-01

    12 disabled and 12 nondisabled readers (mean age, 11 yr.) were compared on a letter-search task which separated perceptual processing from higher-order processing. Participants were presented a first stimulus (for 200 msec. to minimize eye movements) followed by a second stimulus immediately to estimate the amount of information initially perceived or after a 3000-msec. interval to examine information more permanently stored. Participants were required to decide whether any letter present in the first stimulus was also present in the second. Two processing loads (1 and 3 letters) were examined. Disabled readers showed more pronounced deficits when they were given very little time to process information or more information to process.

  15. Small-area snow surveys on the northern plains of North Dakota

    USGS Publications Warehouse

    Emerson, Douglas G.; Carroll, T.R.; Steppuhn, Harold

    1985-01-01

    Snow-cover data are needed for many facets of hydrology. The variation in snow cover over small areas is the focus of this study. The feasibility of using aerial surveys to obtain information on the snow water equivalent of the snow cover, in order to minimize the necessity of labor-intensive ground snow surveys, was evaluated. A low-flying aircraft was used to measure the attenuation of natural terrestrial gamma radiation by the snow cover. Aerial and ground snow surveys of eight 1-mile snow courses and one 4-mile snow course were used in the evaluation, with the ground snow surveys used as the base for evaluating the aerial data. Each of the 1-mile snow courses consisted of a single land use, and all had the same terrain type (plain). The 4-mile snow course consisted of a variety of land uses and the same terrain type. Using the aerial snow-survey technique, the snow water equivalent of the 1-mile snow courses was measured with three passes of the aircraft. Use of more than one pass did not improve the results. The mean absolute difference between the aerial- and ground-measured snow water equivalents for the 1-mile snow courses was 26 percent (0.77 inch). The aerial snow water equivalents determined for the 1-mile snow courses were used to estimate the variations in the snow water equivalents over the 4-mile snow course. The weighted mean absolute difference for the 4-mile snow course was 27 percent (0.8 inch). Variations in snow water equivalents could not be verified adequately by segmenting the aerial snow-survey data because of the uniformity found in the snow cover. On the 4-mile snow course, about two-thirds of the aerial snow-survey data agreed with the ground snow-survey data within the accuracy of the aerial technique (±0.5 inch of the mean snow water equivalent).
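The aerial technique rests on exponential attenuation of terrestrial gamma radiation by the water mass of the snowpack. A simplified sketch, with an illustrative attenuation coefficient (operational algorithms use calibrated, energy-window-specific coefficients and background corrections):

```python
import math

def swe_from_counts(c_bare, c_snow, alpha=0.20):
    """Snow water equivalent from gamma count rates, assuming the
    simplified model C_snow = C_bare * exp(-alpha * SWE).
    alpha is an illustrative effective attenuation coefficient per
    inch of water equivalent."""
    return math.log(c_bare / c_snow) / alpha

# Count rate over bare ground vs. over snow (counts per second, invented).
print(round(swe_from_counts(1000.0, 850.0), 2))  # inches of water equivalent
```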

  16. Relativistic tests with lunar laser ranging

    NASA Astrophysics Data System (ADS)

    Hofmann, F.; Müller, J.

    2018-02-01

    This paper presents the recent version of the lunar laser ranging (LLR) analysis model at the Institut für Erdmessung (IfE), Leibniz Universität Hannover, and highlights a few tests of Einstein’s theory of gravitation using LLR data. Investigations related to a possible temporal variation of the gravitational constant, the equivalence principle, the PPN parameters β and γ, as well as the geodetic precession were carried out. The LLR analysis model was updated by gravitational effects of the Sun and planets with the Moon as extended body. The higher-order gravitational interaction between Earth and Moon as well as effects of the solid Earth tides on the lunar motion were refined. The basis for the modeled lunar rotation is now a 2-layer core/mantle model according to the DE430 ephemeris. The validity of Einstein’s theory was studied using this updated analysis model and an LLR data set from 1970 to January 2015. Within the estimated accuracies, no deviations from Einstein’s theory are detected. A relative temporal variation of the gravitational constant is estimated as Ġ/G₀ = (7.1 ± 7.6) × 10⁻¹⁴ yr⁻¹, the test of the equivalence principle gives Δ(m_g/m_i)_EM = (−3 ± 5) × 10⁻¹⁴, and the Nordtvedt parameter

  17. Closed-Loop System Identification Experience for Flight Control Law and Flying Qualities Evaluation of a High Performance Fighter Aircraft

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1999-01-01

    This paper highlights some of the results and issues associated with estimating models to evaluate control law design methods and design criteria for advanced high performance aircraft. Experimental fighter aircraft such as the NASA High Alpha Research Vehicle (HARV) have the capability to maneuver at very high angles of attack where nonlinear aerodynamics often predominate. HARV is an experimental F/A-18, configured with thrust vectoring and conformal actuated nose strakes. Identifying closed-loop models for this type of aircraft can be made difficult by nonlinearities and high-order characteristics of the system. In this paper only lateral-directional axes are considered since the lateral-directional control law was specifically designed to produce classical airplane responses normally expected with low-order, rigid-body systems. Evaluation of the control design methodology was made using low-order equivalent systems determined from flight and simulation. This allowed comparison of the closed-loop rigid-body dynamics achieved in flight with that designed in simulation. In flight, the On Board Excitation System was used to apply optimal inputs to lateral stick and pedals at five angles of attack: 5, 20, 30, 45, and 60 degrees. Data analysis and closed-loop model identification were done using frequency domain maximum likelihood. The structure of the identified models was a linear state-space model reflecting classical 4th-order airplane dynamics. Input time delays associated with the high-order controller and aircraft system were accounted for in data preprocessing. A comparison of flight estimated models with small perturbation linear design models highlighted nonlinearities in the system and indicated that the estimated closed-loop rigid-body dynamics were sensitive to input amplitudes at 20 and 30 degrees angle of attack.

  18. Estimate of higher order ionospheric errors in GNSS positioning

    NASA Astrophysics Data System (ADS)

    Hoque, M. Mainul; Jakowski, N.

    2008-10-01

    Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher-order ionospheric errors, such as the second- and third-order ionospheric terms in the refractive index formula and errors due to bending of the signal, and assumes the total electron content (TEC) to be the same at the two GPS frequencies. These assumptions lead to erroneous estimation and correction of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas are proposed to correct errors due to the excess path length (in addition to the free-space path length), the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. With the proposed correction formulas, the GPS dual-frequency residual range errors can be corrected to millimeter-level accuracy.
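The first-order correction that the paper extends can be sketched numerically: the dual-frequency ionosphere-free combination cancels the 40.3·TEC/f² term exactly but leaves the higher-order (f⁻³, f⁻⁴) and bending terms untouched. The geometric range and TEC below are illustrative values.

```python
# GPS L1/L2 carrier frequencies (Hz) and an illustrative slant TEC.
f1, f2 = 1575.42e6, 1227.60e6
TEC = 50e16                              # electrons/m^2 (50 TECU, invented)
rho = 20_000_000.0                       # geometric range, metres (invented)

# First-order ionospheric group delay on each frequency, metres.
I1 = 40.3 * TEC / f1**2
I2 = 40.3 * TEC / f2**2
P1, P2 = rho + I1, rho + I2              # idealised pseudoranges

# Ionosphere-free combination: removes the 1/f^2 term exactly.
P_IF = (f1**2 * P1 - f2**2 * P2) / (f1**2 - f2**2)
print(P_IF - rho)    # residual after first-order correction (~0 m)
```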

  19. The manual control of vehicles undergoing slow transitions in dynamic characteristics

    NASA Technical Reports Server (NTRS)

    Moriarty, T. E.

    1974-01-01

    The manual control of a vehicle with slowly time-varying dynamics was studied to develop the analytic and computer techniques necessary for the study of time-varying systems. The human operator is considered as he controls a time-varying plant in which the changes are neither abrupt nor so slow that the time variations are unimportant. An experiment in which pilots controlled the longitudinal mode of a simulated time-varying aircraft is described. The vehicle changed from a pure double integrator to a damped second-order system, either instantaneously or smoothly over time intervals of 30, 75, or 120 seconds. The regulator task consisted of trying to null the error term resulting from injected random disturbances with bandwidths of 0.8, 1.4, and 2.0 radians per second. Each of the twelve experimental conditions was replicated ten times. It is shown that the pilot's performance in the time-varying task is essentially equivalent to his performance in stationary tasks corresponding to various points in the transition. A rudimentary model for the pilot-vehicle-regulator system is presented.

  20. Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2000-01-01

    Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
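The reweighting idea can be sketched for an ordinary regression: each case's weight shrinks with its residual distance, so gross outliers lose influence. Below is a minimal IRLS loop with Huber weights on invented contaminated data (the paper applies the same scheme to mean and covariance structures rather than a simple regression):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear model y = 1 + 2x plus a few gross outliers.
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=0.3, size=50)
y[::10] += 15                      # contaminate every 10th case

X = np.column_stack([np.ones_like(x), x])

def irls(X, y, c=1.345, iters=20):
    """Iteratively reweighted least squares with Huber weights:
    each case is down-weighted according to its residual distance."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r / s)
        w = np.sqrt(np.where(u <= c, 1.0, c / u))   # Huber weights
        beta = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
    return beta

beta = irls(X, y)
print(beta)    # close to [1, 2] despite the outliers
```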

  1. BENTHIC MICROBIAL RESPIRATION IN APPALACHIAN MOUNTAIN, PIEDMONT, AND COASTAL PLAINS, STREAMS OF THE EASTERN USA

    EPA Science Inventory

    Our study had two objectives. First, in order to quantify the potential underestimation of community respiration caused by the exclusion of anaerobic processes, we compared benthic microbial respiration measured as O2 consumption with estimates based on DHA. Second, our previous ...

  2. Money for health: the equivalent variation of cardiovascular diseases.

    PubMed

    Groot, Wim; Van Den Brink, Henriëtte Maassen; Plug, Erik

    2004-09-01

    This paper introduces a new method to calculate the extent to which individuals are willing to trade money for improvements in their health status. An individual welfare function of income (WFI) is applied to calculate the equivalent income variation of health impairments. We believe that this approach avoids various drawbacks of alternative willingness-to-pay methods. The WFI is used to calculate the equivalent variation of cardiovascular diseases. It is found that for a 25-year-old male the equivalent variation of a heart disease ranges from 114,000 euro to 380,000 euro, depending on the welfare level. This is about 10,000-30,000 euro for an additional life year. The equivalent variation declines with age and is about the same for men and women. The estimates further vary with the discount rate chosen. The estimates of the equivalent variation are generally higher than the money spent per QALY on most heart-related medical interventions. The cost-benefit analysis shows that for most interventions the value of the health benefits exceeds the costs. Heart transplants seem to be too costly and only beneficial if patients are young.
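Per-life-year figures of this kind come from spreading a lump-sum equivalent variation over remaining life years with discounting. A sketch using a standard annuity formula; the discount rate and horizon here are assumptions for illustration, not the paper's choices:

```python
def per_life_year(lump_sum, rate=0.05, years=50):
    """Constant annual amount whose present value over `years` at
    discount `rate` equals `lump_sum` (ordinary annuity formula)."""
    return lump_sum * rate / (1 - (1 + rate) ** -years)

# Lump-sum equivalent variations from the paper's quoted range.
for ev in (114_000, 380_000):
    print(round(per_life_year(ev)), "euro per life year")
```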

  3. MR images from fewer data

    NASA Astrophysics Data System (ADS)

    Vafadar, Bahareh; Bones, Philip J.

    2012-10-01

    There is a strong motivation to reduce the amount of acquired data necessary to reconstruct clinically useful MR images, since less data means faster acquisition sequences, less time for the patient to remain motionless in the scanner, and better time resolution for observing temporal changes within the body. We recently introduced an improvement in image quality for reconstructing parallel MR images by incorporating a data-ordering step with compressed sensing (CS) in an algorithm named `PECS'. That method requires a prior estimate of the image to be available. We are extending the algorithm to explore ways of utilizing the data-ordering step without requiring a prior estimate. The method presented here first reconstructs an initial image x1 by compressed sensing (with sparsity enhanced by SVD), then derives from x1 a data ordering R'1, which ranks the voxels of x1 according to their value. A second reconstruction is then performed which incorporates minimization of the 1-norm of the estimate after ordering by R'1, resulting in a new reconstruction x2. Preliminary results are encouraging.

  4. A theoretical framework for convergence and continuous dependence of estimates in inverse problems for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1988-01-01

    Numerical techniques for parameter identification in distributed-parameter systems are developed analytically. A general convergence and stability framework (for continuous dependence on observations) is derived for first-order systems on the basis of (1) a weak formulation in terms of sesquilinear forms and (2) the resolvent convergence form of the Trotter-Kato approximation. The extension of this framework to second-order systems is considered.

  5. Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cacuci, Dan G.; Favorite, Jeffrey A.

    This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.

  6. Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses

    DOE PAGES

    Cacuci, Dan G.; Favorite, Jeffrey A.

    2018-04-06

    This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.

  7. A comparison of reduced-order modelling techniques for application in hyperthermia control and estimation.

    PubMed

    Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B

    1998-01-01

    Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.
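Modal decomposition reduction can be sketched in a few lines: diagonalise the system matrix, keep the most significant (slowest-decaying) modes, and project the input and output matrices onto them. The 4-state system below is illustrative only, not a hyperthermia thermal model:

```python
import numpy as np

def modal_reduce(A, B, C, k):
    """Modal truncation of dx/dt = Ax + Bu, y = Cx: diagonalise A,
    keep the k slowest modes, and project B and C onto them."""
    lam, V = np.linalg.eig(A)
    idx = np.argsort(np.abs(lam.real))[:k]     # slowest-decaying modes
    W = np.linalg.inv(V)                       # left eigenvectors (rows)
    return np.diag(lam[idx]), W[idx] @ B, C @ V[:, idx]

# Illustrative stable 4-state system with two slow and two fast modes.
A = np.diag([-0.5, -1.0, -20.0, -50.0])
B = np.ones((4, 1))
C = np.ones((1, 4))
Ar, Br, Cr = modal_reduce(A, B, C, 2)

# Fast modes contribute little to the steady-state (DC) gain.
dc_full = (C @ np.linalg.inv(-A) @ B).item()
dc_red = (Cr @ np.linalg.inv(-Ar) @ Br).item()
print(dc_full, dc_red)
```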

  8. Delay correction model for estimating bus emissions at signalized intersections based on vehicle specific power distributions.

    PubMed

    Song, Guohua; Zhou, Xixi; Yu, Lei

    2015-05-01

    The intersection is one of the biggest emission points for buses and also a site of high exposure for people. Several traffic performance indexes have been developed and are widely used for intersection evaluation. However, few studies have focused on the relationship between these indexes and emissions at intersections. This paper proposes a model that relates emissions to two commonly used measures of effectiveness (delay time and number of stops) by using bus activity data and emission data at intersections. First, from a large number of field instantaneous emission data and corresponding activity data collected by a Portable Emission Measurement System (PEMS), emission rates are derived for different vehicle specific power (VSP) bins. Then, 2002 sets of trajectory data, equivalent to about 140,000 sets of second-by-second activity data, are obtained from Global Positioning System (GPS)-equipped diesel buses in Beijing, and the delay and the emission factors of each trajectory are estimated. Next, using baseline emission factors for two types of intersections, the Arterial @ Arterial intersection and the Arterial @ Collector intersection, delay correction factors are calculated at different congestion levels. Finally, delay correction models are established for adjusting emission factors for each type of intersection and different numbers of stops. A comparative analysis between estimated and field emission factors demonstrates that the delay correction model is reliable. Copyright © 2015 Elsevier B.V. All rights reserved.
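The VSP-binning step can be sketched as follows. The coefficients are the widely used generic light-duty form of the VSP equation, standing in here for the bus-specific values such a study would calibrate:

```python
def vsp(v, a, grade=0.0):
    """Vehicle specific power in kW/tonne; v in m/s, a in m/s^2.
    Generic light-duty coefficients, used here for illustration only."""
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v**3

def vsp_bin(value, width=1.0):
    """Assign a VSP value to a fixed-width bin index."""
    return int(value // width)

# Second-by-second speeds (m/s) from a hypothetical bus trajectory;
# at 1 Hz the speed difference is the acceleration in m/s^2.
speeds = [0.0, 2.0, 5.0, 9.0, 12.0, 12.5]
accels = [s1 - s0 for s0, s1 in zip(speeds, speeds[1:])]
bins = [vsp_bin(vsp(v, a)) for v, a in zip(speeds[1:], accels)]
print(bins)   # bin index per second; emission rates are averaged per bin
```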

  9. Interferometric observations of an artificial satellite.

    PubMed

    Preston, R A; Ergas, R; Hinteregger, H F; Knight, C A; Robertson, D S; Shapiro, I I; Whitney, A R; Rogers, A E; Clark, T A

    1972-10-27

    Very-long-baseline interferometric observations of radio signals from the TACSAT synchronous satellite, even though extending over only 7 hours, have enabled an excellent orbit to be deduced. Precision in differenced delay and delay-rate measurements reached 0.15 nanosecond ( approximately 5 centimeters in equivalent differenced distance) and 0.05 picosecond per second ( approximately 0.002 centimeter per second in equivalent differenced velocity), respectively. The results from this initial three-station experiment demonstrate the feasibility of using the method for accurate satellite tracking and for geodesy. Comparisons are made with other techniques.
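The quoted equivalences are direct conversions through the speed of light, which is easy to verify:

```python
# Convert the quoted delay and delay-rate precisions into equivalent
# differenced distance and velocity.
c = 299_792_458.0                  # speed of light, m/s

delay_m = 0.15e-9 * c              # 0.15 ns of differenced delay -> metres
rate_cm_s = 0.05e-12 * c * 100     # 0.05 ps/s of delay rate -> cm/s

print(f"{delay_m * 100:.1f} cm, {rate_cm_s:.4f} cm/s")
```

The results (about 4.5 cm and 0.0015 cm/s) match the abstract's "approximately 5 centimeters" and "approximately 0.002 centimeter per second" to the stated rounding.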

  10. Persistence and breakdown of strand symmetry in the human genome.

    PubMed

    Zhang, Shang-Hong

    2015-04-07

    Afreixo, V., Bastos, C.A.C., Garcia, S.P., Rodrigues, J.M.O.S., Pinho, A.J., and Ferreira, P.J.S.G. (2013, "The breakdown of the word symmetry in the human genome," J. Theor. Biol. 335, 153-159) analyzed word symmetry (strand symmetry, or the second parity rule) in the human genome. They concluded that strand symmetry holds for oligonucleotides up to 6 nt and is no longer statistically significant for oligonucleotides of higher orders. However, although they provided some new results on the issue, their interpretation is not fully justified, and their conclusion needs further evaluation. Further analysis of their results, especially those of the equivalence tests and the word symmetry distance, shows that strand symmetry would persist for higher-order oligonucleotides up to 9 nt in the human genome, at least for its overall frequency framework (oligonucleotide frequency pattern). Copyright © 2015 Elsevier Ltd. All rights reserved.
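Strand symmetry can be probed by comparing each word's frequency with that of its reverse complement on the same strand. A minimal sketch follows; the L1-style distance below is one simple choice of symmetry measure, not necessarily the word symmetry distance used by Afreixo et al.:

```python
from collections import Counter

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(word):
    """Reverse complement of a DNA word."""
    return word.translate(COMP)[::-1]

def symmetry_distance(seq, k):
    """Relative L1 distance between each k-mer's count and the count of
    its reverse complement on the same strand (0 = perfect symmetry)."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    d = sum(abs(counts[w] - counts[revcomp(w)]) for w in counts)
    return d / (2 * total)
```

Under the second parity rule this distance stays near zero; its growth with k is what distinguishes the "up to 6 nt" and "up to 9 nt" conclusions above.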

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marquette, Ian, E-mail: i.marquette@uq.edu.au; Quesne, Christiane, E-mail: cquesne@ulb.ac.be

    The purpose of this communication is to point out the connection between a 1D quantum Hamiltonian involving the fourth Painlevé transcendent P_IV, obtained in the context of second-order supersymmetric quantum mechanics and third-order ladder operators, and a hierarchy of families of quantum systems called k-step rational extensions of the harmonic oscillator, related to multi-indexed X_{m1,m2,...,mk} Hermite exceptional orthogonal polynomials of type III. The connection between these exactly solvable models is established at the level of the equivalence of the Hamiltonians, using rational solutions of the fourth Painlevé equation in terms of generalized Hermite and Okamoto polynomials. We also relate the different ladder operators obtained by various combinations of supersymmetric constructions involving Darboux-Crum and Krein-Adler supercharges, their zero modes, and the corresponding energies. These results demonstrate and clarify the relation observed for a particular case in previous papers.

  12. Contracting-out in the United Kingdom: a partnership between social security and private pension plans.

    PubMed

    Daykin, Chris

    2002-01-01

    Contracting-out was introduced in the United Kingdom in 1978 as part of the arrangements for the State Earnings-Related Pension Scheme (SERPS) in order to avoid duplication with the existing well-developed defined benefit occupational pension plan sector. Members and sponsors of contracted-out schemes were able to save on their social security contributions in recognition of the fact that they were accruing equivalent benefits through an occupational pension plan. Later on this concept was extended to those with individual money purchase pension plans. This article considers a brief history of contracting-out, the principles of contracting-out, some problems associated with contracting-out, the implications of the introduction of stakeholder pensions and State Second Pension, and the latest rebate review and rebate orders. It examines how U.K. pensions policy since 1978 has been based on a partnership between social security and private pension plans.

  13. From the SU(2) quantum link model on the honeycomb lattice to the quantum dimer model on the kagome lattice: Phase transition and fractionalized flux strings

    NASA Astrophysics Data System (ADS)

    Banerjee, D.; Jiang, F.-J.; Olesen, T. Z.; Orland, P.; Wiese, U.-J.

    2018-05-01

    We consider the (2+1)-dimensional SU(2) quantum link model on the honeycomb lattice and show that it is equivalent to a quantum dimer model on the kagome lattice. The model has crystalline confined phases with spontaneously broken translation invariance associated with pinwheel order, which is investigated with either a Metropolis or an efficient cluster algorithm. External half-integer non-Abelian charges [which transform nontrivially under the Z(2) center of the SU(2) gauge group] are confined to each other by fractionalized strings with a delocalized Z(2) flux. The strands of the fractionalized flux strings are domain walls that separate distinct pinwheel phases. A second-order phase transition in the three-dimensional Ising universality class separates two confining phases: one with correlated pinwheel orientations, and the other with uncorrelated pinwheel orientations.

  14. More on the elongational viscosity of an oriented fiber assembly

    NASA Technical Reports Server (NTRS)

    Pipes, R. Byron, Jr.; Beaussart, A. J.; Okine, R. K.

    1990-01-01

    The effective elongational viscosity for an oriented fiber assembly of discontinuous fibers suspended in a viscous matrix fluid is developed for a fiber array with variable overlap length of both symmetric and asymmetric geometries. Further, the relation is developed for a power-law matrix fluid with finite yield stress. The developed relations for a Newtonian fluid reveal that the influence of overlap length upon elongational viscosity may be expressed as a polynomial of second order. The results for symmetric and asymmetric geometries are shown to be equivalent. Finally, for the power-law fluid the influence of fiber aspect ratio on elongational viscosity was shown to be of order m + 1, where m is greater than 0 and less than 1, as compared to 2 for the Newtonian fluid, while the effective yield stress was found to be proportional to the fiber aspect ratio and volume fraction.

  15. Effectiveness of Ebola treatment units and community care centers - Liberia, September 23-October 31, 2014.

    PubMed

    Washington, Michael L; Meltzer, Martin L

    2015-01-30

    Previous reports have shown that an Ebola outbreak can be slowed, and eventually stopped, by placing Ebola patients into settings where there is reduced risk for onward Ebola transmission, such as Ebola treatment units (ETUs) and community care centers (CCCs) or equivalent community settings that encourage changes in human behaviors to reduce transmission risk, such as making burials safe and reducing contact with Ebola patients. Using cumulative case count data from Liberia up to August 28, 2014, the EbolaResponse model previously estimated that without any additional interventions or further changes in human behavior, there would have been approximately 23,000 reported Ebola cases by October 31, 2014. In actuality, there were 6,525 reported cases by that date. To estimate the effectiveness of ETUs and CCCs or equivalent community settings in preventing greater Ebola transmission, CDC applied the EbolaResponse model to the period September 23-October 31, 2014, in Liberia. The results showed that admitting Ebola patients to ETUs alone prevented an estimated 2,244 Ebola cases. Having patients receive care in CCCs or equivalent community settings with a reduced risk for Ebola transmission prevented an estimated 4,487 cases. Having patients receive care in either ETUs or CCCs or in equivalent community settings, prevented an estimated 9,100 cases, apparently as the result of a synergistic effect in which the impact of the combined interventions was greater than the sum of the two interventions. Caring for patients in ETUs, CCCs, or in equivalent community settings with reduced risk for transmission can be important components of a successful public health response to an Ebola epidemic.
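The synergy claim follows directly from the reported counts: the combined estimate exceeds the sum of the single-intervention estimates.

```python
etu_only = 2_244   # cases prevented by ETUs alone (reported)
ccc_only = 4_487   # cases prevented by CCCs or equivalent settings alone (reported)
combined = 9_100   # cases prevented by either intervention together (reported)

additive = etu_only + ccc_only   # what a purely additive effect would predict
synergy = combined - additive    # extra cases prevented by the combination
```

`additive` is 6,731, so the combined figure of 9,100 implies roughly 2,400 additional cases prevented beyond a purely additive effect.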

  16. The principle of equivalence reconsidered: assessing the relevance of the principle of equivalence in prison medicine.

    PubMed

    Jotterand, Fabrice; Wangmo, Tenzin

    2014-01-01

    In this article we critically examine the principle of equivalence of care in prison medicine. First, we provide an overview of how the principle of equivalence is utilized in various national and international guidelines on health care provision to prisoners. Second, we outline some of the problems associated with its application, and argue that the principle of equivalence should go beyond equivalence of access to include equivalence of outcomes. Third, because of the particular context of the prison environment, we contend that the concept of "health" in equivalence of health outcomes needs conceptual clarity; otherwise, it fails to provide a threshold for healthy states among inmates. We accomplish this by examining common understandings of the concepts of health and disease. We conclude by showing why the conceptualization of diseases as clinical problems provides a helpful approach to the delivery of health care in prison.

  17. Optical nonlinearity in gelatin layer film containing Au nanoparticles

    NASA Astrophysics Data System (ADS)

    Hirose, Tomohiro; Arisawa, Michiko; Omatsu, Takashige; Kuge, Ken'ichi; Hasegawa, Akira; Tateda, Mitsuhiro

    2002-09-01

    We demonstrate a novel technique, based on silver halide photographic development, to fabricate a gelatin film containing Au nanoparticles. We investigated the third-order nonlinearity of the film by the forward four-wave mixing technique. Peak absorption appeared at a wavelength of 560 nm. Self-diffraction from a third-order nonlinear grating formed by intense picosecond pulses was observed, and the experimental diffraction efficiency was proportional to the square of the pump intensity. The third-order susceptibility χ(3) of the film was estimated to be 1.8×10^-7 esu.

  18. Local energy flux estimates for unstable conditions using variance data in semiarid rangelands

    USGS Publications Warehouse

    Kustas, William P.; Blanford, J.H.; Stannard, D.I.; Daughtry, C.S.T.; Nichols, W.D.; Weltz, M.A.

    1994-01-01

    A network of meteorological stations was installed during the Monsoon '90 field campaign in the Walnut Gulch experimental watershed. The study area has a fairly complex surface. The vegetation cover is heterogeneous and sparse, and the terrain is mildly hilly, but dissected by ephemeral channels. Besides measurement of some of the standard weather data such as wind speed, air temperature, and solar radiation, these sites also contained instruments for estimating the local surface energy balance. The approach utilized measurements of net radiation (Rn), soil heat flux (G) and Monin-Obukhov similarity theory applied to first- and second-order turbulent statistics of wind speed and temperature for determining the sensible heat flux (H). The latent heat flux (LE) was solved as a residual in the surface energy balance equation, namely, LE = −(Rn + G + H). This procedure (VAR-RESID) for estimating the energy fluxes satisfied monetary constraints and the requirement for low maintenance and continued operation through the harsh environmental conditions experienced in semiarid regions. Comparison of energy fluxes using this approach with more traditional eddy correlation techniques showed differences were within 20% under unstable conditions. Similar variability in flux estimates over the study area was present in the eddy correlation data. Hence, estimates of H and LE using the VAR-RESID approach under unstable conditions were considered satisfactory. Also, with second-order statistics of vertical velocity collected at several sites, the local momentum roughness length was estimated. This is an important parameter used in modeling the turbulent transfer of momentum and sensible heat fluxes across the surface-atmosphere interface.
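A sketch of the VAR-RESID idea, assuming the common free-convection (temperature-variance) similarity form for H and the sign convention Rn − G = H + LE (the abstract's own sign convention differs). The constant c1 ≈ 0.95 and the air properties below are illustrative values, not those of the paper:

```python
import math

RHO = 1.1    # air density, kg/m^3 (assumed value for the site)
CP = 1005.0  # specific heat of air at constant pressure, J/(kg K)
K = 0.4      # von Karman constant
G = 9.81     # gravitational acceleration, m/s^2

def sensible_heat_free_convection(sigma_t, z, t_air_k, c1=0.95):
    """Sensible heat flux H (W/m^2) from the temperature-variance
    (free-convection) similarity relation; sigma_t is the standard
    deviation of air temperature (K) at measurement height z (m)."""
    return RHO * CP * (sigma_t / c1) ** 1.5 * math.sqrt(K * G * z / t_air_k)

def latent_heat_residual(rn, g_soil, h):
    """LE as the residual of the surface energy balance Rn - G = H + LE."""
    return rn - g_soil - h
```

Only Rn, G, and second-order temperature statistics are needed, which is what makes the approach cheap and robust for unattended semiarid sites.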

  19. 14 CFR 25.335 - Design airspeeds.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... cruise speed (knots equivalent airspeed); Uref=the reference gust velocity (feet per second equivalent... control of airspeed and for transition from one flap position to another. (2) If an automatic flap... speed recommended for the operation of the device to allow for probable variations in speed control. For...

  20. 14 CFR 25.335 - Design airspeeds.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... cruise speed (knots equivalent airspeed); Uref=the reference gust velocity (feet per second equivalent... control of airspeed and for transition from one flap position to another. (2) If an automatic flap... speed recommended for the operation of the device to allow for probable variations in speed control. For...

  1. 14 CFR 25.335 - Design airspeeds.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... cruise speed (knots equivalent airspeed); Uref=the reference gust velocity (feet per second equivalent... control of airspeed and for transition from one flap position to another. (2) If an automatic flap... speed recommended for the operation of the device to allow for probable variations in speed control. For...

  2. 14 CFR 25.335 - Design airspeeds.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... cruise speed (knots equivalent airspeed); Uref=the reference gust velocity (feet per second equivalent... control of airspeed and for transition from one flap position to another. (2) If an automatic flap... speed recommended for the operation of the device to allow for probable variations in speed control. For...

  3. Use of a "Super-child" Approach to Assess the Vitamin A Equivalence of Moringa oleifera Leaves, Develop a Compartmental Model for Vitamin A Kinetics, and Estimate Vitamin A Total Body Stores in Young Mexican Children.

    PubMed

    Lopez-Teros, Veronica; Ford, Jennifer Lynn; Green, Michael H; Tang, Guangwen; Grusak, Michael A; Quihui-Cota, Luis; Muzhingi, Tawanda; Paz-Cassini, Mariela; Astiazaran-Garcia, Humberto

    2017-12-01

    Background: Worldwide, an estimated 250 million children <5 y old are vitamin A (VA) deficient. In Mexico, despite ongoing efforts to reduce VA deficiency, it remains an important public health problem; thus, food-based interventions that increase the availability and consumption of provitamin A-rich foods should be considered. Objective: The objectives were to assess the VA equivalence of ²H-labeled Moringa oleifera (MO) leaves and to estimate both total body stores (TBS) of VA and plasma retinol kinetics in young Mexican children. Methods: β-Carotene was intrinsically labeled by growing MO plants in a ²H₂O nutrient solution. Fifteen well-nourished children (17-35 mo old) consumed puréed MO leaves (1 mg β-carotene) and a reference dose of [¹³C₁₀]retinyl acetate (1 mg) in oil. Blood (2 samples/child) was collected 10 times (2 or 3 children each time) over 35 d. The bioefficacy of MO leaves was calculated from areas under the composite "super-child" plasma isotope response curves, and MO VA equivalence was estimated from these values; a compartmental model was developed to predict VA TBS and retinol kinetics from the composite plasma [¹³C₁₀]retinol data. TBS were also estimated with isotope dilution. Results: The relative bioefficacy of β-carotene retinol activity equivalents from MO was 28%; VA equivalence was 3.3:1 by weight (0.56 μmol retinol:1 μmol β-carotene). Kinetics of plasma retinol indicate more rapid plasma appearance and turnover and more extensive recycling in these children than are observed in adults. Model-predicted mean TBS (823 μmol) was similar to values predicted using a retinol isotope dilution equation applied to data from 3 to 6 d after dosing (mean ± SD: 832 ± 176 μmol; n = 7). Conclusions: The super-child approach can be used to estimate population carotenoid bioefficacy and VA equivalence, VA status, and parameters of retinol metabolism from a composite data set. Our results provide initial estimates of retinol kinetics in well-nourished young children with adequate VA stores and demonstrate that MO leaves may be an important source of VA. © 2017 American Society for Nutrition.
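The reported molar and weight equivalences are mutually consistent, as a quick check with standard molecular weights shows:

```python
MW_RETINOL = 286.45        # molecular weight of retinol, g/mol
MW_BETA_CAROTENE = 536.87  # molecular weight of beta-carotene, g/mol

molar_equiv = 0.56  # µmol retinol formed per µmol beta-carotene (reported)

# µg of beta-carotene required per µg of retinol formed
weight_ratio = MW_BETA_CAROTENE / (molar_equiv * MW_RETINOL)
```

`weight_ratio` evaluates to about 3.35, matching the reported 3.3:1 equivalence by weight.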

  4. An efficient sensitivity analysis method for modified geometry of Macpherson suspension based on Pearson correlation coefficient

    NASA Astrophysics Data System (ADS)

    Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh

    2017-06-01

    The main purpose of this paper is to propose a new method for designing the Macpherson suspension, based on Sobol indices expressed in terms of the Pearson correlation, which determine the importance of each member for the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with equivalent constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism increases the speed of analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to assess the accuracy of the modified geometry model, and an experimental 4-poster test rig is used to validate both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung mass acceleration; the estimation of the Pearson correlation coefficient between variables is also analysed. The results show that the Pearson correlation coefficient is an efficient method for analysing the vehicle suspension, leading to a better design of the Macpherson suspension system.
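The sensitivity ranking described above reduces, at its core, to computing Pearson correlation coefficients between sampled suspension parameters and a vehicle response, then sorting by magnitude. A minimal self-contained sketch (the variable names are illustrative, not the paper's):

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def sensitivity_ranking(samples, response):
    """Rank design variables by |r| against a response (e.g. sprung-mass
    acceleration); samples maps variable name -> list of sampled values."""
    return sorted(((name, pearson(vals, response))
                   for name, vals in samples.items()),
                  key=lambda t: abs(t[1]), reverse=True)
```

The most influential suspension member appears first in the ranking; the Sobol-index view weights the same correlations by their squared contribution to the response variance.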

  5. Space radiation dose analysis for solar flare of August 1989

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Simonsen, Lisa C.; Sauer, Herbert H.; Wilson, John W.; Townsend, Lawrence W.

    1990-01-01

    Potential dose and dose rate levels to astronauts in deep space are predicted for the solar flare event which occurred during the week of August 13, 1989. The Geostationary Operational Environmental Satellite (GOES-7) monitored the temporal development and energy characteristics of the protons emitted during this event. From these data, differential fluence as a function of energy was obtained in order to analyze the flare using the Langley baryon transport code, BRYNTRN, which describes the interactions of incident protons in matter. Dose equivalent estimates for the skin, ocular lens, and vital organs for 0.5 to 20 g/sq cm of aluminum shielding were predicted. For relatively light shielding (less than 2 g/sq cm), the skin and ocular lens 30-day exposure limits are exceeded within several hours of flare onset. The vital organ (5 cm depth) dose equivalent is exceeded only for the thinnest shield (0.5 g/sq cm). Dose rates (rem/hr) for the skin, ocular lens, and vital organs are also computed.

  6. Space radiation dose analysis for solar flare of August 1989

    NASA Astrophysics Data System (ADS)

    Nealy, John E.; Simonsen, Lisa C.; Sauer, Herbert H.; Wilson, John W.; Townsend, Lawrence W.

    1990-12-01

    Potential dose and dose rate levels to astronauts in deep space are predicted for the solar flare event which occurred during the week of August 13, 1989. The Geostationary Operational Environmental Satellite (GOES-7) monitored the temporal development and energy characteristics of the protons emitted during this event. From these data, differential fluence as a function of energy was obtained in order to analyze the flare using the Langley baryon transport code, BRYNTRN, which describes the interactions of incident protons in matter. Dose equivalent estimates for the skin, ocular lens, and vital organs for 0.5 to 20 g/sq cm of aluminum shielding were predicted. For relatively light shielding (less than 2 g/sq cm), the skin and ocular lens 30-day exposure limits are exceeded within several hours of flare onset. The vital organ (5 cm depth) dose equivalent is exceeded only for the thinnest shield (0.5 g/sq cm). Dose rates (rem/hr) for the skin, ocular lens, and vital organs are also computed.

  7. Tests of gravity with future space-based experiments

    NASA Astrophysics Data System (ADS)

    Sakstein, Jeremy

    2018-03-01

    Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.

  8. Second-degree Stokes coefficients from multi-satellite SLR

    NASA Astrophysics Data System (ADS)

    Bloßfeld, Mathis; Müller, Horst; Gerstl, Michael; Štefka, Vojtěch; Bouman, Johannes; Göttl, Franziska; Horwath, Martin

    2015-09-01

    The long wavelength part of the Earth's gravity field can be determined, with varying accuracy, from satellite laser ranging (SLR). In this study, we investigate the combination of up to ten geodetic SLR satellites using iterative variance component estimation. SLR observations to different satellites are combined in order to identify the impact of each satellite on the estimated Stokes coefficients. The combination of satellite-specific weekly or monthly arcs makes it possible to reduce parameter correlations of the single-satellite solutions and leads to alternative estimates of the second-degree Stokes coefficients. This alternative time series might be helpful for assessing the uncertainty in the impact of the low-degree Stokes coefficients on geophysical investigations. In order to validate the obtained time series of second-degree Stokes coefficients, a comparison with the SLR RL05 time series of the Center for Space Research (CSR) is performed; it shows that all time series are comparable to the CSR series. The precision of the weekly/monthly C21 and S21 coefficients is analyzed by comparing mass-related equatorial excitation functions with geophysical model results and reduced geodetic excitation functions. For C21, the annual amplitude and phase of the DGFI solution agree better with three of four geophysical model combinations than other time series; for S21, all time series agree very well with each other. The impact of C20 on ice mass trend estimates for Antarctica is compared based on CSR GRACE RL05 solutions in which different monthly C20 time series are used as replacement. We found differences in the long-term Antarctic ice loss between the GRACE solutions induced by the different SLR C20 time series of CSR and DGFI amounting to about 13% of the total ice loss of Antarctica. This result shows that Antarctic ice mass loss quantifications must be carefully interpreted.

  9. Improving accuracy of portion-size estimations through a stimulus equivalence paradigm.

    PubMed

    Hausman, Nicole L; Borrero, John C; Fisher, Alyssa; Kahng, SungWoo

    2014-01-01

    The prevalence of obesity continues to increase in the United States (Gordon-Larsen, The, & Adair, 2010). Obesity can be attributed, in part, to overconsumption of energy-dense foods. Given that overeating plays a role in the development of obesity, interventions that teach individuals to identify and consume appropriate portion sizes are warranted. Specifically, interventions that teach individuals to estimate portion sizes correctly without the use of aids may be critical to the success of nutrition education programs. The current study evaluated the use of a stimulus equivalence paradigm to teach 9 undergraduate students to estimate portion size accurately. Results suggested that the stimulus equivalence paradigm was effective in teaching participants to make accurate portion size estimations without aids, and improved accuracy was observed in maintenance sessions that were conducted 1 week after training. Furthermore, 5 of 7 participants estimated the target portion size of novel foods during extension sessions. These data extend existing research on teaching accurate portion-size estimations and may be applicable to populations who seek treatment (e.g., overweight or obese children and adults) to teach healthier eating habits. © Society for the Experimental Analysis of Behavior.

  10. Corrigendum to "Monte Carlo simulations of the secondary neutron ambient and effective dose equivalent rates from surface to suborbital altitudes and low Earth orbit".

    PubMed

    El-Jaby, Samy

    2016-06-01

    A recent paper published in Life Sciences in Space Research (El-Jaby and Richardson, 2015) presented estimates of the secondary neutron ambient and effective dose equivalent rates, in air, from surface altitudes up to suborbital altitudes and low Earth orbit. These estimates were based on MCNPX (LANL, 2011) (Monte Carlo N-Particle eXtended) radiation transport simulations of galactic cosmic radiation passing through Earth's atmosphere. During a recent review of the input decks used for these simulations, a systematic error was discovered that is addressed here. After reassessment, the neutron ambient and effective dose equivalent rates estimated are found to be 10 to 15% different, though, the essence of the conclusions drawn remains unchanged. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  11. Comparison of Estimation Techniques for the Four Parameter Beta Distribution.

    DTIC Science & Technology

    1981-12-01

    Gnanadesikan, Pinkham, and Hughes in 1967 (Ref 11). It dealt with the standard beta, and performed the estimation using smallest order statistics. The...convergence of the iterative scheme" (Ref 11:611). Beckman and Tietjen picked up on Gnanadesikan, et al., and developed a solution method which is "fast... zia (Second Edition). Reading, Massachusetts: Addison-Wesley Publishing Company, 1978. 11. Gnanadesikan, R., R. S. Pinkham and L. P. Hughes. "Maximum

  12. Assessment of radiation doses from residential smoke detectors that contain americium-241

    NASA Astrophysics Data System (ADS)

    Odonnell, F. R.; Etnier, E. L.; Holton, G. A.; Travis, C. C.

    1981-10-01

    External dose equivalents and internal dose commitments were estimated for individuals and populations from the annual distribution, use, and disposal of 10 million ionization chamber smoke detectors containing 110 kBq of americium-241 each. Under exposure scenarios developed for normal distribution, use, and disposal using the best available information, annual external dose equivalents to average individuals were estimated to range from 4 fSv to 20 nSv for total body and from 7 fSv to 40 nSv for bone. Internal dose commitments to individuals under post-disposal scenarios were estimated to range from 0.006 to 80 μSv (0.0006 to 8 mrem) to total body and from 0.06 to 800 μSv to bone. The total collective dose (the sum of external dose equivalents and 50-year internal dose commitments) for all individuals involved with the distribution, use, or disposal of 10 million smoke detectors was estimated to be about 0.38 person-Sv (38 person-rem) to total body.

  13. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    PubMed

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

    1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from the second to the sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The optimal model was identified by the Bayesian Information Criterion. A model with second-order LP for fixed effects, second-order LP for additive genetic effects and third-order LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environmental to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environmental to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of the higher heritability, and such a breeding program would have no negative genetic impact on egg production.
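The Legendre-polynomial covariates behind such random regression models are generated by standardizing age to [-1, 1] and applying the three-term recurrence. A minimal sketch (these are the unnormalized polynomials; animal-breeding software often applies an additional normalization factor):

```python
def legendre_covariates(age, age_min, age_max, order):
    """Legendre polynomial covariates P_0..P_order evaluated at an age
    standardized to [-1, 1], via the recurrence
    (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x).
    """
    x = -1.0 + 2.0 * (age - age_min) / (age_max - age_min)
    p = [1.0, x]
    for n in range(1, order):
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p[: order + 1]
```

Each weekly record contributes one such covariate vector per effect (fixed, additive genetic, permanent environmental), with the polynomial order chosen by the BIC comparison described above.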

  14. Carbon sequestration potential of second-growth forest regeneration in the Latin American tropics

    PubMed Central

    Chazdon, Robin L.; Broadbent, Eben N.; Rozendaal, Danaë M. A.; Bongers, Frans; Zambrano, Angélica María Almeyda; Aide, T. Mitchell; Balvanera, Patricia; Becknell, Justin M.; Boukili, Vanessa; Brancalion, Pedro H. S.; Craven, Dylan; Almeida-Cortez, Jarcilene S.; Cabral, George A. L.; de Jong, Ben; Denslow, Julie S.; Dent, Daisy H.; DeWalt, Saara J.; Dupuy, Juan M.; Durán, Sandra M.; Espírito-Santo, Mario M.; Fandino, María C.; César, Ricardo G.; Hall, Jefferson S.; Hernández-Stefanoni, José Luis; Jakovac, Catarina C.; Junqueira, André B.; Kennard, Deborah; Letcher, Susan G.; Lohbeck, Madelon; Martínez-Ramos, Miguel; Massoca, Paulo; Meave, Jorge A.; Mesquita, Rita; Mora, Francisco; Muñoz, Rodrigo; Muscarella, Robert; Nunes, Yule R. F.; Ochoa-Gaona, Susana; Orihuela-Belmonte, Edith; Peña-Claros, Marielos; Pérez-García, Eduardo A.; Piotto, Daniel; Powers, Jennifer S.; Rodríguez-Velazquez, Jorge; Romero-Pérez, Isabel Eunice; Ruíz, Jorge; Saldarriaga, Juan G.; Sanchez-Azofeifa, Arturo; Schwartz, Naomi B.; Steininger, Marc K.; Swenson, Nathan G.; Uriarte, Maria; van Breugel, Michiel; van der Wal, Hans; Veloso, Maria D. M.; Vester, Hans; Vieira, Ima Celia G.; Bentos, Tony Vizcarra; Williamson, G. Bruce; Poorter, Lourens

    2016-01-01

    Regrowth of tropical secondary forests following complete or nearly complete removal of forest vegetation actively stores carbon in aboveground biomass, partially counterbalancing carbon emissions from deforestation, forest degradation, burning of fossil fuels, and other anthropogenic sources. We estimate the age and spatial extent of lowland second-growth forests in the Latin American tropics and model their potential aboveground carbon accumulation over four decades. Our model shows that, in 2008, second-growth forests (1 to 60 years old) covered 2.4 million km2 of land (28.1% of the total study area). Over 40 years, these lands can potentially accumulate a total aboveground carbon stock of 8.48 Pg C (petagrams of carbon) in aboveground biomass via low-cost natural regeneration or assisted regeneration, corresponding to a total CO2 sequestration of 31.09 Pg CO2. This total is equivalent to carbon emissions from fossil fuel use and industrial processes in all of Latin America and the Caribbean from 1993 to 2014. Ten countries account for 95% of this carbon storage potential, led by Brazil, Colombia, Mexico, and Venezuela. We model future land-use scenarios to guide national carbon mitigation policies. Permitting natural regeneration on 40% of lowland pastures potentially stores an additional 2.0 Pg C over 40 years. Our study provides information and maps to guide national-level forest-based carbon mitigation plans on the basis of estimated rates of natural regeneration and pasture abandonment. Coupled with avoided deforestation and sustainable forest management, natural regeneration of second-growth forests provides a low-cost mechanism that yields a high carbon sequestration potential with multiple benefits for biodiversity and ecosystem services. PMID:27386528
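The carbon and CO2 figures quoted are related by the molar mass ratio of CO2 to carbon:

```python
CO2_PER_C = 44.0 / 12.0  # molar mass ratio of CO2 to C (about 3.67)

carbon_pg = 8.48                 # Pg C accumulated over 40 years (modeled)
co2_pg = carbon_pg * CO2_PER_C   # equivalent Pg CO2 sequestered
```

`co2_pg` evaluates to about 31.09 Pg CO2, matching the total reported in the abstract.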

  16. Assimilating Merged Remote Sensing and Ground based Snowpack Information for Runoff Simulation and Forecasting using Hydrological Models

    NASA Astrophysics Data System (ADS)

    Infante Corona, J. A.; Lakhankar, T.; Khanbilvardi, R.; Pradhanang, S. M.

    2013-12-01

    Stream flow estimation and flood prediction influenced by snowmelt processes have been studied for the past couple of decades because of the destructive potential of such floods, including economic losses and loss of life. Snowpack that was once relatively stationary within a season now varies on shorter time scales (daily and hourly), and rapid snowmelt can contribute to, or directly cause, floods. Accurate estimates of snowpack properties on the ground are therefore necessary for reliable prediction of these destructive events. The snow thermal model (SNTHERM) is a 1-dimensional model that simulates snowpack properties given the climatological conditions of a particular area. Gridded data from both in-situ meteorological observations and remote sensing will be produced using interpolation methods, from which snow water equivalent (SWE) and snowmelt estimates can be obtained. The soil and water assessment tool (SWAT) is a hydrological model capable of predicting runoff quantity and quality for a watershed given its main physical and hydrological properties. The results from SNTHERM will be used as input to SWAT in order to simulate runoff under snowmelt conditions. This project aims to improve river discharge estimation by accounting for both excess rainfall runoff and the snowmelt process, and to obtain a better estimate of snowpack properties and their evolution. Coupled use of SNTHERM and SWAT, driven by in-situ and remotely sensed meteorological data, is expected to improve the temporal and spatial resolution of the snowpack characterization and river discharge estimates, and thus flood prediction.
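    As a stand-in for the kind of daily melt flux that a snow model would pass to a runoff model, a minimal degree-day sketch is shown below. SNTHERM itself solves a full 1-D energy balance, so this is only a simplified illustration; the melt factor and threshold temperature are assumed values, not calibrated parameters.

```python
# Simplified degree-day snowmelt: melt (mm/day) = DDF * max(T - T_base, 0),
# capped by the available snow water equivalent (SWE). DDF (mm/day/degC) and
# T_base (degC) are illustrative, not calibrated, values.
def daily_melt(swe_mm: float, temp_c: float,
               ddf: float = 3.0, t_base: float = 0.0) -> tuple:
    melt = min(swe_mm, ddf * max(temp_c - t_base, 0.0))
    return melt, swe_mm - melt  # (melt flux, remaining SWE)

swe = 50.0
for t in [-2.0, 1.0, 5.0]:
    melt, swe = daily_melt(swe, t)
    print(f"T={t:+.1f} C  melt={melt:.1f} mm  SWE left={swe:.1f} mm")
```

The melt series produced this way could serve as the lateral water input that a runoff model such as SWAT ingests alongside rainfall.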

  17. Global civil aviation black carbon emissions.

    PubMed

    Stettler, Marc E J; Boies, Adam M; Petzold, Andreas; Barrett, Steven R H

    2013-09-17

    Aircraft black carbon (BC) emissions contribute to climate forcing, but few estimates of BC emitted by aircraft at cruise exist. For the majority of aircraft engines, the only BC-related measurement available is smoke number (SN), a filter-based optical method designed to measure near-ground plume visibility, not mass. While the first order approximation (FOA3) technique has been developed to estimate BC mass emissions normalized by fuel burn [EI(BC)] from SN, it is shown that it underestimates EI(BC) by >90% in 35% of directly measured cases (R² = -0.10). As there are no plans to measure BC emissions from all existing certified engines, which will be in service for several decades, it is necessary to estimate EI(BC) for existing aircraft on the ground and at cruise. An alternative method, called FOX, that is independent of the SN is developed to estimate BC emissions. Estimates of EI(BC) at ground level are significantly improved (R² = 0.68), whereas estimates at cruise are within 30% of measurements. Implementing this approach for global civil aviation, estimated aircraft BC emissions are revised upward by a factor of ~3. Direct radiative forcing (RF) due to aviation BC emissions is estimated to be ~9.5 mW/m², equivalent to ~1/3 of the current RF due to aviation CO2 emissions.
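    The negative R² reported for FOA3 is possible because the coefficient of determination, R² = 1 - SS_res/SS_tot, drops below zero whenever a model's predictions fit the measurements worse than simply using their mean. A minimal sketch with hypothetical values (not the study's data):

```python
# Coefficient of determination: R^2 = 1 - SS_res / SS_tot. A value below
# zero (like the R^2 = -0.10 quoted for FOA3) means the predictions fit
# worse than the mean of the measurements. The data below is hypothetical.
def r_squared(measured, predicted):
    mean = sum(measured) / len(measured)
    ss_tot = sum((m - mean) ** 2 for m in measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    return 1.0 - ss_res / ss_tot

measured  = [0.10, 0.25, 0.40, 0.55]   # e.g. EI(BC), arbitrary units
predicted = [0.01, 0.02, 0.05, 0.06]   # severe underestimation
print(r_squared(measured, predicted))  # negative: worse than the mean
```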

  18. PROJECTING THE BIOLOGICAL CONDITION OF STREAMS UNDER ALTERNATIVE SCENARIOS OF HUMAN LAND USE

    EPA Science Inventory

    We present empirical models for estimating the status of fish and aquatic invertebrate communities in all second to fourth-order streams (1:100,000 scale; total stream length = 6476 km) throughout the Willamette River Basin, Oregon. The models project fish and invertebrate status...

  19. A Note on the Computation of the Second-Order Derivatives of the Elementary Symmetric Functions in the Rasch Model.

    ERIC Educational Resources Information Center

    Formann, Anton K.

    1986-01-01

    It is shown that for equal parameters explicit formulas exist, facilitating the application of the Newton-Raphson procedure to estimate the parameters in the Rasch model and related models according to the conditional maximum likelihood principle. (Author/LMO)
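    The conditional maximum likelihood equations for the Rasch model involve the elementary symmetric functions of the item parameters, which are commonly computed with the summation recurrence sketched below. This is a generic illustration of those functions, not the specific derivative formulas of the note.

```python
# Elementary symmetric functions gamma_0..gamma_k of the item parameters
# eps_1..eps_k, via the summation algorithm: process one item at a time,
# updating gamma_r <- gamma_r + eps_i * gamma_{r-1}.
def elementary_symmetric(eps):
    gamma = [1.0] + [0.0] * len(eps)
    for e in eps:
        # iterate downward so each eps_i enters each term at most once
        for r in range(len(gamma) - 1, 0, -1):
            gamma[r] += e * gamma[r - 1]
    return gamma

print(elementary_symmetric([1.0, 2.0, 3.0]))  # [1.0, 6.0, 11.0, 6.0]
```

The first- and second-order derivatives needed for Newton-Raphson are themselves symmetric functions with one or two items removed, which is why efficient formulas for them matter.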

  20. Synthesized tissue-equivalent dielectric phantoms using salt and polyvinylpyrrolidone solutions.

    PubMed

    Ianniello, Carlotta; de Zwart, Jacco A; Duan, Qi; Deniz, Cem M; Alon, Leeor; Lee, Jae-Seung; Lattanzi, Riccardo; Brown, Ryan

    2018-07-01

    The aim of this work was to explore the use of polyvinylpyrrolidone (PVP) for simulated materials with tissue-equivalent dielectric properties. PVP and salt were used to control relative permittivity and electrical conductivity, respectively, in a collection of 63 samples spanning a range of solute concentrations. Their dielectric properties were measured with a commercial probe and fitted to a 3D polynomial in order to establish an empirical recipe. The material's thermal properties and MR spectra were also measured. The empirical polynomial recipe (available at https://www.amri.ninds.nih.gov/cgi-bin/phantomrecipe) provides the PVP and salt concentrations required for dielectric materials with permittivity values between approximately 45 and 78 and electrical conductivity between 0.1 and 2 siemens per meter, from 50 MHz to 4.5 GHz. The polynomial recipe, second order in the solute concentrations and seventh order in frequency, provided less than 2.5% relative error between the measured and target properties. PVP side peaks in the spectra were minor and unaffected by temperature changes. PVP-based phantoms are easy to prepare and nontoxic, and their semitransparency makes air bubbles easy to identify. The polymer can be used to create simulated material with a range of dielectric properties, negligible spectral side peaks, and long T2 relaxation time, which are favorable in many MR applications. Magn Reson Med 80:413-419, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
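    The kind of polynomial fit described above can be sketched with ordinary least squares. The concentration ranges, polynomial terms, and sample values below are illustrative assumptions, not the paper's actual recipe or measurements.

```python
import numpy as np

# Fit a second-order polynomial surface eps(pvp, salt) by least squares,
# mirroring the style of empirical recipe described in the abstract.
# All sample data here is synthetic, not from the paper.
def design(pvp, salt):
    return np.column_stack([np.ones_like(pvp), pvp, salt,
                            pvp**2, salt**2, pvp * salt])

rng = np.random.default_rng(0)
pvp = rng.uniform(0, 50, 63)    # PVP concentration, illustrative range
salt = rng.uniform(0, 10, 63)   # salt concentration, illustrative range
# Synthetic "measured" permittivity: decreases with PVP, plus noise.
eps = 78.0 - 0.6 * pvp + 0.002 * pvp**2 + rng.normal(0, 0.2, 63)

coef, *_ = np.linalg.lstsq(design(pvp, salt), eps, rcond=None)
pred = design(pvp, salt) @ coef
print(f"max relative error: {np.max(np.abs(pred - eps) / eps):.3%}")
```

Inverting such a fit (solving for the concentrations that hit a target permittivity and conductivity) is what the published web recipe automates.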
