Sample records for posteriori pointwise error

  1. A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint

    NASA Technical Reports Server (NTRS)

    Barth, Timothy

    2004-01-01

    This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics are then considered. Finally, future directions and open problems are discussed.

  2. An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II. A Posteriori Error Estimates and Adaptivity.

    DTIC Science & Technology

    1983-03-01

    An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II: A Posteriori Error Estimates and Adaptivity, by W. G. Szymczak and I. Babuška.

  3. A-posteriori error estimation for second order mechanical systems

    NASA Astrophysics Data System (ADS)

    Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter

    2012-06-01

    One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended for error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be used for moment-matching-based, Gramian-matrix-based, or modal-based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.

  4. A posteriori error estimates in voice source recovery

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is considered. A special mathematical model relating these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery for a new parametric class of sources, namely piecewise-linear sources (PWL-sources), is proposed. A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments for speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that the a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in speaker recognition applications.

  5. A POSTERIORI ERROR ANALYSIS OF TWO STAGE COMPUTATION METHODS WITH APPLICATION TO EFFICIENT DISCRETIZATION AND THE PARAREAL ALGORITHM.

    PubMed

    Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff

    2016-01-01

    We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations and then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates a number of variations in the two stage computation and in the formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.

  6. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.

  7. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

    Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
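
    As context for the record above, the sketch below shows how an expected a posteriori (EAP) ability estimate and its posterior standard deviation are typically computed by quadrature over the posterior. The two-parameter logistic item model, the standard normal prior, and the item parameters are illustrative assumptions, not taken from the record.

        import numpy as np

        def eap_estimate(responses, a, b, n_quad=61):
            """EAP ability estimate and posterior SD for one response pattern.

            responses: 0/1 item scores; a, b: 2PL discrimination/difficulty
            parameters (illustrative assumptions, not from the cited study).
            """
            theta = np.linspace(-4.0, 4.0, n_quad)           # quadrature points
            prior = np.exp(-0.5 * theta**2)                  # standard normal prior (unnormalized)
            # 2PL response probabilities at each quadrature point
            p = 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
            resp = np.asarray(responses)[:, None]
            like = np.prod(np.where(resp == 1, p, 1.0 - p), axis=0)
            post = like * prior
            post /= post.sum()                               # discrete posterior weights
            eap = np.sum(theta * post)                       # posterior mean = EAP estimate
            sd = np.sqrt(np.sum((theta - eap) ** 2 * post))  # posterior SD ("standard error")
            return eap, sd

        # example: three items answered 1, 0, 1
        print(eap_estimate([1, 0, 1], a=np.array([1.0, 1.2, 0.8]),
                           b=np.array([-0.5, 0.0, 0.5])))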

  8. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  9. A precise and accurate acupoint location obtained on the face using consistency matrix pointwise fusion method.

    PubMed

    Yang, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu

    2015-02-01

    The aim was to develop a more precise and accurate method and to identify a procedure for measuring whether an acupoint had been correctly located. On the face, acupoint locations from different acupuncture experts were used, and the most precise and accurate acupoint location values were obtained with a consistency information fusion algorithm, through a virtual simulation of the facial orientation coordinate system. Because of inconsistencies in each expert's original data, the systematic error affects the general weight calculation. First, each expert's systematic acupoint location error was corrected, to obtain a rational quantification of the consistency support degree of each expert's acupoint locations and pointwise variable-precision fusion results, raising each expert's acupoint location fusion error to pointwise variable precision. Then, the measured characteristics of the different experts' acupoint locations were used more effectively, improving the utilization efficiency of the measurement information and the precision and accuracy of the acupoint locations. By applying the consistency matrix pointwise fusion method to the experts' acupoint location values, each expert's acupoint location information could be calculated, and the most precise and accurate values of each expert's acupoint location could be obtained.

  10. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  11. A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Larson, Mats G.; Barth, Timothy J.

    1999-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.
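
    A minimal sketch of the duality-based error representation referred to above, written here for a generic linear variational problem and linear output functional (the notation is generic, not the article's): with the primal problem a(u, v) = l(v) for all v and the adjoint problem a(v, phi) = J(v) for all v,

        J(u) - J(u_h) = \ell(\varphi) - a(u_h, \varphi) =: R(u_h; \varphi) \approx R(u_h; \tilde{\varphi}_h),

    where the exact adjoint solution \varphi is replaced in practice by a computable approximation \tilde{\varphi}_h; for nonlinear forms and functionals, a and J are linearized about u_h, which leads to the weighted residual approximations mentioned in the abstract.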

  12. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2014-04-01

    Report TR-14-33, April 2014, grant HDTRA1-09-1-0036, Donald Estep and Michael ...; approved for public release, distribution is unlimited. Related work listed in the report front matter: J. Erway and M. Holst, Barrier methods for critical exponent problems in geometric analysis and mathematical physics, submitted for publication.

  13. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  14. Resonance treatment using pin-based pointwise energy slowing-down method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sooyoung, E-mail: csy0321@unist.ac.kr; Lee, Changho, E-mail: clee@anl.gov; Lee, Deokjung, E-mail: deokjung@unist.ac.kr

    A new resonance self-shielding method using a pointwise energy solution has been developed to overcome the drawbacks of the equivalence theory. The equivalence theory uses a crude resonance scattering source approximation, and assumes a spatially constant scattering source distribution inside a fuel pellet. These two assumptions cause a significant error, in that they overestimate the multi-group effective cross sections, especially for 238U. The new resonance self-shielding method solves pointwise energy slowing-down equations with a sub-divided fuel rod. The method adopts a shadowing effect correction factor and fictitious moderator material to model a realistic pointwise energy solution. The slowing-down solution is used to generate the multi-group cross section. With various light water reactor problems, it was demonstrated that the new resonance self-shielding method significantly improved accuracy in the reactor parameter calculation with no compromise in computation time, compared to the equivalence theory.

  15. On the implementation of an accurate and efficient solver for convection-diffusion equations

    NASA Astrophysics Data System (ADS)

    Wu, Chin-Tien

    In this dissertation, we examine several different aspects of computing the numerical solution of the convection-diffusion equation. The solution of this equation often exhibits sharp gradients due to Dirichlet outflow boundaries or discontinuities in boundary conditions. Because of the singularly perturbed nature of the equation, numerical solutions often have severe oscillations when grid sizes are not small enough to resolve sharp gradients. To overcome such difficulties, the streamline diffusion discretization method can be used to obtain an accurate approximate solution in regions where the solution is smooth. To increase accuracy of the solution in the regions containing layers, adaptive mesh refinement and mesh movement based on a posteriori error estimations can be employed. An error-adapted mesh refinement strategy based on a posteriori error estimations is also proposed to resolve layers. For solving the sparse linear systems that arise from discretization, geometric multigrid (MG) and algebraic multigrid (AMG) are compared. In addition, both methods are also used as preconditioners for Krylov subspace methods. We derive some convergence results for MG with line Gauss-Seidel smoothers and bilinear interpolation. Finally, while considering adaptive mesh refinement as an integral part of the solution process, it is natural to set a stopping tolerance for the iterative linear solvers on each mesh stage so that the difference between the approximate solution obtained from iterative methods and the finite element solution is bounded by an a posteriori error bound. Here, we present two stopping criteria. The first is based on a residual-type a posteriori error estimator developed by Verfurth. The second is based on an a posteriori error estimator, using local solutions, developed by Kay and Silvester. Our numerical results show that the refined mesh obtained from the iterative solution which satisfies the second criterion is similar to the refined mesh obtained from the finite element solution.
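
    A minimal sketch, under generic assumptions, of the stopping test described in the dissertation abstract above: the iterative linear solver on each mesh stage is stopped once an estimate of the algebraic (iteration) error falls below a fraction of the a posteriori discretization error estimate. The callables solver_step, algebraic_error_estimate and discretization_error_estimate are hypothetical placeholders, not routines from the thesis.

        def solve_with_error_balance(x0, solver_step,
                                     algebraic_error_estimate,
                                     discretization_error_estimate,
                                     gamma=0.1, max_iter=1000):
            """Iterate the linear solver only until the algebraic error is
            small relative to the a posteriori discretization error estimate."""
            x = x0
            for _ in range(max_iter):
                x = solver_step(x)                        # one MG/AMG or Krylov iteration
                eta_h = discretization_error_estimate(x)  # a posteriori FE error estimate
                if algebraic_error_estimate(x) <= gamma * eta_h:
                    break                                 # further iterations would be wasted
            return x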

  16. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    Implicit–Explicit (IMEX) schemes are widely used for time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.

  17. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE PAGES

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    2017-02-05

    Implicit–Explicit (IMEX) schemes are widely used for time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.
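
    The two records above concern error estimation for multi-stage IMEX schemes; purely as background, the sketch below shows the simplest single-stage IMEX step (forward/backward Euler) for u' = f_E(u, t) + lam*u, where the stiff linear term is treated implicitly and the non-stiff term explicitly. The linear implicit part and the test problem are assumptions of this illustration, not the schemes analyzed in the paper.

        import numpy as np

        def imex_euler(u0, lam, f_explicit, t0, t1, n_steps):
            """First-order IMEX integration of u' = f_explicit(u, t) + lam*u:
            backward Euler on the stiff term lam*u, forward Euler on f_explicit."""
            dt = (t1 - t0) / n_steps
            u, t = u0, t0
            for _ in range(n_steps):
                u = (u + dt * f_explicit(u, t)) / (1.0 - dt * lam)
                t += dt
            return u

        # example: u' = -50*u + cos(t), u(0) = 1
        print(imex_euler(1.0, -50.0, lambda u, t: np.cos(t), 0.0, 1.0, 200))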

  18. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  19. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  20. A Posteriori Finite Element Bounds for Sensitivity Derivatives of Partial-Differential-Equation Outputs. Revised

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume

    1998-01-01

    We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.

  1. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    DOE PAGES

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  2. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
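
    A minimal sketch of the symmetric Gauß-Seidel iteration mentioned above for solving the global error problem approximately (written in dense-matrix form for clarity; the matrix and right-hand side of the error problem are placeholders here):

        import numpy as np

        def symmetric_gauss_seidel(A, b, x0, sweeps=3):
            """A few symmetric Gauss-Seidel sweeps on A x = b:
            each sweep is a forward pass followed by a backward pass."""
            A = np.asarray(A, dtype=float)
            x = np.array(x0, dtype=float)
            n = len(b)
            for _ in range(sweeps):
                for i in range(n):                  # forward sweep
                    x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
                for i in reversed(range(n)):        # backward sweep
                    x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            return x

        A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
        b = np.array([1.0, 2.0, 3.0])
        print(symmetric_gauss_seidel(A, b, np.zeros(3)))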

  3. A Novel A Posteriori Investigation of Scalar Flux Models for Passive Scalar Dispersion in Compressible Boundary Layer Flows

    NASA Astrophysics Data System (ADS)

    Braman, Kalen; Raman, Venkat

    2011-11-01

    A novel direct numerical simulation (DNS) based a posteriori technique has been developed to investigate scalar transport modeling error. The methodology is used to test Reynolds-averaged Navier-Stokes turbulent scalar flux models for compressible boundary layer flows. Time-averaged DNS velocity and turbulence fields provide the information necessary to evolve the time-averaged scalar transport equation without requiring the use of turbulence modeling. With this technique, passive dispersion of a scalar from a boundary layer surface in a supersonic flow is studied with scalar flux modeling error isolated from any flowfield modeling errors. Several different scalar flux models are used. It is seen that the simple gradient diffusion model overpredicts scalar dispersion, while anisotropic scalar flux models underpredict dispersion. Further, the use of more complex models does not necessarily guarantee an increase in predictive accuracy, indicating that key physics is missing from existing models. Using comparisons of both a priori and a posteriori scalar flux evaluations with DNS data, the main modeling shortcomings are identified. Results will be presented for different boundary layer conditions.

  4. Evaluation of the impact of observations on blended sea surface winds in a two-dimensional variational scheme using degrees of freedom

    NASA Astrophysics Data System (ADS)

    Wang, Ting; Xiang, Jie; Fei, Jianfang; Wang, Yi; Liu, Chunxia; Li, Yuanxiang

    2017-12-01

    This paper presents an evaluation of the observational impacts on blended sea surface winds from a two-dimensional variational data assimilation (2D-Var) scheme. We begin by briefly introducing the analysis sensitivity with respect to observations in variational data assimilation systems and its relationship with the degrees of freedom for signal (DFS), and then the DFS concept is applied to the 2D-Var sea surface wind blending scheme. Two methods, a priori and a posteriori, are used to estimate the DFS of the zonal (u) and meridional (v) components of winds in the 2D-Var blending scheme. The a posteriori method can obtain almost the same results as the a priori method. Because only by-products of the blending scheme are used for the a posteriori method, the computation time is reduced significantly. The magnitude of the DFS is critically related to the observational and background error statistics. Changing the observational and background error variances can affect the DFS value. Because the observation error variances are assumed to be uniform, the observational influence at each observational location is related to the background error variance, and the observations located at the place where there are larger background error variances have larger influences. The average observational influence of u and v with respect to the analysis is about 40%, implying that the background influence with respect to the analysis is about 60%.
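
    A minimal sketch of the a priori DFS computation described above, for a toy linear analysis with observation operator H, background error covariance B and observation error covariance R (all toy matrices, not the 2D-Var wind blending system): DFS = trace(H K) with gain K = B H^T (H B H^T + R)^(-1).

        import numpy as np

        def dfs_a_priori(H, B, R):
            """Degrees of freedom for signal, DFS = trace(H K),
            with the analysis gain K = B H^T (H B H^T + R)^-1."""
            K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
            return np.trace(H @ K)

        # toy example: 3 state variables observed directly at 2 locations
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0]])
        B = 2.0 * np.eye(3)           # background error variances
        R = 1.0 * np.eye(2)           # observation error variances
        print(dfs_a_priori(H, B, R))  # larger B relative to R pushes DFS toward the number of obs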

  5. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

  6. Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations

    PubMed Central

    Feischl, Michael; Gantner, Gregor; Praetorius, Dirk

    2015-01-01

    We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence. PMID:26085698

  7. Using meta-information of a posteriori Bayesian solutions of the hypocentre location task for improving accuracy of location error estimation

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2015-06-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analysed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. Although estimating the earthquake focus location is relatively simple, quantitative estimation of the location accuracy is a challenging task even if the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling and a priori uncertainties. In this paper, we address this task when the statistics of observational and/or modelling errors are unknown. This common situation requires introduction of a priori constraints on the likelihood (misfit) function which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we propose an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
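
    A minimal sketch of the Shannon-entropy meta-characteristic mentioned above, for an a posteriori location pdf sampled on a regular grid (the grid and pdfs are purely illustrative):

        import numpy as np

        def shannon_entropy(pdf_values, cell_volume):
            """Shannon (differential) entropy H = -sum(p * log p) * dV of a
            posterior pdf given on a regular grid; the pdf is renormalized first."""
            p = np.asarray(pdf_values, dtype=float)
            p = p / (p.sum() * cell_volume)       # enforce unit integral
            mask = p > 0.0                        # convention: 0 * log 0 = 0
            return -np.sum(p[mask] * np.log(p[mask])) * cell_volume

        # a broader posterior (larger location uncertainty) has a higher entropy
        x = np.linspace(-5.0, 5.0, 1001)
        dx = x[1] - x[0]
        narrow = np.exp(-0.5 * (x / 0.5) ** 2)
        broad = np.exp(-0.5 * (x / 2.0) ** 2)
        print(shannon_entropy(narrow, dx), shannon_entropy(broad, dx))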

  8. An Anisotropic A posteriori Error Estimator for CFD

    NASA Astrophysics Data System (ADS)

    Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando

    In this article, a robust anisotropic adaptive algorithm is presented, to solve compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The association includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several choices available (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina) giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case, shocks, using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio elements (stretching). To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly-adapted meshes in few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed amount of elements.

  9. Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bishop, Joseph E.; Brown, Judith Alice

    In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) a transversely-isotropic plate with hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.

  10. Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques

    DOE PAGES

    Bishop, Joseph E.; Brown, Judith Alice

    2018-06-15

    In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) a transversely-isotropic plate with hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.

  11. Stochastic Surface Mesh Reconstruction

    NASA Astrophysics Data System (ADS)

    Ozendi, M.; Akca, D.; Topan, H.

    2018-05-01

    A generic and practical methodology is presented for 3D surface mesh reconstruction from terrestrial laser scanner (TLS) derived point clouds. It has two main steps. The first step deals with developing an anisotropic point error model, which is capable of computing the theoretical precisions of the 3D coordinates of each individual point in the point cloud. The magnitude and direction of the errors are represented in the form of error ellipsoids. The second step is focused on the stochastic surface mesh reconstruction. It exploits the previously determined error ellipsoids by computing a point-wise quality measure, which takes into account the semi-diagonal axis length of the error ellipsoid. Only the points with the least errors are used in the surface triangulation; the remaining ones are automatically discarded.
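
    A minimal sketch of the point-wise quality step described above: the semi-axis lengths of each point's error ellipsoid are the square roots of the eigenvalues of its 3x3 coordinate covariance matrix, and points whose largest semi-axis exceeds a threshold are discarded before triangulation. The threshold and covariances below are illustrative, not values from the paper.

        import numpy as np

        def keep_precise_points(points, covariances, max_semi_axis):
            """Keep only points whose 1-sigma error-ellipsoid semi-axes
            (square roots of the covariance eigenvalues) stay below max_semi_axis."""
            keep = []
            for p, C in zip(points, covariances):
                semi_axes = np.sqrt(np.linalg.eigvalsh(C))   # ellipsoid semi-axis lengths
                if semi_axes.max() <= max_semi_axis:
                    keep.append(p)
            return np.array(keep)

        # two points (metres); the second has a much larger variance in one direction
        pts = np.array([[1.0, 2.0, 0.5], [4.0, 1.0, 0.3]])
        covs = [np.diag([1e-6, 1e-6, 4e-6]), np.diag([1e-6, 1e-6, 2.5e-3])]
        print(keep_precise_points(pts, covs, max_semi_axis=5e-3))  # keeps only the first point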

  12. Aeroacoustic Simulations of a Nose Landing Gear Using FUN3D on Pointwise Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Rhoads, John; Lockard, David P.

    2015-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise grid generation software are used for these simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these simulations. Solutions are also presented for a wall function model coupled to the standard turbulence model. Time-averaged and instantaneous solutions obtained on these Pointwise grids are compared with the measured data and previous numerical solutions. The resulting CFD solutions are used as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels in the flyover and sideline directions. The computed noise levels compare well with previous CFD solutions and experimental data.

  13. B-spline goal-oriented error estimators for geometrically nonlinear rods

    DTIC Science & Technology

    2011-04-01

    Errors are reported for the linear, quadratic and nonlinear (trigonometric sine and cosine) output functionals q2–q4 and for p = 1, 2 in all of the tests considered.

  14. An Astronomical Test of CCD Photometric Precision

    NASA Technical Reports Server (NTRS)

    Koch, David; Dunham, Edward; Borucki, William; Jenkins, Jon; DeVingenzi, D. (Technical Monitor)

    1998-01-01

  15. Space-Time Error Representation and Estimation in Navier-Stokes Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2006-01-01

    The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas are presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

  16. Testing of the ABBN-RF multigroup data library in photon transport calculations

    NASA Astrophysics Data System (ADS)

    Koscheev, Vladimir; Lomakov, Gleb; Manturov, Gennady; Tsiboulia, Anatoly

    2017-09-01

    Gamma radiation is produced in both nuclear fuel and shielding materials. Photon interaction data are known with adequate accuracy, but secondary gamma ray production data are known much less accurately. The purpose of this work is to study secondary gamma ray production from neutron-induced reactions in iron and lead using the MCNP code and modern nuclear data libraries such as ROSFOND, ENDF/B-7.1, JEFF-3.2 and JENDL-4.0. The results of the calculations show that these libraries contain different photon production data for neutron-induced reactions and agree poorly with the evaluated benchmark experiment. The ABBN-RF multigroup cross-section library is based on the ROSFOND data. It is presented in two forms of micro cross sections: ABBN and MATXS formats. Comparison of group-wise calculations using both ABBN and MATXS data with point-wise calculations using the ROSFOND library shows good agreement. The calculation-to-experiment (C/E) discrepancies for the neutron spectra are within the experimental errors, while for the photon spectrum they fall outside the experimental errors. The results of calculations using group-wise and point-wise representations of the cross sections show good agreement for both photon and neutron spectra.

  17. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

  18. An a-posteriori finite element error estimator for adaptive grid computation of viscous incompressible flows

    NASA Astrophysics Data System (ADS)

    Wu, Heng

    2000-10-01

    In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator e_theta, based on the spatial derivative of the velocity direction field, is designed and constructed. The a-posteriori error estimator corresponds to the antisymmetric part of the deformation-rate tensor, and it is sensitive to the second derivative of the velocity angle field. Rationality discussions reveal that the velocity angle error estimator is a curvature error estimator, and its value reflects the accuracy of streamline curves. It is also found that the velocity angle error estimator contains the nonlinear convective term of the Navier-Stokes equations, and it identifies and computes the direction difference when the convective acceleration direction and the flow velocity direction differ. Through benchmarking computed variables against the analytic solution of Kovasznay flow or the finest grid of cavity flow, it is demonstrated that the velocity angle error estimator performs better than the strain error estimator. The benchmarking work also shows that the computed profile obtained using e_theta achieves the best match with the true theta field and is asymptotic to the true theta variation field, with the promise of fewer unknowns. Unstructured grids are adapted by employing local cell division as well as unrefinement of transition cells. Using element classes and node classes can efficiently construct a hierarchical data structure which provides cell and node inter-reference at each adaptive level. Employing element pointers and node pointers dynamically maintains the connection of adjacent elements and adjacent nodes, and thus avoids time-consuming search processes. The adaptive scheme is applied to viscous incompressible flow at different Reynolds numbers. It is found that the velocity angle error estimator can detect most flow characteristics and produce dense grids in the regions where flow velocity directions change abruptly. In addition, the e_theta estimator makes the derivative error distribute dilutely over the whole computational domain and also allows the refinement to be conducted in regions of high error. Through comparison of the velocity angle error across the interface with neighbouring cells, it is verified that the adaptive scheme using e_theta provides an optimum mesh which can clearly resolve local flow features in a precise way. The adaptive results justify the applicability of the e_theta estimator and prove that this error estimator is a valuable adaptive indicator for the automatic refinement of unstructured grids.
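
    A rough sketch of the velocity-angle idea described above, under simplifying assumptions (a uniform Cartesian grid and finite differences rather than the thesis's finite element setting): the flow direction is theta = atan2(v, u), and the magnitude of its spatial derivative serves as a refinement indicator. The derivatives are formed as (u dv - v du)/(u^2 + v^2) so that 2*pi angle wrapping is avoided.

        import numpy as np

        def velocity_angle_indicator(u, v, dx, dy, eps=1e-12):
            """Indicator based on the spatial derivatives of the velocity
            direction theta = atan2(v, u), computed without angle wrapping."""
            du_dy, du_dx = np.gradient(u, dy, dx)    # axis 0 is y, axis 1 is x
            dv_dy, dv_dx = np.gradient(v, dy, dx)
            speed2 = u * u + v * v + eps
            dtheta_dx = (u * dv_dx - v * du_dx) / speed2
            dtheta_dy = (u * dv_dy - v * du_dy) / speed2
            return np.hypot(dtheta_dx, dtheta_dy)    # large values flag direction changes

        # illustrative use on a solid-body rotation field, whose direction varies in space
        y, x = np.mgrid[-1:1:64j, -1:1:64j]
        u, v = -y, x
        print(velocity_angle_indicator(u, v, dx=2/63, dy=2/63).max())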

  19. 3-D direct current resistivity anisotropic modelling by goal-oriented adaptive finite element methods

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Qiu, Lewen; Tang, Jingtian; Wu, Xiaoping; Xiao, Xiao; Zhou, Zilong

    2018-01-01

    Although accurate numerical solvers for 3-D direct current (DC) isotropic resistivity models are currently available even for complicated models with topography, reliable numerical solvers for the anisotropic case are still an open question. This study aims to develop a novel and optimal numerical solver for accurately calculating the DC potentials for complicated models with arbitrary anisotropic conductivity structures in the Earth. First, a secondary potential boundary value problem is derived by considering the topography and the anisotropic conductivity. Then, two a posteriori error estimators, one using the gradient-recovery technique and one measuring the discontinuity of the normal component of current density, are developed for the anisotropic cases. Combining the goal-oriented and non-goal-oriented mesh refinements with these two error estimators, four different solving strategies are developed for complicated DC anisotropic forward modelling problems. A synthetic anisotropic two-layer model with analytic solutions verified the accuracy of our algorithms. A half-space model with a buried anisotropic cube and a mountain-valley model are adopted to test the convergence rates of these four solving strategies. We found that the error estimator based on the discontinuity of current density shows better performance than the gradient-recovery based a posteriori error estimator for anisotropic models with conductivity contrasts. Both error estimators working together with goal-oriented concepts can offer optimal mesh density distributions and highly accurate solutions.

  20. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

  1. A Radial Basis Function Approach to Financial Time Series Analysis

    DTIC Science & Technology

    1993-12-01

    Practical techniques are developed for the Radial Basis Function network modeling methodology, including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data..."; a recurrent theme is the careful consideration of the interplay between model complexity and reliability.

  2. Posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, G. M.

    2002-01-01

    We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where truncation error is being created due to insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach, but it is directly linked to the OREO detector and moves the grid automatically to minimize the error.

  3. Matrix metalloproteinases and educational attainment in refractive error: evidence of gene-environment interactions in the AREDS study

    PubMed Central

    Wojciechowski, Robert; Yee, Stephanie S.; Simpson, Claire L.; Bailey-Wilson, Joan E.; Stambolian, Dwight

    2012-01-01

    Purpose: A previous study of Old Order Amish families has shown association of ocular refraction with markers proximal to matrix metalloproteinase (MMP) genes MMP1 and MMP10 and intragenic to MMP2. We conducted a candidate gene replication study of association between refraction and single nucleotide polymorphisms (SNPs) within these genomic regions. Design: Candidate gene genetic association study. Participants: 2,000 participants drawn from the Age Related Eye Disease Study (AREDS) were chosen for genotyping. After quality control filtering, 1,912 individuals were available for analysis. Methods: Microarray genotyping was performed using the HumanOmni 2.5 bead array. SNPs originally typed in the previous Amish association study were extracted for analysis. In addition, haplotype tagging SNPs were genotyped using TaqMan assays. Quantitative trait association analyses of mean spherical equivalent refraction (MSE) were performed on 30 markers using linear regression models and an additive genetic risk model, while adjusting for age, sex, education, and population substructure. Post-hoc analyses were performed after stratifying on a dichotomous education variable. Pointwise (P-emp) and multiple-test study-wise (P-multi) significance levels were calculated empirically through permutation. Main outcome measures: MSE was used as a quantitative measure of ocular refraction. Results: The mean age and ocular refraction were 68 years (SD=4.7) and +0.55 D (SD=2.14), respectively. Pointwise statistical significance was obtained for rs1939008 (P-emp=0.0326). No SNP attained statistical significance after correcting for multiple testing. In stratified analyses, multiple SNPs reached pointwise significance in the lower-education group; 2 of these were statistically significant after multiple testing correction. The two highest-ranking SNPs in Amish families (rs1939008 and rs9928731) showed pointwise P-emp<0.01 in the lower-education stratum of AREDS participants. Conclusions: We show suggestive evidence of replication of an association signal for ocular refraction to a marker between MMP1 and MMP10. We also provide evidence of a gene-environment interaction between previously-reported markers and education on refractive error. Variants in MMP1-MMP10 and MMP2 regions appear to affect population variation in ocular refraction in environmental conditions less favorable for myopia development. PMID:23098370

  4. Quality assessment and control of finite element solutions

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Babuska, Ivo

    1987-01-01

    Status and some recent developments in the techniques for assessing the reliability of finite element solutions are summarized. Discussion focuses on a number of aspects including: the major types of errors in the finite element solutions; techniques used for a posteriori error estimation and the reliability of these estimators; the feedback and adaptive strategies for improving the finite element solutions; and postprocessing approaches used for improving the accuracy of stresses and other important engineering data. Also, future directions for research needed to make error estimation and adaptive movement practical are identified.

  5. Maximum a posteriori resampling of noisy, spatially correlated data

    NASA Astrophysics Data System (ADS)

    Goff, John A.; Jenkins, Chris; Calder, Brian

    2006-08-01

    In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the "best" value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by "too much," where "too much" is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or "resampled." Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise. Compared to filtering, maximum a posteriori resampling provides an objective and optimal method for reducing noise, and better preservation of the statistical properties of the sampled field. The primary disadvantage is that maximum a posteriori resampling is a computationally expensive procedure.
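
    A minimal sketch of the core MAP update described above for the Gaussian case: at each datum the maximum of the product of the data-error pdf N(d_i, sigma_d_i^2) and the kriging conditional pdf N(m_i, sigma_k_i^2) is their precision-weighted average. The kriging means and variances are taken as given inputs here rather than computed from neighbours, and the sequential/randomized-order averaging is not shown.

        import numpy as np

        def map_resample(d, sigma_d, m_krig, sigma_krig):
            """Maximum a posteriori value when both the data-error pdf and the
            kriging conditional pdf are Gaussian: a precision-weighted average."""
            w_d = 1.0 / np.asarray(sigma_d) ** 2     # data precision
            w_k = 1.0 / np.asarray(sigma_krig) ** 2  # kriging (conditional) precision
            return (w_d * np.asarray(d) + w_k * np.asarray(m_krig)) / (w_d + w_k)

        # a noisy datum is pulled strongly toward the kriging prediction from its
        # neighbours, while a precise datum is barely changed
        d = np.array([10.0, 10.0])
        sigma_d = np.array([5.0, 0.1])               # per-datum uncertainties
        m_krig = np.array([12.0, 12.0])              # kriging means from proximal data
        sigma_krig = np.array([1.0, 1.0])
        print(map_resample(d, sigma_d, m_krig, sigma_krig))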

  6. High order cell-centered scheme totally based on cell average

    NASA Astrophysics Data System (ADS)

    Liu, Ze-Yu; Cai, Qing-Dong

    2018-05-01

    This work clarifies the concept of the cell average by pointing out the difference between the cell average and the cell centroid value, which are the averaged cell-centered value and the pointwise cell-centered value, respectively. An interpolation based on cell averages is constructed, and a high-order QUICK-like numerical scheme is designed for this interpolation. A new approach to error analysis, similar to a Taylor expansion, is introduced in this work.
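
    The distinction between the two cell-centered quantities can be made concrete with a one-line calculation; the sketch below uses f(x) = x^2 on a single cell [0, h] and only illustrates the definitions, not the QUICK-like scheme itself.

```python
# Cell average vs. cell centroid value for f(x) = x**2 on the cell [0, h]:
# the pointwise centroid value is f(h/2) = h**2/4, while the cell average is
# (1/h) * integral_0^h x**2 dx = h**2/3, so the two differ at second order in h.
h = 0.1
centroid_value = (h / 2) ** 2
cell_average = h ** 2 / 3
print(centroid_value, cell_average, cell_average - centroid_value)
```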

  7. Analysis of the Efficiency of an A-Posteriori Error Estimator for Linear Triangular Finite Elements

    DTIC Science & Technology

    1991-06-01


  8. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    Simulation codes often utilize finite-dimensional approximations that introduce numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.

  9. Evaluation of Space-Based Constraints on Global Nitrogen Oxide Emissions with Regional Aircraft Measurements over and Downwind of Eastern North America

    NASA Technical Reports Server (NTRS)

    Martin, Randall V.; Sioris, Christopher E.; Chance, Kelly; Ryerson, Thomas B.; Flocke, Frank M.; Bertram, Timothy H.; Wooldridge, Paul J.; Cohen, Ronald C.; Neuman, J. Andy; Swanson, Aaron

    2006-01-01

    We retrieve tropospheric nitrogen dioxide (NO2) columns for May 2004 to April 2005 from the SCIAMACHY satellite instrument to derive top-down emissions of nitrogen oxides (NOx = NO + NO2) via inverse modeling with a global chemical transport model (GEOS-Chem). Simulated NO2 vertical profiles used in the retrieval are evaluated with airborne measurements over and downwind of North America (ICARTT); a northern midlatitude lightning source of 1.6 Tg N/yr minimizes bias in the retrieval. Retrieved NO2 columns are validated (r2 = 0.60, slope = 0.82) with coincident airborne in situ measurements. The top-down emissions are combined with a priori information from a bottom-up emission inventory, with error weighting, to achieve an improved a posteriori estimate of the global distribution of surface NOx emissions. Our a posteriori inventory for land surface NOx emissions (46.1 Tg N/yr) is 22% larger than the GEIA-based a priori bottom-up inventory for 1998, a difference that reflects rising anthropogenic emissions, especially from East Asia. A posteriori NOx emissions for East Asia (9.8 Tg N/yr) exceed those from other continents. The a posteriori inventory improves the GEOS-Chem simulation of NOx, peroxyacetylnitrate, and nitric acid with respect to airborne in situ measurements over and downwind of New York City. The a posteriori inventory is 7% larger than the EDGAR 3.2FT2000 global inventory, 3% larger than the NEI99 inventory for the United States, and 68% larger than a regional inventory for 2000 for eastern Asia. SCIAMACHY NO2 columns over the North Atlantic show a weak plume from lightning NOx.
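
    The step in which top-down emissions are "combined with a priori information ... with error weighting" can be pictured as an inverse-variance weighted average of the bottom-up and top-down estimates; the sketch below is a deliberately simplified scalar version with invented uncertainties, not the actual GEOS-Chem inversion.

```python
def a_posteriori_emission(e_prior, sigma_prior, e_topdown, sigma_topdown):
    """Error-weighted (inverse-variance) combination of a bottom-up a priori
    estimate and a satellite-derived top-down estimate for one region."""
    w_p = 1.0 / sigma_prior ** 2
    w_t = 1.0 / sigma_topdown ** 2
    e_post = (w_p * e_prior + w_t * e_topdown) / (w_p + w_t)
    sigma_post = (w_p + w_t) ** -0.5
    return e_post, sigma_post

# Illustrative numbers only (Tg N/yr): the posterior falls between the two
# estimates, closer to the one with the smaller assumed error.
print(a_posteriori_emission(e_prior=37.8, sigma_prior=15.0,
                            e_topdown=50.0, sigma_topdown=10.0))
```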

  10. Pointwise regularity of parameterized affine zipper fractal curves

    NASA Astrophysics Data System (ADS)

    Bárány, Balázs; Kiss, Gergely; Kolossváry, István

    2018-05-01

    We study the pointwise regularity of zipper fractal curves generated by affine mappings. Under the assumption of dominated splitting of index-1, we calculate the Hausdorff dimension of the level sets of the pointwise Hölder exponent for a subinterval of the spectrum. We give an equivalent characterization for the existence of regular pointwise Hölder exponent for Lebesgue almost every point. In this case, we extend the multifractal analysis to the full spectrum. In particular, we apply our results for de Rham’s curve.

  11. The output least-squares approach to estimating Lamé moduli

    NASA Astrophysics Data System (ADS)

    Gockenbach, Mark S.

    2007-12-01

    The Lamé moduli of a heterogeneous, isotropic, planar membrane can be estimated by observing the displacement of the membrane under a known edge traction, and choosing estimates of the moduli that best predict the observed displacement under a finite-element simulation. This algorithm converges to the exact moduli given pointwise measurements of the displacement on an increasingly fine mesh. The error estimates that prove this convergence also show the instability of the inverse problem.

  12. Optimal full motion video registration with rigorous error propagation

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.

  13. Assessment of Person Fit Using Resampling-Based Approaches

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2016-01-01

    De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the [math equation unavailable] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…

  14. On Multi-Dimensional Unstructured Mesh Adaption

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Kleb, William L.

    1999-01-01

    Anisotropic unstructured mesh adaption is developed for a truly multi-dimensional upwind fluctuation splitting scheme, as applied to scalar advection-diffusion. The adaption is performed locally using edge swapping, point insertion/deletion, and nodal displacements. Comparisons are made versus the current state of the art for aggressive anisotropic unstructured adaption, which is based on a posteriori error estimates. Demonstration of both schemes to model problems, with features representative of compressible gas dynamics, show the present method to be superior to the a posteriori adaption for linear advection. The performance of the two methods is more similar when applied to nonlinear advection, with a difference in the treatment of shocks. The a posteriori adaption can excessively cluster points to a shock, while the present multi-dimensional scheme tends to merely align with a shock, using fewer nodes. As a consequence of this alignment tendency, an implementation of eigenvalue limiting for the suppression of expansion shocks is developed for the multi-dimensional distribution scheme. The differences in the treatment of shocks by the adaption schemes, along with the inherently low levels of artificial dissipation in the fluctuation splitting solver, suggest the present method is a strong candidate for applications to compressible gas dynamics.

  15. Population pharmacokinetics and maximum a posteriori probability Bayesian estimator of abacavir: application of individualized therapy in HIV-infected infants and toddlers

    PubMed Central

    Zhao, Wei; Cella, Massimo; Della Pasqua, Oscar; Burger, David; Jacqz-Aigrain, Evelyne

    2012-01-01

    AIMS To develop a population pharmacokinetic model for abacavir in HIV-infected infants and toddlers, which will be used to describe both once and twice daily pharmacokinetic profiles, identify covariates that explain variability and propose optimal time points to optimize the area under the concentration–time curve (AUC) targeted dosage and individualize therapy. METHODS The pharmacokinetics of abacavir was described with plasma concentrations from 23 patients using nonlinear mixed-effects modelling (NONMEM) software. A two-compartment model with first-order absorption and elimination was developed. The final model was validated using bootstrap, visual predictive check and normalized prediction distribution errors. The Bayesian estimator was validated using the cross-validation and simulation–estimation method. RESULTS The typical population pharmacokinetic parameters and relative standard errors (RSE) were apparent systemic clearance (CL) 13.4 l h−1 (RSE 6.3%), apparent central volume of distribution 4.94 l (RSE 28.7%), apparent peripheral volume of distribution 8.12 l (RSE 14.2%), apparent intercompartment clearance 1.25 l h−1 (RSE 16.9%) and absorption rate constant 0.758 h−1 (RSE 5.8%). The covariate analysis identified weight as the individual factor influencing the apparent oral clearance: CL = 13.4 × (weight/12)^1.14. The maximum a posteriori probability Bayesian estimator, based on three concentrations measured at 0, 1 or 2, and 3 h after drug intake allowed predicting individual AUC(0–t). CONCLUSIONS The population pharmacokinetic model developed for abacavir in HIV-infected infants and toddlers accurately described both once and twice daily pharmacokinetic profiles. The maximum a posteriori probability Bayesian estimator of AUC(0–t) was developed from the final model and can be used routinely to optimize individual dosing. PMID:21988586
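
    The covariate model reported in the abstract, CL = 13.4 × (weight/12)^1.14, can be evaluated directly; the snippet below simply applies that published relationship to a few body weights for illustration and is in no way a dosing tool.

```python
def abacavir_apparent_clearance(weight_kg):
    """Typical apparent oral clearance (l/h) from the reported covariate model
    CL = 13.4 * (weight / 12)**1.14."""
    return 13.4 * (weight_kg / 12.0) ** 1.14

for w in (8.0, 12.0, 20.0):
    print(f"{w:5.1f} kg -> CL = {abacavir_apparent_clearance(w):5.2f} l/h")
```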

  16. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.

  17. Adaptive vibrational configuration interaction (A-VCI): A posteriori error estimation to efficiently compute anharmonic IR spectra

    NASA Astrophysics Data System (ADS)

    Garnier, Romain; Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2016-05-01

    A new variational algorithm called adaptive vibrational configuration interaction (A-VCI) intended for the resolution of the vibrational Schrödinger equation was developed. The main advantage of this approach is to efficiently reduce the dimension of the active space generated into the configuration interaction (CI) process. Here, we assume that the Hamiltonian writes as a sum of products of operators. This adaptive algorithm was developed with the use of three correlated conditions, i.e., a suitable starting space, a criterion for convergence, and a procedure to expand the approximate space. The velocity of the algorithm was increased with the use of a posteriori error estimator (residue) to select the most relevant direction to increase the space. Two examples have been selected for benchmark. In the case of H2CO, we mainly study the performance of A-VCI algorithm: comparison with the variation-perturbation method, choice of the initial space, and residual contributions. For CH3CN, we compare the A-VCI results with a computed reference spectrum using the same potential energy surface and for an active space reduced by about 90%.

  18. Adaptive vibrational configuration interaction (A-VCI): A posteriori error estimation to efficiently compute anharmonic IR spectra.

    PubMed

    Garnier, Romain; Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2016-05-28

    A new variational algorithm called adaptive vibrational configuration interaction (A-VCI) intended for the resolution of the vibrational Schrödinger equation was developed. The main advantage of this approach is to efficiently reduce the dimension of the active space generated into the configuration interaction (CI) process. Here, we assume that the Hamiltonian writes as a sum of products of operators. This adaptive algorithm was developed with the use of three correlated conditions, i.e., a suitable starting space, a criterion for convergence, and a procedure to expand the approximate space. The velocity of the algorithm was increased with the use of a posteriori error estimator (residue) to select the most relevant direction to increase the space. Two examples have been selected for benchmark. In the case of H2CO, we mainly study the performance of A-VCI algorithm: comparison with the variation-perturbation method, choice of the initial space, and residual contributions. For CH3CN, we compare the A-VCI results with a computed reference spectrum using the same potential energy surface and for an active space reduced by about 90%.

  19. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.

  20. Closed-loop carrier phase synchronization techniques motivated by likelihood functions

    NASA Technical Reports Server (NTRS)

    Tsou, H.; Hinedi, S.; Simon, M.

    1994-01-01

    This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.

  1. Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Dȩbski, Wojciech

    2008-07-01

    Many aspects of earthquake source dynamics like dynamic stress drop, rupture velocity and directivity, etc. are currently inferred from the source time functions obtained by a deconvolution of the propagation and recording effects from seismograms. The question of the accuracy of obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on the sought functions. Such parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, is a method which allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to the Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function so the statistical estimator of a posteriori errors can be easily obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of the mining-induced seismic event of the magnitude M L ≈3.1 that occurred at Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process with double pulses of energy release. However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.

  2. Mesh refinement in finite element analysis by minimization of the stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1989-01-01

    Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the generated mesh is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value such that the error is below a predetermined tolerance. A-posteriori methods use error indicators, developed from interpolation and approximation theory, for mesh refinement. Some use other criteria, such as strain energy density variation and stress contours, to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a priori methods available until now use geometrical parameters, for example, the element aspect ratio, and therefore are not adaptive by nature. Here, an adaptive a-priori method is developed. The criterion is that minimization of the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a-posteriori methods of grid refinement, it is shown that fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The mesh obtained is shown to have a uniform distribution of stiffness among the nodes and elements, which, as a consequence, leads to a uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.

  3. A Posteriori Correction of Forecast and Observation Error Variances

    NASA Technical Reports Server (NTRS)

    Rukhovets, Leonid

    2005-01-01

    The proposed method of total observation and forecast error variance correction is based on the assumption of a normal distribution of "observed-minus-forecast" residuals (O-F), where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of closeness to a normal distribution can be estimated from the skewness (lack of symmetry) a_3 = mu_3/sigma^3 and the kurtosis a_4 = mu_4/sigma^4 - 3, where mu_i is the i-th order moment and sigma is the standard deviation. It is well known that for a normal distribution a_3 = a_4 = 0.
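
    A direct way to check the normality assumption on the O-F residuals is to compute the two moments defined above; the short routine below does exactly that for a synthetic sample (the data are simulated, not observational).

```python
import numpy as np

def skewness_kurtosis(residuals):
    """Sample skewness a3 = mu3/sigma**3 and excess kurtosis a4 = mu4/sigma**4 - 3
    of O-F residuals; both are zero for a normal distribution."""
    r = np.asarray(residuals, dtype=float)
    r = r - r.mean()
    sigma = r.std()
    a3 = np.mean(r ** 3) / sigma ** 3
    a4 = np.mean(r ** 4) / sigma ** 4 - 3.0
    return a3, a4

rng = np.random.default_rng(0)
print(skewness_kurtosis(rng.normal(0.0, 1.5, size=10_000)))  # both close to 0
```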

  4. Certified dual-corrected radiation patterns of phased antenna arrays by offline–online order reduction of finite-element models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sommer, A., E-mail: a.sommer@lte.uni-saarland.de; Farle, O., E-mail: o.farle@lte.uni-saarland.de; Dyczij-Edlinger, R., E-mail: edlinger@lte.uni-saarland.de

    2015-10-15

    This paper presents a fast numerical method for computing certified far-field patterns of phased antenna arrays over broad frequency bands as well as wide ranges of steering and look angles. The proposed scheme combines finite-element analysis, dual-corrected model-order reduction, and empirical interpolation. To assure the reliability of the results, improved a posteriori error bounds for the radiated power and directive gain are derived. Both the reduced-order model and the error-bounds algorithm feature offline–online decomposition. A real-world example is provided to demonstrate the efficiency and accuracy of the suggested approach.

  5. A Ground Flash Fraction Retrieval Algorithm for GLM

    NASA Technical Reports Server (NTRS)

    Koshak, William J.

    2010-01-01

    A Bayesian inversion method is introduced for retrieving the fraction of ground flashes in a set of N lightning observed by a satellite lightning imager (such as the Geostationary Lightning Mapper, GLM). An exponential model is applied as a physically reasonable constraint to describe the measured lightning optical parameter distributions. Population statistics (i.e., the mean and variance) are invoked to add additional constraints to the retrieval process. The Maximum A Posteriori (MAP) solution is employed. The approach is tested by performing simulated retrievals, and retrieval error statistics are provided. The approach is feasible for N greater than 2000, and retrieval errors decrease as N is increased.
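
    The retrieval idea, MAP estimation of a mixture fraction under exponential models for the optical observables, can be sketched with a toy grid search; the means, sample size, and flat prior below are invented for illustration and are not GLM values or the full constrained retrieval.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_ground, mu_cloud, true_frac, N = 2.0, 5.0, 0.3, 3000   # hypothetical parameters

# Simulate an optical parameter drawn from an exponential mixture of ground and
# cloud flashes, then recover the ground-flash fraction by MAP (flat prior).
is_ground = rng.random(N) < true_frac
x = np.where(is_ground, rng.exponential(mu_ground, N), rng.exponential(mu_cloud, N))

fracs = np.linspace(0.001, 0.999, 999)
loglike = [np.sum(np.log(a * np.exp(-x / mu_ground) / mu_ground +
                         (1.0 - a) * np.exp(-x / mu_cloud) / mu_cloud))
           for a in fracs]
print(fracs[int(np.argmax(loglike))])   # close to true_frac for large N
```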

  6. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2013-06-24


  7. Learning With Mixed Hard/Soft Pointwise Constraints.

    PubMed

    Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello

    2015-09-01

    A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.

  8. Local error estimates for discontinuous solutions of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Tadmor, Eitan

    1989-01-01

    Let u(x,t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u_epsilon(x,t) is the solution of an approximate viscosity regularization, where epsilon > 0 is the small viscosity amplitude. It is shown that by post-processing the small viscosity approximation u_epsilon, pointwise values of u and its derivatives can be recovered with an error as close to epsilon as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W^(1,infinity) energy estimate for the discontinuous backward transport equation; this, in turn, leads one to an epsilon-uniform estimate on moments of the error u_epsilon - u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.

  9. Population pharmacokinetics and maximum a posteriori probability Bayesian estimator of abacavir: application of individualized therapy in HIV-infected infants and toddlers.

    PubMed

    Zhao, Wei; Cella, Massimo; Della Pasqua, Oscar; Burger, David; Jacqz-Aigrain, Evelyne

    2012-04-01

    Abacavir is used to treat HIV infection in both adults and children. The recommended paediatric dose is 8 mg kg(-1) twice daily up to a maximum of 300 mg twice daily. Weight was identified as the central covariate influencing pharmacokinetics of abacavir in children. A population pharmacokinetic model was developed to describe both once and twice daily pharmacokinetic profiles of abacavir in infants and toddlers. Standard dosage regimen is associated with large interindividual variability in abacavir concentrations. A maximum a posteriori probability Bayesian estimator of AUC(0–t) based on three time points (0, 1 or 2, and 3 h) is proposed to support area under the concentration-time curve (AUC) targeted individualized therapy in infants and toddlers. To develop a population pharmacokinetic model for abacavir in HIV-infected infants and toddlers, which will be used to describe both once and twice daily pharmacokinetic profiles, identify covariates that explain variability and propose optimal time points to optimize the area under the concentration-time curve (AUC) targeted dosage and individualize therapy. The pharmacokinetics of abacavir was described with plasma concentrations from 23 patients using nonlinear mixed-effects modelling (NONMEM) software. A two-compartment model with first-order absorption and elimination was developed. The final model was validated using bootstrap, visual predictive check and normalized prediction distribution errors. The Bayesian estimator was validated using the cross-validation and simulation-estimation method. The typical population pharmacokinetic parameters and relative standard errors (RSE) were apparent systemic clearance (CL) 13.4 l h−1 (RSE 6.3%), apparent central volume of distribution 4.94 l (RSE 28.7%), apparent peripheral volume of distribution 8.12 l (RSE 14.2%), apparent intercompartment clearance 1.25 l h−1 (RSE 16.9%) and absorption rate constant 0.758 h−1 (RSE 5.8%). The covariate analysis identified weight as the individual factor influencing the apparent oral clearance: CL = 13.4 × (weight/12)^1.14. The maximum a posteriori probability Bayesian estimator, based on three concentrations measured at 0, 1 or 2, and 3 h after drug intake allowed predicting individual AUC(0–t). The population pharmacokinetic model developed for abacavir in HIV-infected infants and toddlers accurately described both once and twice daily pharmacokinetic profiles. The maximum a posteriori probability Bayesian estimator of AUC(0–t) was developed from the final model and can be used routinely to optimize individual dosing. © 2011 The Authors. British Journal of Clinical Pharmacology © 2011 The British Pharmacological Society.

  10. Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)

    NASA Astrophysics Data System (ADS)

    Kasibhatla, P.

    2004-12-01

    In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities and inverse source estimates are derived for fixed values of pdf parameters. While the advantage of this approach is that closed-form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups and followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
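
    The MCMC idea sketched in the abstract, sampling the a posteriori pdf rather than relying on a closed-form Gaussian solution, is illustrated below with a toy linear tracer inversion and a plain Metropolis sampler; the transport operator, priors, and noise levels are invented, and this is not the TransCom3 setup.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(20, 3))                       # toy tracer transport operator
s_true = np.array([1.0, -0.5, 2.0])                # "true" sources
y = H @ s_true + rng.normal(0.0, 0.2, size=20)     # synthetic observations

def log_posterior(s, sigma_obs=0.2, sigma_prior=5.0):
    """Gaussian likelihood plus a broad Gaussian prior on the sources."""
    return (-0.5 * np.sum((y - H @ s) ** 2) / sigma_obs ** 2
            - 0.5 * np.sum(s ** 2) / sigma_prior ** 2)

s, samples = np.zeros(3), []
for _ in range(20_000):                            # Metropolis random walk
    proposal = s + rng.normal(0.0, 0.05, size=3)
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(s):
        s = proposal
    samples.append(s.copy())

print(np.mean(samples[5_000:], axis=0))            # posterior means close to s_true
```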

  11. Detection of Functional Change Using Cluster Trend Analysis in Glaucoma.

    PubMed

    Gardiner, Stuart K; Mansberger, Steven L; Demirel, Shaban

    2017-05-01

    Global analyses using mean deviation (MD) assess visual field progression, but can miss localized changes. Pointwise analyses are more sensitive to localized progression, but more variable so require confirmation. This study assessed whether cluster trend analysis, averaging information across subsets of locations, could improve progression detection. A total of 133 test-retest eyes were tested 7 to 10 times. Rates of change and P values were calculated for possible re-orderings of these series to generate global analysis ("MD worsening faster than x dB/y with P < y"), pointwise and cluster analyses ("n locations [or clusters] worsening faster than x dB/y with P < y") with specificity exactly 95%. These criteria were applied to 505 eyes tested over a mean of 10.5 years, to find how soon each detected "deterioration," and compared using survival models. This was repeated including two subsequent visual fields to determine whether "deterioration" was confirmed. The best global criterion detected deterioration in 25% of eyes in 5.0 years (95% confidence interval [CI], 4.7-5.3 years), compared with 4.8 years (95% CI, 4.2-5.1) for the best cluster analysis criterion, and 4.1 years (95% CI, 4.0-4.5) for the best pointwise criterion. However, for pointwise analysis, only 38% of these changes were confirmed, compared with 61% for clusters and 76% for MD. The time until 25% of eyes showed subsequently confirmed deterioration was 6.3 years (95% CI, 6.0-7.2) for global, 6.3 years (95% CI, 6.0-7.0) for pointwise, and 6.0 years (95% CI, 5.3-6.6) for cluster analyses. Although the specificity is still suboptimal, cluster trend analysis detects subsequently confirmed deterioration sooner than either global or pointwise analyses.
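
    A single location's trend test in a pointwise analysis reduces to an ordinary linear regression of sensitivity on time with slope and p-value cutoffs; the sketch below uses illustrative thresholds and simulated data, not the exactly-95%-specificity criteria derived in the study.

```python
import numpy as np
from scipy import stats

def location_worsening(sensitivities_db, times_yr, slope_cut=-1.0, p_cut=0.01):
    """Flag one visual-field location as worsening if sensitivity declines faster
    than slope_cut dB/yr with p < p_cut (thresholds here are illustrative)."""
    slope, _, _, p_value, _ = stats.linregress(times_yr, sensitivities_db)
    return slope < slope_cut and p_value < p_cut

rng = np.random.default_rng(2)
times = np.arange(8) * 0.5                               # 8 visits over 3.5 years
series = 30.0 - 1.5 * times + rng.normal(0.0, 0.8, 8)    # simulated decline
print(location_worsening(series, times))
```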

  12. High-degree Gravity Models from GRAIL Primary Mission Data

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Goossens, Sander J.; Sabaka, Terence J.; Nicholas, Joseph B.; Mazarico, Erwan; Rowlands, David D.; Loomis, Bryant D.; Chinn, Douglas S.; Caprette, Douglas S.; Neumann, Gregory A.

    2013-01-01

    We have analyzed Ka-band range rate (KBRR) and Deep Space Network (DSN) data from the Gravity Recovery and Interior Laboratory (GRAIL) primary mission (1 March to 29 May 2012) to derive gravity models of the Moon to degree 420, 540, and 660 in spherical harmonics. For these models, GRGM420A, GRGM540A, and GRGM660PRIM, a Kaula constraint was applied only beyond degree 330. Variance-component estimation (VCE) was used to adjust the a priori weights and obtain a calibrated error covariance. The global root-mean-square error in the gravity anomalies computed from the error covariance to 320×320 is 0.77 mGal, compared to 29.0 mGal with the pre-GRAIL model derived with the SELENE mission data, SGM150J, only to 140×140. The global correlations with the Lunar Orbiter Laser Altimeter-derived topography are larger than 0.985 between l = 120 and 330. The free-air gravity anomalies, especially over the lunar farside, display a dramatic increase in detail compared to the pre-GRAIL models (SGM150J and LP150Q) and, through degree 320, are free of the orbit-track-related artifacts present in the earlier models. For GRAIL, we obtain an a posteriori fit to the S-band DSN data of 0.13 mm/s. The a posteriori fits to the KBRR data range from 0.08 to 1.5 micrometers/s for GRGM420A and from 0.03 to 0.06 micrometers/s for GRGM660PRIM. Using the GRAIL data, we obtain solutions for the degree 2 Love numbers, k20=0.024615+/-0.0000914, k21=0.023915+/-0.0000132, and k22=0.024852+/-0.0000167, and a preliminary solution for the k30 Love number of k30=0.00734+/-0.0015, where the Love number error sigmas are those obtained with VCE.

  13. A Posteriori Error Bounds for the Empirical Interpolation Method

    DTIC Science & Technology

    2010-03-18


  14. Optimization of an on-board imaging system for extremely rapid radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cherry Kemmerling, Erica M.; Wu, Meng, E-mail: mengwu@stanford.edu; Yang, He

    2015-11-15

    Purpose: Next-generation extremely rapid radiation therapy systems could mitigate the need for motion management, improve patient comfort during the treatment, and increase patient throughput for cost effectiveness. Such systems require an on-board imaging system that is competitively priced, fast, and of sufficiently high quality to allow good registration between the image taken on the day of treatment and the image taken the day of treatment planning. In this study, three different detectors for a custom on-board CT system were investigated to select the best design for integration with an extremely rapid radiation therapy system. Methods: Three different CT detectors are proposed: low-resolution (all 4 × 4 mm pixels), medium-resolution (a combination of 4 × 4 mm pixels and 2 × 2 mm pixels), and high-resolution (all 1 × 1 mm pixels). An in-house program was used to generate projection images of a numerical anthropomorphic phantom and to reconstruct the projections into CT datasets, henceforth called “realistic” images. Scatter was calculated using a separate Monte Carlo simulation, and the model included an antiscatter grid and bowtie filter. Diagnostic-quality images of the phantom were generated to represent the patient scan at the time of treatment planning. Commercial deformable registration software was used to register the diagnostic-quality scan to images produced by the various on-board detector configurations. The deformation fields were compared against a “gold standard” deformation field generated by registering initial and deformed images of the numerical phantoms that were used to make the diagnostic and treatment-day images. Registrations of on-board imaging system data were judged by the amount by which their deformation fields differed from the corresponding gold standard deformation fields; the smaller the difference, the better the system. To evaluate the registrations, the pointwise distance between gold standard and realistic registration deformation fields was computed. Results: By most global metrics (e.g., mean, median, and maximum pointwise distance), the high-resolution detector had the best performance but the medium-resolution detector was comparable. For all medium- and high-resolution detector registrations, mean error between the realistic and gold standard deformation fields was less than 4 mm. By pointwise metrics (e.g., tracking a small lesion), the high- and medium-resolution detectors performed similarly. For these detectors, the smallest error between the realistic and gold standard registrations was 0.6 mm and the largest error was 3.6 mm. Conclusions: The medium-resolution CT detector was selected as the best for an extremely rapid radiation therapy system. In essentially all test cases, data from this detector produced a significantly better registration than data from the low-resolution detector and a comparable registration to data from the high-resolution detector. The medium-resolution detector provides an appropriate compromise between registration accuracy and system cost.
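
    The pointwise comparison metric described above is simply the per-voxel Euclidean distance between the test and gold-standard deformation fields; the snippet below computes it for synthetic fields of an arbitrary size, purely to show the bookkeeping (the array shapes and noise level are made up).

```python
import numpy as np

rng = np.random.default_rng(6)
gold = rng.normal(size=(64, 64, 64, 3))                  # gold-standard field (mm)
test = gold + rng.normal(0.0, 0.5, size=gold.shape)      # "realistic" registration

pointwise_error = np.linalg.norm(test - gold, axis=-1)   # distance at every voxel
print(pointwise_error.mean(), np.median(pointwise_error), pointwise_error.max())
```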

  15. Effects of using a posteriori methods for the conservation of integral invariants. [for weather forecasting

    NASA Technical Reports Server (NTRS)

    Takacs, Lawrence L.

    1988-01-01

    The nature and effect of using a posteriori adjustments to nonconservative finite-difference schemes to enforce integral invariants of the corresponding analytic system are examined. The method of a posteriori integral constraint restoration is analyzed for the case of linear advection, and the harmonic response associated with the a posteriori adjustments is examined in detail. The conservative properties of the shallow water system are reviewed, and the constraint restoration algorithm applied to the shallow water equations is described. A comparison is made between forecasts obtained using implicit and a posteriori methods for the conservation of mass, energy, and potential enstrophy in the complete nonlinear shallow-water system.
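
    One simple member of this family of a posteriori constraint-restoration schemes is a uniform multiplicative correction that restores a conserved integral after a nonconservative step; the sketch below shows only that generic variant, not the specific mass/energy/enstrophy restoration used in the paper.

```python
import numpy as np

def restore_integral(field, cell_areas, target_integral):
    """Rescale a field a posteriori so that its area-weighted integral matches
    the conserved target value (a uniform multiplicative correction)."""
    current = np.sum(field * cell_areas)
    return field * (target_integral / current)

rng = np.random.default_rng(3)
areas = np.full(100, 1.0)
h = 1000.0 + rng.normal(0.0, 5.0, size=100)    # layer depth after a lossy update
h_fixed = restore_integral(h, areas, target_integral=100_000.0)
print(np.sum(h_fixed * areas))                 # back to 100000 up to round-off
```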

  16. A Novel Four-Node Quadrilateral Smoothing Element for Stress Enhancement and Error Estimation

    NASA Technical Reports Server (NTRS)

    Tessler, A.; Riggs, H. R.; Dambach, M.

    1998-01-01

    A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation originates with a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, resulting from enforcing explicitly a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component.

  17. Stress Recovery and Error Estimation for Shell Structures

    NASA Technical Reports Server (NTRS)

    Yazdani, A. A.; Riggs, H. R.; Tessler, A.

    2000-01-01

    The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built-up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.

  18. Pointwise influence matrices for functional-response regression.

    PubMed

    Reiss, Philip T; Huang, Lei; Wu, Pei-Shien; Chen, Huaihou; Colcombe, Stan

    2017-12-01

    We extend the notion of an influence or hat matrix to regression with functional responses and scalar predictors. For responses depending linearly on a set of predictors, our definition is shown to reduce to the conventional influence matrix for linear models. The pointwise degrees of freedom, the trace of the pointwise influence matrix, are shown to have an adaptivity property that motivates a two-step bivariate smoother for modeling nonlinear dependence on a single predictor. This procedure adapts to varying complexity of the nonlinear model at different locations along the function, and thereby achieves better performance than competing tensor product smoothers in an analysis of the development of white matter microstructure in the brain. © 2017, The International Biometric Society.
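
    For the linear case mentioned in the abstract, the influence (hat) matrix and its trace are easy to write down; the snippet below verifies on random data that the trace equals the number of regression coefficients, which is the usual degrees-of-freedom count (the data are synthetic and the example is only a reminder of the conventional construction).

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])  # intercept + 2 predictors

# Conventional influence (hat) matrix H = X (X'X)^{-1} X'; trace(H) = model df.
H = X @ np.linalg.solve(X.T @ X, X.T)
print(np.trace(H))   # 3.0: one degree of freedom per column of X
```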

  19. A posteriori model validation for the temporal order of directed functional connectivity maps.

    PubMed

    Beltz, Adriene M; Molenaar, Peter C M

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
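
    The white-noise check on one-step-ahead prediction errors can be performed with a Ljung-Box-type portmanteau statistic; the self-contained routine below is a generic version of such a test on simulated residuals, not the exact procedure or thresholds used in the study.

```python
import numpy as np
from scipy import stats

def ljung_box(residuals, max_lag=10):
    """Portmanteau white-noise test: Q statistic over the first max_lag
    autocorrelations and its chi-square p-value; a small p-value suggests
    unmodeled sequential dependence and a need to revise the map."""
    r = np.asarray(residuals, dtype=float)
    r = r - r.mean()
    n = len(r)
    denom = np.sum(r * r)
    acf = np.array([np.sum(r[k:] * r[:-k]) for k in range(1, max_lag + 1)]) / denom
    q = n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, max_lag + 1)))
    return q, stats.chi2.sf(q, df=max_lag)

rng = np.random.default_rng(5)
print(ljung_box(rng.normal(size=200)))   # large p-value: residuals look white
```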

  20. Geometry of warped product pointwise semi-slant submanifolds of cosymplectic manifolds and its applications

    NASA Astrophysics Data System (ADS)

    Ali, Akram; Ozel, Cenap

    It is known from [K. Yano and M. Kon, Structures on Manifolds (World Scientific, 1984)] that the integral of the Laplacian of a smooth function defined on a compact orientable Riemannian manifold without boundary vanishes with respect to the volume element. In this paper, we find some potential applications of this notion and study the concept of warped product pointwise semi-slant submanifolds in cosymplectic manifolds as a generalization of contact CR-warped product submanifolds. We then prove the existence of warped product pointwise semi-slant submanifolds via their characterizations, and give an example supporting this idea. Further, we obtain an interesting inequality in terms of the second fundamental form and the scalar curvature using the Gauss equation, and then derive some applications of it, considering also the equality case. We provide many triviality results for warped product pointwise semi-slant submanifolds in cosymplectic space forms in various mathematical and physical terms, such as the Hessian, the Hamiltonian and kinetic energy, and generalize the triviality results for contact CR-warped products as well.

  1. Local-Mesh, Local-Order, Adaptive Finite Element Methods with a Posteriori Error Estimators for Elliptic Partial Differential Equations.

    DTIC Science & Technology

    1981-12-01


  2. An adaptive finite element method for the inequality-constrained Reynolds equation

    NASA Astrophysics Data System (ADS)

    Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha

    2018-07-01

    We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.

  3. Isogeometric Divergence-conforming B-splines for the Steady Navier-Stokes Equations

    DTIC Science & Technology

    2012-04-01

    discretizations produce pointwise divergence-free velocity fields and hence exactly satisfy mass conservation. Consequently, discrete variational formulations ... variational formulation. Using a combination of an advective formulation, SUPG, PSPG, and grad-div stabilization, provably convergent numerical methods ...

  4. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald; El-Azab, Anter; Pernice, Michael

    2017-03-23

    In this project, we will address the challenges associated with constructing high fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratory on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  5. Application of a posteriori granddaughter and modified granddaughter designs to determine Holstein haplotype effects

    USDA-ARS?s Scientific Manuscript database

    A posteriori and modified granddaughter designs were applied to determine haplotype effects for Holstein bulls and cows with BovineSNP50 genotypes. The a posteriori granddaughter design was applied to 52 sire families, each with >100 genotyped sons with genetic evaluations based on progeny tests. Fo...

  6. Application of a posteriori granddaughter and modified granddaughter designs to determine Holstein haplotype effects

    USDA-ARS?s Scientific Manuscript database

    A posteriori and modified granddaughter designs were applied to determine haplotype effects for Holstein bulls and cows with BovineSNP50 genotypes. The a posteriori granddaughter design was applied to 52 sire families, each with >100 genotyped sons with genetic evaluations based on progeny tests. Fo...

  7. Visual Field Outcomes for the Idiopathic Intracranial Hypertension Treatment Trial (IIHTT).

    PubMed

    Wall, Michael; Johnson, Chris A; Cello, Kimberly E; Zamba, K D; McDermott, Michael P; Keltner, John L

    2016-03-01

    The Idiopathic Intracranial Hypertension Treatment Trial (IIHTT) showed that acetazolamide provided a modest, significant improvement in mean deviation (MD). Here, we further analyze visual field changes over the 6-month study period. Of 165 subjects with mild visual loss in the IIHTT, 125 had perimetry at baseline and 6 months. We evaluated pointwise linear regression of visual sensitivity versus time to classify test locations in the worst MD (study) eye as improving or not; pointwise changes from baseline to month 6 in decibels; and clinical consensus of change from baseline to 6 months. The average study eye had 36 of 52 test locations with improving sensitivity over 6 months using pointwise linear regression, but differences between the acetazolamide and placebo groups were not significant. Pointwise results mostly improved in both treatment groups with the magnitude of the mean change within groups greatest and statistically significant around the blind spot and the nasal area, especially in the acetazolamide group. The consensus classification of visual field change from baseline to 6 months in the study eye yielded percentages (acetazolamide, placebo) of 7.2% and 17.5% worse, 35.1% and 31.7% with no change, and 56.1% and 50.8% improved; group differences were not statistically significant. In the IIHTT, compared to the placebo group, the acetazolamide group had a significant pointwise improvement in visual field function, particularly in the nasal and pericecal areas; the latter is likely due to reduction in blind spot size related to improvement in papilledema. (ClinicalTrials.gov number, NCT01003639.).

  8. Integrated Mueller-matrix near-infrared imaging and point-wise spectroscopy improves colonic cancer detection

    PubMed Central

    Wang, Jianfeng; Zheng, Wei; Lin, Kan; Huang, Zhiwei

    2016-01-01

    We report the development and implementation of a unique integrated Mueller-matrix (MM) near-infrared (NIR) imaging and Mueller-matrix point-wise diffuse reflectance (DR) spectroscopy technique for improving colonic cancer detection and diagnosis. Point-wise MM DR spectra can be acquired from any suspicious tissue areas indicated by MM imaging. A total of 30 paired colonic tissue specimens (normal vs. cancer) were measured using the integrated MM imaging and point-wise MM DR spectroscopy system. Polar decomposition algorithms are employed on the acquired images and spectra to derive three polarization metrics, including depolarization, diattenuation and retardance, for colonic tissue characterization. The decomposition results show that tissue depolarization and retardance are significantly decreased (p<0.001, paired 2-sided Student’s t-test, n = 30), while the tissue diattenuation is significantly increased (p<0.001, paired 2-sided Student’s t-test, n = 30) in association with colonic cancer. Further partial least squares discriminant analysis (PLS-DA) and leave-one-tissue-site-out cross-validation (LOSCV) show that the combination of the three polarization metrics provides the best diagnostic accuracy of 95.0% (sensitivity: 93.3%, and specificity: 96.7%) compared to any single polarization metric (sensitivities of 93.3%, 83.3%, and 80.0%; and specificities of 90.0%, 96.7%, and 80.0%, respectively, for the depolarization, diattenuation and retardance metrics) for colonic cancer detection. This work suggests that the integrated MM NIR imaging and point-wise MM NIR diffuse reflectance spectroscopy has the potential to improve the early detection and diagnosis of malignant lesions in the colon. PMID:27446640

  9. Analysis of the geophysical data using a posteriori algorithms

    NASA Astrophysics Data System (ADS)

    Voskoboynikova, Gyulnara; Khairetdinov, Marat

    2016-04-01

    The problems of monitoring, predicting and preventing extraordinary natural and technogenic events are among the priority problems of our time. Such events include earthquakes, volcanic eruptions, lunar-solar tides, landslides, falling celestial bodies, explosions of ammunition stockpiles being disposed of, and the numerous quarry explosions in open coal mines that provoke technogenic earthquakes. Monitoring is based on a number of successive stages, which include remote registration of the event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage, the inverse problems associated with determining the geographic location and time of the registered event are solved. Therefore, improving the accuracy of parameter estimation from the original records under high noise is an important problem. As is known, the main measurement errors arise from the influence of external noise, the difference between the real and model structures of the medium, imprecision in determining the time at the event epicenter, and instrumental errors. Therefore, a posteriori algorithms that are more accurate than known algorithms are proposed and investigated. They are based on a combination of a discrete optimization method and a fractal approach for the joint detection and estimation of arrival times in quasi-periodic waveform sequences, for problems of geophysical monitoring with improved accuracy. Existing alternative approaches to these problems do not provide the required accuracy. The proposed algorithms are considered for the task of vibration sounding of the Earth during lunar and solar tides, and for the problem of monitoring the borehole seismic source location in trade drilling.

  10. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes the error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.

  11. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C-1-continuous (i.e., element-wise discontinuous) stress fields obtained from finite element analyses are in general of lower-order accuracy than the corresponding displacement fields. Much effort has focussed on increasing their accuracy and/or their continuity, both for improved stress prediction and especially for error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  12. A posteriori model validation for the temporal order of directed functional connectivity maps

    PubMed Central

    Beltz, Adriene M.; Molenaar, Peter C. M.

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data). PMID:26379489
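
    As a minimal sketch of the residual-whiteness idea (not the unified structural equation modeling or Lagrange Multiplier machinery used in the study), the code below fits a lag-1 autoregression to a single synthetic series and applies a Ljung-Box-type portmanteau test to the one-step-ahead prediction errors; a small p-value signals unmodeled sequential dependence. The data, lag order, and number of test lags are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def ljung_box_pvalue(resid, n_lags=10):
    """Portmanteau test of the null that residuals are serially uncorrelated."""
    resid = resid - resid.mean()
    n = len(resid)
    rho = np.array([np.sum(resid[k:] * resid[:-k]) for k in range(1, n_lags + 1)])
    rho /= np.sum(resid**2)
    q = n * (n + 2) * np.sum(rho**2 / (n - np.arange(1, n_lags + 1)))
    return 1.0 - chi2.cdf(q, df=n_lags)

# Fit a first-order (lag-1) model to one region's series and test its residuals
rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(200).cumsum() + rng.standard_normal(200)
phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])   # AR(1) coefficient
resid = x[1:] - phi * x[:-1]                           # one-step-ahead prediction errors
print(f"Ljung-Box p-value: {ljung_box_pvalue(resid):.3f}")
```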

  13. Reliable Real-Time Solution of Parametrized Partial Differential Equations: Reduced-Basis Output Bound Methods. Appendix 2

    NASA Technical Reports Server (NTRS)

    Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)

    2002-01-01

    We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
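
    A minimal sketch of the offline/online split for an affinely parametrized system A(mu) = A0 + mu*A1 is shown below: offline, snapshots at selected parameter points are orthonormalized into a reduced basis and parameter-independent reduced operators are precomputed; online, only a small N x N system is solved for each new mu. The rigorous a posteriori output bounds of the reduced-basis methodology are not reproduced, and the operators are simple stand-ins.

```python
import numpy as np

n = 200
A0 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # diffusion-like stiffness matrix
A1 = np.diag(np.linspace(0.5, 1.5, n))                  # affine, parameter-dependent term
f = np.ones(n)

# --- offline stage: snapshots at selected parameter values, orthonormalized basis ---
mu_train = [0.1, 0.5, 1.0, 2.0]
snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, f) for mu in mu_train])
W, _ = np.linalg.qr(snapshots)                          # reduced basis W_N (n x N)
A0_N, A1_N, f_N = W.T @ A0 @ W, W.T @ A1 @ W, W.T @ f   # reduced operators, built once

# --- online stage: for a new parameter value, solve only an N x N system ---
mu_new = 0.7
u_N = W @ np.linalg.solve(A0_N + mu_new * A1_N, f_N)
u_truth = np.linalg.solve(A0 + mu_new * A1, f)
print("relative error:", np.linalg.norm(u_N - u_truth) / np.linalg.norm(u_truth))
```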

  14. Best Design for Multidimensional Computerized Adaptive Testing With the Bifactor Model

    PubMed Central

    Seo, Dong Gi; Weiss, David J.

    2015-01-01

    Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm (MCAT) with a bifactor model using simulated data. Four item selection methods in MCAT were examined for three bifactor pattern designs using two multidimensional item response theory models. To compare MCAT item selection and estimation methods, a fixed test length was used. The Ds-optimality item selection improved θ estimates with respect to a general factor, and either D- or A-optimality improved estimates of the group factors in three bifactor pattern designs under two multidimensional item response theory models. The MCAT model without a guessing parameter functioned better than the MCAT model with a guessing parameter. The MAP (maximum a posteriori) estimation method provided more accurate θ estimates than the EAP (expected a posteriori) method under most conditions, and MAP showed lower observed standard errors than EAP under most conditions, except for a general factor condition using Ds-optimality item selection. PMID:29795848
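
    The contrast between MAP (posterior mode) and EAP (posterior mean) estimation can be illustrated in a stripped-down unidimensional 2PL setting, which is far simpler than the bifactor MCAT examined in the study; the item parameters and response pattern below are invented, and a standard normal prior is assumed.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def loglik(theta, a, b, resp):
    """2PL log-likelihood of a response pattern at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

def eap_and_map(a, b, resp):
    # EAP: posterior mean over a quadrature grid with a N(0,1) prior
    grid = np.linspace(-4, 4, 161)
    post = np.array([np.exp(loglik(t, a, b, resp) - t**2 / 2) for t in grid])
    eap = np.sum(grid * post) / np.sum(post)
    # MAP: mode of the same posterior, found by 1-D bounded optimization
    neg_log_post = lambda t: -(loglik(t, a, b, resp) - t**2 / 2)
    map_est = minimize_scalar(neg_log_post, bounds=(-4, 4), method="bounded").x
    return eap, map_est

a = np.array([1.2, 0.9, 1.5, 1.1, 0.8])     # item discriminations (illustrative)
b = np.array([-0.5, 0.0, 0.3, 0.8, 1.2])    # item difficulties (illustrative)
resp = np.array([1, 1, 0, 1, 0])            # observed responses
print("EAP = %.3f, MAP = %.3f" % eap_and_map(a, b, resp))
```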

  15. Pointwise convergence of derivatives of Lagrange interpolation polynomials for exponential weights

    NASA Astrophysics Data System (ADS)

    Damelin, S. B.; Jung, H. S.

    2005-01-01

    For a general class of exponential weights on the line and on (-1,1), we study pointwise convergence of the derivatives of Lagrange interpolation. Our weights include even weights of smooth polynomial decay near ±∞ (Freud weights), even weights of faster than smooth polynomial decay near ±∞ (Erdos weights) and even weights which vanish strongly near ±1, for example Pollaczek type weights.

  16. Single-ping ADCP measurements in the Strait of Gibraltar

    NASA Astrophysics Data System (ADS)

    Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo

    2016-04-01

    Most Acoustic Doppler Current Profiler (ADCP) user manuals recommend applying ensemble averaging to the single-ping measurements in order to obtain reliable observations of the current speed. The random error of a single-ping measurement is typically too high for it to be used directly, while the averaging operation reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored at the western exit of the Strait of Gibraltar, included in the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. The ensemble averaging was disabled, while maintaining the internal coordinate conversion made by the instrument, and a series of single-ping measurements was collected every 36 seconds over a period of approximately 5 months. The large volume of data was handled smoothly by the instrument, and no abnormal battery consumption was recorded, yielding a long and unique series of very high frequency current measurements. Results of this novel approach have been exploited in two ways. From a statistical point of view, the availability of single-ping measurements allows a real (a posteriori) estimate of the ensemble-average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ~2 cm s-1 for a 50-ping ensemble, the value obtained by a posteriori averaging is ~15 cm s-1, with asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g., turbulence) of higher magnitude than the internal sources (ADCP intrinsic precision), which cannot be reduced by ensemble averaging. On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained from the empirical computation of the zonal Reynolds stress (along the predominant direction of the current) and of the rates of production and dissipation of turbulent kinetic energy. All of these parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides, during the maxima of the Mediterranean outflow current.
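
    A sketch of the a posteriori ensemble-error estimate is given below, using a synthetic single-ping series: non-overlapping ensembles of N pings are averaged and the empirical standard deviation of the ensemble means is compared with the a priori sigma/sqrt(N) scaling. The slowly varying "external" term mimics variability (e.g. turbulence) that does not average down, reproducing the saturation described above; all numbers are illustrative.

```python
import numpy as np

def ensemble_error(single_ping, n_pings):
    """Empirical (a posteriori) standard error of N-ping ensemble means."""
    n_ens = len(single_ping) // n_pings
    means = single_ping[:n_ens * n_pings].reshape(n_ens, n_pings).mean(axis=1)
    return means.std(ddof=1)

rng = np.random.default_rng(2)
n = 50_000
instrument_noise = 0.14 * rng.standard_normal(n)            # intrinsic single-ping noise
external = 0.10 * np.sin(2 * np.pi * np.arange(n) / 4000)   # slow external variability
u = 0.3 + instrument_noise + external                       # synthetic single-ping velocity

for N in (5, 10, 25, 50):
    theory = instrument_noise.std() / np.sqrt(N)            # a priori 1/sqrt(N) scaling
    print(f"N={N:3d}  a posteriori: {ensemble_error(u, N):.3f} m/s   "
          f"a priori: {theory:.3f} m/s")
```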

  17. Visual field progression in glaucoma: estimating the overall significance of deterioration with permutation analyses of pointwise linear regression (PoPLR).

    PubMed

    O'Leary, Neil; Chauhan, Balwantray C; Artes, Paul H

    2012-10-01

    To establish a method for estimating the overall statistical significance of visual field deterioration from an individual patient's data, and to compare its performance to pointwise linear regression. The Truncated Product Method was used to calculate a statistic S that combines evidence of deterioration from individual test locations in the visual field. The overall statistical significance (P value) of visual field deterioration was inferred by comparing S with its permutation distribution, derived from repeated reordering of the visual field series. Permutation of pointwise linear regression (PoPLR) and pointwise linear regression were evaluated in data from patients with glaucoma (944 eyes, median mean deviation -2.9 dB, interquartile range: -6.3, -1.2 dB) followed for more than 4 years (median 10 examinations over 8 years). False-positive rates were estimated from randomly reordered series of this dataset, and hit rates (proportion of eyes with significant deterioration) were estimated from the original series. The false-positive rates of PoPLR were indistinguishable from the corresponding nominal significance levels and were independent of baseline visual field damage and length of follow-up. At P < 0.05, the hit rates of PoPLR were 12, 29, and 42%, at the fifth, eighth, and final examinations, respectively, and at matching specificities they were consistently higher than those of pointwise linear regression. In contrast to population-based progression analyses, PoPLR provides a continuous estimate of statistical significance for visual field deterioration individualized to a particular patient's data. This allows close control over specificity, essential for monitoring patients in clinical practice and in clinical trials.
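
    A rough sketch of the PoPLR idea on synthetic data is given below: one-sided pointwise regression p-values are combined into a Truncated Product statistic S, and the overall p-value is obtained by comparing S with its permutation distribution over reorderings of the visit series. The truncation threshold, field dimensions and simulated deterioration are illustrative and follow the published method only loosely.

```python
import numpy as np
from scipy import stats

def pointwise_p_values(fields, visits):
    """One-sided p-value for deterioration (negative slope) at each field location."""
    p = np.empty(fields.shape[1])
    for j in range(fields.shape[1]):
        slope, _, _, p_two, _ = stats.linregress(visits, fields[:, j])
        p[j] = p_two / 2 if slope < 0 else 1 - p_two / 2
    return p

def truncated_product(p, tau=0.05):
    """Truncated Product statistic (on a log scale; larger = stronger evidence)."""
    below = p[p <= tau]
    return -np.sum(np.log(below)) if below.size else 0.0

def poplr_p_value(fields, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    visits = np.arange(fields.shape[0], dtype=float)
    s_obs = truncated_product(pointwise_p_values(fields, visits))
    s_null = np.array([
        truncated_product(pointwise_p_values(fields[rng.permutation(len(fields))], visits))
        for _ in range(n_perm)])
    return (np.sum(s_null >= s_obs) + 1) / (n_perm + 1)

# Example: 10 visits x 52 locations, mild deterioration (-0.5 dB/visit) at 5 locations
rng = np.random.default_rng(3)
series = rng.normal(28, 1.5, size=(10, 52))
series[:, :5] -= 0.5 * np.arange(10)[:, None]
print("overall significance of deterioration:", poplr_p_value(series))
```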

  18. Topological Vulnerability Analysis

    NASA Astrophysics Data System (ADS)

    Jajodia, Sushil; Noel, Steven

    Traditionally, network administrators rely on labor-intensive processes for tracking network configurations and vulnerabilities. This requires a great deal of expertise, and is error prone because of the complexity of networks and associated security data. The interdependencies of network vulnerabilities make traditional point-wise vulnerability analysis inadequate. We describe a Topological Vulnerability Analysis (TVA) approach that analyzes vulnerability dependencies and shows all possible attack paths into a network. From models of the network vulnerabilities and potential attacker exploits, we compute attack graphs that convey the impact of individual and combined vulnerabilities on overall security. TVA finds potential paths of vulnerability through a network, showing exactly how attackers may penetrate a network. From this, we identify key vulnerabilities and provide strategies for protection of critical network assets.

  19. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.

  20. Variable forgetting factor mechanisms for diffusion recursive least squares algorithm in sensor networks

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.

    2017-12-01

    In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of the mean and mean square performance of the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with a fixed forgetting factor when applied to distributed parameter and spectrum estimation scenarios. In addition, the simulation results demonstrate a good match with the derived analytical expressions.
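
    A minimal single-node sketch of an RLS update whose forgetting factor is adapted from the a posteriori error is shown below; the diffusion (information exchange across neighboring sensor nodes) step and the exact VFF rule of the paper are omitted, and the adaptation rule and constants are simple illustrative choices.

```python
import numpy as np

def vff_rls(x, d, order=4, lam_min=0.95, lam_max=0.9999, mu=0.5):
    """RLS with a variable forgetting factor driven by the a posteriori error."""
    w = np.zeros(order)
    P = 1e3 * np.eye(order)
    lam = lam_max
    for n in range(order - 1, len(d)):
        u = x[n - order + 1:n + 1][::-1]          # regressor, most recent sample first
        e_pri = d[n] - w @ u                      # a priori error
        k = P @ u / (lam + u @ P @ u)             # gain vector
        w = w + k * e_pri
        P = (P - np.outer(k, u @ P)) / lam
        e_post = d[n] - w @ u                     # a posteriori error
        # heuristic: shrink lambda while the a posteriori error is large, grow it otherwise
        lam = np.clip(lam_max - mu * e_post**2, lam_min, lam_max)
    return w

# Identify a 4-tap system from noisy observations
rng = np.random.default_rng(4)
h = np.array([0.8, -0.4, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
print("estimated taps:", np.round(vff_rls(x, d), 3))
```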

  1. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.

  2. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE PAGES

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett; ...

    2017-01-01

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.

  3. Theoretical Aspects of the Patterns Recognition Statistical Theory Used for Developing the Diagnosis Algorithms for Complicated Technical Systems

    NASA Astrophysics Data System (ADS)

    Obozov, A. A.; Serpik, I. N.; Mihalchenko, G. S.; Fedyaeva, G. A.

    2017-01-01

    In this article, the application of pattern recognition (a relatively young area of engineering cybernetics) to the analysis of complicated technical systems is examined. It is shown that a statistical approach can be the most effective for situations that are hard to distinguish. The different recognition algorithms are based on the Bayes approach, which estimates the a posteriori probability of a certain event and the associated error. Application of the statistical approach to pattern recognition is possible for solving the problem of technical diagnosis of complicated systems, in particular high-powered marine diesel engines.

  4. A weighted adjustment of a similarity transformation between two point sets containing errors

    NASA Astrophysics Data System (ADS)

    Marx, C.

    2017-10-01

    For an adjustment of a similarity transformation, it is often appropriate to consider that both the source and the target coordinates of the transformation are affected by errors. For the least squares adjustment of this problem, a direct solution is possible for specific weighting schemes of the coordinates. Such a problem is considered in the present contribution, and a direct solution is derived in general form for the m-dimensional space. The applied weighting scheme allows (fully populated) point-wise weight matrices for the source and target coordinates; the two weight matrices have to be proportional to each other. Additionally, the solutions of two borderline cases of this weighting scheme are derived, which consider errors only in the source or only in the target coordinates. The investigated solution for the rotation matrix of the adjustment is independent of the scaling between the weight matrices of the source and the target coordinates. The mentioned borderline cases therefore have the same solution for the rotation matrix. The direct solution method is successfully tested on an example of a 3D similarity transformation using a comparison with an iterative solution based on the Gauß-Helmert model.
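
    The borderline case with errors only in the target coordinates and a scalar weight per point reduces to a weighted Procrustes-type similarity estimation, sketched below via an SVD of the weighted cross-covariance matrix; it is not the general solution with fully populated point-wise weight matrices, but it illustrates why the rotation estimate is unaffected by an overall rescaling of the weights (they are normalized away).

```python
import numpy as np

def weighted_similarity(src, dst, w):
    """Estimate scale s, rotation R, translation t such that dst ~ s * R @ src + t."""
    w = w / w.sum()                                       # overall weight scaling drops out
    c_src, c_dst = w @ src, w @ dst                       # weighted centroids
    A, B = src - c_src, dst - c_dst
    H = (A * w[:, None]).T @ B                            # weighted cross-covariance
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                    # proper rotation (det = +1)
    s = np.trace(D @ np.diag(S)) / np.sum(w * np.sum(A**2, axis=1))
    t = c_dst - s * R @ c_src
    return s, R, t

# Recover a known 3D similarity transformation from noisy target points
rng = np.random.default_rng(5)
src = rng.uniform(-10, 10, size=(20, 3))
ang = np.deg2rad(30)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0], [np.sin(ang), np.cos(ang), 0], [0, 0, 1]])
dst = 1.2 * src @ R_true.T + np.array([5.0, -2.0, 1.0]) + 0.01 * rng.standard_normal((20, 3))
s, R, t = weighted_similarity(src, dst, w=np.ones(20))
print("scale:", round(s, 4), " translation:", np.round(t, 3))
```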

  5. The pointwise estimates of diffusion wave of the compressible micropolar fluids

    NASA Astrophysics Data System (ADS)

    Wu, Zhigang; Wang, Weike

    2018-09-01

    Pointwise estimates for the compressible micropolar fluids in dimension three are given, which exhibit the generalized Huygens' principle for the fluid density and fluid momentum, as for the compressible Navier-Stokes equations, while the micro-rotational momentum behaves like the fluid momentum of the Euler equations with damping. To circumvent the complexity of the 7 × 7 Green's matrix, we use the decomposition of the momentums into a fluid part and an electromagnetic part and study three smaller Green's matrices. A consequence of this decomposition is the new difficulty that the nonlinear terms contain nonlocal operators. We resolve it by exploiting the natural match between these new Green's functions and the nonlinear terms. Moreover, to derive different pointwise estimates for the different unknown variables, such that the estimate of each unknown variable is in agreement with its Green's function, we develop some new estimates on the nonlinear interplay between different waves.

  6. A statistical approach for isolating fossil fuel emissions in atmospheric inverse problems

    DOE PAGES

    Yadav, Vineet; Michalak, Anna M.; Ray, Jaideep; ...

    2016-10-27

    Independent verification and quantification of fossil fuel (FF) emissions constitutes a considerable scientific challenge. By coupling atmospheric observations of CO2 with models of atmospheric transport, inverse models offer the possibility of overcoming this challenge. However, disaggregating the biospheric and FF flux components of terrestrial fluxes from CO2 concentration measurements has proven to be difficult, due to observational and modeling limitations. In this study, we propose a statistical inverse modeling scheme for disaggregating wintertime fluxes on the basis of their unique error covariances and covariates, where these covariances and covariates are representative of the underlying processes affecting FF and biospheric fluxes. The application of the method is demonstrated with one synthetic and two real-data prototypical inversions using in situ CO2 measurements over North America. Inversions are performed only for the month of January, as the predominance of the biospheric CO2 signal relative to the FF CO2 signal and observational limitations preclude disaggregation of the fluxes in other months. The quality of disaggregation is assessed primarily through examination of the a posteriori covariance between disaggregated FF and biospheric fluxes at regional scales. Findings indicate that the proposed method is able to robustly disaggregate fluxes regionally at monthly temporal resolution, with an a posteriori cross covariance lower than 0.15 µmol m-2 s-1 between FF and biospheric fluxes. Error covariance models and covariates based on temporally varying FF inventory data provide a more robust disaggregation than static proxies (e.g., nightlight intensity and population density). However, the synthetic data case study shows that disaggregation is possible even in the absence of detailed temporally varying FF inventory data.
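
    The role of the a posteriori cross-covariance can be illustrated with a toy linear Gaussian inversion in which the state vector stacks an FF and a biospheric flux component with distinct prior error covariances; the transport operator, covariance models and dimensions below are synthetic stand-ins rather than the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(8)
n_ff, n_bio, n_obs = 30, 30, 120
idx_ff, idx_bio = np.arange(n_ff), np.arange(n_bio)
H = 0.1 * rng.standard_normal((n_obs, n_ff + n_bio))            # transport/footprint operator
Q_ff = 0.5 * np.exp(-np.abs(np.subtract.outer(idx_ff, idx_ff)) / 5.0)
Q_bio = 2.0 * np.exp(-np.abs(np.subtract.outer(idx_bio, idx_bio)) / 10.0)
Q = np.block([[Q_ff, np.zeros((n_ff, n_bio))],
              [np.zeros((n_bio, n_ff)), Q_bio]])                # prior flux error covariance
R = 0.25 * np.eye(n_obs)                                        # observation error covariance

s_true = rng.multivariate_normal(np.zeros(n_ff + n_bio), Q)
y = H @ s_true + rng.multivariate_normal(np.zeros(n_obs), R)

# Posterior (a posteriori) mean and covariance of the stacked flux vector
K = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)
s_hat = K @ y
V_post = Q - K @ H @ Q

# A posteriori cross-covariance between the FF and biospheric components
cross = V_post[:n_ff, n_ff:]
print("max |posterior FF-bio cross-covariance|:", np.abs(cross).max().round(3))
```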

  7. A Variable Step-Size Proportionate Affine Projection Algorithm for Identification of Sparse Impulse Response

    NASA Astrophysics Data System (ADS)

    Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong

    2009-12-01

    Proportionate adaptive algorithms have been proposed recently to accelerate convergence for the identification of sparse impulse responses. When the excitation signal is colored, especially for speech, proportionate NLMS algorithms exhibit slow convergence. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more information from the input signals. However, its steady-state performance is limited by the constant step-size parameter. In this article we propose a variable step-size PAPA based on canceling the a posteriori estimation error. This results in high convergence speed through a large step size when the identification error is large, and then considerably decreases the steady-state misalignment through a small step size after the adaptive filter has converged. Simulation results show that the proposed approach can greatly improve the steady-state misalignment without sacrificing the fast convergence of PAPA.

  8. False Discovery Control in Large-Scale Spatial Multiple Testing

    PubMed Central

    Sun, Wenguang; Reich, Brian J.; Cai, T. Tony; Guindani, Michele; Schwartzman, Armin

    2014-01-01

    This article develops a unified theoretical and computational framework for false discovery control in multiple testing of spatial signals. We consider both point-wise and cluster-wise spatial analyses, and derive oracle procedures which optimally control the false discovery rate, false discovery exceedance and false cluster rate, respectively. A data-driven finite approximation strategy is developed to mimic the oracle procedures on a continuous spatial domain. Our multiple testing procedures are asymptotically valid and can be effectively implemented using Bayesian computational algorithms for analysis of large spatial data sets. Numerical results show that the proposed procedures lead to more accurate error control and better power performance than conventional methods. We demonstrate our methods by analyzing the time trends in tropospheric ozone in the eastern US. PMID:25642138
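
    For contrast with the spatially informed procedures proposed in the paper, the sketch below applies conventional point-wise FDR control (Benjamini-Hochberg) to per-location p-values on a synthetic grid; the cluster-wise and false-cluster-rate procedures are not implemented, and the signal layout is invented.

```python
import numpy as np
from scipy.stats import norm

def benjamini_hochberg(p, alpha=0.05):
    """Boolean mask of rejected (discovered) locations at FDR level alpha."""
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# A 50x50 spatial grid of z-scores with an elevated block in one corner
rng = np.random.default_rng(6)
z = rng.standard_normal((50, 50))
z[:10, :10] += 3.0                                   # true signal region
p = norm.sf(z).ravel()                               # one-sided pointwise p-values
discoveries = benjamini_hochberg(p).reshape(50, 50)
print("discoveries inside signal block:", int(discoveries[:10, :10].sum()),
      " discoveries elsewhere:", int(discoveries.sum() - discoveries[:10, :10].sum()))
```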

  9. Inverse constraints for emission fluxes of atmospheric tracers estimated from concentration measurements and Lagrangian transport

    NASA Astrophysics Data System (ADS)

    Pisso, Ignacio; Patra, Prabir; Breivik, Knut

    2015-04-01

    Lagrangian transport models based on time series of Eulerian fields provide a computationally affordable way of achieving very high resolution for limited areas and time periods. This makes them especially suitable for the analysis of point-wise measurements of atmospheric tracers. We present an application illustrated with examples of greenhouse gases from anthropogenic emissions in urban areas and biogenic emissions in Japan, and of pollutants in the Arctic. We assess the algorithmic complexity of the numerical implementation as well as the use of non-procedural techniques such as object-oriented programming. We discuss aspects related to the quantification of uncertainty from prior information in the presence of model error and a limited number of observations. The case of non-linear constraints is explored using direct numerical optimisation methods.

  10. Multiplicative noise removal via a learned dictionary.

    PubMed

    Huang, Yu-Mei; Moisan, Lionel; Ng, Michael K; Zeng, Tieyong

    2012-11-01

    Multiplicative noise removal is a challenging image processing problem, and most existing methods are based on the maximum a posteriori formulation and the logarithmic transformation of multiplicative denoising problems into additive denoising problems. Sparse representations of images have been shown to be efficient approaches for image recovery. Following this idea, in this paper we propose to learn a dictionary from the logarithmically transformed image, and then to use it in a variational model built for noise removal. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.

  11. Simple Form of MMSE Estimator for Super-Gaussian Prior Densities

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-04-01

    The denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE) estimation. For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimator using a Taylor series and show that the proposed estimator leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. Experimental results show that the proposed approximation to the original MMSE nonlinearity is reasonably good.

  12. Effects of Time-Dependent Inflow Perturbations on Turbulent Flow in a Street Canyon

    NASA Astrophysics Data System (ADS)

    Duan, G.; Ngan, K.

    2017-12-01

    Urban flow and turbulence are driven by atmospheric flows with larger horizontal scales. Since building-resolving computational fluid dynamics models typically employ steady Dirichlet boundary conditions or forcing, the accuracy of numerical simulations may be limited by the neglect of perturbations. We investigate the sensitivity of flow within a unit-aspect-ratio street canyon to time-dependent perturbations near the inflow boundary. Using large-eddy simulation, time-periodic perturbations to the streamwise velocity component are incorporated via the nudging technique. Spatial averages of pointwise differences between unperturbed and perturbed velocity fields (i.e., the error kinetic energy) show a clear dependence on the perturbation period, though spatial structures are largely insensitive to the time-dependent forcing. The response of the error kinetic energy is maximized for perturbation periods comparable to the time scale of the mean canyon circulation. Frequency spectra indicate that this behaviour arises from a resonance between the inflow forcing and the mean motion around closed streamlines. The robustness of the results is confirmed using perturbations derived from measurements of roof-level wind speed.

  13. Verification Test of the SURF and SURFplus Models in xRage: Part II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    2016-06-20

    The previous study used an underdriven detonation wave (a steady ZND reaction zone profile followed by a scale-invariant rarefaction wave) for PBX 9502 as a validation test of the implementation of the SURF and SURFplus models in the xRage code. Even with a fairly fine uniform mesh (12,800 cells for 100 mm), the detonation wave profile had limited resolution due to the thin reaction zone width (0.18 mm) for the fast SURF burn rate. Here we study the effect of finer resolution by comparing results of simulations with cell sizes of 8, 2 and 1 μm, which correspond to 25, 100 and 200 points within the reaction zone. With finer resolution the lead shock pressure is closer to the von Neumann spike pressure, and there is less noise in the rarefaction wave due to fluctuations within the reaction zone. As a result the average error decreases. The pointwise error is still dominated by the smearing of the pressure kink in the vicinity of the sonic point, which occurs at the end of the reaction zone.

  14. Ontology based log content extraction engine for a posteriori security control.

    PubMed

    Azkia, Hanieh; Cuppens-Boulahia, Nora; Cuppens, Frédéric; Coatrieux, Gouenou

    2012-01-01

    In a posteriori access control, users are accountable for actions they performed and must provide evidence, when required by some legal authorities for instance, to prove that these actions were legitimate. Generally, log files contain the data needed to achieve this goal. This logged data can be recorded in several formats; we consider here IHE-ATNA (Integrating the Healthcare Enterprise - Audit Trail and Node Authentication) as the log format. The difficulty lies in extracting useful information regardless of the log format. A posteriori access control frameworks often include a log filtering engine that provides this extraction function. In this paper we define and enforce this function by building an IHE-ATNA-based ontology model, which we query using SPARQL, and show how a posteriori security controls are made effective and easier based on this function.

  15. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-04-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  16. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-06-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  17. Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices

    NASA Astrophysics Data System (ADS)

    Finn, Conor; Lizier, Joseph

    2018-04-01

    What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.

  18. Joint inversion of regional and teleseismic earthquake waveforms

    NASA Astrophysics Data System (ADS)

    Baker, Mark R.; Doser, Diane I.

    1988-03-01

    A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.

  19. Considerations about expected a posteriori estimation in adaptive testing: adaptive a priori, adaptive correction for bias, and adaptive integration interval.

    PubMed

    Raiche, Gilles; Blais, Jean-Guy

    2009-01-01

    In a computerized adaptive test, we would like to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Unfortunately, decreasing the number of items is accompanied by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimate one or a combination of the following strategies: the adaptive correction for bias proposed by Bock and Mislevy (1982), an adaptive a priori estimate, and an adaptive integration interval.

  20. Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Casertano, Stefano

    1991-01-01

    A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.

  1. A-Posteriori Error Estimates for Mixed Finite Element and Finite Volume Methods for Problems Coupled Through a Boundary with Non-Matching Grids

    DTIC Science & Technology

    2013-08-01

    Error estimates for the quantities of interest, MFE ∑(e,ψ) and GFV ∑(e,ψ), are often similar in size. As a gross measure of the effect of geometric projection and of the use of quadrature, the estimates are also reported for coarse and fine forward solutions (Tables 1 and 2 of the report); for Table 1, the forward problem with solution (4.1) is run with adjoint data components ψu and ψp constant everywhere and ψξ = 0. The accompanying table of estimator ratios for the 20x20 and 32x32 adjoint grids is not reproduced here.

  2. Mesh refinement and numerical sensitivity analysis for parameter calibration of partial differential equations

    NASA Astrophysics Data System (ADS)

    Becker, Roland; Vexler, Boris

    2005-06-01

    We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach at hand of a parameter calibration problem for a model flow problem.

  3. Sensing Slow Mobility and Interesting Locations for Lombardy Region (italy): a Case Study Using Pointwise Geolocated Open Data

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Oxoli, D.; Zurbarán, M. A.

    2016-06-01

    During the past years, Web 2.0 technologies have led to the emergence of platforms where users can share data related to their activities, which in some cases are then publicly released with open licenses. Popular categories include community platforms where users upload GPS tracks collected during slow-travel activities (e.g. hiking, biking and horse riding) and platforms where users share their geolocated photos. However, due to the high heterogeneity of the information available on the Web, the sole use of these user-generated contents makes it an ambitious challenge to understand slow mobility flows as well as to detect the most visited locations in a region. Exploiting the data available on community sharing websites allows the collection of near real-time open data streams and enables rigorous spatial-temporal analysis. This work presents an approach for collecting, unifying and analysing pointwise geolocated open data available from different sources with the aim of identifying the main locations and destinations of slow mobility activities. For this purpose, we collected pointwise open data from the Wikiloc platform, Twitter, Flickr and Foursquare. The analysis was confined to data uploaded in the Lombardy Region (Northern Italy), corresponding to millions of pointwise data records. The collected data were processed using Free and Open Source Software (FOSS) and organized into a suitable database. This allowed us to run statistical analyses of the data distribution in both time and space, enabling the detection of users' slow mobility preferences as well as places of interest at a regional scale.

  4. Influence of erroneous patient records on population pharmacokinetic modeling and individual bayesian estimation.

    PubMed

    van der Meer, Aize Franciscus; Touw, Daniël J; Marcus, Marco A E; Neef, Cornelis; Proost, Johannes H

    2012-10-01

    Observational data sets can be used for population pharmacokinetic (PK) modeling. However, these data sets are generally less precisely recorded than experimental data sets. This article aims to investigate the influence of erroneous records on population PK modeling and individual maximum a posteriori Bayesian (MAPB) estimation. A total of 1123 patient records of neonates who were administered vancomycin were used for population PK modeling by iterative 2-stage Bayesian (ITSB) analysis. Cut-off values for weighted residuals were tested for exclusion of records from the analysis. A simulation study was performed to assess the influence of erroneous records on population modeling and individual MAPB estimation. The cut-off values for weighted residuals were also tested in the simulation study. Errors in registration have limited influence on the outcomes of population PK modeling but can have detrimental effects on individual MAPB estimation. A population PK model created from a data set with many registration errors has little influence on subsequent MAPB estimates for precisely recorded data. A weighted residual value of 2 for concentration measurements has good discriminative power for the identification of erroneous records. ITSB analysis and its individual estimates are hardly affected by most registration errors. Large registration errors can be detected by the weighted residuals of the concentration measurements.

  5. Multifractal surrogate-data generation algorithm that preserves pointwise Hölder regularity structure, with initial applications to turbulence

    NASA Astrophysics Data System (ADS)

    Keylock, C. J.

    2017-03-01

    An algorithm is described that can generate random variants of a time series while preserving the probability distribution of the original values and the pointwise Hölder regularity. Thus, it preserves the multifractal properties of the data. Our algorithm is similar in principle to well-known algorithms based on the preservation of the Fourier amplitude spectrum and the original values of a time series. However, it is underpinned by a dual-tree complex wavelet transform rather than a Fourier transform. Our method, which we term the iterated amplitude adjusted wavelet transform, can be used to generate bootstrapped versions of multifractal data, and because it preserves the pointwise Hölder regularity but not the local Hölder regularity, it can be used to test hypotheses concerning the presence of oscillating singularities in a time series, an important feature of turbulence and econophysics data. Because the locations of the data values are randomized with respect to the multifractal structure, hypotheses about their mutual coupling can be tested, which is important for the velocity-intermittency structure of turbulence and for self-regulating processes.
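
    For orientation, the sketch below implements the Fourier-based IAAFT algorithm that the proposed wavelet-based method generalizes: the target amplitude spectrum and the original value distribution are imposed alternately until the surrogate stabilizes. The dual-tree complex wavelet constraint that preserves pointwise Hölder regularity is not reproduced here, and the test series is synthetic.

```python
import numpy as np

def iaaft_surrogate(x, n_iter=100, seed=0):
    """Iterated amplitude adjusted Fourier transform surrogate of a 1-D series."""
    rng = np.random.default_rng(seed)
    sorted_x = np.sort(x)
    target_amp = np.abs(np.fft.rfft(x))      # amplitude spectrum to preserve
    s = rng.permutation(x)                   # start from a random shuffle
    for _ in range(n_iter):
        # impose the target amplitude spectrum, keeping the current phases
        spec = np.fft.rfft(s)
        s = np.fft.irfft(target_amp * np.exp(1j * np.angle(spec)), n=len(x))
        # impose the original value distribution by rank ordering
        ranks = np.argsort(np.argsort(s))
        s = sorted_x[ranks]
    return s

rng = np.random.default_rng(7)
x = np.cumsum(rng.standard_normal(1024))        # a correlated test series
s = iaaft_surrogate(x)
print("values preserved:", np.allclose(np.sort(s), np.sort(x)))
```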

  6. PSEUDO-CODEWORD LANDSCAPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CHERTKOV, MICHAEL; STEPANOV, MIKHAIL

    2007-01-10

    The authors discuss the performance of Low-Density Parity-Check (LDPC) codes decoded by Linear Programming (LP) decoding at moderate and large Signal-to-Noise Ratios (SNR). The Frame-Error-Rate (FER) dependence on SNR and the noise-space landscape of the coding/decoding scheme are analyzed by a combination of the previously introduced instanton/pseudo-codeword-search method and a new 'dendro' trick. To reduce the complexity of LP decoding for a code with high-degree checks (degree ≥ 5), they introduce its dendro-LDPC counterpart, that is, a code performing identically to the original one under Maximum-A-Posteriori (MAP) decoding but having reduced (down to three) check connectivity degree. Analyzing a number of popular LDPC codes and their dendro versions over the Additive White Gaussian Noise (AWGN) channel, they observed two qualitatively different regimes: (i) the error floor sets in early, at relatively low SNR, and (ii) the FER decays faster with increasing SNR at moderate SNR than at the largest SNR. They explain these regimes in terms of the pseudo-codeword spectra of the codes.

  7. Error Control Techniques for Satellite and Space Communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1996-01-01

    In this report, we present the results of our recent work on turbo coding in two formats. Appendix A includes the overheads of a talk that has been given at four different locations over the last eight months. This presentation has received much favorable comment from the research community and has resulted in the full-length paper included as Appendix B, 'A Distance Spectrum Interpretation of Turbo Codes'. Turbo codes use a parallel concatenation of rate-1/2 convolutional encoders combined with iterative maximum a posteriori probability (MAP) decoding to achieve a bit error rate (BER) of 10^-5 at a signal-to-noise ratio (SNR) of only 0.7 dB. The channel capacity for a rate-1/2 code with binary phase-shift-keyed modulation on the AWGN (additive white Gaussian noise) channel is 0 dB, and thus the turbo coding scheme comes within 0.7 dB of capacity at a BER of 10^-5.

  8. Mean phase predictor for maximum a posteriori demodulator

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1996-01-01

    A system and method for optimal maximum a posteriori (MAP) demodulation using a novel mean phase predictor. The mean phase predictor conducts cumulative averaging over multiple blocks of phase samples to provide accurate prior mean phases, to be input into a MAP phase estimator.

  9. Suboptimal schemes for atmospheric data assimilation based on the Kalman filter

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo; Cohn, Stephen E.

    1994-01-01

    This work is directed toward approximating the evolution of forecast error covariances for data assimilation. The performance of different algorithms based on simplification of the standard Kalman filter (KF) is studied. These are suboptimal schemes (SOSs) when compared to the KF, which is optimal for linear problems with known statistics. The SOSs considered here are several versions of optimal interpolation (OI), a scheme for height error variance advection, and a simplified KF in which the full height error covariance is advected. To employ a methodology for exact comparison among these schemes, a linear environment is maintained, in which a beta-plane shallow-water model linearized about a constant zonal flow is chosen for the test-bed dynamics. The results show that constructing dynamically balanced forecast error covariances rather than using conventional geostrophically balanced ones is essential for successful performance of any SOS. A posteriori initialization of SOSs to compensate for model - data imbalance sometimes results in poor performance. Instead, properly constructed dynamically balanced forecast error covariances eliminate the need for initialization. When the SOSs studied here make use of dynamically balanced forecast error covariances, the difference among their performances progresses naturally from conventional OI to the KF. In fact, the results suggest that even modest enhancements of OI, such as including an approximate dynamical equation for height error variances while leaving height error correlation structure homogeneous, go a long way toward achieving the performance of the KF, provided that dynamically balanced cross-covariances are constructed and that model errors are accounted for properly. The results indicate that such enhancements are necessary if unconventional data are to have a positive impact.

  10. Assessing NARCCAP climate model effects using spatial confidence regions.

    PubMed

    French, Joshua P; McGinnis, Seth; Schwartzman, Armin

    2017-01-01

    We assess similarities and differences between model effects for the North American Regional Climate Change Assessment Program (NARCCAP) climate models using varying classes of linear regression models. Specifically, we consider how the average temperature effect differs for the various global and regional climate model combinations, including assessment of possible interaction between the effects of global and regional climate models. We use both pointwise and simultaneous inference procedures to identify regions where global and regional climate model effects differ. We also show conclusively that results from pointwise inference are misleading, and that accounting for multiple comparisons is important for making proper inference.

  11. A revision of the gamma-evaluation concept for the comparison of dose distributions.

    PubMed

    Bakai, Annemarie; Alber, Markus; Nüsslin, Fridtjof

    2003-11-07

    A method for the quantitative four-dimensional (4D) evaluation of discrete dose data based on gradient-dependent local acceptance thresholds is presented. The method takes into account the local dose gradients of a reference distribution for critical appraisal of misalignment and collimation errors. These contribute to the maximum tolerable dose error at each evaluation point to which the local dose differences between comparison and reference data are compared. As shown, the presented concept is analogous to the gamma-concept of Low et al (1998a Med. Phys. 25 656-61) if extended to (3+1) dimensions. The pointwise dose comparisons of the reformulated concept are easier to perform and speed up the evaluation process considerably, especially for fine-grid evaluations of 3D dose distributions. The occurrences of false negative indications due to the discrete nature of the data are reduced with the method. The presented method was applied to film-measured, clinical data and compared with gamma-evaluations. 4D and 3D evaluations were performed. Comparisons prove that 4D evaluations have to be given priority, especially if complex treatment situations are verified, e.g., non-coplanar beam configurations.
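
    A sketch of a pointwise test with gradient-dependent local acceptance thresholds is given below, assuming 2D dose planes on a regular grid: a point passes when the local dose difference lies within sqrt(dD^2 + (dta*|grad D_ref|)^2), with dD the dose tolerance and dta the distance-to-agreement. Grid spacing, tolerances and the Gaussian dose distributions are illustrative, not taken from the paper.

```python
import numpy as np

def local_tolerance_test(d_ref, d_eval, spacing, dose_tol=0.03, dta=3.0):
    """Pointwise pass/fail map using gradient-dependent local acceptance thresholds."""
    grad = np.gradient(d_ref, spacing)                  # reference dose gradient components
    grad_mag = np.sqrt(sum(g**2 for g in grad))
    tol = np.sqrt((dose_tol * d_ref.max())**2 + (dta * grad_mag)**2)
    return np.abs(d_eval - d_ref) <= tol

# Example: Gaussian dose planes on a 1 mm grid, evaluation plane shifted by ~1 mm and +1%
y, x = np.mgrid[0:100, 0:100].astype(float)
d_ref = np.exp(-((x - 50)**2 + (y - 50)**2) / (2 * 15**2))
d_eval = 1.01 * np.exp(-((x - 51)**2 + (y - 50)**2) / (2 * 15**2))
passed = local_tolerance_test(d_ref, d_eval, spacing=1.0, dose_tol=0.03, dta=3.0)
print(f"pass rate: {passed.mean():.3f}")
```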

  12. Application of the a posteriori granddaughter design to the Holstein genome

    USDA-ARS?s Scientific Manuscript database

    An a posteriori granddaughter design was applied to determine haplotype effects for the Holstein genome. A total of 52 grandsire families, each with >=100 genotyped sons with genetic evaluations based on progeny tests, were analyzed for 33 traits (milk, fat, and protein yields; fat and protein perce...

  13. Multiple-Event Seismic Location Using the Markov-Chain Monte Carlo Technique

    NASA Astrophysics Data System (ADS)

    Myers, S. C.; Johannesson, G.; Hanley, W.

    2005-12-01

    We develop a new multiple-event location algorithm (MCMCloc) that utilizes the Markov-Chain Monte Carlo (MCMC) method. Unlike most inverse methods, the MCMC approach produces a suite of solutions, each of which is consistent with observations and prior estimates of data and model uncertainties. Model parameters in MCMCloc consist of event hypocenters and travel-time predictions. Data are arrival time measurements and phase assignments. A posteriori estimates of event locations, path corrections, pick errors, and phase assignments are made through analysis of the a posteriori suite of acceptable solutions. Prior uncertainty estimates include correlations between travel-time predictions, correlations between measurement errors, the probability of misidentifying one phase for another, and the probability of spurious data. Inclusion of prior constraints on location accuracy allows direct utilization of ground-truth locations or well-constrained location parameters (e.g. from InSAR) that aid in the accuracy of the solution. Implementation of a correlation structure for travel-time predictions allows MCMCloc to operate over arbitrarily large geographic areas. The transition in behavior between a multiple-event locator for tightly clustered events and a single-event locator for solitary events is controlled by the spatial correlation of travel-time predictions. We test the MCMC locator on a regional data set of Nevada Test Site nuclear explosions. Event locations and origin times are known for these events, allowing us to test the features of MCMCloc using a high-quality ground truth data set. Preliminary tests suggest that MCMCloc provides excellent relative locations, often outperforming traditional multiple-event location algorithms, and excellent absolute locations are attained when constraints from one or more ground truth events are included. When phase assignments are switched, we find that MCMCloc properly corrects the error when predicted arrival times are separated by several seconds. In cases where the predicted arrival times are within the combined uncertainty of prediction and measurement errors, MCMCloc determines the probability of one or the other phase assignment and propagates this uncertainty into all model parameters. We find that MCMCloc is a promising method for simultaneously locating large, geographically distributed data sets. Because we incorporate prior knowledge on many parameters, MCMCloc is ideal for combining trusted data with data of unknown reliability. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48, Contribution UCRL-ABS-215048

  14. Detection of longitudinal visual field progression in glaucoma using machine learning.

    PubMed

    Yousefi, Siamak; Kiwaki, Taichi; Zheng, Yuhui; Suigara, Hiroki; Asaoka, Ryo; Murata, Hiroshi; Lemij, Hans; Yamanishi, Kenji

    2018-06-16

    Global indices of standard automated perimetry are insensitive to localized losses, while point-wise indices are sensitive but highly variable. Region-wise indices sit in between. This study introduces a machine-learning-based index for glaucoma progression detection that outperforms global, region-wise, and point-wise indices. Development and comparison of a prognostic index. Visual fields from 2085 eyes of 1214 subjects were used to identify glaucoma progression patterns using machine learning. Visual fields from 133 eyes of 71 glaucoma patients were collected 10 times over 10 weeks to provide a no-change, test-retest dataset. The parameters of all methods were identified using visual field sequences in the test-retest dataset to meet fixed 95% specificity. An independent dataset of 270 eyes of 136 glaucoma patients and survival analysis were utilized to compare methods. The time to detect progression in 25% of the eyes in the longitudinal dataset using global mean deviation (MD) was 5.2 years (95% confidence interval, 4.1 - 6.5 years); 4.5 years (4.0 - 5.5) using region-wise, 3.9 years (3.5 - 4.6) using point-wise, and 3.5 years (3.1 - 4.0) using machine learning analysis. The time until 25% of eyes showed subsequently confirmed progression after two additional visits were included was 6.6 years (5.6 - 7.4 years), 5.7 years (4.8 - 6.7), 5.6 years (4.7 - 6.5), and 5.1 years (4.5 - 6.0) for global, region-wise, point-wise, and machine learning analyses, respectively. Machine learning analysis consistently detects progressing eyes earlier than the other methods, with or without confirmation visits. In particular, machine learning detects more slowly progressing eyes than other methods. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. A method for validation of finite element forming simulation on basis of a pointwise comparison of distance and curvature

    NASA Astrophysics Data System (ADS)

    Dörr, Dominik; Joppich, Tobias; Schirmaier, Fabian; Mosthaf, Tobias; Kärger, Luise; Henning, Frank

    2016-10-01

    Thermoforming of continuously fiber-reinforced thermoplastics (CFRTP) is ideally suited to thin-walled and complex-shaped products. By means of forming simulation, an initial validation of the producibility of a specific geometry, an optimization of the forming process, and the prediction of fiber reorientation due to forming are possible. Nevertheless, the applied methods need to be validated. Therefore, a method is presented that enables the calculation of error measures for the mismatch between simulation results and experimental tests, based on measurements with a conventional coordinate measuring device. Since a quantitative measure describing the curvature is provided, the presented method is also suitable for numerical or experimental sensitivity studies on wrinkling behavior. The applied methods for forming simulation, implemented in Abaqus explicit, are presented and applied to a generic geometry. The same geometry is tested experimentally, and simulation and test results are compared by the proposed validation method.

  16. On the error in the nucleus-centered multipolar expansion of molecular electron density and its topology: A direct-space computational study.

    PubMed

    Michael, J Robert; Koritsanszky, Tibor

    2017-05-28

    The convergence of nucleus-centered multipolar expansion of the quantum-chemical electron density (QC-ED), gradient, and Laplacian is investigated in terms of numerical radial functions derived by projecting stockholder atoms onto real spherical harmonics at each center. The partial sums of this exact one-center expansion are compared with the corresponding Hansen-Coppens pseudoatom (HC-PA) formalism [Hansen, N. K. and Coppens, P., "Testing aspherical atom refinements on small-molecule data sets," Acta Crystallogr., Sect. A 34, 909-921 (1978)] commonly utilized in experimental electron density studies. It is found that the latter model, due to its inadequate radial part, lacks pointwise convergence and fails to reproduce the local topology of the target QC-ED even at a high-order expansion. The significance of the quantitative agreement often found between HC-PA-based (quadrupolar-level) experimental and extended-basis QC-EDs can thus be challenged.

  17. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.

  18. On the error in the nucleus-centered multipolar expansion of molecular electron density and its topology: A direct-space computational study

    NASA Astrophysics Data System (ADS)

    Michael, J. Robert; Koritsanszky, Tibor

    2017-05-01

    The convergence of nucleus-centered multipolar expansion of the quantum-chemical electron density (QC-ED), gradient, and Laplacian is investigated in terms of numerical radial functions derived by projecting stockholder atoms onto real spherical harmonics at each center. The partial sums of this exact one-center expansion are compared with the corresponding Hansen-Coppens pseudoatom (HC-PA) formalism [Hansen, N. K. and Coppens, P., "Testing aspherical atom refinements on small-molecule data sets," Acta Crystallogr., Sect. A 34, 909-921 (1978)] commonly utilized in experimental electron density studies. It is found that the latter model, due to its inadequate radial part, lacks pointwise convergence and fails to reproduce the local topology of the target QC-ED even at a high-order expansion. The significance of the quantitative agreement often found between HC-PA-based (quadrupolar-level) experimental and extended-basis QC-EDs can thus be challenged.

  19. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Problem benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
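
    The first, combiner-based idea can be illustrated with a short sketch: average the a posteriori class-probability estimates of several classifiers and plug the averaged posterior into the Bayes-error expression E[1 - max_k p(k|x)]. This is only a toy illustration with synthetic Gaussian-mixture data and two scikit-learn classifiers of my choosing, not the article's experimental setup or its refined estimators.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import train_test_split

        # Toy data; the article's data sets and ensemble members differ.
        X, y = make_classification(n_samples=4000, n_features=10, n_informative=6,
                                   n_classes=3, n_clusters_per_class=1, random_state=1)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=1)

        # An ensemble of (deliberately different) posterior estimators.
        ensemble = [LogisticRegression(max_iter=1000).fit(Xtr, ytr),
                    GaussianNB().fit(Xtr, ytr)]

        # Average the a posteriori class-probability estimates over the ensemble.
        post = np.mean([clf.predict_proba(Xte) for clf in ensemble], axis=0)

        # Plug-in Bayes-error estimate: expected probability that even the most
        # probable class (under the averaged posterior) is wrong.
        bayes_error_estimate = np.mean(1.0 - post.max(axis=1))
        print("plug-in Bayes-error estimate from averaged posteriors:", bayes_error_estimate)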

  20. Parallel, adaptive finite element methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.

  1. Determination of quantitative trait variants by concordance via application of the a posteriori granddaughter design to the U.S. Holstein population

    USDA-ARS's Scientific Manuscript database

    Experimental designs that exploit family information can provide substantial predictive power in quantitative trait variant discovery projects. Concordance between quantitative trait locus genotype as determined by the a posteriori granddaughter design and marker genotype was determined for 29 trai...

  2. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  3. Fractal stock markets: International evidence of dynamical (in)efficiency.

    PubMed

    Bianchi, Sergio; Frezza, Massimiliano

    2017-07-01

    The last systemic financial crisis has reawakened the debate on the efficient nature of financial markets, traditionally described as semimartingales. The standard approaches to endow the general notion of efficiency of an empirical content turned out to be somewhat inconclusive and misleading. We propose a topological-based approach to quantify the informational efficiency of a financial time series. The idea is to measure the efficiency by means of the pointwise regularity of a (stochastic) function, given that the signature of a martingale is that its pointwise regularity equals 1/2. We provide estimates for real financial time series and investigate their (in)efficient behavior by comparing three main stock indexes.

  4. Fractal stock markets: International evidence of dynamical (in)efficiency

    NASA Astrophysics Data System (ADS)

    Bianchi, Sergio; Frezza, Massimiliano

    2017-07-01

    The last systemic financial crisis has reawakened the debate on the efficient nature of financial markets, traditionally described as semimartingales. The standard approaches to endow the general notion of efficiency of an empirical content turned out to be somewhat inconclusive and misleading. We propose a topological-based approach to quantify the informational efficiency of a financial time series. The idea is to measure the efficiency by means of the pointwise regularity of a (stochastic) function, given that the signature of a martingale is that its pointwise regularity equals 1/2 . We provide estimates for real financial time series and investigate their (in)efficient behavior by comparing three main stock indexes.
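
    A crude way to see the underlying idea is to estimate a pointwise regularity (Hölder-type) exponent from the scaling of local oscillations of the path: regress the log of the local range against the log of the scale. The sketch below, which is not the estimator used by the authors, does this for a simulated Brownian motion, for which the exponent should come out near 1/2.

        import numpy as np

        rng = np.random.default_rng(3)

        # Simulated "efficient" log-price path: Brownian motion, whose pointwise
        # regularity is 1/2 almost everywhere.
        n = 2 ** 14
        x = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)

        def pointwise_regularity(x, t, scales=(8, 16, 32, 64, 128)):
            """Estimate a pointwise regularity exponent at index t by regressing
            log(local oscillation) on log(scale)."""
            osc = []
            for s in scales:
                w = x[max(t - s, 0): t + s]
                osc.append(np.ptp(w))          # range of the path in the window
            slope, _ = np.polyfit(np.log(scales), np.log(osc), 1)
            return slope

        t = n // 2
        print("estimated exponent near the middle of the path:", pointwise_regularity(x, t))

    An estimated exponent systematically away from 1/2 over some time interval would be read, in the spirit of the abstract, as a sign of local (in)efficiency.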

  5. Assessing NARCCAP climate model effects using spatial confidence regions

    PubMed Central

    French, Joshua P.; McGinnis, Seth; Schwartzman, Armin

    2017-01-01

    We assess similarities and differences between model effects for the North American Regional Climate Change Assessment Program (NARCCAP) climate models using varying classes of linear regression models. Specifically, we consider how the average temperature effect differs for the various global and regional climate model combinations, including assessment of possible interaction between the effects of global and regional climate models. We use both pointwise and simultaneous inference procedures to identify regions where global and regional climate model effects differ. We also show conclusively that results from pointwise inference are misleading, and that accounting for multiple comparisons is important for making proper inference. PMID:28936474

  6. Asymptotic Time Decay in Quantum Physics: a Selective Review and Some New Results

    NASA Astrophysics Data System (ADS)

    Marchetti, Domingos H. U.; Wreszinski, Walter F.

    2013-05-01

    Decay of various quantities (return or survival probability, correlation functions) in time are the basis of a multitude of important and interesting phenomena in quantum physics, ranging from spectral properties, resonances, return and approach to equilibrium, to dynamical stability properties and irreversibility and the "arrow of time" in [Asymptotic Time Decay in Quantum Physics (World Scientific, 2013)]. In this review, we study several types of decay — decay in the average, decay in the Lp-sense, and pointwise decay — of the Fourier-Stieltjes transform of a measure, usually identified with the spectral measure, which appear naturally in different mathematical and physical settings. In particular, decay in the Lp-sense is related both to pointwise decay and to decay in the average and, from a physical standpoint, relates to a rigorous form of the time-energy uncertainty relation. Both decay on the average and in the Lp-sense are related to spectral properties, in particular, absolute continuity of the spectral measure. The study of pointwise decay for singular continuous measures (Rajchman measures) provides a bridge between ergodic theory, number theory and analysis, including the method of stationary phase. The theory is illustrated by some new results in the theory of sparse models.
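
    For orientation, the three notions of decay discussed above can be written out for the Fourier-Stieltjes transform of a measure; the averaged (Wiener-type) version below is the standard starting point, while the Lp and pointwise versions strengthen it (the precise hypotheses and interrelations are the subject of the review itself).

        \hat{\mu}(t) = \int_{\mathbb{R}} e^{-ixt}\, d\mu(x), \qquad
        \text{decay in the average:}\quad
        \frac{1}{2T}\int_{-T}^{T} |\hat{\mu}(t)|^{2}\, dt
          \;\xrightarrow[T\to\infty]{}\; \sum_{x} \mu(\{x\})^{2}
          \;(=0 \text{ iff } \mu \text{ is continuous, by Wiener's theorem}),
        \qquad
        \text{decay in } L^{p}:\quad \hat{\mu}\in L^{p}(\mathbb{R},dt),
        \qquad
        \text{pointwise decay:}\quad \hat{\mu}(t)\to 0 \ \text{as } |t|\to\infty .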

  7. Joint classification and contour extraction of large 3D point clouds

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10^9 points.
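
    The multi-scale neighbourhood idea can be sketched in a few lines: for each point, gather neighbours within several radii and compute eigenvalue-based covariance features at each scale. The radii, features, and random data below are placeholders, not the ones used in the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(7)
        points = rng.uniform(0.0, 10.0, size=(5000, 3))   # stand-in for a laser scan

        def multiscale_features(points, radii=(0.25, 0.5, 1.0)):
            """Per-point eigenvalue features (linearity, planarity, scattering) at several scales."""
            tree = cKDTree(points)
            feats = np.zeros((len(points), 3 * len(radii)))
            for s, r in enumerate(radii):
                for i, nbr_idx in enumerate(tree.query_ball_point(points, r)):
                    nbrs = points[nbr_idx]
                    if len(nbrs) < 4:              # too few neighbours at this scale
                        continue
                    cov = np.cov(nbrs.T)
                    l3, l2, l1 = np.sort(np.linalg.eigvalsh(cov))   # l1 >= l2 >= l3
                    feats[i, 3 * s: 3 * s + 3] = ((l1 - l2) / l1,   # linearity
                                                  (l2 - l3) / l1,   # planarity
                                                  l3 / l1)          # scattering
            return feats

        features = multiscale_features(points)
        print(features.shape)   # these would feed a point-wise classifier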

  8. Converting point-wise nuclear cross sections to pole representation using regularized vector fitting

    NASA Astrophysics Data System (ADS)

    Peng, Xingjie; Ducru, Pablo; Liu, Shichang; Forget, Benoit; Liang, Jingang; Smith, Kord

    2018-03-01

    Direct Doppler broadening of nuclear cross sections in Monte Carlo codes has been widely sought for coupled reactor simulations. One recent approach proposed analytical broadening using a pole representation of the commonly used resonance models and the introduction of a local windowing scheme to improve performance (Hwang, 1987; Forget et al., 2014; Josey et al., 2015, 2016). This pole representation has been achieved in the past by converting resonance parameters in the evaluation nuclear data library into poles and residues. However, cross sections of some isotopes are only provided as point-wise data in ENDF/B-VII.1 library. To convert these isotopes to pole representation, a recent approach has been proposed using the relaxed vector fitting (RVF) algorithm (Gustavsen and Semlyen, 1999; Gustavsen, 2006; Liu et al., 2018). This approach however needs to specify ahead of time the number of poles. This article addresses this issue by adding a poles and residues filtering step to the RVF procedure. This regularized VF (ReV-Fit) algorithm is shown to efficiently converge the poles close to the physical ones, eliminating most of the superfluous poles, and thus enabling the conversion of point-wise nuclear cross sections.
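
    As a hedged illustration of the pole-residue form only (not the RVF or ReV-Fit algorithms, which also optimize the pole locations and prune superfluous poles), the sketch below fits the residues of an assumed, fixed set of poles to point-wise cross-section data by linear least squares; the poles, energy grid, and "data" are synthetic, and the rational model is a simplified stand-in for the actual cross-section representation.

        import numpy as np

        rng = np.random.default_rng(11)

        # Assumed (fixed) complex poles; in a real conversion these would come from,
        # or be fitted alongside, resonance parameters.
        poles = np.array([5.0 - 0.05j, 9.0 - 0.20j, 14.0 - 0.10j])

        # Synthetic "point-wise" data generated from known residues plus noise.
        E = np.linspace(1.0, 20.0, 400)
        true_res = np.array([2.0 + 0.3j, 1.0 - 0.1j, 3.0 + 0.5j])
        sigma = np.sum((true_res[None, :] / (E[:, None] - poles[None, :])).real, axis=1)
        sigma += 0.01 * rng.standard_normal(E.size)

        # Linear least squares for the residues with the poles held fixed:
        # sigma(E) ~ sum_k Re(r_k / (E - p_k)); with r_k = a_k + i b_k this is
        # a_k * Re(1/(E - p_k)) - b_k * Im(1/(E - p_k)), i.e. linear in (a_k, b_k).
        basis = 1.0 / (E[:, None] - poles[None, :])
        A = np.hstack([basis.real, -basis.imag])
        coef, *_ = np.linalg.lstsq(A, sigma, rcond=None)
        fitted_res = coef[:3] + 1j * coef[3:]
        print("fitted residues:", fitted_res)
        print("true residues:  ", true_res)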

  9. Towards efficient backward-in-time adjoint computations using data compression techniques

    DOE PAGES

    Cyr, E. C.; Shadid, J. N.; Wildey, T.

    2014-12-16

    In the context of a posteriori error estimation for nonlinear time-dependent partial differential equations, the state-of-the-practice is to use adjoint approaches which require the solution of a backward-in-time problem defined by a linearization of the forward problem. One of the major obstacles in the practical application of these approaches, we found, is the need to store, or recompute, the forward solution to define the adjoint problem and to evaluate the error representation. Our study considers the use of data compression techniques to approximate forward solutions employed in the backward-in-time integration. The development derives an error representation that accounts for the difference between the standard approach and the compressed approximation of the forward solution. This representation is algorithmically similar to the standard representation and only requires the computation of the quantity of interest for the forward solution and the data-compressed reconstructed solution (i.e. scalar quantities that can be evaluated as the forward problem is integrated). This approach is then compared with existing techniques, such as checkpointing and time-averaged adjoints. Lastly, we provide numerical results indicating the potential efficiency of our approach on a transient diffusion–reaction equation and on the Navier–Stokes equations. These results demonstrate memory compression ratios up to 450× while maintaining reasonable accuracy in the error-estimates.

  10. Neural network modeling and an uncertainty analysis in Bayesian framework: A case study from the KTB borehole site

    NASA Astrophysics Data System (ADS)

    Maiti, Saumen; Tiwari, Ram Krishna

    2010-10-01

    A new probabilistic approach based on the concept of Bayesian neural network (BNN) learning theory is proposed for decoding litho-facies boundaries from well-log data. We show how a multi-layer perceptron neural network model can be employed in a Bayesian framework to classify changes in litho-log successions. The method is then applied to the German Continental Deep Drilling Program (KTB) well-log data for classification and uncertainty estimation in the litho-facies boundaries. In this framework, the a posteriori distribution of the network parameters is estimated via the principle of Bayesian probabilistic theory, and an objective function is minimized following the scaled conjugate gradient optimization scheme. For the model development, we impose a suitable criterion, which provides probabilistic information by emulating different combinations of synthetic data. Uncertainty in the relationship between the data and the model space is appropriately taken care of by assuming a Gaussian a priori distribution of the network parameters (e.g., synaptic weights and biases). Prior to applying the new method to the real KTB data, we tested the proposed method on synthetic examples to examine the sensitivity of neural network hyperparameters in prediction. Within this framework, we examine the stability and efficiency of this new probabilistic approach using different kinds of synthetic data assorted with different levels of correlated noise. Our data analysis suggests that the designed network topology based on the Bayesian paradigm is steady up to nearly 40% correlated noise; however, adding more noise (~50% or more) degrades the results. We perform uncertainty analyses on training, validation, and test data sets with and without intrinsic noise by making the Gaussian approximation of the a posteriori distribution about the peak model. We present a standard deviation error-map at the network output corresponding to the three types of the litho-facies present over the entire litho-section of the KTB. The comparisons of maximum a posteriori geological sections constructed here, based on the maximum a posteriori probability distribution, with the available geological information and the existing geophysical findings suggest that the BNN results reveal some additional finer details in the KTB borehole data at certain depths, which appear to be of some geological significance. We also demonstrate that the proposed BNN approach is superior to the conventional artificial neural network in terms of both avoiding "over-fitting" and aiding uncertainty estimation, which are vital for meaningful interpretation of geophysical records. Our analyses demonstrate that the BNN-based approach renders a robust means for the classification of complex changes in the litho-facies successions and thus could provide a useful guide for understanding the crustal inhomogeneity and the structural discontinuity in many other tectonically complex regions.

  11. A simple robust and accurate a posteriori sub-cell finite volume limiter for the discontinuous Galerkin method on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Loubère, Raphaël

    2016-08-01

    In this paper we propose a simple, robust and accurate nonlinear a posteriori stabilization of the Discontinuous Galerkin (DG) finite element method for the solution of nonlinear hyperbolic PDE systems on unstructured triangular and tetrahedral meshes in two and three space dimensions. This novel a posteriori limiter, which has been recently proposed for the simple Cartesian grid case in [62], is able to resolve discontinuities at a sub-grid scale and is substantially extended here to general unstructured simplex meshes in 2D and 3D. It can be summarized as follows: At the beginning of each time step, an approximation of the local minimum and maximum of the discrete solution is computed for each cell, taking into account also the vertex neighbors of an element. Then, an unlimited discontinuous Galerkin scheme of approximation degree N is run for one time step to produce a so-called candidate solution. Subsequently, an a posteriori detection step checks the unlimited candidate solution at time t^(n+1) for positivity, absence of floating point errors and whether the discrete solution has remained within or at least very close to the bounds given by the local minimum and maximum computed in the first step. Elements that do not satisfy all the previously mentioned detection criteria are flagged as troubled cells. For these troubled cells, the candidate solution is discarded as inappropriate and consequently needs to be recomputed. Within these troubled cells the old discrete solution at the previous time t^n is scattered onto small sub-cells (N_s = 2N + 1 sub-cells per element edge), in order to obtain a set of sub-cell averages at time t^n. Then, a more robust second order TVD finite volume scheme is applied to update the sub-cell averages within the troubled DG cells from time t^n to time t^(n+1). The new sub-grid data at time t^(n+1) are finally gathered back into a valid cell-centered DG polynomial of degree N by using a classical conservative and higher order accurate finite volume reconstruction technique. Consequently, if the number N_s is sufficiently large (N_s ≥ N + 1), the subscale resolution capability of the DG scheme is fully maintained, while preserving at the same time an essentially non-oscillatory behavior of the solution at discontinuities. Many standard DG limiters only adjust the discrete solution in troubled cells, based on the limiting of higher order moments or by applying a nonlinear WENO/HWENO reconstruction on the data at the new time t^(n+1). Instead, our new DG limiter entirely recomputes the troubled cells by solving the governing PDE system again starting from valid data at the old time level t^n, but using this time a more robust scheme on the sub-grid level. In other words, the piecewise polynomials produced by the new limiter are the result of a more robust solution of the PDE system itself, while most standard DG limiters are simply based on a mere nonlinear data post-processing of the discrete solution. Technically speaking, the new method corresponds to an element-wise checkpointing and restarting of the solver, using a lower order scheme on the sub-grid. As a result, the present DG limiter is even able to cure floating point errors like NaN values that have occurred after divisions by zero or after the computation of roots from negative numbers. This is a unique feature of our new algorithm among existing DG limiters.
The new a posteriori sub-cell stabilization approach is developed within a high order accurate one-step ADER-DG framework on multidimensional unstructured meshes for hyperbolic systems of conservation laws as well as for hyperbolic PDE with non-conservative products. The method is applied to the Euler equations of compressible gas dynamics, to the ideal magneto-hydrodynamics equations (MHD) as well as to the seven-equation Baer-Nunziato model of compressible multi-phase flows. A large set of standard test problems is solved in order to assess the accuracy and robustness of the new limiter.
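
    A drastically simplified analogue of the detect-and-recompute idea can be written down for a 1D finite volume scheme; the sketch below is not the DG sub-cell limiter of the paper (it works on cell averages only and ignores flux consistency at troubled-cell interfaces), but it shows the same a posteriori pattern: run an unlimited accurate step, check the candidate for NaNs and a relaxed local maximum principle, and recompute only the flagged cells with a robust low-order scheme starting from the old data.

        import numpy as np

        # Periodic linear advection u_t + a u_x = 0 with discontinuous initial data.
        N, a, cfl = 200, 1.0, 0.45
        x = np.linspace(0.0, 1.0, N, endpoint=False)
        dx = 1.0 / N
        dt = cfl * dx / a
        u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)

        def step_unlimited(u):
            """Second-order Lax-Wendroff step: accurate but oscillatory near jumps."""
            c = a * dt / dx
            return (u - 0.5 * c * (np.roll(u, -1) - np.roll(u, 1))
                      + 0.5 * c * c * (np.roll(u, -1) - 2 * u + np.roll(u, 1)))

        def step_robust(u):
            """First-order upwind step: dissipative but bound preserving."""
            c = a * dt / dx
            return u - c * (u - np.roll(u, 1))

        eps = 1e-7
        for _ in range(200):
            # Local bounds from the old solution and its immediate neighbours.
            lo = np.minimum(np.minimum(u, np.roll(u, 1)), np.roll(u, -1))
            hi = np.maximum(np.maximum(u, np.roll(u, 1)), np.roll(u, -1))
            cand = step_unlimited(u)                       # candidate solution
            troubled = (~np.isfinite(cand)) | (cand < lo - eps) | (cand > hi + eps)
            robust = step_robust(u)                        # recomputed from old data
            u = np.where(troubled, robust, cand)           # replace only troubled cells

        print("min/max after 200 steps:", u.min(), u.max())   # stays essentially in [0, 1]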

  12. Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items

    ERIC Educational Resources Information Center

    Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong

    2012-01-01

    For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…

  13. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  14. A Posteriori Restoration of Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  15. Does the sensorimotor system minimize prediction error or select the most likely prediction during object lifting?

    PubMed Central

    McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.

    2016-01-01

    The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821
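
    The dissociation between the two strategies is easy to see numerically: for a skewed weight distribution, the prediction that minimizes expected squared error is the distribution's mean, while the maximum a posteriori prediction is its mode, and the two differ. The toy numbers below are invented for illustration and are not the weights used in the experiment.

        import numpy as np

        # A hypothetical skewed distribution over three possible object weights (g).
        weights = np.array([200.0, 300.0, 700.0])
        probs = np.array([0.6, 0.3, 0.1])        # skewed: light weights are most likely

        # Strategy 1: minimize expected squared prediction error -> posterior mean.
        mse_prediction = np.sum(probs * weights)

        # Strategy 2: maximum a posteriori -> the single most likely weight (the mode).
        map_prediction = weights[np.argmax(probs)]

        print("minimal-squared-error prediction (mean):", mse_prediction)   # 280 g
        print("maximum a posteriori prediction (mode): ", map_prediction)   # 200 g

    The study's question is which of these two predictions the measured grip and load forces track when the lifted object's weight follows such a skewed distribution.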

  16. SUBGR: A Program to Generate Subgroup Data for the Subgroup Resonance Self-Shielding Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog

    2016-06-06

    The Subgroup Data Generation (SUBGR) program generates subgroup data, including levels and weights from the resonance self-shielded cross section table as a function of background cross section. Depending on the nuclide and the energy range, these subgroup data can be generated by (a) narrow resonance approximation, (b) pointwise flux calculations for homogeneous media, and (c) pointwise flux calculations for heterogeneous lattice cells. The latter two options are performed by the AMPX module IRFFACTOR. These subgroup data are to be used in the Consortium for Advanced Simulation of Light Water Reactors (CASL) neutronic simulator MPACT, for which the primary resonance self-shielding method is the subgroup method.

  17. Asymptotic behavior for systems of nonlinear wave equations with multiple propagation speeds in three space dimensions

    NASA Astrophysics Data System (ADS)

    Katayama, Soichiro

    We consider the Cauchy problem for systems of nonlinear wave equations with multiple propagation speeds in three space dimensions. Under the null condition for such systems, the global existence of small amplitude solutions is known. In this paper, we will show that the global solution is asymptotically free in the energy sense, by obtaining the asymptotic pointwise behavior of the derivatives of the solution. Nonetheless we can also show that the pointwise behavior of the solution itself may be quite different from that of the free solution. In connection with the above results, a theorem is also developed to characterize asymptotically free solutions for wave equations in arbitrary space dimensions.

  18. Analysis of the stress field in a wedge using the fast expansions with pointwise determined coefficients

    NASA Astrophysics Data System (ADS)

    Chernyshov, A. D.; Goryainov, V. V.; Danshin, A. A.

    2018-03-01

    The stress problem for an elastic wedge-shaped cutter of finite dimensions with mixed boundary conditions is considered. The differential problem is reduced to a system of linear algebraic equations by applying fast expansions twice, with respect to the angular and radial coordinates. The unknown coefficients of the fast expansions are determined by the pointwise method. The derived solution has an explicit analytical form and is valid over the entire domain, including its boundary. Computed profiles of the displacements and stresses in a cross-section of the cutter are provided, and the stress field is investigated for various values of the opening angle and the cusp radius.

  19. Distributed mean curvature on a discrete manifold for Regge calculus

    NASA Astrophysics Data System (ADS)

    Conboye, Rory; Miller, Warner A.; Ray, Shannon

    2015-09-01

    The integrated mean curvature of a simplicial manifold is well understood in both Regge Calculus and Discrete Differential Geometry. However, a well motivated pointwise definition of curvature requires a careful choice of the volume over which to uniformly distribute the local integrated curvature. We show that hybrid cells formed using both the simplicial lattice and its circumcentric dual emerge as a remarkably natural structure for the distribution of this local integrated curvature. These hybrid cells form a complete tessellation of the simplicial manifold, contain a geometric orthonormal basis, and are also shown to give a pointwise mean curvature with a natural interpretation as the fractional rate of change of the normal vector.

  20. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  1. High-resolution moisture profiles from full-waveform probabilistic inversion of TDR signals

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Huisman, Johan Alexander; Jacques, Diederik

    2014-11-01

    This study presents a novel Bayesian inversion scheme for high-dimensional underdetermined TDR waveform inversion. The methodology quantifies uncertainty in the moisture content distribution, using a Gaussian Markov random field (GMRF) prior as a regularization operator. A spatial resolution of 1 cm along a 70-cm long TDR probe is considered for the inferred moisture content. Numerical testing shows that the proposed inversion approach works very well in the case of a perfect model and Gaussian measurement errors. Real-world application results are generally satisfactory. For a series of TDR measurements made during imbibition and evaporation from a laboratory soil column, the average root-mean-square error (RMSE) between the maximum a posteriori (MAP) moisture distribution and reference TDR measurements is 0.04 cm^3 cm^-3. This RMSE value reduces to less than 0.02 cm^3 cm^-3 for a field application in a podzol soil. The observed model-data discrepancies are primarily due to model inadequacy, such as our simplified modeling of the bulk soil electrical conductivity profile. Among the important issues that should be addressed in future work are the explicit inference of the soil electrical conductivity profile along with the other sampled variables, the modeling of the temperature-dependence of the coaxial cable properties and the definition of an appropriate statistical model of the residual errors.

  2. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.

  3. Ability of the current global observing network to constrain N2O sources and sinks

    NASA Astrophysics Data System (ADS)

    Millet, D. B.; Wells, K. C.; Chaliyakunnel, S.; Griffis, T. J.; Henze, D. K.; Bousserez, N.

    2014-12-01

    The global observing network for atmospheric N2O combines flask and in-situ measurements at ground stations with sustained and campaign-based aircraft observations. In this talk we apply a new global model of N2O (based on GEOS-Chem) and its adjoint to assess the strengths and weaknesses of this network for quantifying N2O emissions. We employ an ensemble of pseudo-observation analyses to evaluate the relative constraints provided by ground-based (surface, tall tower) and airborne (HIPPO, CARIBIC) observations, and the extent to which variability (e.g. associated with pulsing or seasonality of emissions) not captured by the a priori inventory can bias the inferred fluxes. We find that the ground-based and HIPPO datasets each provide a stronger constraint on the distribution of global emissions than does the CARIBIC dataset on its own. Given appropriate initial conditions, we find that our inferred surface fluxes are insensitive to model errors in the stratospheric loss rate of N2O over the timescale of our analysis (2 years); however, the same is not necessarily true for model errors in stratosphere-troposphere exchange. Finally, we examine the a posteriori error reduction distribution to identify priority locations for future N2O measurements.

  4. A feasibility study on estimation of tissue mixture contributions in 3D arterial spin labeling sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing

    2017-03-01

    Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to its relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain an accurate CBF estimate, the contribution of each tissue type in the mixture is desirable. In current ASL studies, this is generally obtained by registering the ASL image to a structural image. This approach yields the probability of each tissue type within each voxel, but it also introduces errors, including errors of the registration algorithm and imaging errors in the ASL and structural acquisitions. Therefore, estimating the mixture percentages directly from ASL data is greatly needed. Under the assumption that the ASL signal follows a Gaussian distribution and that the tissue types are independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Given the sensitivity of MAP-EM to initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrate that the GM and WM patterns across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.

  5. Assimilation of surface NO2 and O3 observations into the SILAM chemistry transport model

    NASA Astrophysics Data System (ADS)

    Vira, J.; Sofiev, M.

    2014-08-01

    This paper describes assimilation of trace gas observations into the chemistry transport model SILAM using the 3D-Var method. Assimilation results for year 2012 are presented for the prominent photochemical pollutants ozone (O3) and nitrogen dioxide (NO2). Both species are covered by the Airbase observation database, which provides the observational dataset used in this study. Attention is paid to the background and observation error covariance matrices, which are obtained primarily by iterative application of a posteriori diagnostics. The diagnostics are computed separately for two months representing summer and winter conditions, and further disaggregated by time of day. This allows deriving background and observation error covariance definitions which include both seasonal and diurnal variation. The consistency of the obtained covariance matrices is verified using χ2 diagnostics. The analysis scores are computed for a control set of observation stations withheld from assimilation. Compared to a free-running model simulation, the correlation coefficient for daily maximum values is improved from 0.8 to 0.9 for O3 and from 0.53 to 0.63 for NO2.
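
    For reference, a minimal matrix form of the 3D-Var analysis and of the χ2 consistency check is sketched below with small invented matrices; SILAM's operational implementation naturally works with far larger state vectors, an iterative minimiser, and the diagnosed, time-dependent covariances described in the paper.

        import numpy as np

        # Tiny invented example: 3 state variables, 2 observations.
        xb = np.array([40.0, 55.0, 60.0])                  # background (a priori) state
        y = np.array([50.0, 58.0])                         # observations
        H = np.array([[1.0, 0.0, 0.0],                     # observation operator
                      [0.0, 0.5, 0.5]])
        B = np.diag([25.0, 25.0, 25.0])                    # background error covariance
        R = np.diag([4.0, 4.0])                            # observation error covariance

        # 3D-Var analysis, equivalent to minimising
        #   J(x) = 1/2 (x-xb)^T B^-1 (x-xb) + 1/2 (y-Hx)^T R^-1 (y-Hx).
        d = y - H @ xb                                     # innovation
        S = H @ B @ H.T + R                                # innovation covariance
        K = B @ H.T @ np.linalg.inv(S)                     # gain
        xa = xb + K @ d                                    # analysis

        # Chi-square diagnostic: E[d^T S^-1 d] should equal the number of observations
        # if B and R are consistent with the actual innovation statistics.
        chi2 = d @ np.linalg.solve(S, d)
        print("analysis:", xa)
        print("chi2 per observation:", chi2 / len(y))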

  6. Adaptive h-refinement for reduced-order models

    DOE PAGES

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.

  7. Soft-output decoding algorithms in iterative decoding of turbo codes

    NASA Technical Reports Server (NTRS)

    Benedetto, S.; Montorsi, G.; Divsalar, D.; Pollara, F.

    1996-01-01

    In this article, we present two versions of a simplified maximum a posteriori decoding algorithm. The algorithms work in a sliding window form, like the Viterbi algorithm, and can thus be used to decode continuously transmitted sequences obtained by parallel concatenated codes, without requiring code trellis termination. A heuristic explanation is also given of how to embed the maximum a posteriori algorithms into the iterative decoding of parallel concatenated codes (turbo codes). The performances of the two algorithms are compared on the basis of a powerful rate 1/3 parallel concatenated code. Basic circuits to implement the simplified a posteriori decoding algorithm using lookup tables, and two further approximations (linear and threshold), with a very small penalty, to eliminate the need for lookup tables are proposed.

  8. Nonlinear BCJR equalizer for suppression of intrachannel nonlinearities in 40 Gb/s optical communications systems.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane

    2006-05-29

    A maximum a posteriori probability (MAP) symbol decoding supplemented with iterative decoding is proposed as an effective means for suppression of intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs that are processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, hence preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for a 40 Gb/s upgrade over existing 10 Gb/s infrastructure.

  9. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the variance difference between the maximum likelihood and expected a posteriori estimation methods viewed from the number of test items of an aptitude test. The variance represents the accuracy generated by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  10. Performance enhancement of wireless mobile adhoc networks through improved error correction and ICI cancellation

    NASA Astrophysics Data System (ADS)

    Sabir, Zeeshan; Babar, M. Inayatullah; Shah, Syed Waqar

    2012-12-01

    A mobile adhoc network (MANET) is an arrangement of wireless mobile nodes that dynamically and freely self-organize into temporary and arbitrary network topologies. Orthogonal frequency division multiplexing (OFDM) is the foremost choice for MANET system designers at the physical layer due to its inherent property of high data rate transmission and the correspondingly high spectral efficiency. The downside of OFDM is its sensitivity to synchronization errors (frequency offsets and symbol timing). Most present-day techniques employing OFDM for data transmission support mobility as one of the primary features. This mobility produces Doppler frequencies and hence small frequency offsets, resulting in intercarrier interference (ICI), which degrades signal quality through crosstalk between the subcarriers of the OFDM symbol. An efficient frequency-domain block-type pilot-assisted ICI mitigation scheme is proposed in this article, which removes the effect of channel frequency offsets from the received OFDM symbols. The second problem addressed in this article is the noise induced by different sources into the received symbol, which increases its bit error rate and makes it unsuitable for many applications. Forward-error-correcting turbo codes are employed in the proposed model; they add redundant bits to the system that are later used for error detection and correction. At the receiver end, a maximum a posteriori (MAP) decoding algorithm is implemented using two component MAP decoders. These decoders exchange interleaved extrinsic soft information with each other in the form of log-likelihood ratios, improving the previous estimate of each decoded bit in every iteration.

  11. Adaptive-Mesh-Refinement for hyperbolic systems of conservation laws based on a posteriori stabilized high order polynomial reconstructions

    NASA Astrophysics Data System (ADS)

    Semplice, Matteo; Loubère, Raphaël

    2018-02-01

    In this paper we propose a third order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells on a so-called candidate solution computed at each stage of a third order Runge-Kutta scheme. Such detection may include different properties, derived from physics, such as positivity, from numerics, such as a non-oscillatory behavior, or from computer requirements such as the absence of NaN's. Troubled cell values are discarded and re-computed starting again from the previous time-step using a more dissipative scheme but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0 we switch from a third-order to a first-order accurate but more stable scheme. The entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner because if some refinement is needed at the end of a time step, then the current time-step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure coupled with the entropy-based AMR technology can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, interfaces, etc. Moreover numerical evidences show that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.

  12. Automatic simplification of systems of reaction-diffusion equations by a posteriori analysis.

    PubMed

    Maybank, Philip J; Whiteley, Jonathan P

    2014-02-01

    Many mathematical models in biology and physiology are represented by systems of nonlinear differential equations. In recent years these models have become increasingly complex in order to explain the enormous volume of data now available. A key role of modellers is to determine which components of the model have the greatest effect on a given observed behaviour. An approach for automatically fulfilling this role, based on a posteriori analysis, has recently been developed for nonlinear initial value ordinary differential equations [J.P. Whiteley, Model reduction using a posteriori analysis, Math. Biosci. 225 (2010) 44-52]. In this paper we extend this model reduction technique for application to both steady-state and time-dependent nonlinear reaction-diffusion systems. Exemplar problems drawn from biology are used to demonstrate the applicability of the technique. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Hardware Implementation of Serially Concatenated PPM Decoder

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael

    2009-01-01

    A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop (see figure). Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from apriori LLRs by traversing the code trellis in forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors. To lower the required memory for storing channel likelihood data and the amounts of data transfer between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in operation of the decoder. This is accomplished in the receiver by transmitting only a subset consisting of the likelihoods that correspond to time slots containing the largest numbers of observed photons during each PPM symbol period. The assumed number of observed photons in the remaining time slots is set to the mean of a noise slot. In low background noise, the selection of a small subset in this manner results in only negligible loss. Other features of the decoder design to reduce complexity and increase speed include (1) quantization of metrics in an efficient procedure chosen to incur no more than a small performance loss and (2) the use of the max-star function that allows sum of exponentials to be computed by simple operations that involve only an addition, a subtraction, and a table lookup. Another prominent feature of the design is a provision for access to interleaver and de-interleaver memory in a single clock cycle, eliminating the multiple clock-cycle latency characteristic of prior interleaver and de-interleaver designs.
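
    The max-star operation mentioned above is small enough to write out. The exact form and two cheaper variants (a precomputed correction table standing in for the hardware lookup, and a threshold form that drops the correction for large differences) are sketched below; the table size, grid, and threshold are illustrative choices, not the values used in the flight design.

        import numpy as np

        def max_star_exact(a, b):
            """max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)."""
            return max(a, b) + np.log1p(np.exp(-abs(a - b)))

        # A small correction table indexed by |a - b| stands in for the lookup table.
        _grid = np.linspace(0.0, 5.0, 32)
        _table = np.log1p(np.exp(-_grid))

        def max_star_lut(a, b):
            """Table-lookup approximation: add a precomputed correction term."""
            d = min(abs(a - b), _grid[-1])
            return max(a, b) + _table[int(round(d / _grid[1]))]

        def max_star_threshold(a, b, t=2.0):
            """Threshold approximation: use a constant correction only when |a - b| is small."""
            return max(a, b) + (np.log(2.0) if abs(a - b) < t else 0.0)

        for pair in [(0.3, 0.1), (2.0, -1.0), (5.0, 4.9)]:
            print(pair, max_star_exact(*pair), max_star_lut(*pair), max_star_threshold(*pair))

    Replacing every sum of exponentials in the BCJR recursions by such max-star operations is what keeps the log-domain decoder down to additions, subtractions, comparisons, and table lookups.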

  14. New approach to estimating variability in visual field data using an image processing technique.

    PubMed Central

    Crabb, D P; Edgar, D F; Fitzke, F W; McNaught, A I; Wynn, H P

    1995-01-01

    AIMS--A new framework for evaluating pointwise sensitivity variation in computerised visual field data is demonstrated. METHODS--A measure of local spatial variability (LSV) is generated using an image processing technique. Fifty five eyes from a sample of normal and glaucomatous subjects, examined on the Humphrey field analyser (HFA), were used to illustrate the method. RESULTS--Significant correlation between LSV and conventional estimates--namely, HFA pattern standard deviation and short term fluctuation, were found. CONCLUSION--LSV is not dependent on normals' reference data or repeated threshold determinations, thus potentially reducing test time. Also, the illustrated pointwise maps of LSV could provide a method for identifying areas of fluctuation commonly found in early glaucomatous field loss. PMID:7703196

  15. Constraining East Asian CO2 emissions with GOSAT retrievals: methods and policy implications

    NASA Astrophysics Data System (ADS)

    Shim, C.; Henze, D. K.; Deng, F.

    2017-12-01

    East Asia produces the world's largest CO2 emissions. However, there are large uncertainties in CO2 emission inventories, mainly because of imperfections in bottom-up statistics and a lack of observations for validating emission fluxes, particularly over China. Here we constrain East Asian CO2 emissions with GOSAT retrievals, applying 4D-Var GEOS-Chem and its adjoint model. We applied the inversion only to the cold season (November - February) of 2009 - 2010, since the summer monsoon and the greater transboundary impacts in spring and fall greatly reduced the number of usable GOSAT retrievals. In the cold season, the a posteriori CO2 emissions over East Asia are generally higher by 5 - 20%; Northeastern China in particular shows markedly higher a posteriori emissions (~20%), a region where the Chinese government has recently been focusing on mitigating air pollutants. On the other hand, a posteriori emissions from Southern China are lower by 10 - 25%. A posteriori emissions in Korea and Japan are mostly higher by 10%, except over the Kyushu region. With our top-down estimates from the 4D-Var CO2 inversion, we will evaluate the current regional CO2 emission inventories and potential uncertainties in the sectoral emissions. This study will help to quantify anthropogenic CO2 emissions over East Asia and will provide policy implications for mitigation targets.

  16. [Methods of a posteriori identification of food patterns in Brazilian children: a systematic review].

    PubMed

    Carvalho, Carolina Abreu de; Fonsêca, Poliana Cristina de Almeida; Nobre, Luciana Neri; Priore, Silvia Eloiza; Franceschini, Sylvia do Carmo Castro

    2016-01-01

    The objective of this study is to provide guidance for identifying dietary patterns using the a posteriori approach, and to analyze the methodological aspects of the studies conducted in Brazil that identified the dietary patterns of children. Articles were selected from the Latin American and Caribbean Literature on Health Sciences, Scientific Electronic Library Online and Pubmed databases. The key words were: Dietary pattern; Food pattern; Principal Components Analysis; Factor analysis; Cluster analysis; Reduced rank regression. We included studies that identified dietary patterns of children using the a posteriori approach. Seven studies published between 2007 and 2014 were selected, six of which were cross-sectional and one a cohort study. Five studies used the food frequency questionnaire for dietary assessment; one used a 24-hour dietary recall and the other a food list. The exploratory method used in most publications was principal components factor analysis, followed by cluster analysis. The sample size of the studies ranged from 232 to 4231, the values of the Kaiser-Meyer-Olkin test from 0.524 to 0.873, and Cronbach's alpha from 0.51 to 0.69. Few Brazilian studies identified dietary patterns of children using the a posteriori approach, and principal components factor analysis was the technique most used.
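
    As an illustration of the a posteriori technique most often reported above (principal components factor analysis), the sketch below extracts components from a simulated subjects-by-food-groups matrix and applies the Kaiser criterion; the data and loadings are invented for demonstration only, and real analyses would also apply rotation and check KMO and Cronbach's alpha.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated food-frequency data: two latent dietary patterns drive 8 food groups.
n_subjects, n_foods = 300, 8
pattern_scores = rng.normal(0, 1, (n_subjects, 2))
loadings_true = np.zeros((n_foods, 2))
loadings_true[:4, 0] = 0.8                      # food groups 0-3 load on pattern 1
loadings_true[4:, 1] = 0.8                      # food groups 4-7 load on pattern 2
X = pattern_scores @ loadings_true.T + rng.normal(0, 0.6, (n_subjects, n_foods))

# Principal components of the correlation matrix (a posteriori pattern extraction).
R = np.corrcoef(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(R)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

n_factors = int(np.sum(eigval > 1))             # Kaiser criterion: eigenvalues > 1
loadings = eigvec[:, :n_factors] * np.sqrt(eigval[:n_factors])
print("retained components:", n_factors)
print("loadings:\n", np.round(loadings, 2))
```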

  17. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
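
    A minimal sketch of the one-step-late MAP idea described above, applied to a toy 1D problem with two coupled "longitudinal" images and a quadratic difference penalty; the system matrix, penalty strength and problem size are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D problem: two longitudinal "images" x1, x2 seen through the same system matrix A.
n_pix, n_bins = 20, 40
A = rng.uniform(0.0, 1.0, size=(n_bins, n_pix))          # illustrative system matrix
x1_true = np.ones(n_pix); x1_true[8:12] = 4.0             # baseline with a "tumour"
x2_true = x1_true.copy(); x2_true[8:12] = 6.0             # follow-up: tumour activity grows
y1 = rng.poisson(A @ x1_true)
y2 = rng.poisson(A @ x2_true)

beta = 0.05                                               # penalty strength (assumed)
sens = A.sum(axis=0)                                      # sensitivity image

def osl_map_em_step(x, y, x_other):
    """One-step-late MAP-EM update with a quadratic coupling penalty
    U = 0.5 * beta * sum_j (x_j - x_other_j)^2, evaluated at the current estimates."""
    ratio = y / np.maximum(A @ x, 1e-12)
    backproj = A.T @ ratio
    penalty_grad = beta * (x - x_other)                   # one-step-late: uses old estimates
    return x * backproj / np.maximum(sens + penalty_grad, 1e-12)

x1 = np.ones(n_pix); x2 = np.ones(n_pix)
for _ in range(200):                                      # simultaneous longitudinal updates
    x1, x2 = osl_map_em_step(x1, y1, x2), osl_map_em_step(x2, y2, x1)

print(np.round(x1[6:14], 2))
print(np.round(x2[6:14], 2))
```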

  18. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.

    PubMed

    Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang

    2017-01-01

    Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection for noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and considers the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements of fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.

  19. Noise stochastic corrected maximum a posteriori estimator for birefringence imaging using polarization-sensitive optical coherence tomography

    PubMed Central

    Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki

    2017-01-01

    This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator described in this paper is based on the relationship between probability distribution functions of the measured birefringence and the effective signal to noise ratio (ESNR) as well as the true birefringence and the true ESNR. The Monte Carlo method is used to numerically describe this relationship and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. Improved estimation is shown for the new estimator with stochastic model of ESNR in comparison to the old estimator, both based on the Jones matrix noise model. A comparison with the mean estimator is also done. Numerical simulation validates the superiority of the new estimator. The superior performance of the new estimator was also shown by in vivo measurement of optic nerve head. PMID:28270974

  20. Structural Properties and Estimation of Delay Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kwong, R. H. S.

    1975-01-01

    Two areas in the theory of delay systems were studied: structural properties and their applications to feedback control, and optimal linear and nonlinear estimation. The concepts of controllability, stabilizability, observability, and detectability were investigated. The property of pointwise degeneracy of linear time-invariant delay systems is considered. Necessary and sufficient conditions for three dimensional linear systems to be made pointwise degenerate by delay feedback were obtained, while sufficient conditions for this to be possible are given for higher dimensional linear systems. These results were applied to obtain solvability conditions for the minimum time output zeroing control problem by delay feedback. A representation theorem is given for conditional moment functionals of general nonlinear stochastic delay systems, and stochastic differential equations are derived for conditional moment functionals satisfying certain smoothness properties.

  1. Kernel Wiener filter and its application to pattern recognition.

    PubMed

    Yoshino, Hirokazu; Dong, Chen; Washizawa, Yoshikazu; Yamashita, Yukihiko

    2010-11-01

    The Wiener filter (WF) is widely used for inverse problems. From an observed signal, it provides the best estimated signal with respect to the squared error averaged over the original and the observed signals among linear operators. The kernel WF (KWF), extended directly from WF, has a problem that an additive noise has to be handled by samples. Since the computational complexity of kernel methods depends on the number of samples, a huge computational cost is necessary for the case. By using the first-order approximation of kernel functions, we realize KWF that can handle such a noise not by samples but as a random variable. We also propose the error estimation method for kernel filters by using the approximations. In order to show the advantages of the proposed methods, we conducted the experiments to denoise images and estimate errors. We also apply KWF to classification since KWF can provide an approximated result of the maximum a posteriori classifier that provides the best recognition accuracy. The noise term in the criterion can be used for the classification in the presence of noise or a new regularization to suppress changes in the input space, whereas the ordinary regularization for the kernel method suppresses changes in the feature space. In order to show the advantages of the proposed methods, we conducted experiments of binary and multiclass classifications and classification in the presence of noise.

  2. Joint Denoising/Compression of Image Contours via Shape Prior and Context Tree

    NASA Astrophysics Data System (ADS)

    Zheng, Amin; Cheung, Gene; Florencio, Dinei

    2018-07-01

    With the advent of depth sensing technologies, the extraction of object contours in images---a common and important pre-processing step for later higher-level computer vision tasks like object detection and human action recognition---has become easier. However, acquisition noise in captured depth images means that detected contours suffer from unavoidable errors. In this paper, we propose to jointly denoise and compress detected contours in an image for bandwidth-constrained transmission to a client, who can then carry out aforementioned application-specific tasks using the decoded contours as input. We first prove theoretically that in general a joint denoising / compression approach can outperform a separate two-stage approach that first denoises then encodes contours lossily. Adopting a joint approach, we first propose a burst error model that models typical errors encountered in an observed string y of directional edges. We then formulate a rate-constrained maximum a posteriori (MAP) problem that trades off the posterior probability p(x'|y) of an estimated string x' given y with its code rate R(x'). We design a dynamic programming (DP) algorithm that solves the posed problem optimally, and propose a compact context representation called total suffix tree (TST) that can reduce complexity of the algorithm dramatically. Experimental results show that our joint denoising / compression scheme outperformed a competing separate scheme in rate-distortion performance noticeably.

  3. Magnetohydrodynamic cellular automata

    NASA Technical Reports Server (NTRS)

    Montgomery, David; Doolen, Gary D.

    1987-01-01

    A generalization of the hexagonal lattice gas model of Frisch, Hasslacher and Pomeau is shown to lead to two-dimensional magnetohydrodynamics. The method relies on the ideal point-wise conservation law for vector potential.

  4. A Bayesian Approach to Systematic Error Correction in Kepler Photometric Time Series

    NASA Astrophysics Data System (ADS)

    Jenkins, Jon Michael; VanCleve, J.; Twicken, J. D.; Smith, J. C.; Kepler Science Team

    2011-01-01

    In order for the Kepler mission to achieve its required 20 ppm photometric precision for 6.5 hr observations of 12th magnitude stars, the Presearch Data Conditioning (PDC) software component of the Kepler Science Processing Pipeline must reduce systematic errors in flux time series to the limit of stochastic noise for errors with time-scales less than three days, without smoothing or over-fitting away the transits that Kepler seeks. The current version of PDC co-trends against ancillary engineering data and Pipeline-generated data using essentially a least squares (LS) approach. This approach is successful for quiet stars when all sources of systematic error have been identified. If the stars are intrinsically variable or some sources of systematic error are unknown, LS will nonetheless attempt to explain all of a given time series, not just the part the model can explain well. Negative consequences can include loss of astrophysically interesting signal, and injection of high-frequency noise into the result. As a remedy, we present a Bayesian Maximum A Posteriori (MAP) approach, in which a subset of intrinsically quiet and highly-correlated stars is used to establish the probability density function (PDF) of robust fit parameters in a diagonalized basis. The PDFs then determine a “reasonable” range for the fit parameters for all stars, and brake the runaway fitting that can distort signals and inject noise. We present a closed-form solution for Gaussian PDFs, and show examples using publicly available Quarter 1 Kepler data. A companion poster (Van Cleve et al.) shows applications and discusses current work in more detail. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
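
    The closed-form Gaussian MAP solution referred to above amounts to regularised least squares: the prior mean and covariance of the fit coefficients pull the fit back toward behaviour established from quiet stars. A minimal numerical sketch (with invented basis vectors and prior values) follows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cotrending problem: flux time series modelled as y = A c + noise, with a Gaussian
# prior N(mu, Sigma) on the coefficients c. The closed-form MAP estimate is
#   c_MAP = (A^T A / s^2 + Sigma^-1)^-1 (A^T y / s^2 + Sigma^-1 mu).
n = 500
t = np.linspace(0, 1, n)
A = np.column_stack([t, np.sin(6 * np.pi * t), np.ones(n)])   # illustrative basis vectors
c_true = np.array([0.8, 0.3, 1.0])
sigma = 0.05
y = A @ c_true + rng.normal(0, sigma, n)

mu_prior = np.array([0.7, 0.2, 1.0])          # prior mean, e.g. from "quiet" stars (assumed)
Sigma_prior = np.diag([0.1, 0.1, 0.1]) ** 2   # prior covariance (assumed)

Sigma_inv = np.linalg.inv(Sigma_prior)
lhs = A.T @ A / sigma**2 + Sigma_inv
rhs = A.T @ y / sigma**2 + Sigma_inv @ mu_prior
c_map = np.linalg.solve(lhs, rhs)
c_ls = np.linalg.lstsq(A, y, rcond=None)[0]

print("LS :", np.round(c_ls, 3))
print("MAP:", np.round(c_map, 3))
```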

  5. An analysis of the convergence of Newton iterations for solving elliptic Kepler's equation

    NASA Astrophysics Data System (ADS)

    Elipe, A.; Montijano, J. I.; Rández, L.; Calvo, M.

    2017-12-01

    In this note a study of the convergence properties of some starters E_0 = E_0(e, M) in the eccentricity-mean anomaly variables for solving the elliptic Kepler's equation (KE) by Newton's method is presented. By using a theorem of Wang Xinghua (Xinghua in Math Comput 68(225):169-186, 1999) on best possible error bounds in the solution of nonlinear equations by Newton's method, we obtain for each starter E_0(e, M) a set of values (e, M) ∈ [0, 1) × [0, π] that lead to q-convergence in the sense that Newton's sequence (E_n)_{n ≥ 0} generated from E_0 = E_0(e, M) is well defined, converges to the exact solution E* = E*(e, M) of KE, and furthermore |E_n − E*| ≤ q^{2^n − 1} |E_0 − E*| holds for all n ≥ 0. This study completes in some sense the results derived by Avendaño et al. (Celest Mech Dyn Astron 119:27-44, 2014) using Smale's α-test with q = 1/2. Also, since in KE the convergence rate of Newton's method tends to zero as e → 0, we show that the error estimates given in Wang Xinghua's theorem for KE can also be used to determine sets of q-convergence with q = e^k q̃ for all e ∈ [0, 1) and a fixed q̃ ≤ 1. Some remarks on the use of this theorem to derive a priori estimates of the error |E_n − E*| after n Kepler iterations are given. Finally, a posteriori bounds of this error that can be used for a dynamical estimation of the error are also obtained.
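
    For concreteness, a minimal Newton iteration for Kepler's equation with a computable a posteriori error bound is sketched below; the bound used, |E_n − E*| ≤ |f(E_n)| / (1 − e), follows from the mean value theorem and is an illustration, not the sharper bound of the cited theorem.

```python
import math

def kepler_newton(M: float, e: float, tol: float = 1e-14, max_iter: int = 50):
    """Solve Kepler's equation E - e*sin(E) = M by Newton's method, returning the
    iterate together with a simple a posteriori error bound.

    Since f(E) = E - e*sin(E) - M has f'(E) = 1 - e*cos(E) >= 1 - e > 0, the mean
    value theorem gives |E_n - E*| <= |f(E_n)| / (1 - e), a computable bound.
    """
    E = M if e < 0.8 else math.pi          # a common simple starter (assumption)
    bound = float("inf")
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M
        bound = abs(f) / (1.0 - e)         # a posteriori error bound
        if bound < tol:
            break
        E -= f / (1.0 - e * math.cos(E))   # Newton step
    return E, bound

E, err = kepler_newton(M=1.0, e=0.6)
print(f"E = {E:.15f}, a posteriori bound = {err:.2e}")
```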

  6. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2014-01-01

    Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released in the atmosphere during the accident of the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such critical context where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, the retrieved source term being very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations with only activity concentration data.

  7. Evaluation of Argos Telemetry Accuracy in the High-Arctic and Implications for the Estimation of Home-Range Size

    PubMed Central

    Christin, Sylvain; St-Laurent, Martin-Hugues; Berteaux, Dominique

    2015-01-01

    Animal tracking through Argos satellite telemetry has enormous potential to test hypotheses in animal behavior, evolutionary ecology, or conservation biology. Yet the applicability of this technique cannot be fully assessed because no clear picture exists as to the conditions influencing the accuracy of Argos locations. Latitude, type of environment, and transmitter movement are among the main candidate factors affecting accuracy. A posteriori data filtering can remove “bad” locations, but again testing is still needed to refine filters. First, we evaluate experimentally the accuracy of Argos locations in a polar terrestrial environment (Nunavut, Canada), with both static and mobile transmitters transported by humans and coupled to GPS transmitters. We report static errors among the lowest published. However, the 68th error percentiles of mobile transmitters were 1.7 to 3.8 times greater than those of static transmitters. Second, we test how different filtering methods influence the quality of Argos location datasets. Accuracy of location datasets was best improved when filtering in locations of the best classes (LC3 and 2), while the Douglas Argos filter and a homemade speed filter yielded similar performance while retaining more locations. All filters effectively reduced the 68th error percentiles. Finally, we assess how location error impacted, at six spatial scales, two common estimators of home-range size (a proxy of animal space use behavior synthetizing movements), the minimum convex polygon and the fixed kernel estimator. Location error led to a sometimes dramatic overestimation of home-range size, especially at very local scales. We conclude that Argos telemetry is appropriate to study medium-size terrestrial animals in polar environments, but recommend that location errors are always measured and evaluated against research hypotheses, and that data are always filtered before analysis. How movement speed of transmitters affects location error needs additional research. PMID:26545245

  8. Efficient model reduction of parametrized systems by matrix discrete empirical interpolation

    NASA Astrophysics Data System (ADS)

    Negri, Federico; Manzoni, Andrea; Amsallem, David

    2015-12-01

    In this work, we apply a Matrix version of the so-called Discrete Empirical Interpolation (MDEIM) for the efficient reduction of nonaffine parametrized systems arising from the discretization of linear partial differential equations. Dealing with affinely parametrized operators is crucial in order to enhance the online solution of reduced-order models (ROMs). However, in many cases such an affine decomposition is not readily available, and must be recovered through (often) intrusive procedures, such as the empirical interpolation method (EIM) and its discrete variant DEIM. In this paper we show that MDEIM represents a very efficient approach to deal with complex physical and geometrical parametrizations in a non-intrusive, efficient and purely algebraic way. We propose different strategies to combine MDEIM with a state approximation resulting either from a reduced basis greedy approach or Proper Orthogonal Decomposition. A posteriori error estimates accounting for the MDEIM error are also developed in the case of parametrized elliptic and parabolic equations. Finally, the capability of MDEIM to generate accurate and efficient ROMs is demonstrated on the solution of two computationally-intensive classes of problems occurring in engineering contexts, namely PDE-constrained shape optimization and parametrized coupled problems.

  9. Assimilation of surface NO2 and O3 observations into the SILAM chemistry transport model

    NASA Astrophysics Data System (ADS)

    Vira, J.; Sofiev, M.

    2015-02-01

    This paper describes the assimilation of trace gas observations into the chemistry transport model SILAM (System for Integrated modeLling of Atmospheric coMposition) using the 3D-Var method. Assimilation results for the year 2012 are presented for the prominent photochemical pollutants ozone (O3) and nitrogen dioxide (NO2). Both species are covered by the AirBase observation database, which provides the observational data set used in this study. Attention was paid to the background and observation error covariance matrices, which were obtained primarily by the iterative application of a posteriori diagnostics. The diagnostics were computed separately for 2 months representing summer and winter conditions, and further disaggregated by time of day. This enabled the derivation of background and observation error covariance definitions, which included both seasonal and diurnal variation. The consistency of the obtained covariance matrices was verified using χ2 diagnostics. The analysis scores were computed for a control set of observation stations withheld from assimilation. Compared to a free-running model simulation, the correlation coefficient for daily maximum values was improved from 0.8 to 0.9 for O3 and from 0.53 to 0.63 for NO2.
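
    The 3D-Var analysis step and the χ2 consistency diagnostic mentioned above can be written in a few lines for a toy problem; the grid size, background covariance and observation operator below are illustrative, not the SILAM configuration.

```python
import numpy as np

# Minimal 3D-Var analysis: minimise
#   J(x) = (x - xb)^T B^-1 (x - xb) + (y - Hx)^T R^-1 (y - Hx),
# whose minimiser has the closed form xa = xb + K (y - H xb), K = B H^T (H B H^T + R)^-1.
n, m = 5, 3
xb = np.array([40., 42., 45., 43., 41.])                 # background field (e.g. O3, ppb)
H = np.zeros((m, n)); H[0, 0] = H[1, 2] = H[2, 4] = 1.0  # observe grid points 0, 2, 4
y = np.array([44., 50., 39.])                            # station observations
B = 4.0 * np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 2.0)
R = 1.0 * np.eye(m)

S = H @ B @ H.T + R
K = B @ H.T @ np.linalg.inv(S)                           # gain matrix
xa = xb + K @ (y - H @ xb)                               # analysis

# chi^2 consistency diagnostic: E[d^T S^-1 d] = m when B and R are consistent.
d = y - H @ xb
chi2 = d @ np.linalg.inv(S) @ d
print("analysis:", np.round(xa, 2), " chi2/m:", round(chi2 / m, 2))
```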

  10. Maximum a posteriori decoder for digital communications

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  11. Extracting volatility signal using maximum a posteriori estimation

    NASA Astrophysics Data System (ADS)

    Neto, David

    2016-11-01

    This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and then log-returns marginal distributions with heavy tails. We consider two routes to choose the regularization and we compare our MAP estimate to realized volatility measure for three exchange rates.
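
    A minimal sketch of the MAP idea described above: the log-volatility path is estimated by minimising a Gaussian negative log-likelihood plus a double-exponential (Laplace) penalty on the increments, which tolerates sharp jumps. The smoothed absolute value and the penalty weight are assumptions made for the illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy data: log-returns r_t ~ N(0, exp(h_t)) with one abrupt jump in log-volatility h_t.
T = 300
h_true = np.where(np.arange(T) < 150, -2.0, -0.5)
r = rng.normal(0, np.exp(h_true / 2))

lam = 10.0                                              # penalty weight (assumed)

def neg_log_posterior(h):
    nll = 0.5 * np.sum(h + r**2 * np.exp(-h))           # Gaussian likelihood in h (up to const.)
    prior = lam * np.sum(np.sqrt(np.diff(h)**2 + 1e-8)) # smoothed Laplace prior on increments
    return nll + prior

res = minimize(neg_log_posterior, np.zeros(T), method="L-BFGS-B")
h_map = res.x
print("mean |error| of MAP log-volatility:", round(np.mean(np.abs(h_map - h_true)), 3))
```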

  12. Approximation by the iterates of Bernstein operator

    NASA Astrophysics Data System (ADS)

    Zapryanova, Teodora; Tachev, Gancho

    2012-11-01

    We study the degree of pointwise approximation of the iterates of the Bernstein operator to their limiting operator. We obtain quantitative estimates related to the conjecture of Gonska and Raşa from 2006.
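
    The convergence of the iterates B_n^m f to the limiting operator (the linear interpolant between f(0) and f(1), for fixed n as m grows) can be checked numerically; the sketch below evaluates the iterates pointwise for an arbitrary test function.

```python
import numpy as np
from math import comb

def bernstein(f_vals_at_nodes, x, n):
    """Evaluate (B_n f)(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k) from node values f(k/n)."""
    k = np.arange(n + 1)
    coeffs = np.array([comb(n, j) for j in k], dtype=float)
    return np.sum(f_vals_at_nodes * coeffs * x**k * (1 - x)**(n - k))

def iterated_bernstein(f, x, n, m):
    """Evaluate (B_n^m f)(x), the m-th iterate of the Bernstein operator."""
    nodes = np.arange(n + 1) / n
    vals = f(nodes)                                  # values of f at the nodes k/n
    for _ in range(m - 1):                           # build values of B_n^(m-1) f at the nodes
        vals = np.array([bernstein(vals, t, n) for t in nodes])
    return bernstein(vals, x, n)                     # one more application, evaluated at x

f = lambda t: np.sin(np.pi * t) + t                  # test function
x, n = 0.3, 10
limit = f(0) * (1 - x) + f(1) * x                    # limiting operator: linear interpolant
for m in (1, 5, 50, 500):
    print(m, round(abs(iterated_bernstein(f, x, n, m) - limit), 6))
```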

  13. Velocity-intermittency structure for wake flow of the pitched single wind turbine under different inflow conditions

    NASA Astrophysics Data System (ADS)

    Crist, Ryan; Cal, Raul Bayoan; Ali, Naseem; Rockel, Stanislav; Peinke, Joachim; Hoelling, Michael

    2017-11-01

    The velocity-intermittency quadrant method is used to characterize the flow structure of the wake flow in the boundary layer of a wind turbine array. A multifractal framework represents the intermittency as a pointwise Hölder exponent. A 3×3 wind turbine array tested experimentally provided velocity signals at a 21×9 grid of downstream locations, measured via hot-wire anemometry. The results show a negative correlation between the velocity and the intermittency at the hub height and bottom tip, whereas the top tip regions show a positive correlation. Sweep and ejection events, defined in terms of velocity and intermittency, are dominant downstream from the rotor. The pointwise results reflect large-scale organization of the flow and velocity-intermittency events corresponding to a foreshortened recirculation region near the hub height and the bottom tip.

  14. Soot Volume Fraction Imaging

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S.; Ku, Jerry C.

    1994-01-01

    A new technique is described for the full-field determination of soot volume fractions via laser extinction measurements. This technique differs from previously reported point-wise methods in that a two-dimensional array (i.e., image) of data is acquired simultaneously. In this fashion, the net data rate is increased, allowing the study of time-dependent phenomena and the investigation of spatial and temporal correlations. A telecentric imaging configuration is employed to provide depth-invariant magnification and to permit the specification of the collection angle for scattered light. To improve the threshold measurement sensitivity, a method is employed to suppress undesirable coherent imaging effects. A discussion of the tomographic inversion process is provided, including the results obtained from numerical simulation. Results obtained with this method from an ethylene diffusion flame are shown to be in close agreement with those previously obtained by sequential point-wise interrogation.

  15. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661

  16. Iterative universal state selective correction for the Brillouin-Wigner multireference coupled-cluster theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banik, Subrata; Ravichandran, Lalitha; Brabec, Jiri

    2015-03-21

    As a further development of the previously introduced a posteriori Universal State-Selective (USS) corrections [K. Kowalski, J. Chem. Phys. 134, 194107 (2011)] and [Brabec et al., J. Chem. Phys. 136, 124102 (2012)], we suggest an iterative form of the USS correction by means of correcting effective Hamiltonian matrix elements. We also formulate USS corrections via the left Bloch equations. The convergence of the USS corrections with excitation level towards the FCI limit is also investigated. Various forms of the USS and simplified diagonal USSD corrections at the SD and SD(T) levels are numerically assessed on several model systems and on the ozone and tetramethyleneethane molecules. It is shown that the iterative USS correction can successfully replace the previously developed a posteriori BWCC size-extensivity correction, while it is not sensitive to intruder states and also performs well in other cases where the a posteriori correction fails, e.g., for the asymmetric vibration mode of ozone.

  17. Sampling-free Bayesian inversion with adaptive hierarchical tensor representations

    NASA Astrophysics Data System (ADS)

    Eigel, Martin; Marschall, Manuel; Schneider, Reinhold

    2018-03-01

    A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject to the discretization parameters. Moreover, it enables a fully adaptive a posteriori control with automatic problem-dependent adjustments of the employed discretizations. The method is discussed in the context of modern hierarchical tensor representations, which are used for the evaluation of a random PDE (the forward model) and the subsequent high-dimensional quadrature of the log-likelihood, alleviating the ‘curse of dimensionality’. Numerical experiments demonstrate the performance and confirm the theoretical results.

  18. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form c x^K (1 - x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  19. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
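
    The linearity stated in the two records above follows from Beta-Binomial conjugacy: with a Beta prior on the rate and binomially distributed counts, the posterior mean (the MMSE estimate) is an affine function of the observed count. A small numerical check, with assumed prior parameters, follows.

```python
import numpy as np

rng = np.random.default_rng(3)

# For a rate p with a Beta(a, b) prior and a binomial count k out of n trials,
# the posterior is Beta(a + k, b + n - k), so the MMSE estimate (posterior mean)
#   E[p | k] = (a + k) / (a + b + n)
# is linear in the observation k.
a, b = 2.0, 5.0                          # assumed prior parameters
p_true = rng.beta(a, b)
n = 40
k = rng.binomial(n, p_true)              # observed number of jumps in n steps

p_mmse = (a + k) / (a + b + n)           # linear in k
p_ml = k / n                             # maximum likelihood, for comparison
print(f"true={p_true:.3f}  MMSE={p_mmse:.3f}  ML={p_ml:.3f}")
```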

  20. An Indoor Slam Method Based on Kinect and Multi-Feature Extended Information Filter

    NASA Astrophysics Data System (ADS)

    Chang, M.; Kang, Z.

    2017-09-01

    Based on the ORB-SLAM framework, in this paper the transformation parameters between adjacent Kinect image frames are computed using ORB keypoints, from which the a priori information matrix and information vector are calculated; the motion update of the multi-feature extended information filter is then realized. From the point cloud formed by the depth image, the ICP algorithm is used to extract point features in the scene and to build an observation model, while the a posteriori information matrix and information vector are calculated, weakening the influence of error accumulation in the positioning process. Furthermore, the ORB-SLAM framework is applied to realize real-time autonomous positioning in an unknown indoor environment. Finally, lidar data collected in the scene are used to assess the positioning accuracy of the proposed method.
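
    A minimal sketch of the information-form measurement update that underlies an extended information filter; the 2D state and direct position observation below are illustrative stand-ins for the paper's ORB/ICP observation model.

```python
import numpy as np

# Extended information filter measurement update: the state is carried as an information
# matrix Omega and information vector xi, with mean mu = Omega^-1 xi. For a measurement
# z = h(mu) + noise with Jacobian H and covariance R:
#   Omega <- Omega + H^T R^-1 H,    xi <- xi + H^T R^-1 (z - h(mu) + H mu).
def eif_measurement_update(Omega, xi, z, h, H, R):
    mu = np.linalg.solve(Omega, xi)                      # recover the current mean
    R_inv = np.linalg.inv(R)
    Omega_new = Omega + H.T @ R_inv @ H
    xi_new = xi + H.T @ R_inv @ (z - h(mu) + H @ mu)
    return Omega_new, xi_new

# Example: a 2D position with a noisy direct observation of both coordinates.
Omega = np.eye(2) * 0.5                                  # weak prior information
xi = Omega @ np.array([1.0, 2.0])                        # prior mean (1, 2)
H = np.eye(2)
z = np.array([1.4, 1.8])
Omega, xi = eif_measurement_update(Omega, xi, z, h=lambda mu: mu, H=H, R=0.1 * np.eye(2))
print("posterior mean:", np.round(np.linalg.solve(Omega, xi), 3))
```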

  1. A Method for Retrieving Ground Flash Fraction from Satellite Lightning Imager Data

    NASA Technical Reports Server (NTRS)

    Koshak, William J.

    2009-01-01

    A general theory for retrieving the fraction of ground flashes in a set of N lightning flashes observed by a satellite-based lightning imager is provided. An "exponential model" is applied as a physically reasonable constraint to describe the measured optical parameter distributions, and population statistics (i.e., mean, variance) are invoked to impose additional constraints on the retrieval process. The retrieval itself is expressed in terms of a Bayesian inference, and the Maximum A Posteriori (MAP) solution is obtained. The approach is tested by performing simulated retrievals, and retrieval error statistics are provided. The ability to retrieve ground flash fraction has important benefits to the atmospheric chemistry community. For example, using the method to partition the existing satellite global lightning climatology into separate ground and cloud flash climatologies will improve estimates of lightning nitrogen oxides (NOx) production; this in turn will improve both regional air quality and global chemistry/climate model predictions.
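
    As a toy illustration of a MAP retrieval of a ground flash fraction under an exponential measurement model, the sketch below grid-searches the posterior of a mixture fraction; the two exponential means and the flat prior are assumptions for demonstration, not the parameters of the cited method.

```python
import numpy as np

rng = np.random.default_rng(6)

# Ground and cloud flashes are assumed to yield exponentially distributed optical
# observables with different means; the observed sample is a mixture of the two.
mu_ground, mu_cloud = 1.0, 3.0          # assumed means of the two populations
alpha_true, N = 0.3, 500                # true ground flash fraction and sample size
is_ground = rng.random(N) < alpha_true
x = np.where(is_ground, rng.exponential(mu_ground, N), rng.exponential(mu_cloud, N))

def log_posterior(a):
    mix = a * np.exp(-x / mu_ground) / mu_ground + (1 - a) * np.exp(-x / mu_cloud) / mu_cloud
    return np.sum(np.log(mix))          # flat prior on a, so posterior is proportional to likelihood

alphas = np.linspace(0.001, 0.999, 999)
alpha_map = alphas[np.argmax([log_posterior(a) for a in alphas])]
print(f"true fraction={alpha_true}, MAP estimate={alpha_map:.3f}")
```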

  2. Adaptive mixed finite element methods for Darcy flow in fractured porous media

    NASA Astrophysics Data System (ADS)

    Chen, Huangxin; Salama, Amgad; Sun, Shuyu

    2016-10-01

    In this paper, we propose adaptive mixed finite element methods for simulating the single-phase Darcy flow in two-dimensional fractured porous media. The reduced model that we use for the simulation is a discrete fracture model coupling Darcy flows in the matrix and the fractures, and the fractures are modeled by one-dimensional entities. The Raviart-Thomas mixed finite element methods are utilized for the solution of the coupled Darcy flows in the matrix and the fractures. In order to improve the efficiency of the simulation, we use adaptive mixed finite element methods based on novel residual-based a posteriori error estimators. In addition, we develop an efficient upscaling algorithm to compute the effective permeability of the fractured porous media. Several interesting examples of Darcy flow in the fractured porous media are presented to demonstrate the robustness of the algorithm.

  3. A Bayesian approach to tracking patients having changing pharmacokinetic parameters

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Jelliffe, Roger W.

    2004-01-01

    This paper considers the updating of Bayesian posterior densities for pharmacokinetic models associated with patients having changing parameter values. For estimation purposes it is proposed to use the Interacting Multiple Model (IMM) estimation algorithm, which is currently a popular algorithm in the aerospace community for tracking maneuvering targets. The IMM algorithm is described, and compared to the multiple model (MM) and Maximum A-Posteriori (MAP) Bayesian estimation methods, which are presently used for posterior updating when pharmacokinetic parameters do not change. Both the MM and MAP Bayesian estimation methods are used in their sequential forms, to facilitate tracking of changing parameters. Results indicate that the IMM algorithm is well suited for tracking time-varying pharmacokinetic parameters in acutely ill and unstable patients, incurring only about half of the integrated error compared to the sequential MM and MAP methods on the same example.

  4. Automated computation of autonomous spectral submanifolds for nonlinear modal analysis

    NASA Astrophysics Data System (ADS)

    Ponsioen, Sten; Pedergnana, Tiemo; Haller, George

    2018-04-01

    We discuss an automated computational methodology for computing two-dimensional spectral submanifolds (SSMs) in autonomous nonlinear mechanical systems of arbitrary degrees of freedom. In our algorithm, SSMs, the smoothest nonlinear continuations of modal subspaces of the linearized system, are constructed up to arbitrary orders of accuracy, using the parameterization method. An advantage of this approach is that the construction of the SSMs does not break down when the SSM folds over its underlying spectral subspace. A further advantage is an automated a posteriori error estimation feature that enables a systematic increase in the orders of the SSM computation until the required accuracy is reached. We find that the present algorithm provides a major speed-up, relative to numerical continuation methods, in the computation of backbone curves, especially in higher-dimensional problems. We illustrate the accuracy and speed of the automated SSM algorithm on lower- and higher-dimensional mechanical systems.

  5. Pointwise probability reinforcements for robust statistical inference.

    PubMed

    Frénay, Benoît; Verleysen, Michel

    2014-02-01

    Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include, e.g., outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR, and a regularisation allows controlling the amount of reinforcement which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually since an abnormality degree is obtained for each observation.

  6. Measurement of entropy generation within bypass transitional flow

    NASA Astrophysics Data System (ADS)

    Skifton, Richard; Budwig, Ralph; McEligot, Donald; Crepeau, John

    2012-11-01

    A flat plate made from quartz was submerged in the Idaho National Laboratory's Matched Index of Refraction (MIR) flow facility. PIV was utilized to capture spatial vector maps at near-wall locations with five to ten points within the viscous sublayer. Entropy generation was calculated directly from measured velocity fluctuation derivatives. Two flows were studied: a zero pressure gradient and an adverse pressure gradient (β = -0.039). The free stream turbulence intensity to drive bypass transition ranged between 3% (near trailing edge) and 8% (near leading edge). The pointwise entropy generation rate will be utilized as a design parameter to systematically reduce losses. As a second observation, the pointwise entropy can be shown to predict the onset of transitional flow. This research was partially supported by the DOE EPSCoR program, grant DE-SC0004751, by the Idaho National Laboratory, and by the Center for Advanced Energy Studies.

  7. Pion distribution amplitude from lattice QCD.

    PubMed

    Cloët, I C; Chang, L; Roberts, C D; Schmidt, S M; Tandy, P C

    2013-08-30

    A method is explained through which a pointwise accurate approximation to the pion's valence-quark distribution amplitude (PDA) may be obtained from a limited number of moments. In connection with the single nontrivial moment accessible in contemporary simulations of lattice-regularized QCD, the method yields a PDA that is a broad concave function whose pointwise form agrees with that predicted by Dyson-Schwinger equation analyses of the pion. Under leading-order evolution, the PDA remains broad to energy scales in excess of 100 GeV, a feature which signals persistence of the influence of dynamical chiral symmetry breaking. Consequently, the asymptotic distribution φπ(asy)(x) is a poor approximation to the pion's PDA at all such scales that are either currently accessible or foreseeable in experiments on pion elastic and transition form factors. Thus, related expectations based on φπ(asy)(x) should be revised.

  8. A method to account for the temperature sensitivity of TCCON total column measurements

    NASA Astrophysics Data System (ADS)

    Niebling, Sabrina G.; Wunch, Debra; Toon, Geoffrey C.; Wennberg, Paul O.; Feist, Dietrich G.

    2014-05-01

    The Total Carbon Column Observing Network (TCCON) consists of ground-based Fourier Transform Spectrometer (FTS) systems all around the world. It achieves better than 0.25% precision and accuracy for total column measurements of CO2 [Wunch et al. (2011)]. In recent years, the TCCON data processing and retrieval software (GGG) has been improved to achieve better and better results (e.g., ghost correction, improved a priori profiles, more accurate spectroscopy). However, a small error is also introduced by the insufficient knowledge of the true temperature profile in the atmosphere above the individual instruments. This knowledge is crucial to retrieve highly precise gas concentrations. In the current version of the retrieval software, we use six-hourly NCEP reanalysis data to produce one temperature profile at local noon for each measurement day. For sites in the mid-latitudes, which can have a large diurnal variation of the temperature in the lowermost kilometers of the atmosphere, this approach can lead to small errors in the final gas concentration of the total column. Here, we present and describe a method to account for the temperature sensitivity of the total column measurements. We exploit the fact that H2O is most abundant in the lowermost kilometers of the atmosphere, where the largest diurnal temperature variations occur. We use single H2O absorption lines with different temperature sensitivities to gain information about the temperature variations over the course of the day. This information is used to apply an a posteriori correction to the retrieved total column gas concentration. In addition, we show that the a posteriori temperature correction is effective by applying it to data from Lamont, Oklahoma, USA (36.6°N, 97.5°W). We chose this site because regular radiosonde launches with a time resolution of six hours provide detailed information on the actual temperature in the atmosphere and allow us to test the effectiveness of our correction. References: Wunch, D., Toon, G. C., Blavier, J.-F. L., Washenfelder, R. A., Notholt, J., Connor, B. J., Griffith, D. W. T., Sherlock, V., and Wennberg, P. O.: The Total Carbon Column Observing Network, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 369, 2087-2112, 2011.

  9. Resistive sensitivity functions for van der Pauw astroid and rounded crosses and cloverleafs

    NASA Astrophysics Data System (ADS)

    Koon, Daniel; Hansen, Ole

    2014-03-01

    We have calculated the sensitivity of van der Pauw resistances to local resistive variations for circular, square and astroid discs of infinitesimal thickness, as well as for the families of rounded crosses and cloverleafs, as a function of specimen parameters, using the direct formulas of our recent paper (Koon et al. 2013 J. Appl. Phys. 114, 163710) applied to “reciprocally dual geometries” (swapped Dirichlet and Neumann boundary conditions) described by Mareš et al. (2012 Meas. Sci. Technol. 23, 045004). These results show that (a) the product of any such sensitivity function times differential area, and thus (b) the ratio of any two sensitivities, is invariant under conformal mapping, allowing for the pointwise determination of the conformal mapping function. The family of rounded crosses, which is bounded in parameter space by the square, the astroid and an “infinitesimally thin” cross, seems to represent the best geometry for focusing transport measurements on the center of the specimen while minimizing errors due to edge- or contact-effects. Made possible by an SLU Faculty research grant.

  10. Combining experimental and simulation data of molecular processes via augmented Markov models.

    PubMed

    Olsson, Simon; Wu, Hao; Paul, Fabian; Clementi, Cecilia; Noé, Frank

    2017-08-01

    Accurate mechanistic description of structural changes in biomolecules is an increasingly important topic in structural and chemical biology. Markov models have emerged as a powerful way to approximate the molecular kinetics of large biomolecules while keeping full structural resolution in a divide-and-conquer fashion. However, the accuracy of these models is limited by that of the force fields used to generate the underlying molecular dynamics (MD) simulation data. Whereas the quality of classical MD force fields has improved significantly in recent years, remaining errors in the Boltzmann weights are still on the order of a few [Formula: see text], which may lead to significant discrepancies when comparing to experimentally measured rates or state populations. Here we take the view that simulations using a sufficiently good force-field sample conformations that are valid but have inaccurate weights, yet these weights may be made accurate by incorporating experimental data a posteriori. To do so, we propose augmented Markov models (AMMs), an approach that combines concepts from probability theory and information theory to consistently treat systematic force-field error and statistical errors in simulation and experiment. Our results demonstrate that AMMs can reconcile conflicting results for protein mechanisms obtained by different force fields and correct for a wide range of stationary and dynamical observables even when only equilibrium measurements are incorporated into the estimation process. This approach constitutes a unique avenue to combine experiment and computation into integrative models of biomolecular structure and dynamics.

  11. Robust double gain unscented Kalman filter for small satellite attitude estimation

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun

    2017-08-01

    Limited by the low precision of small satellite sensors, high-performance estimation theory remains a popular research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation and have achieved considerable success. However, most existing methods make use only of the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertainty model errors always exist in the attitude dynamic system, which places higher performance requirements on the classical KF for the attitude estimation problem. Therefore, a novel robust double gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy the above requirements for small satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors can be exhibited in the measurement residual; therefore, the new method derives a second Kalman gain Kk2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and an unscented transform (UT) strategy are introduced to robustify and enhance the performance of the novel Kalman filter in order to reduce the influence of the uncertainty model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust than the classical unscented Kalman filter (UKF) in dealing with model errors and low-precision sensors for small satellite attitude estimation.

  12. Performing a preliminary hazard analysis applied to administration of injectable drugs to infants.

    PubMed

    Hfaiedh, Nadia; Kabiche, Sofiane; Delescluse, Catherine; Balde, Issa-Bella; Merlin, Sophie; Carret, Sandra; de Pontual, Loïc; Fontan, Jean-Eudes; Schlatter, Joël

    2017-08-01

    Errors in hospitals during the preparation and administration of intravenous drugs to infants and children have been reported at rates of 13% to 84%. This study aimed to investigate the potential for hazardous events that may lead to an accident during the preparation and administration of injectable drugs in a pediatric department, and to describe a risk reduction plan. The preliminary hazard analysis (PHA) method was implemented by a multidisciplinary working group over a period of 5 months (April-August 2014) for infants aged from 28 days to 2 years. The group identified required hazard controls and follow-up actions to reduce the error risk. To analyze the results, the STATCART APR software was used. During the analysis, 34 hazardous situations were identified, of which 17 were rated very critical, and 69 risk scenarios were drawn from them. After follow-up actions, the proportion of scenarios with unacceptable risk declined from 17.4% to 0%, and the proportion with risk acceptable under control from 46.4% to 43.5%. The PHA can be used as an aid in the prioritization of corrective actions and the implementation of control measures to reduce risk. The PHA complements the a posteriori risk management that already exists.

  13. Practical Aspects of Stabilized FEM Discretizations of Nonlinear Conservation Law Systems with Convex Extension

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Saini, Subhash (Technical Monitor)

    1999-01-01

    This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the Galerkin least-squares (GLS) and the discontinuous Galerkin (DG) finite element methods have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit the global entropy stability properties of the PDE system. Central to the development of the simplified GLS and DG methods is the Eigenvalue Scaling Theorem, which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem, with detailed consideration given to the Euler, Navier-Stokes, and magnetohydrodynamic (MHD) equations. Linear and nonlinear energy stability is proven for the simplified GLS and DG methods. Spatial convergence properties of the simplified GLS and DG methods are numerically evaluated via the computation of Ringleb flow on a sequence of successively refined triangulations. Finally, we consider a posteriori error estimates for the GLS and DG discretizations assuming error functionals related to the integrated lift and drag of a body. Sample calculations in 2D are shown to validate the theory and implementation.

  14. A Critical Reassessment of the Role of Mitochondria in Tumorigenesis

    PubMed Central

    Salas, Antonio; Yao, Yong-Gang; Macaulay, Vincent; Vega, Ana; Carracedo, Ángel; Bandelt, Hans-Jürgen

    2005-01-01

    Background Mitochondrial DNA (mtDNA) is being analyzed by an increasing number of laboratories in order to investigate its potential role as an active marker of tumorigenesis in various types of cancer. Here we question the conclusions drawn in most of these investigations, especially those published in high-rank cancer research journals, under the evidence that a significant number of these medical mtDNA studies are based on obviously flawed sequencing results. Methods and Findings In our analyses, we take a phylogenetic approach and employ thorough database searches, which together have proven successful for detecting erroneous sequences in the fields of human population genetics and forensics. Apart from conceptual problems concerning the interpretation of mtDNA variation in tumorigenesis, in most cases, blocks of seemingly somatic mutations clearly point to contamination or sample mix-up and, therefore, have nothing to do with tumorigenesis. Conclusion The role of mitochondria in tumorigenesis remains unclarified. Our findings of laboratory errors in many contributions would represent only the tip of the iceberg since most published studies do not provide the raw sequence data for inspection, thus hindering a posteriori evaluation of the results. There is no precedent for such a concatenation of errors and misconceptions affecting a whole subfield of medical research. PMID:16187796

  15. Fusing Bluetooth Beacon Data with Wi-Fi Radiomaps for Improved Indoor Localization

    PubMed Central

    Kanaris, Loizos; Kokkinis, Akis; Liotta, Antonio; Stavrou, Stavros

    2017-01-01

    Indoor user localization and tracking are instrumental to a broad range of services and applications in the Internet of Things (IoT) and particularly in Body Sensor Networks (BSN) and Ambient Assisted Living (AAL) scenarios. Due to the widespread availability of IEEE 802.11, many localization platforms have been proposed, based on the Wi-Fi Received Signal Strength (RSS) indicator, using algorithms such as K-Nearest Neighbour (KNN), Maximum A Posteriori (MAP) and Minimum Mean Square Error (MMSE). In this paper, we introduce a hybrid method that combines the simplicity (and low cost) of Bluetooth Low Energy (BLE) and the popular 802.11 infrastructure, to improve the accuracy of indoor localization platforms. Building on KNN, we propose a new positioning algorithm (dubbed i-KNN) which is able to filter the initial fingerprint dataset (i.e., the radiomap), after considering the proximity of RSS fingerprints with respect to the BLE devices. In this way, i-KNN provides an optimised small subset of possible user locations, based on which it finally estimates the user position. The proposed methodology achieves fast positioning estimation due to the utilization of a fragment of the initial fingerprint dataset, while at the same time improves positioning accuracy by minimizing any calculation errors. PMID:28394268
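
    A generic sketch in the spirit of the i-KNN idea described above: the radiomap is first filtered by BLE proximity, and the position is then estimated by averaging the k nearest Wi-Fi fingerprints. The simulated floor plan, path-loss model, beacon zoning and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

n_ref = 200
positions = rng.uniform(0, 30, size=(n_ref, 2))                 # reference locations (m)
ap_xy = np.array([[0, 0], [30, 0], [0, 30], [30, 30], [15, 0], [15, 30]], float)

def rss(p):
    """Wi-Fi RSS fingerprint at position p from a simple log-distance path-loss model."""
    d = np.linalg.norm(ap_xy - p, axis=1) + 1.0
    return -40.0 - 20.0 * np.log10(d)

radiomap = np.array([rss(p) for p in positions])                # offline fingerprint database
beacon_of = (positions[:, 0] // 10).astype(int)                 # BLE beacon "zone" per point

def i_knn(rss_online, heard_beacon, k=4):
    subset = np.where(beacon_of == heard_beacon)[0]             # BLE proximity pre-filter
    d = np.linalg.norm(radiomap[subset] - rss_online, axis=1)   # distances in RSS space
    nearest = subset[np.argsort(d)[:k]]
    return positions[nearest].mean(axis=0)                      # averaged position estimate

true_idx = 17
noisy_rss = radiomap[true_idx] + rng.normal(0, 2, len(ap_xy))
estimate = i_knn(noisy_rss, beacon_of[true_idx])
print("true:", np.round(positions[true_idx], 2), " estimate:", np.round(estimate, 2))
```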

  16. Fusing Bluetooth Beacon Data with Wi-Fi Radiomaps for Improved Indoor Localization.

    PubMed

    Kanaris, Loizos; Kokkinis, Akis; Liotta, Antonio; Stavrou, Stavros

    2017-04-10

    Indoor user localization and tracking are instrumental to a broad range of services and applications in the Internet of Things (IoT) and particularly in Body Sensor Networks (BSN) and Ambient Assisted Living (AAL) scenarios. Due to the widespread availability of IEEE 802.11, many localization platforms have been proposed, based on the Wi-Fi Received Signal Strength (RSS) indicator, using algorithms such as K -Nearest Neighbour (KNN), Maximum A Posteriori (MAP) and Minimum Mean Square Error (MMSE). In this paper, we introduce a hybrid method that combines the simplicity (and low cost) of Bluetooth Low Energy (BLE) and the popular 802.11 infrastructure, to improve the accuracy of indoor localization platforms. Building on KNN, we propose a new positioning algorithm (dubbed i-KNN) which is able to filter the initial fingerprint dataset (i.e., the radiomap), after considering the proximity of RSS fingerprints with respect to the BLE devices. In this way, i-KNN provides an optimised small subset of possible user locations, based on which it finally estimates the user position. The proposed methodology achieves fast positioning estimation due to the utilization of a fragment of the initial fingerprint dataset, while at the same time improves positioning accuracy by minimizing any calculation errors.

  17. Test–Retest Reproducibility of the Microperimeter MP3 With Fundus Image Tracking in Healthy Subjects and Patients With Macular Disease

    PubMed Central

    Palkovits, Stefan; Hirnschall, Nino; Georgiev, Stefan; Leisser, Christoph

    2018-01-01

    Purpose To evaluate the test–retest reproducibility of a novel microperimeter with fundus image tracking (MP3, Nidek Co, Japan) in healthy subjects and patients with macular disease. Methods Ten healthy subjects and 20 patients suffering from a range of macular diseases were included. After training measurements, two additional microperimetry measurements were scheduled. Test–retest reproducibility was assessed for mean retinal sensitivity, pointwise sensitivity, and deep scotoma size using the coefficient of repeatability and Bland-Altman diagrams. In addition, in a subgroup of patients microperimetry was compared with conventional perimetry. Results Average differences in mean retinal sensitivity between the two study measurements were 0.26 ± 1.7 dB (median 0 dB; interquartile range [IQR] −1 to 1) for the healthy and 0.36 ± 2.5 dB (median 0 dB; IQR −1 to 2) for the macular patient group. Coefficients of repeatability for mean retinal sensitivity and pointwise retinal sensitivity were 1.2 and 3.3 dB for the healthy subjects and 1.6 and 5.0 dB for the macular disease patients, respectively. Absolute agreement in deep scotoma size between both study days was found in 79.9% of the test loci. Conclusion The microperimeter MP3 shows an adequate test–retest reproducibility for mean retinal sensitivity, pointwise retinal sensitivity, and deep scotoma size in healthy subjects and patients suffering from macular disease. Furthermore, reproducibility of microperimetry is higher than that of conventional perimetry. Translational Relevance Reproducibility is an important measure for each diagnostic device. Especially in a clinical setting, high reproducibility sets the basis for achieving reliable results with the specific device. Therefore, assessment of the reproducibility is of eminent importance to interpret the findings of future studies. PMID:29430338
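
    For reference, the repeatability figures quoted above follow from the paired test–retest differences; a generic sketch of one common convention (coefficient of repeatability taken as 1.96 times the SD of the differences; this is not the authors' analysis script):

      import numpy as np

      def test_retest_summary(visit1, visit2):
          """Bland-Altman style summary for paired test-retest sensitivities (dB)."""
          d = np.asarray(visit2, float) - np.asarray(visit1, float)
          bias = d.mean()                      # mean difference between the two visits
          sd = d.std(ddof=1)                   # SD of the paired differences
          cor = 1.96 * sd                      # coefficient of repeatability
          limits = (bias - cor, bias + cor)    # 95% limits of agreement
          return bias, cor, limits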

  18. Test-Retest Reproducibility of the Microperimeter MP3 With Fundus Image Tracking in Healthy Subjects and Patients With Macular Disease.

    PubMed

    Palkovits, Stefan; Hirnschall, Nino; Georgiev, Stefan; Leisser, Christoph; Findl, Oliver

    2018-02-01

    To evaluate the test-retest reproducibility of a novel microperimeter with fundus image tracking (MP3, Nidek Co, Japan) in healthy subjects and patients with macular disease. Ten healthy subjects and 20 patients suffering from a range of macular diseases were included. After training measurements, two additional microperimetry measurements were scheduled. Test-retest reproducibility was assessed for mean retinal sensitivity, pointwise sensitivity, and deep scotoma size using the coefficient of repeatability and Bland-Altman diagrams. In addition, in a subgroup of patients microperimetry was compared with conventional perimetry. Average differences in mean retinal sensitivity between the two study measurements were 0.26 ± 1.7 dB (median 0 dB; interquartile range [IQR] -1 to 1) for the healthy and 0.36 ± 2.5 dB (median 0 dB; IQR -1 to 2) for the macular patient group. Coefficients of repeatability for mean retinal sensitivity and pointwise retinal sensitivity were 1.2 and 3.3 dB for the healthy subjects and 1.6 and 5.0 dB for the macular disease patients, respectively. Absolute agreement in deep scotoma size between both study days was found in 79.9% of the test loci. The microperimeter MP3 shows an adequate test-retest reproducibility for mean retinal sensitivity, pointwise retinal sensitivity, and deep scotoma size in healthy subjects and patients suffering from macular disease. Furthermore, reproducibility of microperimetry is higher than that of conventional perimetry. Reproducibility is an important measure for each diagnostic device. Especially in a clinical setting, high reproducibility sets the basis for achieving reliable results with the specific device. Therefore, assessment of the reproducibility is of eminent importance to interpret the findings of future studies.

  19. Fast function-on-scalar regression with penalized basis expansions.

    PubMed

    Reiss, Philip T; Huang, Lei; Mennes, Maarten

    2010-01-01

    Regression models for functional responses and scalar predictors are often fitted by means of basis functions, with quadratic roughness penalties applied to avoid overfitting. The fitting approach described by Ramsay and Silverman in the 1990s amounts to a penalized ordinary least squares (P-OLS) estimator of the coefficient functions. We recast this estimator as a generalized ridge regression estimator, and present a penalized generalized least squares (P-GLS) alternative. We describe algorithms by which both estimators can be implemented, with automatic selection of optimal smoothing parameters, in a more computationally efficient manner than has heretofore been available. We discuss pointwise confidence intervals for the coefficient functions, simultaneous inference by permutation tests, and model selection, including a novel notion of pointwise model selection. P-OLS and P-GLS are compared in a simulation study. Our methods are illustrated with an analysis of age effects in a functional magnetic resonance imaging data set, as well as a reanalysis of a now-classic Canadian weather data set. An R package implementing the methods is publicly available.
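
    A compact sketch of the penalized basis-expansion estimator that the abstract recasts as generalized ridge regression (array shapes and names are illustrative; this is not the authors' R package):

      import numpy as np

      def pols_fit(X, Y, Theta, P, lam):
          """Penalized OLS (generalized ridge) for function-on-scalar regression.

          Model: Y[i, k] ~ sum_j X[i, j] * beta_j(t_k), with each coefficient
          function expanded in a common basis, beta_j(t_k) = Theta[k, :] @ c_j.

          X     : (n, p) scalar covariates
          Y     : (n, K) functional responses on a grid t_1..t_K
          Theta : (K, q) basis functions evaluated on the grid
          P     : (q, q) roughness penalty (e.g. integrated squared 2nd derivative)
          lam   : smoothing parameter
          Returns C with shape (p, q): row j holds the basis coefficients of beta_j.
          """
          Z = np.kron(X, Theta)                          # (n*K, p*q) joint design
          Pen = lam * np.kron(np.eye(X.shape[1]), P)     # block roughness penalty
          y = Y.reshape(-1)                              # responses stacked row-wise
          C = np.linalg.solve(Z.T @ Z + Pen, Z.T @ y)
          return C.reshape(X.shape[1], Theta.shape[1])

    Roughly speaking, the P-GLS variant would replace the identity error covariance implicit in this least-squares fit with an estimated within-function error covariance.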

  20. Generalized local emission tomography

    DOEpatents

    Katsevich, Alexander J.

    1998-01-01

    Emission tomography enables locations and values of internal isotope density distributions to be determined from radiation emitted from the whole object. In the method for locating the values of discontinuities, the intensities of radiation emitted from either the whole object or a region of the object containing the discontinuities are inputted to a local tomography function f_Λ^(Φ) to define the location S of the isotope density discontinuity. The asymptotic behavior of f_Λ^(Φ) is determined in a neighborhood of S, and the value for the discontinuity is estimated from the asymptotic behavior of f_Λ^(Φ), knowing pointwise values of the attenuation coefficient within the object. In the method for determining the location of the discontinuity, the intensities of radiation emitted from an object are inputted to a local tomography function f_Λ^(Φ) to define the location S of the density discontinuity and the location Γ of the attenuation coefficient discontinuity. Pointwise values of the attenuation coefficient within the object need not be known in this case.

  1. Dissipative structure and global existence in critical space for Timoshenko system of memory type

    NASA Astrophysics Data System (ADS)

    Mori, Naofumi

    2018-08-01

    In this paper, we consider the initial value problem for the Timoshenko system with a memory term in the one-dimensional whole space. In the first place, we consider the linearized system: applying the energy method in the Fourier space, we derive the pointwise estimate of the solution in the Fourier space, which first gives the optimal decay estimate of the solution. Next, we give a characterization of the dissipative structure of the system by using the spectral analysis, which confirms that our pointwise estimate is optimal. In the second place, we consider the nonlinear system: we show that the global-in-time existence and uniqueness result can be proved under the minimal regularity assumption in the critical Sobolev space H2. In the proof we do not need any time-weighted norm, unlike recent works; we use just an energy method, which is improved to overcome the difficulties caused by the regularity-loss property of the Timoshenko system.

  2. Galerkin methods for Boltzmann-Poisson transport with reflection conditions on rough boundaries

    NASA Astrophysics Data System (ADS)

    Morales Escalante, José A.; Gamba, Irene M.

    2018-06-01

    We consider in this paper the mathematical and numerical modeling of reflective boundary conditions (BC) associated with Boltzmann-Poisson systems, including diffusive reflection in addition to specularity, in the context of electron transport in semiconductor device modeling at nano scales, and their implementation in Discontinuous Galerkin (DG) schemes. We study these BC on the physical boundaries of the device and develop a numerical approximation to model an insulating boundary condition, or equivalently, a pointwise zero flux mathematical condition for the electron transport equation. Such a condition balances the incident and reflective momentum flux at the microscopic level, pointwise at the boundary, in the case of a more general mixed reflection with momentum-dependent specularity probability p(k⃗). We compare the computational prediction of physical observables given by the numerical implementation of these different reflection conditions in our DG scheme for BP models, and observe that the diffusive condition influences the kinetic moments over the whole domain in position space.

  3. Point-wise and whole-field laser speckle intensity fluctuation measurements applied to botanical specimens

    NASA Astrophysics Data System (ADS)

    Zhao, Yang; Wang, Junlan; Wu, Xiaoping; Williams, Fred W.; Schmidt, Richard J.

    1997-12-01

    Based on multi-scattering speckle theory, the speckle fields generated by plant specimens irradiated by laser light have been studied using a pointwise method. In addition, a whole-field method has been developed with which entire botanical specimens may be studied. Results are reported from measurements made on tomato and apple fruits, orange peel, leaves of tobacco seedlings, leaves of shihu seedlings (a Chinese medicinal herb), soy-bean sprouts, and leaves from an unidentified trailing houseplant. Although differences were observed in the temporal fluctuations of speckles that could be ascribed to differences in age and vitality, the growing tip of the bean sprout and the shihu seedling both generated virtually stationary speckles such as were observed from boiled orange peel and from localised heat-damaged regions on apple fruit. Our results suggest that both the identity of the botanical specimen and the site at which measurements are taken are likely to critically affect the observation or otherwise of temporal fluctuations of laser speckles.

  4. Exact simulation of max-stable processes.

    PubMed

    Dombry, Clément; Engelke, Sebastian; Oesting, Marco

    2016-06-01

    Max-stable processes play an important role as models for spatial extreme events. Their complex structure as the pointwise maximum over an infinite number of random functions makes their simulation difficult. Algorithms based on finite approximations are often inexact and computationally inefficient. We present a new algorithm for exact simulation of a max-stable process at a finite number of locations. It relies on the idea of simulating only the extremal functions, that is, those functions in the construction of a max-stable process that effectively contribute to the pointwise maximum. We further generalize the algorithm by Dieker & Mikosch (2015) for Brown-Resnick processes and use it for exact simulation via the spectral measure. We study the complexity of both algorithms, prove that our new approach via extremal functions is always more efficient, and provide closed-form expressions for their implementation that cover most popular models for max-stable processes and multivariate extreme value distributions. For simulation on dense grids, an adaptive design of the extremal function algorithm is proposed.
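
    For contrast with the exact algorithm, the naive truncated spectral construction that the abstract calls inexact can be sketched as follows (a generic unit-mean log-Gaussian spectral function is assumed purely for illustration; this is precisely the kind of finite approximation the exact-simulation approach is designed to replace):

      import numpy as np

      def max_stable_approx(cov, n_funcs=1000, rng=None):
          """Truncated spectral construction of a max-stable field on a grid:
          Z(s) = max_i (1/Gamma_i) * W_i(s), with Gamma_i the cumulative sums of
          Exp(1) variables and W_i(s) = exp(G_i(s) - diag(cov)/2) a log-Gaussian
          spectral function with E[W_i(s)] = 1.

          cov : (m, m) covariance matrix of the Gaussian process G on the grid.
          """
          rng = np.random.default_rng(rng)
          m = cov.shape[0]
          L = np.linalg.cholesky(cov + 1e-10 * np.eye(m))   # jitter for stability
          gammas = np.cumsum(rng.exponential(size=n_funcs))
          Z = np.zeros(m)
          for gamma in gammas:
              G = L @ rng.standard_normal(m)
              W = np.exp(G - np.diag(cov) / 2.0)            # unit-mean spectral function
              Z = np.maximum(Z, W / gamma)                  # pointwise maximum
          return Z

    Because the series is truncated at n_funcs terms, functions contributing to the pointwise maximum may be missed; the extremal-function algorithm of the paper avoids exactly this source of error.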

  5. Artificial organisms as tools for the development of psychological theory: Tolman's lesson.

    PubMed

    Miglino, Orazio; Gigliotta, Onofrio; Cardaci, Maurizio; Ponticorvo, Michela

    2007-12-01

    In the 1930s and 1940s, Edward Tolman developed a psychological theory of spatial orientation in rats and humans. He expressed his theory as an automaton (the "schematic sowbug") or what today we would call an "artificial organism." With the technology of the day, he could not implement his model. Nonetheless, he used it to develop empirical predictions which he tested with animals in the laboratory. This way of proceeding was in line with scientific practice dating back to Galileo. The way psychologists use artificial organisms in their work today breaks with this tradition. Modern "artificial organisms" are constructed a posteriori, working from experimental or ethological observations. As a result, researchers can use them to confirm a theoretical model or to simulate its operation. But they make no contribution to the actual building of models. In this paper, we try to return to Tolman's original strategy: implementing his theory of "vicarious trial and error" in a simulated robot, forecasting the robot's behavior and conducting experiments that verify or falsify these predictions.

  6. Laser beam complex amplitude measurement by phase diversity.

    PubMed

    Védrenne, Nicolas; Mugnier, Laurent M; Michau, Vincent; Velluet, Marie-Thérèse; Bierent, Rudolph

    2014-02-24

    The control of the optical quality of a laser beam requires a complex amplitude measurement able to deal with strong modulus variations and potentially highly perturbed wavefronts. The method proposed here consists of an extension of phase diversity to complex amplitude measurements that is effective for highly perturbed beams. Named CAMELOT, for Complex Amplitude MEasurement by a Likelihood Optimization Tool, it relies on the acquisition and processing of a few images of the beam section taken along the optical path. The complex amplitude of the beam is retrieved from the images by the minimization of a Maximum a Posteriori error metric between the images and a model of the beam propagation. The analytical formalism of the method and its experimental validation are presented. The modulus of the beam is compared to a measurement of the beam profile, and the phase of the beam is compared to a conventional phase diversity estimate. The precision of the experimental measurements is investigated by numerical simulations.

  7. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE PAGES

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    2016-01-01

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
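
    A generic sketch of the multilevel Monte Carlo estimator referred to above (the forward solver is a placeholder, and coupling of the fine and coarse samples is left to the caller, as it must be for the variance reduction to materialize):

      import numpy as np

      def mlmc_estimate(sample_pair, L, n_samples, rng=None):
          """Telescoping MLMC estimator of E[g_L]:
             E[g_L] = E[g_0] + sum_{l=1..L} E[g_l - g_{l-1}].

          sample_pair(l, rng) must return (g_l, g_{l-1}) computed from the SAME
          random input (this coupling gives the variance reduction); for l == 0
          it returns (g_0, 0.0). The level-l forward solve is a placeholder here.
          n_samples : per-level sample sizes, typically decreasing with l.
          """
          rng = np.random.default_rng(rng)
          estimate = 0.0
          for l in range(L + 1):
              diffs = []
              for _ in range(n_samples[l]):
                  g_fine, g_coarse = sample_pair(l, rng)
                  diffs.append(g_fine - g_coarse)
              estimate += np.mean(diffs)       # unbiased estimate of E[g_l - g_{l-1}]
          return estimate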

  8. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that the GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaption, to adjust the error correction strength depending on the optical channel conditions.

  9. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.

  10. Rotor systems research aircraft risk-reduction shake test

    NASA Technical Reports Server (NTRS)

    Wellman, J. Brent

    1990-01-01

    A shake test and an extensive analysis of results were performed to evaluate the possibility of and the method for dynamically calibrating the Rotor Systems Research Aircraft (RSRA). The RSRA airframe was subjected to known vibratory loads in several degrees of freedom and the responses of many aircraft transducers were recorded. Analysis of the transducer responses using the technique of dynamic force determination showed that the RSRA, when used as a dynamic measurement system, could predict, a posteriori, an excitation force in a single axis to an accuracy of about 5 percent and sometimes better. As the analysis was broadened to include multiple degrees of freedom for the excitation force, the predictive ability of the measurement system degraded to about 20 percent, with the error occasionally reaching 100 percent. The poor performance of the measurement system is explained by the nonlinear response of the RSRA to vibratory forces and the inadequacy of the particular method used in accounting for this nonlinearity.
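
    In its simplest linear form, the dynamic force determination step described above is a least-squares inversion of a measured calibration (transfer) matrix at each frequency; a minimal sketch with illustrative matrix names (not the RSRA analysis software):

      import numpy as np

      def estimate_forces(H, y, rcond=1e-3):
          """Least-squares estimate of excitation forces from transducer responses.

          H : (m, k) complex calibration matrix at one frequency
              (response of m transducers to a unit force in each of k directions)
          y : (m,) measured transducer responses at that frequency
          The pseudo-inverse solves the overdetermined system y = H f in the
          least-squares sense; rcond truncates poorly observed force directions.
          """
          return np.linalg.pinv(H, rcond=rcond) @ y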

  11. Dipole excitation of surface plasmon on a conducting sheet: Finite element approximation and validation

    NASA Astrophysics Data System (ADS)

    Maier, Matthias; Margetis, Dionisios; Luskin, Mitchell

    2017-06-01

    We formulate and validate a finite element approach to the propagation of a slowly decaying electromagnetic wave, called surface plasmon-polariton, excited along a conducting sheet, e.g., a single-layer graphene sheet, by an electric Hertzian dipole. By using a suitably rescaled form of time-harmonic Maxwell's equations, we derive a variational formulation that enables a direct numerical treatment of the associated class of boundary value problems by appropriate curl-conforming finite elements. The conducting sheet is modeled as an idealized hypersurface with an effective electric conductivity. The requisite weak discontinuity for the tangential magnetic field across the hypersurface can be incorporated naturally into the variational formulation. We carry out numerical simulations for an infinite sheet with constant isotropic conductivity embedded in two spatial dimensions; and validate our numerics against the closed-form exact solution obtained by the Fourier transform in the tangential coordinate. Numerical aspects of our treatment such as an absorbing perfectly matched layer, as well as local refinement and a posteriori error control are discussed.

  12. MWR3C physical retrievals of precipitable water vapor and cloud liquid water path

    DOE Data Explorer

    Cadeddu, Maria

    2016-10-12

    The data set contains physical retrievals of PWV and cloud LWP retrieved from MWR3C measurements during the MAGIC campaign. Additional data used in the retrieval process include radiosonde and ceilometer data. The retrieval is based on an optimal estimation technique that starts from a first guess and iteratively repeats the forward model calculations until a predefined convergence criterion is satisfied. The first guess is a vector of [PWV,LWP] from the neural network retrieval fields in the netcdf file. When convergence is achieved, the 'a posteriori' covariance is computed and its square root is expressed in the file as the retrieval 1-sigma uncertainty. The closest radiosonde profile is used for the radiative transfer calculations and ceilometer data are used to constrain the cloud base height. The RMS error between the brightness temperatures is computed at the last iteration as a consistency check and is written in the last column of the output file.
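
    A generic sketch of the optimal-estimation iteration behind such physical retrievals (standard Gauss-Newton form with a Gaussian prior; the forward model and Jacobian below stand in for the microwave radiative-transfer calculation and are not the actual retrieval code):

      import numpy as np

      def oe_retrieval(x_a, S_a, y, S_e, forward, jacobian, n_iter=10, tol=1e-3):
          """Gauss-Newton optimal estimation:
             x_{i+1} = x_a + S_hat K_i^T S_e^-1 [ y - F(x_i) + K_i (x_i - x_a) ],
             S_hat   = (S_a^-1 + K_i^T S_e^-1 K_i)^-1,
          where the square-rooted diagonal of the final 'a posteriori' covariance
          S_hat is the quoted 1-sigma retrieval uncertainty.
          """
          x = np.asarray(x_a, float).copy()
          S_a_inv, S_e_inv = np.linalg.inv(S_a), np.linalg.inv(S_e)
          for _ in range(n_iter):
              K = jacobian(x)
              S_hat = np.linalg.inv(S_a_inv + K.T @ S_e_inv @ K)
              x_new = x_a + S_hat @ K.T @ S_e_inv @ (y - forward(x) + K @ (x - x_a))
              if np.max(np.abs(x_new - x)) < tol:
                  x = x_new
                  break
              x = x_new
          return x, S_hat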

  13. On vital aid: the why, what and how of validation

    PubMed Central

    Kleywegt, Gerard J.

    2009-01-01

    Limitations to the data and subjectivity in the structure-determination process may cause errors in macromolecular crystal structures. Appropriate validation techniques may be used to reveal problems in structures, ideally before they are analysed, published or deposited. Additionally, such techniques may be used a posteriori to assess the (relative) merits of a model by potential users. Weak validation methods and statistics assess how well a model reproduces the information that was used in its construction (i.e. experimental data and prior knowledge). Strong methods and statistics, on the other hand, test how well a model predicts data or information that were not used in the structure-determination process. These may be data that were excluded from the process on purpose, general knowledge about macromolecular structure, information about the biological role and biochemical activity of the molecule under study or its mutants or complexes and predictions that are based on the model and that can be tested experimentally. PMID:19171968

  14. THE ARS-MISSOURI SOIL STRENGTH PROFILE SENSOR: CURRENT STATUS AND FUTURE PROSPECTS

    USDA-ARS?s Scientific Manuscript database

    Soil compaction that is induced by tillage and traction is an ongoing concern in crop production, and also has environmental consequences. Although cone penetrometers provide standardized compaction measurements, the pointwise data collected makes it difficult to obtain enough data to represent with...

  15. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    NASA Astrophysics Data System (ADS)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived by the quadratic form of CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, RMS of LSC prediction error at data points and RMS of estimated noise of observations are decreased by 39 and 67%, respectively. However, RMS of LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, which is a consequence of the sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, RMS of this type of errors is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for those groups with fewer noisy data points.
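
    For orientation only: in ordinary least squares the analogous direct formula for the vector of leave-one-out cross-validation errors is e_cv,i = e_i / (1 - H_ii); a sketch of that familiar special case (not the paper's LSC derivation) is:

      import numpy as np

      def loo_cv_errors(A, y):
          """Leave-one-out cross-validation errors for ordinary least squares,
          computed from a single fit via the hat-matrix identity
              e_cv[i] = residual[i] / (1 - H[i, i]),  H = A (A^T A)^-1 A^T.
          """
          H = A @ np.linalg.solve(A.T @ A, A.T)    # hat (influence) matrix
          resid = y - H @ y                        # ordinary residuals
          return resid / (1.0 - np.diag(H))        # direct CVE vector, no refits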

  16. Analysis of trend changes in Northern African palaeo-climate by using Bayesian inference

    NASA Astrophysics Data System (ADS)

    Schütz, Nadine; Trauth, Martin H.; Holschneider, Matthias

    2010-05-01

    Climate variability of Northern Africa is of high interest due to climate-evolutionary linkages under study. The reconstruction of the palaeo-climate over long time scales, including the expected linkages (> 3 Ma), is mainly accessible by proxy data from deep sea drilling cores. By concentrating on published data sets, we try to decipher rhythms and trends to detect correlations between different proxy time series by advanced mathematical methods. Our preliminary data is dust concentration, as an indicator for climatic changes such as humidity, from the ODP sites 659, 721 and 967 situated around Northern Africa. Our interest is in challenging the available time series with advanced statistical methods to detect significant trend changes and to compare different model assumptions. For that purpose, we want to avoid the rescaling of the time axis to obtain equidistant time steps for filtering methods. Additionally we demand a plausible description of the errors for the estimated parameters, in terms of confidence intervals. Finally, depending on which model we restrict ourselves to, we also want insight into the parameter structure of the assumed models. To gain this information, we focus on Bayesian inference by formulating the problem as a linear mixed model, so that the expectation and deviation are of linear structure. By using the Bayesian method we can formulate the posterior density as a function of the model parameters and calculate this probability density in the parameter space. Depending on which parameters are of interest, we analytically and numerically marginalize the posterior with respect to the remaining parameters of less interest. We apply a simple linear mixed model to calculate the posterior densities of the ODP sites 659 and 721 concerning the last 5 Ma at maximum. From preliminary calculations on these data sets, we can confirm results gained by the method of breakfit regression combined with block bootstrapping ([1]). We obtain a significant change point around (1.63 - 1.82) Ma, which correlates with a global climate transition due to the establishment of the Walker circulation ([2]). Furthermore we detect another significant change point around (2.7 - 3.2) Ma, which correlates with the end of the Pliocene warm period (permanent El Niño-like conditions) and the onset of a colder global climate ([3], [4]). The discussion on the algorithm, the results of calculated confidence intervals, the available information about the applied model in the parameter space and the comparison of multiple change point models will be presented. [1] Trauth, M.H., et al., Quaternary Science Reviews, 28, 2009 [2] Wara, M.W., et al., Science, Vol. 309, 2005 [3] Chiang, J.C.H., Annual Review of Earth and Planetary Sciences, Vol. 37, 2009 [4] deMenocal, P., Earth and Planetary Science Letters, 220, 2004

  17. Space-time adaptive ADER-DG schemes for dissipative flows: Compressible Navier-Stokes and resistive MHD equations

    NASA Astrophysics Data System (ADS)

    Fambri, Francesco; Dumbser, Michael; Zanotti, Olindo

    2017-11-01

    This paper presents an arbitrary high-order accurate ADER Discontinuous Galerkin (DG) method on space-time adaptive meshes (AMR) for the solution of two important families of non-linear time dependent partial differential equations for compressible dissipative flows: the compressible Navier-Stokes equations and the equations of viscous and resistive magnetohydrodynamics in two and three space-dimensions. The work continues a recent series of papers concerning the development and application of a proper a posteriori subcell finite volume limiting procedure suitable for discontinuous Galerkin methods (Dumbser et al., 2014, Zanotti et al., 2015 [40,41]). It is a well-known fact that a major weakness of high order DG methods lies in the difficulty of limiting discontinuous solutions, which generate spurious oscillations, namely the so-called 'Gibbs phenomenon'. In the present work, a nonlinear stabilization of the scheme is sequentially and locally introduced only for troubled cells on the basis of a novel a posteriori detection criterion, i.e. the MOOD approach. The main benefits of the MOOD paradigm, i.e. the computational robustness even in the presence of strong shocks, are preserved and the numerical diffusion is considerably reduced also for the limited cells by resorting to a proper sub-grid. In practice the method first produces a so-called candidate solution by using a high order accurate unlimited DG scheme. Then, a set of numerical and physical detection criteria is applied to the candidate solution, namely: positivity of pressure and density, absence of floating point errors and satisfaction of a discrete maximum principle in the sense of polynomials. Furthermore, in those cells where at least one of these criteria is violated the computed candidate solution is detected as troubled and is locally rejected. Subsequently, a more reliable numerical solution is recomputed a posteriori by employing a more robust but still very accurate ADER-WENO finite volume scheme on the subgrid averages within that troubled cell. Finally, a high order DG polynomial is reconstructed back from the evolved subcell averages. We apply the whole approach for the first time to the equations of compressible gas dynamics and magnetohydrodynamics in the presence of viscosity, thermal conductivity and magnetic resistivity, therefore extending our family of adaptive ADER-DG schemes to cases for which the numerical fluxes also depend on the gradient of the state vector. The distinguished high-resolution properties of the presented numerical scheme stand out across a wide number of non-trivial test cases both for the compressible Navier-Stokes and the viscous and resistive magnetohydrodynamics equations. The present results show clearly that the shock-capturing capability of the new schemes is significantly enhanced within a cell-by-cell Adaptive Mesh Refinement (AMR) implementation together with time accurate local time stepping (LTS).
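
    A schematic of the a posteriori MOOD-style admissibility check described above, reduced to cell averages (the thresholds and the relaxed discrete-maximum-principle slack are illustrative choices, not the paper's exact criteria):

      import numpy as np

      def is_troubled(rho, p, rho_old_neighbors, delta=1e-4):
          """Decide whether a candidate DG cell solution must be recomputed.

          rho, p            : candidate cell averages of density and pressure
          rho_old_neighbors : previous-step averages of the cell and its neighbours,
                              used for a relaxed discrete maximum principle (DMP)
          """
          # physical admissibility: positive density and pressure, no NaN/Inf
          if not np.isfinite(rho) or not np.isfinite(p) or rho <= 0.0 or p <= 0.0:
              return True
          # relaxed DMP: the new average must stay within the old local min/max
          lo, hi = np.min(rho_old_neighbors), np.max(rho_old_neighbors)
          slack = max(delta, delta * (hi - lo))
          return not (lo - slack <= rho <= hi + slack)

    Cells flagged by such a check are the ones re-evolved with the more robust subcell ADER-WENO finite volume scheme before the DG polynomial is reconstructed.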

  18. Incipient singularities in the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Siggia, E. D.; Pumir, A.

    1985-01-01

    Infinite pointwise stretching in a finite time for general initial conditions is found in a simulation of the Biot-Savart equation for a slender vortex tube in three dimensions. Viscosity is ineffective in limiting the divergence in the vorticity as long as it remains concentrated in tubes. Stability has not been shown.

  19. Returns to Scale and Economies of Scale: Further Observations.

    ERIC Educational Resources Information Center

    Gelles, Gregory M.; Mitchell, Douglas W.

    1996-01-01

    Maintains that most economics textbooks continue to repeat past mistakes concerning returns to scale and economies of scale under assumptions of constant and nonconstant input prices. Provides an adaptation for a calculus-based intermediate microeconomics class that demonstrates the pointwise relationship between returns to scale and economies of…

  20. Solution of a Nonlinear Heat Conduction Equation for a Curvilinear Region with Dirichlet Conditions by the Fast-Expansion Method

    NASA Astrophysics Data System (ADS)

    Chernyshov, A. D.

    2018-05-01

    The analytical solution of the nonlinear heat conduction problem for a curvilinear region is obtained with the use of the fast-expansion method together with the method of extension of boundaries and pointwise technique of computing Fourier coefficients.

  1. A MATLAB-based graphical user interface program for computing functionals of the geopotential up to ultra-high degrees and orders

    NASA Astrophysics Data System (ADS)

    Bucha, Blažej; Janák, Juraj

    2013-07-01

    We present a novel graphical user interface program GrafLab (GRAvity Field LABoratory) for spherical harmonic synthesis (SHS) created in MATLAB®. This program allows the user to comfortably compute 38 various functionals of the geopotential up to ultra-high degrees and orders of spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) the extended-range arithmetic (up to an arbitrary maximum degree). For the maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In the GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, while the input coordinates can either be read from a data file or entered manually. For the computation on a regular grid we decided to apply the lumped coefficients approach due to the significant time-efficiency of this method. Furthermore, if a full variance-covariance matrix of spherical harmonic coefficients is available, it is possible to compute the commission errors of the functionals. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
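
    A generic sketch of the standard forward column recursion mentioned above for the fully normalized associated Legendre functions (4π geodetic normalization assumed; this is not the GrafLab MATLAB code and, as the abstract notes, plain double precision only reaches roughly degree 1800):

      import numpy as np

      def fnalf(nmax, theta):
          """Fully normalized associated Legendre functions P_nm(cos(theta)),
          0 <= m <= n <= nmax, by the standard forward column recursion."""
          t, u = np.cos(theta), np.sin(theta)
          P = np.zeros((nmax + 1, nmax + 1))
          P[0, 0] = 1.0
          if nmax >= 1:
              P[1, 0] = np.sqrt(3.0) * t
              P[1, 1] = np.sqrt(3.0) * u
          # sectorial terms P_mm
          for m in range(2, nmax + 1):
              P[m, m] = u * np.sqrt((2.0 * m + 1.0) / (2.0 * m)) * P[m - 1, m - 1]
          # forward column recursion in degree n for each fixed order m
          for m in range(0, nmax + 1):
              for n in range(m + 1, nmax + 1):
                  a = np.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0) / ((n - m) * (n + m)))
                  P[n, m] = a * t * P[n - 1, m]
                  if n - m >= 2:
                      b = np.sqrt((2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                                  / ((n - m) * (n + m) * (2.0 * n - 3.0)))
                      P[n, m] -= b * P[n - 2, m]
          return P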

  2. Generation of real-time mode high-resolution water vapor fields from GPS observations

    NASA Astrophysics Data System (ADS)

    Yu, Chen; Penna, Nigel T.; Li, Zhenhong

    2017-02-01

    Pointwise GPS measurements of tropospheric zenith total delay can be interpolated to provide high-resolution water vapor maps which may be used for correcting synthetic aperture radar images, for numerical weather prediction, and for correcting Network Real-time Kinematic GPS observations. Several previous studies have addressed the importance of the elevation dependency of water vapor, but it is often a challenge to separate elevation-dependent tropospheric delays from turbulent components. In this paper, we present an iterative tropospheric decomposition interpolation model that decouples the elevation and turbulent tropospheric delay components. For a 150 km × 150 km California study region, we estimate real-time mode zenith total delays at 41 GPS stations over 1 year by using the precise point positioning technique and demonstrate that the decoupled interpolation model generates improved high-resolution tropospheric delay maps compared with previous tropospheric turbulence- and elevation-dependent models. Cross validation of the GPS zenith total delays yields an RMS error of 4.6 mm with the decoupled interpolation model, compared with 8.4 mm with the previous model. On converting the GPS zenith wet delays to precipitable water vapor and interpolating to 1 km grid cells across the region, validations with the Moderate Resolution Imaging Spectroradiometer near-IR water vapor product show 1.7 mm RMS differences by using the decoupled model, compared with 2.0 mm for the previous interpolation model. Such results are obtained without differencing the tropospheric delays or water vapor estimates in time or space, while the errors are similar over flat and mountainous terrains, as well as for both inland and coastal areas.

  3. A Multidimensional Computerized Adaptive Short-Form Quality of Life Questionnaire Developed and Validated for Multiple Sclerosis: The MusiQoL-MCAT.

    PubMed

    Michel, Pierre; Baumstarck, Karine; Ghattas, Badih; Pelletier, Jean; Loundou, Anderson; Boucekine, Mohamed; Auquier, Pascal; Boyer, Laurent

    2016-04-01

    The aim was to develop a multidimensional computerized adaptive short-form questionnaire, the MusiQoL-MCAT, from a fixed-length QoL questionnaire for multiple sclerosis. A total of 1992 patients were enrolled in this international cross-sectional study. The development of the MusiQoL-MCAT was based on the assessment of between-items MIRT model fit followed by real-data simulations. The MCAT algorithm was based on Bayesian maximum a posteriori estimation of latent traits and Kullback-Leibler information item selection. We examined several simulations based on a fixed number of items. Accuracy was assessed using correlations (r) between initial IRT scores and MCAT scores. Precision was assessed using the standard error measurement (SEM) and the root mean square error (RMSE). The multidimensional graded response model was used to estimate item parameters and IRT scores. Among the MCAT simulations, the 16-item version of the MusiQoL-MCAT was selected because the accuracy and precision became stable with 16 items with satisfactory levels (r ≥ 0.9, SEM ≤ 0.55, and RMSE ≤ 0.3). External validity of the MusiQoL-MCAT was satisfactory. The MusiQoL-MCAT presents satisfactory properties and can individually tailor QoL assessment to each patient, making it less burdensome to patients and better adapted for use in clinical practice.
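
    As a much-simplified stand-in for the Bayesian maximum a posteriori latent-trait step of such an MCAT (unidimensional 2PL items instead of the multidimensional graded response model; the item parameters, prior, and function names are illustrative):

      import numpy as np
      from scipy.optimize import minimize_scalar

      def map_theta(responses, a, b):
          """MAP estimate of a single latent trait under a 2PL IRT model with a
          standard-normal prior.

          responses : 0/1 item responses
          a, b      : item discriminations and difficulties
          """
          responses, a, b = map(np.asarray, (responses, a, b))

          def neg_log_post(theta):
              p = 1.0 / (1.0 + np.exp(-a * (theta - b)))        # 2PL response probability
              loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
              return -(loglik - 0.5 * theta ** 2)               # standard-normal prior

          return minimize_scalar(neg_log_post, bounds=(-6, 6), method="bounded").x

    In an adaptive administration, this estimate would be updated after every answered item and the next item chosen by an information criterion (Kullback-Leibler information in the study above).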

  4. A Multidimensional Computerized Adaptive Short-Form Quality of Life Questionnaire Developed and Validated for Multiple Sclerosis

    PubMed Central

    Michel, Pierre; Baumstarck, Karine; Ghattas, Badih; Pelletier, Jean; Loundou, Anderson; Boucekine, Mohamed; Auquier, Pascal; Boyer, Laurent

    2016-01-01

    Abstract The aim was to develop a multidimensional computerized adaptive short-form questionnaire, the MusiQoL-MCAT, from a fixed-length QoL questionnaire for multiple sclerosis. A total of 1992 patients were enrolled in this international cross-sectional study. The development of the MusiQoL-MCAT was based on the assessment of between-items MIRT model fit followed by real-data simulations. The MCAT algorithm was based on Bayesian maximum a posteriori estimation of latent traits and Kullback–Leibler information item selection. We examined several simulations based on a fixed number of items. Accuracy was assessed using correlations (r) between initial IRT scores and MCAT scores. Precision was assessed using the standard error measurement (SEM) and the root mean square error (RMSE). The multidimensional graded response model was used to estimate item parameters and IRT scores. Among the MCAT simulations, the 16-item version of the MusiQoL-MCAT was selected because the accuracy and precision became stable with 16 items with satisfactory levels (r ≥ 0.9, SEM ≤ 0.55, and RMSE ≤ 0.3). External validity of the MusiQoL-MCAT was satisfactory. The MusiQoL-MCAT presents satisfactory properties and can individually tailor QoL assessment to each patient, making it less burdensome to patients and better adapted for use in clinical practice. PMID:27057832

  5. The role of a posteriori mathematics in physics

    NASA Astrophysics Data System (ADS)

    MacKinnon, Edward

    2018-05-01

    The calculus that co-evolved with classical mechanics relied on definitions of functions and differentials that accommodated physical intuitions. In the early nineteenth century mathematicians began the rigorous reformulation of calculus and eventually succeeded in putting almost all of mathematics on a set-theoretic foundation. Physicists traditionally ignore this rigorous mathematics. Physicists often rely on a posteriori math, a practice of using physical considerations to determine mathematical formulations. This is illustrated by examples from classical and quantum physics. A justification of such practice stems from a consideration of the role of phenomenological theories in classical physics and effective theories in contemporary physics. This relates to the larger question of how physical theories should be interpreted.

  6. Existence and instability of steady states for a triangular cross-diffusion system: A computer-assisted proof

    NASA Astrophysics Data System (ADS)

    Breden, Maxime; Castelli, Roberto

    2018-05-01

    In this paper, we present and apply a computer-assisted method to study steady states of a triangular cross-diffusion system. Our approach consists of an a posteriori validation procedure that is based on using a fixed point argument around a numerically computed solution, in the spirit of the Newton-Kantorovich theorem. It allows us to prove the existence of various non-homogeneous steady states for different parameter values. In some situations, we obtain as many as 13 coexisting steady states. We also apply the a posteriori validation procedure to study the linear stability of the obtained steady states, proving that many of them are in fact unstable.

  7. Space-time mesh adaptation for solute transport in randomly heterogeneous porous media.

    PubMed

    Dell'Oca, Aronne; Porta, Giovanni Michele; Guadagnini, Alberto; Riva, Monica

    2018-05-01

    We assess the impact of an anisotropic space and time grid adaptation technique on our ability to solve numerically solute transport in heterogeneous porous media. Heterogeneity is characterized in terms of the spatial distribution of hydraulic conductivity, whose natural logarithm, Y, is treated as a second-order stationary random process. We consider nonreactive transport of dissolved chemicals to be governed by an Advection Dispersion Equation at the continuum scale. The flow field, which provides the advective component of transport, is obtained through the numerical solution of Darcy's law. A suitable recovery-based error estimator is analyzed to guide the adaptive discretization. We investigate two diverse strategies guiding the (space-time) anisotropic mesh adaptation. These are respectively grounded on the definition of the guiding error estimator through the spatial gradients of: (i) the concentration field only; (ii) both concentration and velocity components. We test the approach for two-dimensional computational scenarios with moderate and high levels of heterogeneity, the latter being expressed in terms of the variance of Y. As quantities of interest, we key our analysis towards the time evolution of section-averaged and point-wise solute breakthrough curves, second centered spatial moment of concentration, and scalar dissipation rate. As a reference against which we test our results, we consider corresponding solutions associated with uniform space-time grids whose level of refinement is established through a detailed convergence study. We find a satisfactory comparison between results for the adaptive methodologies and such reference solutions, our adaptive technique being associated with a markedly reduced computational cost. Comparison of the two adaptive strategies tested suggests that: (i) defining the error estimator relying solely on concentration fields yields some advantages in grasping the key features of solute transport taking place within low velocity regions, where diffusion-dispersion mechanisms are dominant; and (ii) embedding the velocity field in the error estimator guiding strategy yields an improved characterization of the forward fringe of solute fronts which propagate through high velocity regions. Copyright © 2017 Elsevier B.V. All rights reserved.
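
    The recovery-based estimator that drives such adaptation can be illustrated in one dimension by the classic gradient-recovery (Zienkiewicz-Zhu type) indicator; a scalar sketch (not the authors' anisotropic space-time estimator):

      import numpy as np

      def zz_indicator_1d(x, u):
          """Recovery-based error indicator for a piecewise linear FE solution u
          on a 1-D mesh x. Returns one indicator per element; large values mark
          elements to refine."""
          x, u = np.asarray(x, float), np.asarray(u, float)
          h = np.diff(x)
          g = np.diff(u) / h                                   # element-wise gradients
          g_rec = np.empty(len(x))                             # recovered nodal gradients
          g_rec[1:-1] = 0.5 * (g[:-1] + g[1:])                 # average of the two neighbours
          g_rec[0], g_rec[-1] = g[0], g[-1]
          d0, d1 = g_rec[:-1] - g, g_rec[1:] - g               # gradient jump at element ends
          eta2 = h / 3.0 * (d0 ** 2 + d0 * d1 + d1 ** 2)       # exact integral of the linear misfit squared
          return np.sqrt(eta2)

    In the study above the analogous estimator is built either from the concentration field alone or from concentration and velocity together, which is exactly the choice whose consequences the two adaptive strategies compare.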

  8. Measuring Disability: Comparing the Impact of Two Data Collection Approaches on Disability Rates

    PubMed Central

    Sabariego, Carla; Oberhauser, Cornelia; Posarac, Aleksandra; Bickenbach, Jerome; Kostanjsek, Nenad; Chatterji, Somnath; Officer, Alana; Coenen, Michaela; Chhan, Lay; Cieza, Alarcos

    2015-01-01

    The usual approach in disability surveys is to screen persons with disability upfront and then ask questions about everyday problems. The objectives of this paper are to demonstrate the impact of screeners on disability rates, to challenge the usual exclusion of persons with mild and moderate disability from disability surveys and to demonstrate the advantage of using an a posteriori cut-off. Using data of a pilot study of the WHO Model Disability Survey (MDS) in Cambodia and the polytomous Rasch model, metric scales of disability were built. The conventional screener approach based on the short disability module of the Washington City Group and the a posteriori cut-off method described in the World Disability Report were compared regarding disability rates. The screener led to imprecise rates and classified persons with mild to moderate disability as non-disabled, although these respondents already experienced important problems in daily life. The a posteriori cut-off applied to the general population sample led to a more precise disability rate and allowed for a differentiation of the performance and needs of persons with mild, moderate and severe disability. This approach can be therefore considered as an inclusive approach suitable to monitor the Convention on the Rights of Persons with Disabilities. PMID:26308039

  9. A Method for Assessing Ground-Truth Accuracy of the 5DCT Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, Tai H., E-mail: tdou@mednet.ucla.edu; Thomas, David H.; O'Connell, Dylan P.

    2015-11-15

    Purpose: To develop a technique that assesses the accuracy of the breathing phase-specific volume image generation process by a patient-specific breathing motion model using the original free-breathing computed tomographic (CT) scans as ground truths. Methods: Sixteen lung cancer patients underwent a previously published protocol in which 25 free-breathing fast helical CT scans were acquired with a simultaneous breathing surrogate. A patient-specific motion model was constructed based on the tissue displacements determined by a state-of-the-art deformable image registration. The first image was arbitrarily selected as the reference image. The motion model was used, along with the free-breathing phase information of the original 25 image datasets, to generate a set of deformation vector fields that mapped the reference image to the 24 nonreference images. The high-pitch helically acquired original scans served as ground truths because they captured the instantaneous tissue positions during free breathing. Image similarity between the simulated and the original scans was assessed using deformable registration that evaluated the pointwise discordance throughout the lungs. Results: Qualitative comparisons using image overlays showed excellent agreement between the simulated images and the original images. Even large 2-cm diaphragm displacements were very well modeled, as was sliding motion across the lung–chest wall boundary. The mean error across the patient cohort was 1.15 ± 0.37 mm, and the mean 95th percentile error was 2.47 ± 0.78 mm. Conclusion: The proposed ground truth–based technique provided voxel-by-voxel accuracy analysis that could identify organ-specific or tumor-specific motion modeling errors for treatment planning. Despite a large variety of breathing patterns and lung deformations during the free-breathing scanning session, the 5-dimensional CT technique was able to accurately reproduce the original helical CT scans, suggesting its applicability to a wide range of patients.

  10. Graphical Construction of a Local Perspective on Differentiation and Integration

    ERIC Educational Resources Information Center

    Hong, Ye Yoon; Thomas, Michael O. J.

    2015-01-01

    Recent studies of the transition from school to university mathematics have identified a number of epistemological gaps, including the need to change from an emphasis on equality to that of inequality. Another crucial epistemological change during this transition involves the movement from the pointwise and global perspectives of functions usually…

  11. Stress-induced alterations of left-right electrodermal activity coupling indexed by pointwise transinformation.

    PubMed

    Světlák, M; Bob, P; Roman, R; Ježek, S; Damborská, A; Chládek, J; Shaw, D J; Kukleta, M

    2013-01-01

    In this study, we tested the hypothesis that experimental stress induces a specific change of left-right electrodermal activity (EDA) coupling pattern, as indexed by pointwise transinformation (PTI). Further, we hypothesized that this change is associated with scores on psychometric measures of the chronic stress-related psychopathology. Ninety-nine university students underwent bilateral measurement of EDA during rest and stress-inducing Stroop test and completed a battery of self-report measures of chronic stress-related psychopathology. A significant decrease in the mean PTI value was the prevalent response to the stress conditions. No association between chronic stress and PTI was found. Raw scores of psychometric measures of stress-related psychopathology had no effect on either the resting levels of PTI or the amount of stress-induced PTI change. In summary, acute stress alters the level of coupling pattern of cortico-autonomic influences on the left and right sympathetic pathways to the palmar sweat glands. Different results obtained using the PTI, EDA laterality coefficient, and skin conductance level also show that the PTI algorithm represents a new analytical approach to EDA asymmetry description.

  12. Continuity properties of the semi-group and its integral kernel in non-relativistic QED

    NASA Astrophysics Data System (ADS)

    Matte, Oliver

    2016-07-01

    Employing recent results on stochastic differential equations associated with the standard model of non-relativistic quantum electrodynamics by B. Güneysu, J. S. Møller, and the present author, we study the continuity of the corresponding semi-group between weighted vector-valued Lp-spaces, continuity properties of elements in the range of the semi-group, and the pointwise continuity of an operator-valued semi-group kernel. We further discuss the continuous dependence of the semi-group and its integral kernel on model parameters. All these results are obtained for Kato decomposable electrostatic potentials and the actual assumptions on the model are general enough to cover the Nelson model as well. As a corollary, we obtain some new pointwise exponential decay and continuity results on elements of low-energetic spectral subspaces of atoms or molecules that also take spin into account. In a simpler situation where spin is neglected, we explain how to verify the joint continuity of positive ground state eigenvectors with respect to spatial coordinates and model parameters. There are no smallness assumptions imposed on any model parameter.

  13. Structure-Preserving Variational Multiscale Modeling of Turbulent Incompressible Flow with Subgrid Vortices

    NASA Astrophysics Data System (ADS)

    Evans, John; Coley, Christopher; Aronson, Ryan; Nelson, Corey

    2017-11-01

    In this talk, a large eddy simulation methodology for turbulent incompressible flow will be presented which combines the best features of divergence-conforming discretizations and the residual-based variational multiscale approach to large eddy simulation. In this method, the resolved motion is represented using a divergence-conforming discretization, that is, a discretization that preserves the incompressibility constraint in a pointwise manner, and the unresolved fluid motion is explicitly modeled by subgrid vortices that lie within individual grid cells. The evolution of the subgrid vortices is governed by dynamical model equations driven by the residual of the resolved motion. Consequently, the subgrid vortices appropriately vanish for laminar flow and fully resolved turbulent flow. As the resolved velocity field and subgrid vortices are both divergence-free, the methodology conserves mass in a pointwise sense and admits discrete balance laws for energy, enstrophy, and helicity. Numerical results demonstrate the methodology yields improved results versus state-of-the-art eddy viscosity models in the context of transitional, wall-bounded, and rotational flow when a divergence-conforming B-spline discretization is utilized to represent the resolved motion.

  14. Ground-based remote sensing of tropospheric water vapour isotopologues within the project MUSICA

    NASA Astrophysics Data System (ADS)

    Schneider, M.; Barthlott, S.; Hase, F.; González, Y.; Yoshimura, K.; García, O. E.; Sepúlveda, E.; Gomez-Pelaez, A.; Gisi, M.; Kohlhepp, R.; Dohe, S.; Blumenstock, T.; Strong, K.; Weaver, D.; Palm, M.; Deutscher, N. M.; Warneke, T.; Notholt, J.; Lejeune, B.; Demoulin, P.; Jones, N.; Griffith, D. W. T.; Smale, D.; Robinson, J.

    2012-08-01

    Within the project MUSICA (MUlti-platform remote Sensing of Isotopologues for investigating the Cycle of Atmospheric water), long-term tropospheric water vapour isotopologues data records are provided for ten globally distributed ground-based mid-infrared remote sensing stations of the NDACC (Network for the Detection of Atmospheric Composition Change). We present a new method allowing for an extensive and straightforward characterisation of the complex nature of such isotopologue remote sensing datasets. We demonstrate that the MUSICA humidity profiles are representative for most of the troposphere with a vertical resolution ranging from about 2 km (in the lower troposphere) to 8 km (in the upper troposphere) and with an estimated precision of better than 10%. We find that the sensitivity with respect to the isotopologue composition is limited to the lower and middle troposphere, whereby we estimate a precision of about 30‰ for the ratio between the two isotopologues HD16O and H216O. The measurement noise, the applied atmospheric temperature profiles, the uncertainty in the spectral baseline, and interferences from humidity are the leading error sources. We introduce an a posteriori correction method of the humidity interference error and we recommend applying it for isotopologue ratio remote sensing datasets in general. In addition, we present mid-infrared CO2 retrievals and use them for demonstrating the MUSICA network-wide data consistency. In order to indicate the potential of long-term isotopologue remote sensing data if provided with a well-documented quality, we present a climatology and compare it to simulations of an isotope incorporated AGCM (Atmospheric General Circulation Model). We identify differences in the multi-year mean and seasonal cycles that significantly exceed the estimated errors, thereby indicating deficits in the modeled atmospheric water cycle.

  15. Communications and information research: Improved space link performance via concatenated forward error correction coding

    NASA Technical Reports Server (NTRS)

    Rao, T. R. N.; Seetharaman, G.; Feng, G. L.

    1996-01-01

    With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but imposes demands on the space-to-ground communication link and ground data management-communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: a lossless technique that guarantees full reconstruction of the data, and a lossy technique which generally gives a higher data compaction ratio but incurs some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. While transmitting the data obtained by any lossless data compression, it is very important to use some error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To more efficiently transform the data obtained by the Rice algorithm, it is necessary to compute the a posteriori probability (APP) for each decoded bit. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP for each decoded bit. However, recent results on iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data, (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces, (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems, and (4) a tree-based approach for data compression using dynamic programming.

  16. Using psychophysics to ask if the brain samples or maximizes

    PubMed Central

    Acuna, Daniel E.; Berniker, Max; Fernandes, Hugo L.; Kording, Konrad P.

    2015-01-01

    The two-alternative forced-choice (2AFC) task is the workhorse of psychophysics and is used to measure the just-noticeable difference, generally assumed to accurately quantify sensory precision. However, this assumption is not true for all mechanisms of decision making. Here we derive the behavioral predictions for two popular mechanisms, sampling and maximum a posteriori, and examine how they affect the outcome of the 2AFC task. These predictions are used in a combined visual 2AFC and estimation experiment. Our results strongly suggest that subjects use a maximum a posteriori mechanism. Further, our derivations and experimental paradigm establish the already standard 2AFC task as a behavioral tool for measuring how humans make decisions under uncertainty. PMID:25767093
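
    As a rough illustration of the distinction drawn in this record, the sketch below simulates a 2AFC task with a Gaussian prior and Gaussian measurement noise and compares a posterior-mean read-out (which coincides with the MAP estimate for a Gaussian posterior) with a posterior-sampling read-out. The function name two_afc_accuracy and all parameter values are invented for illustration and are not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def two_afc_accuracy(delta_s, sigma_m=1.0, sigma_p=2.0, mechanism="map", n_trials=20000):
        """Proportion of correct 2AFC responses for a given stimulus difference.

        Each stimulus yields a noisy measurement m ~ N(s, sigma_m^2); the observer
        combines it with a zero-mean Gaussian prior (sd sigma_p) and reads out either
        the posterior mean ('map') or a single posterior sample ('sampling').
        """
        s1, s2 = delta_s / 2.0, -delta_s / 2.0            # two stimuli straddling zero
        m1 = rng.normal(s1, sigma_m, n_trials)
        m2 = rng.normal(s2, sigma_m, n_trials)
        shrink = sigma_p**2 / (sigma_p**2 + sigma_m**2)   # Bayesian shrinkage factor
        post_sd = np.sqrt(1.0 / (1.0 / sigma_p**2 + 1.0 / sigma_m**2))
        mu1, mu2 = shrink * m1, shrink * m2               # posterior means
        if mechanism == "map":
            est1, est2 = mu1, mu2                         # deterministic MAP read-out
        else:
            est1 = rng.normal(mu1, post_sd)               # stochastic sampling read-out
            est2 = rng.normal(mu2, post_sd)
        return float(np.mean(est1 > est2))

    for d in (0.25, 0.5, 1.0, 2.0):
        print(d, two_afc_accuracy(d, mechanism="map"), two_afc_accuracy(d, mechanism="sampling"))
    ```

    In this toy setting the sampling read-out produces a shallower psychometric function than the MAP read-out, which is the kind of behavioral difference the combined 2AFC and estimation paradigm is designed to detect.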

  17. MC2-3: Multigroup Cross Section Generation Code for Fast Reactor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Changho; Yang, Won Sik

    This paper presents the methods and performance of the MC2-3 code, which is a multigroup cross-section generation code for fast reactor analysis, developed to improve the resonance self-shielding and spectrum calculation methods of MC2-2 and to simplify the current multistep schemes generating region-dependent broad-group cross sections. Using the basic neutron data from ENDF/B data files, MC2-3 solves the consistent P1 multigroup transport equation to determine the fundamental mode spectra for use in generating multigroup neutron cross sections. A homogeneous medium or a heterogeneous slab or cylindrical unit cell problem is solved in ultrafine (2082) or hyperfine (~400 000) group levels. In the resolved resonance range, pointwise cross sections are reconstructed with Doppler broadening at specified temperatures. The pointwise cross sections are directly used in the hyperfine group calculation, whereas for the ultrafine group calculation, self-shielded cross sections are prepared by numerical integration of the pointwise cross sections based upon the narrow resonance approximation. For both the hyperfine and ultrafine group calculations, unresolved resonances are self-shielded using the analytic resonance integral method. The ultrafine group calculation can also be performed for a two-dimensional whole-core problem to generate region-dependent broad-group cross sections. Verification tests have been performed using the benchmark problems for various fast critical experiments including Los Alamos National Laboratory critical assemblies; Zero-Power Reactor, Zero-Power Physics Reactor, and Bundesamt für Strahlenschutz experiments; Monju start-up core; and Advanced Burner Test Reactor. Verification and validation results with ENDF/B-VII.0 data indicated that eigenvalues from MC2-3/DIF3D agreed well with MCNP5 (Monte Carlo N-Particle) or VIM Monte Carlo solutions within 200 pcm and regionwise one-group fluxes were in good agreement with Monte Carlo solutions.

  18. A Genomewide Linkage Scan of Cocaine Dependence and Major Depressive Episode in Two Populations

    PubMed Central

    Yang, Bao-Zhu; Han, Shizhong; Kranzler, Henry R; Farrer, Lindsay A; Gelernter, Joel

    2011-01-01

    Cocaine dependence (CD) and major depressive episode (MDE) frequently co-occur with poorer treatment outcome and higher relapse risk. Shared genetic risk was affirmed; to date, there have been no reports of genomewide linkage scans (GWLSs) surveying the susceptibility regions for comorbid CD and MDE (CD–MDE). We aimed to identify chromosomal regions and candidate genes susceptible to CD, MDE, and CD–MDE in African Americans (AAs) and European Americans (EAs). A total of 1896 individuals were recruited from 384 AA and 355 EA families, each with at least a sibling-pair with CD and/or opioid dependence. Array-based genotyping of about 6000 single-nucleotide polymorphisms was completed for all individuals. Parametric and non-parametric genomewide linkage analyses were performed. We found a genomewide-significant linkage peak on chromosome 7 at 183.4 cM for non-parametric analysis of CD–MDE in AAs (lod=3.8, genomewide empirical p=0.016; point-wise p=0.00001). A nearly genomewide significant linkage was identified for CD–MDE in EAs on chromosome 5 at 14.3 cM (logarithm of odds (lod)=2.95, genomewide empirical p=0.055; point-wise p=0.00012). Parametric analysis corroborated the findings in these two regions and improved the support for the peak on chromosome 5 so that it reached genomewide significance (heterogeneity lod=3.28, genomewide empirical p=0.046; point-wise p=0.00053). This is the first GWLS for CD–MDE. The genomewide significant linkage regions on chromosomes 5 and 7 harbor four particularly promising candidate genes: SRD5A1, UBE3C, PTPRN2, and VIPR2. Replication of the linkage findings in other populations is warranted, as is a focused analysis of the genes located in the linkage regions implicated here. PMID:21849985

  19. Uncertainty in biological monitoring: a framework for data collection and analysis to account for multiple sources of sampling bias

    USGS Publications Warehouse

    Ruiz-Gutierrez, Viviana; Hooten, Melvin B.; Campbell Grant, Evan H.

    2016-01-01

    Biological monitoring programmes are increasingly relying upon large volumes of citizen-science data to improve the scope and spatial coverage of information, challenging the scientific community to develop design and model-based approaches to improve inference. Recent statistical models in ecology have been developed to accommodate false-negative errors, although current work points to false-positive errors as equally important sources of bias. This is of particular concern for the success of any monitoring programme given that rates as small as 3% could lead to the overestimation of the occurrence of rare events by as much as 50%, and even small false-positive rates can severely bias estimates of occurrence dynamics. We present an integrated, computationally efficient Bayesian hierarchical model to correct for false-positive and false-negative errors in detection/non-detection data. Our model combines independent, auxiliary data sources with field observations to improve the estimation of false-positive rates, when a subset of field observations cannot be validated a posteriori or assumed as perfect. We evaluated the performance of the model across a range of occurrence rates, false-positive and false-negative errors, and quantity of auxiliary data. The model performed well under all simulated scenarios, and we were able to identify critical auxiliary data characteristics which resulted in improved inference. We applied our false-positive model to a large-scale, citizen-science monitoring programme for anurans in the north-eastern United States, using auxiliary data from an experiment designed to estimate false-positive error rates. Not correcting for false-positive rates resulted in biased estimates of occupancy in 4 of the 10 anuran species we analysed, leading to an overestimation of the average number of occupied survey routes by as much as 70%. The framework we present for data collection and analysis is able to efficiently provide reliable inference for occurrence patterns using data from a citizen-science monitoring programme. However, our approach is applicable to data generated by any type of research and monitoring programme, independent of skill level or scale, when effort is placed on obtaining auxiliary information on false-positive rates.
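
    The sensitivity of naive occupancy estimates to small false-positive rates, as noted above, can be reproduced with a few lines of simulation. The sketch below is purely illustrative and does not implement the authors' hierarchical model; parameter names such as p_det and p_false are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def naive_occupancy(psi=0.10, p_det=0.5, p_false=0.03, n_sites=5000, n_visits=5):
        """Fraction of sites with at least one detection, ignoring false positives.

        psi      true occupancy probability
        p_det    per-visit detection probability at occupied sites
        p_false  per-visit false-positive probability at unoccupied sites
        """
        occupied = rng.random(n_sites) < psi
        p_visit = np.where(occupied, p_det, p_false)
        detections = rng.random((n_visits, n_sites)) < p_visit
        return detections.any(axis=0).mean()

    print("true occupancy: 0.10")
    print("naive estimate, 3% false positives:", round(naive_occupancy(), 3))
    print("naive estimate, no false positives:", round(naive_occupancy(p_false=0.0), 3))
    ```

    With these made-up values the naive estimate roughly doubles the true occupancy, echoing the record's point that even a 3% false-positive rate can badly inflate estimates of rare events.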

  20. A posteriori registration and subtraction of periapical radiographs for the evaluation of external apical root resorption after orthodontic treatment.

    PubMed

    Kreich, Eliane Maria; Chibinski, Ana Cláudia; Coelho, Ulisses; Wambier, Letícia Stadler; Zedebski, Rosário de Arruda Moura; de Moraes, Mari Eli Leonelli; de Moraes, Luiz Cesar

    2016-03-01

    This study employed a posteriori registration and subtraction of radiographic images to quantify the apical root resorption in maxillary permanent central incisors after orthodontic treatment, and assessed whether the external apical root resorption (EARR) was related to a range of parameters involved in the treatment. A sample of 79 patients (mean age, 13.5±2.2 years) with no history of trauma or endodontic treatment of the maxillary permanent central incisors was selected. Periapical radiographs taken before and after orthodontic treatment were digitized and imported to the Regeemy software. Based on an analysis of the posttreatment radiographs, the length of the incisors was measured using Image J software. The mean EARR was described in pixels and relative root resorption (%). The patient's age and gender, tooth extraction, use of elastics, and treatment duration were evaluated to identify possible correlations with EARR. The mean EARR observed was 15.44±12.1 pixels (5.1% resorption). No differences in the mean EARR were observed according to patient characteristics (gender, age) or treatment parameters (use of elastics, treatment duration). The only parameter that influenced the mean EARR of a patient was the need for tooth extraction. A posteriori registration and subtraction of periapical radiographs was a suitable method to quantify EARR after orthodontic treatment, and the need for tooth extraction increased the extent of root resorption after orthodontic treatment.

  1. Estimating and Separating Noise from AIA Images

    NASA Astrophysics Data System (ADS)

    Kirk, Michael S.; Ireland, Jack; Young, C. Alex; Pesnell, W. Dean

    2016-10-01

    All digital images are corrupted by noise and SDO AIA is no different. In most solar imaging, we have the luxury of high photon counts and low background contamination, which, when combined with careful calibration, minimize much of the impact noise has on the measurement. Outside high-intensity regions, such as in coronal holes, the noise component can become significant and complicate feature recognition and segmentation. We create a practical estimate of noise in the high-resolution AIA images across the detector CCD in all seven EUV wavelengths. A mixture of Poisson and Gaussian noise is well suited to the digital imaging environment due to the statistical distributions of photons and the characteristics of the CCD. Using state-of-the-art noise estimation techniques, the publicly available solar images, and coronal loop simulations, we construct a maximum-a-posteriori assessment of the error in these images. The estimation and mitigation of noise not only provides a clearer view of large-scale structure in the solar corona, but also provides physical constraints on fleeting EUV features observed with AIA.
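
    For context, a common way to characterize a mixed Poisson-Gaussian noise model is to exploit the approximately linear relation between local variance and local mean, var ≈ gain·mean + σ_read². The sketch below estimates these two parameters from patch statistics of a synthetic frame; it is a simplified stand-in, not the maximum-a-posteriori procedure of the record, and all names and settings are assumptions.

    ```python
    import numpy as np

    def estimate_poisson_gaussian(image, patch=16):
        """Crude estimate of (gain, read_noise_variance) from a single image.

        Assumes var = gain * mean + sigma_r^2 locally, which holds for a Poisson
        photon term plus additive Gaussian read noise. Uses the mean and variance
        of small patches and a least-squares line fit; a real pipeline would use
        calibration frames and robust statistics.
        """
        h, w = (s - s % patch for s in image.shape)
        blocks = image[:h, :w].reshape(h // patch, patch, w // patch, patch)
        means = blocks.mean(axis=(1, 3)).ravel()
        varis = blocks.var(axis=(1, 3), ddof=1).ravel()
        gain, sigma_r2 = np.polyfit(means, varis, 1)   # slope, intercept
        return gain, max(sigma_r2, 0.0)

    # synthetic test: patch-constant scene, gain 2 DN/photon, read noise sd 5 DN
    rng = np.random.default_rng(2)
    levels = rng.uniform(10, 500, size=(32, 32))
    truth = np.kron(levels, np.ones((16, 16)))          # 512 x 512 piecewise-constant scene
    img = 2.0 * rng.poisson(truth) + rng.normal(0, 5, truth.shape)
    print(estimate_poisson_gaussian(img))               # roughly (2.0, 25.0)
    ```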

  2. Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation

    PubMed Central

    Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain. This is in contrast to most current models in frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using minimal Kullback–Leiber (KL)-divergency criterion. The frequency domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, the gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance the speeches corrupted by the speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253

  3. Slope Estimation in Noisy Piecewise Linear Functions

    PubMed Central

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2014-01-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
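
    The core idea, a Bayesian maximum a posteriori search over a discretized set of slopes solved by dynamic programming, can be sketched with a simple Viterbi-style recursion. The code below is a loose caricature of that idea (the switch_penalty stands in for the HMM transition probabilities, and all names and values are invented), not the published MAPSlope algorithm.

    ```python
    import numpy as np

    def map_slopes(x, y, slope_grid, sigma=1.0, switch_penalty=10.0):
        """MAP sequence of slopes for noisy piecewise-linear data via dynamic programming.

        The slope on each interval is restricted to the discrete values in slope_grid;
        staying on the same slope is free, switching costs switch_penalty. Emission
        cost is the squared misfit of each increment under Gaussian noise.
        Returns one slope value per interval (length len(x) - 1).
        """
        dx = np.diff(x)[:, None]
        dy = np.diff(y)[:, None]
        K, N = len(slope_grid), len(dx)
        emission = (dy - dx * slope_grid[None, :])**2 / (2.0 * sigma**2)
        cost = emission[0].copy()
        back = np.zeros((N, K), dtype=int)
        for i in range(1, N):
            # trans[k, j]: cost of arriving in slope k from slope j
            trans = cost[None, :] + switch_penalty * (1 - np.eye(K))
            back[i] = np.argmin(trans, axis=1)
            cost = emission[i] + np.min(trans, axis=1)
        path = np.empty(N, dtype=int)
        path[-1] = int(np.argmin(cost))
        for i in range(N - 1, 0, -1):          # backtrack the optimal path
            path[i - 1] = back[i, path[i]]
        return slope_grid[path]

    # toy data: slope 1 up to x = 5, then slope -2, Gaussian noise sd 0.1
    x = np.linspace(0, 10, 101)
    y_clean = np.where(x < 5, x, 5 - 2 * (x - 5))
    y = y_clean + np.random.default_rng(3).normal(0, 0.1, x.size)
    est = map_slopes(x, y, np.linspace(-3, 3, 25), sigma=0.15)
    print(est[20], est[80])   # expected to be close to 1 and -2
    ```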

  4. Reduction of Poisson noise in measured time-resolved data for time-domain diffuse optical tomography.

    PubMed

    Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y

    2012-01-01

    A method to reduce noise for time-domain diffuse optical tomography (DOT) is proposed. Poisson noise which contaminates time-resolved photon counting data is reduced by use of maximum a posteriori estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed to be Poisson distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing the probability, the noise-free data are estimated, and the Poisson noise is reduced as a result. The performance of the Poisson noise reduction is demonstrated in image reconstruction experiments for time-domain DOT. In simulations, the proposed method reduces the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image is smoothed by the proposed noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening etc., is improved by the proposed noise reduction.
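
    A minimal 1-D caricature of MAP denoising of Poisson counts under a smoothness (Markov-type) prior is sketched below: it maximizes the Poisson log-likelihood minus a quadratic roughness penalty with a generic optimizer. This is far simpler than the method in the record, and the function names, prior, and parameter values are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def map_poisson_denoise(counts, beta=2.0):
        """MAP denoising of Poisson counts with a quadratic smoothness prior.

        Maximizes  sum_i [ y_i log(lam_i) - lam_i ]  -  beta * sum_i (lam_{i+1} - lam_i)^2
        over lam > 0; only a toy analogue of the time-resolved DOT setting.
        """
        y = np.asarray(counts, dtype=float)

        def neg_log_post(lam):
            data_term = np.sum(y * np.log(lam) - lam)
            smooth_term = beta * np.sum(np.diff(lam)**2)
            return -(data_term - smooth_term)

        x0 = np.maximum(y, 0.5)                       # start from the raw counts
        res = minimize(neg_log_post, x0, method="L-BFGS-B",
                       bounds=[(1e-6, None)] * y.size)
        return res.x

    rng = np.random.default_rng(4)
    true_rate = 50 * np.exp(-0.5 * ((np.arange(120) - 60) / 18.0)**2) + 2.0  # smooth pulse
    noisy = rng.poisson(true_rate)
    denoised = map_poisson_denoise(noisy)
    print(float(np.abs(noisy - true_rate).mean()),     # raw mean absolute error
          float(np.abs(denoised - true_rate).mean()))  # should be smaller after denoising
    ```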

  5. Slope Estimation in Noisy Piecewise Linear Functions.

    PubMed

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2015-03-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.

  6. An efficient method for model refinement in diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Zirak, A. R.; Khademi, M.

    2007-11-01

    Diffuse optical tomography (DOT) is a non-linear, ill-posed boundary-value and optimization problem that necessitates regularization. Bayesian methods are also well suited because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations, for which model-refinement criteria, especially total least squares (TLS), must be used to treat the model error. The use of TLS is limited to linear systems, which is not directly compatible with traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) applied to the linearized DOT problem, together with a maximum a posteriori (MAP) estimator and a Tikhonov regularizer. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations, and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves image reconstruction performance and localizes abnormalities well.

  7. A posteriori operation detection in evolving software models

    PubMed Central

    Langer, Philip; Wimmer, Manuel; Brosch, Petra; Herrmannsdörfer, Markus; Seidl, Martina; Wieland, Konrad; Kappel, Gerti

    2013-01-01

    As every software artifact, also software models are subject to continuous evolution. The operations applied between two successive versions of a model are crucial for understanding its evolution. Generic approaches for detecting operations a posteriori identify atomic operations, but neglect composite operations, such as refactorings, which leads to cluttered difference reports. To tackle this limitation, we present an orthogonal extension of existing atomic operation detection approaches for detecting also composite operations. Our approach searches for occurrences of composite operations within a set of detected atomic operations in a post-processing manner. One major benefit is the reuse of specifications available for executing composite operations also for detecting applications of them. We evaluate the accuracy of the approach in a real-world case study and investigate the scalability of our implementation in an experiment. PMID:23471366

  8. A modified beam-to-earth transformation to measure short-wavelength internal waves with an acoustic Doppler current profiler

    USGS Publications Warehouse

    Scotti, A.; Butman, B.; Beardsley, R.C.; Alexander, P.S.; Anderson, S.

    2005-01-01

    The algorithm used to transform velocity signals from beam coordinates to earth coordinates in an acoustic Doppler current profiler (ADCP) relies on the assumption that the currents are uniform over the horizontal distance separating the beams. This condition may be violated by (nonlinear) internal waves, which can have wavelengths as small as 100-200 m. In this case, the standard algorithm combines velocities measured at different phases of a wave and produces horizontal velocities that increasingly differ from true velocities with distance from the ADCP. Observations made in Massachusetts Bay show that currents measured with a bottom-mounted upward-looking ADCP during periods when short-wavelength internal waves are present differ significantly from currents measured by point current meters, except very close to the instrument. These periods are flagged with high error velocities by the standard ADCP algorithm. In this paper measurements from the four spatially diverging beams and the backscatter intensity signal are used to calculate the propagation direction and celerity of the internal waves. Once this information is known, a modified beam-to-earth transformation that combines appropriately lagged beam measurements can be used to obtain current estimates in earth coordinates that compare well with pointwise measurements. © 2005 American Meteorological Society.

  9. A spatial domain decomposition approach to distributed H ∞ observer design of a linear unstable parabolic distributed parameter system with spatially discrete sensors

    NASA Astrophysics Data System (ADS)

    Wang, Jun-Wei; Liu, Ya-Qiang; Hu, Yan-Yan; Sun, Chang-Yin

    2017-12-01

    This paper discusses the design problem of distributed H∞ Luenberger-type partial differential equation (PDE) observer for state estimation of a linear unstable parabolic distributed parameter system (DPS) with external disturbance and measurement disturbance. Both pointwise measurement in space and local piecewise uniform measurement in space are considered; that is, sensors are only active at some specified points or applied at part thereof of the spatial domain. The spatial domain is decomposed into multiple subdomains according to the location of the sensors such that only one sensor is located at each subdomain. By using Lyapunov technique, Wirtinger's inequality at each subdomain, and integration by parts, a Lyapunov-based design of Luenberger-type PDE observer is developed such that the resulting estimation error system is exponentially stable with an H∞ performance constraint, and presented in terms of standard linear matrix inequalities (LMIs). For the case of local piecewise uniform measurement in space, the first mean value theorem for integrals is utilised in the observer design development. Moreover, the problem of optimal H∞ observer design is also addressed in the sense of minimising the attenuation level. Numerical simulation results are presented to show the satisfactory performance of the proposed design method.

  10. Aeroacoustic Simulations of a Nose Landing Gear with FUN3D: A Grid Refinement Study

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Lockard, David P.

    2017-01-01

    A systematic grid refinement study is presented for numerical simulations of a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise (Registered Trademark) grid generation software are used for numerical simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A set of grids was generated in this manner to create a family of uniformly refined grids. The finest grid was then modified to coarsen the wall-normal spacing to create a grid suitable for the wall-function implementation in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence modeling approach is used for these simulations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. These CFD solutions are used as input to a Ffowcs Williams-Hawkings (FW-H) noise propagation code to compute the farfield noise levels. The agreement of the computed results with the experimental data improves as the grid is refined.

  11. Receiver Operating Characteristic Analysis for Classification Based on Various Prior Probabilities of Groups with an Application to Breath Analysis

    NASA Astrophysics Data System (ADS)

    Cimermanová, K.

    2009-01-01

    In this paper we illustrate the influence of prior probabilities of diseases on diagnostic reasoning. For various prior probabilities of classified groups characterized by volatile organic compounds of breath profile, smokers and non-smokers, we constructed the ROC curve and the Youden index with related asymptotic pointwise confidence intervals.
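
    For reference, the empirical ROC curve and the Youden index J = max(sensitivity + specificity - 1) can be computed directly from the group scores, as in the sketch below. The breath-marker scores are synthetic, and the asymptotic confidence intervals and prior-probability weighting discussed in the record are not reproduced here.

    ```python
    import numpy as np

    def roc_and_youden(scores_pos, scores_neg):
        """Empirical ROC curve and Youden index J = max(TPR - FPR).

        scores_pos / scores_neg are classifier scores for the two groups
        (e.g., smokers vs. non-smokers); higher scores indicate the positive class.
        """
        thresholds = np.unique(np.concatenate([scores_pos, scores_neg]))
        tpr = np.array([(scores_pos >= t).mean() for t in thresholds])
        fpr = np.array([(scores_neg >= t).mean() for t in thresholds])
        j = tpr - fpr
        best = int(np.argmax(j))
        return fpr, tpr, float(j[best]), float(thresholds[best])

    rng = np.random.default_rng(5)
    smokers = rng.normal(1.0, 1.0, 200)       # hypothetical breath-marker scores
    non_smokers = rng.normal(0.0, 1.0, 300)
    fpr, tpr, youden, cutoff = roc_and_youden(smokers, non_smokers)
    print("Youden index:", round(youden, 3), "at cutoff", round(cutoff, 3))
    ```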

  12. Grid Work

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Pointwise Inc.'s, Gridgen Software is a system for the generation of 3D (three dimensional) multiple block, structured grids. Gridgen is a visually-oriented, graphics-based interactive code used to decompose a 3D domain into blocks, distribute grid points on curves, initialize and refine grid points on surfaces and initialize volume grid points. Gridgen is available to U.S. citizens and American-owned companies by license.

  13. Regularity for Fully Nonlinear Elliptic Equations with Oblique Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Li, Dongsheng; Zhang, Kai

    2018-06-01

    In this paper, we obtain a series of regularity results for viscosity solutions of fully nonlinear elliptic equations with oblique derivative boundary conditions. In particular, we derive the pointwise C^α, C^{1,α} and C^{2,α} regularity. As byproducts, we also prove the A-B-P maximum principle, Harnack inequality, uniqueness and solvability of the equations.

  14. Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Dorrepaal, J. Mark

    1990-01-01

    The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equation of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.

  15. Excel-Based Tool for Pharmacokinetically Guided Dose Adjustment of Paclitaxel.

    PubMed

    Kraff, Stefanie; Lindauer, Andreas; Joerger, Markus; Salamone, Salvatore J; Jaehde, Ulrich

    2015-12-01

    Neutropenia is a frequent and severe adverse event in patients receiving paclitaxel chemotherapy. The time above a paclitaxel threshold concentration of 0.05 μmol/L (Tc > 0.05 μmol/L) is a strong predictor for paclitaxel-associated neutropenia and has been proposed as a target pharmacokinetic (PK) parameter for paclitaxel therapeutic drug monitoring and dose adaptation. Up to now, individual Tc > 0.05 μmol/L values are estimated based on a published PK model of paclitaxel by using the software NONMEM. Because many clinicians are not familiar with the use of NONMEM, an Excel-based dosing tool was developed to allow calculation of paclitaxel Tc > 0.05 μmol/L and give clinicians an easy-to-use tool. Population PK parameters of paclitaxel were taken from a published PK model. An Alglib VBA code was implemented in Excel 2007 to compute differential equations for the paclitaxel PK model. Maximum a posteriori Bayesian estimates of the PK parameters were determined with the Excel Solver using individual drug concentrations. Concentrations from 250 patients were simulated receiving 1 cycle of paclitaxel chemotherapy. Predictions of paclitaxel Tc > 0.05 μmol/L as calculated by the Excel tool were compared with NONMEM, whereby maximum a posteriori Bayesian estimates were obtained using the POSTHOC function. There was a good concordance and comparable predictive performance between Excel and NONMEM regarding predicted paclitaxel plasma concentrations and Tc > 0.05 μmol/L values. Tc > 0.05 μmol/L had a maximum bias of 3% and an error on precision of <12%. The median relative deviation of the estimated Tc > 0.05 μmol/L values between both programs was 1%. The Excel-based tool can estimate the time above a paclitaxel threshold concentration of 0.05 μmol/L with acceptable accuracy and precision. The presented Excel tool allows reliable calculation of paclitaxel Tc > 0.05 μmol/L and thus allows target concentration intervention to improve the benefit-risk ratio of the drug. The easy use facilitates therapeutic drug monitoring in clinical routine.
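
    To illustrate the general idea of a maximum a posteriori Bayesian estimate from sparse drug concentrations, the sketch below fits a deliberately simple one-compartment infusion model with log-normal priors on clearance and volume. It is not the published paclitaxel model or the Excel tool described here; every population value, concentration, and name is a made-up assumption.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical population values for a one-compartment IV-infusion model
    # (NOT the published paclitaxel model, which is considerably more complex).
    POP = {"CL": 20.0, "V": 150.0}        # typical clearance (L/h) and volume (L)
    OMEGA = {"CL": 0.3, "V": 0.25}        # between-subject SDs on the log scale
    SIGMA_PROP = 0.2                      # proportional residual error

    def conc_1cmt(t, dose, tinf, cl, v):
        """Concentration for a one-compartment model with zero-order infusion."""
        k = cl / v
        rate = dose / tinf
        c_inf = rate / cl * (1 - np.exp(-k * np.minimum(t, tinf)))
        return np.where(t <= tinf, c_inf, c_inf * np.exp(-k * (t - tinf)))

    def map_estimate(times, obs, dose, tinf):
        """MAP Bayesian estimate of individual CL and V from sparse concentrations."""
        def objective(eta):                       # eta = individual log-deviations
            cl = POP["CL"] * np.exp(eta[0])
            v = POP["V"] * np.exp(eta[1])
            pred = conc_1cmt(times, dose, tinf, cl, v)
            var = (SIGMA_PROP * pred) ** 2
            obj = np.sum((obs - pred) ** 2 / var + np.log(var))        # -2 log-likelihood
            obj += (eta[0] / OMEGA["CL"]) ** 2 + (eta[1] / OMEGA["V"]) ** 2  # prior penalty
            return obj
        eta = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead").x
        return POP["CL"] * np.exp(eta[0]), POP["V"] * np.exp(eta[1])

    t = np.array([1.0, 3.0, 6.0, 24.0])
    obs = np.array([0.7, 1.8, 1.0, 0.12])         # made-up concentrations (mg/L)
    print(map_estimate(t, obs, dose=300.0, tinf=3.0))
    ```

    With the individual CL and V in hand, a time-above-threshold metric analogous to Tc > 0.05 μmol/L could then be obtained by integrating the predicted concentration profile, which is the step the Excel tool automates for the real model.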

  16. Allowing for MSD prevention during facilities planning for a public service: an a posteriori analysis of 10 library design projects.

    PubMed

    Bellemare, Marie; Trudel, Louis; Ledoux, Elise; Montreuil, Sylvie; Marier, Micheline; Laberge, Marie; Vincent, Patrick

    2006-01-01

    Research was conducted to identify an ergonomics-based intervention model designed to factor in musculoskeletal disorder (MSD) prevention when library projects are being designed. The first stage of the research involved an a posteriori analysis of 10 recent redesign projects. The purpose of the analysis was to document perceptions about the attention given to MSD prevention measures over the course of a project on the part of 2 categories of employees: librarians responsible for such projects and personnel working in the libraries before and after changes. Subjects were interviewed in focus groups. Outcomes of the analysis can guide our ergonomic assessment of current situations and contribute to a better understanding of the way inclusion or improvement of prevention measures can support the workplace design process.

  17. Applicability of Kerker preconditioning scheme to the self-consistent density functional theory calculations of inhomogeneous systems

    NASA Astrophysics Data System (ADS)

    Zhou, Yuzhi; Wang, Han; Liu, Yu; Gao, Xingyu; Song, Haifeng

    2018-03-01

    The Kerker preconditioner, based on the dielectric function of the homogeneous electron gas, is designed to accelerate the self-consistent field (SCF) iteration in density functional theory calculations. However, a question still remains regarding its applicability to inhomogeneous systems. We develop a modified Kerker preconditioning scheme which captures the long-range screening behavior of inhomogeneous systems and thus improves the SCF convergence. The effectiveness and efficiency are shown by tests on long-z slabs of metals, insulators, and metal-insulator contacts. For situations without a priori knowledge of the system, we design an a posteriori indicator to monitor whether the preconditioner has suppressed charge sloshing during the iterations. Based on the a posteriori indicator, we demonstrate two schemes of self-adaptive configuration for the SCF iteration.
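
    As background, the classic Kerker preconditioner damps the long-wavelength part of the density residual by the factor G²/(G² + q0²) in reciprocal space before mixing. The 1-D sketch below illustrates that filtering step only; it is not the modified scheme or the a posteriori indicator proposed in the record, and the grid, alpha, and q0 values are arbitrary.

    ```python
    import numpy as np

    def kerker_mix(rho_in, rho_out, alpha=0.5, q0=1.5, box_length=20.0):
        """One linear-mixing step with a Kerker preconditioner (1-D sketch).

        The density residual is damped in reciprocal space by G^2 / (G^2 + q0^2),
        which suppresses the long-wavelength (small-G) components responsible for
        charge sloshing; the G = 0 (total charge) component is left untouched.
        """
        n = rho_in.size
        g = 2.0 * np.pi * np.fft.fftfreq(n, d=box_length / n)   # reciprocal-space grid
        resid_g = np.fft.fft(rho_out - rho_in)
        precond = g**2 / (g**2 + q0**2)
        return rho_in + alpha * np.real(np.fft.ifft(precond * resid_g))

    # toy residual dominated by a long-wavelength "sloshing" mode
    x = np.linspace(0, 20, 256, endpoint=False)
    rho_in = np.ones_like(x)
    rho_out = rho_in + 0.2 * np.sin(2 * np.pi * x / 20) + 0.02 * np.sin(16 * np.pi * x / 20)
    rho_new = kerker_mix(rho_in, rho_out)
    # the preconditioned update is much smaller than plain mixing with the same alpha
    print(np.linalg.norm(rho_new - rho_in) < np.linalg.norm(0.5 * (rho_out - rho_in)))
    ```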

  18. A posteriori registration and subtraction of periapical radiographs for the evaluation of external apical root resorption after orthodontic treatment

    PubMed Central

    Chibinski, Ana Cláudia; Coelho, Ulisses; Wambier, Letícia Stadler; Zedebski, Rosário de Arruda Moura; de Moraes, Mari Eli Leonelli; de Moraes, Luiz Cesar

    2016-01-01

    Purpose This study employed a posteriori registration and subtraction of radiographic images to quantify the apical root resorption in maxillary permanent central incisors after orthodontic treatment, and assessed whether the external apical root resorption (EARR) was related to a range of parameters involved in the treatment. Materials and Methods A sample of 79 patients (mean age, 13.5±2.2 years) with no history of trauma or endodontic treatment of the maxillary permanent central incisors was selected. Periapical radiographs taken before and after orthodontic treatment were digitized and imported to the Regeemy software. Based on an analysis of the posttreatment radiographs, the length of the incisors was measured using Image J software. The mean EARR was described in pixels and relative root resorption (%). The patient's age and gender, tooth extraction, use of elastics, and treatment duration were evaluated to identify possible correlations with EARR. Results The mean EARR observed was 15.44±12.1 pixels (5.1% resorption). No differences in the mean EARR were observed according to patient characteristics (gender, age) or treatment parameters (use of elastics, treatment duration). The only parameter that influenced the mean EARR of a patient was the need for tooth extraction. Conclusion A posteriori registration and subtraction of periapical radiographs was a suitable method to quantify EARR after orthodontic treatment, and the need for tooth extraction increased the extent of root resorption after orthodontic treatment. PMID:27051635

  19. A priori and a posteriori approaches for finding genes of evolutionary interest in non-model species: osmoregulatory genes in the kidney transcriptome of the desert rodent Dipodomys spectabilis (banner-tailed kangaroo rat).

    PubMed

    Marra, Nicholas J; Eo, Soo Hyung; Hale, Matthew C; Waser, Peter M; DeWoody, J Andrew

    2012-12-01

    One common goal in evolutionary biology is the identification of genes underlying adaptive traits of evolutionary interest. Recently next-generation sequencing techniques have greatly facilitated such evolutionary studies in species otherwise depauperate of genomic resources. Kangaroo rats (Dipodomys sp.) serve as exemplars of adaptation in that they inhabit extremely arid environments, yet require no drinking water because of ultra-efficient kidney function and osmoregulation. As a basis for identifying water conservation genes in kangaroo rats, we conducted a priori bioinformatics searches in model rodents (Mus musculus and Rattus norvegicus) to identify candidate genes with known or suspected osmoregulatory function. We then obtained 446,758 reads via 454 pyrosequencing to characterize genes expressed in the kidney of banner-tailed kangaroo rats (Dipodomys spectabilis). We also determined candidates a posteriori by identifying genes that were overexpressed in the kidney. The kangaroo rat sequences revealed nine different a priori candidate genes predicted from our Mus and Rattus searches, as well as 32 a posteriori candidate genes that were overexpressed in kidney. Mutations in two of these genes, Slc12a1 and Slc12a3, cause human renal diseases that result in the inability to concentrate urine. These genes are likely key determinants of physiological water conservation in desert rodents. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. A discontinuous Galerkin method with a bound preserving limiter for the advection of non-diffusive fields in solid Earth geodynamics

    NASA Astrophysics Data System (ADS)

    He, Ying; Puckett, Elbridge Gerry; Billen, Magali I.

    2017-02-01

    Mineral composition has a strong effect on the properties of rocks and is an essentially non-diffusive property in the context of large-scale mantle convection. Due to the non-diffusive nature and the origin of compositionally distinct regions in the Earth the boundaries between distinct regions can be nearly discontinuous. While there are different methods for tracking rock composition in numerical simulations of mantle convection, one must consider trade-offs between computational cost, accuracy or ease of implementation when choosing an appropriate method. Existing methods can be computationally expensive, cause over-/undershoots, smear sharp boundaries, or are not easily adapted to tracking multiple compositional fields. Here we present a Discontinuous Galerkin method with a bound preserving limiter (abbreviated as DG-BP) using a second order Runge-Kutta, strong stability-preserving time discretization method for the advection of non-diffusive fields. First, we show that the method is bound-preserving for a point-wise divergence free flow (e.g., a prescribed circular flow in a box). However, using standard adaptive mesh refinement (AMR) there is an over-shoot error (2%) because the cell average is not preserved during mesh coarsening. The effectiveness of the algorithm for convection-dominated flows is demonstrated using the falling box problem. We find that the DG-BP method maintains sharper compositional boundaries (3-5 elements) as compared to an artificial entropy-viscosity method (6-15 elements), although the over-/undershoot errors are similar. When used with AMR the DG-BP method results in fewer degrees of freedom due to smaller regions of mesh refinement in the neighborhood of the discontinuity. However, using Taylor-Hood elements and a uniform mesh there is an over-/undershoot error on the order of 0.0001%, but this error increases to 0.01-0.10% when using AMR. Therefore, for research problems in which a continuous field method is desired the DG-BP method can provide improved tracking of sharp compositional boundaries. For applications in which strict bound-preserving behavior is desired, use of an element that provides a divergence-free condition on the weak formulation (e.g., Raviart-Thomas) and an improved mesh coarsening scheme for the AMR are required.
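
    A widely used ingredient of bound-preserving DG schemes is a scaling limiter that compresses each cell's polynomial toward its cell average just enough to respect prescribed bounds. The sketch below shows that operation in isolation, using a plain nodal mean as a stand-in for the quadrature-weighted cell average; it is only indicative of this class of limiter, not the authors' DG-BP implementation.

    ```python
    import numpy as np

    def bound_preserving_limit(node_values, lower=0.0, upper=1.0):
        """Scaling (Zhang-Shu type) limiter applied cell by cell.

        node_values has shape (n_cells, n_nodes): the DG solution evaluated at the
        nodal points of each cell. Each cell's values are compressed toward the
        cell average just enough that they stay in [lower, upper]; the average
        itself is unchanged, so conservation is preserved as long as the averages
        already lie inside the bounds.
        """
        avg = node_values.mean(axis=1, keepdims=True)
        max_v = node_values.max(axis=1, keepdims=True)
        min_v = node_values.min(axis=1, keepdims=True)
        eps = 1e-13
        theta = np.minimum(1.0, np.minimum(
            (upper - avg) / np.maximum(max_v - avg, eps),
            (avg - lower) / np.maximum(avg - min_v, eps)))
        return avg + theta * (node_values - avg)

    # one cell overshooting above 1 and one undershooting below 0
    cells = np.array([[1.08, 0.96, 0.70, 0.40],
                      [0.30, 0.10, -0.05, 0.02]])
    print(bound_preserving_limit(cells))   # all values now lie in [0, 1]
    ```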

  1. The NJOY Nuclear Data Processing System, Version 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macfarlane, Robert; Muir, Douglas W.; Boicourt, R. M.

    The NJOY Nuclear Data Processing System, version 2016, is a comprehensive computer code package for producing pointwise and multigroup cross sections and related quantities from evaluated nuclear data in the ENDF-4 through ENDF-6 legacy card-image formats. NJOY works with evaluated files for incident neutrons, photons, and charged particles, producing libraries for a wide variety of particle transport and reactor analysis codes.

  2. A posteriori noise estimation in variable data sets. With applications to spectra and light curves

    NASA Astrophysics Data System (ADS)

    Czesla, S.; Molle, T.; Schmitt, J. H. M. M.

    2018-01-01

    Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise are accurately known prior to the measurement so that both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, requiring a sufficiently well-sampled data set to yield reliable results. This procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate the applicability of our procedure, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines for applying the procedure in situations not explicitly considered here to promote its adoption in data analysis.
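
    For orientation, a noise estimator in the spirit of the DER_SNR algorithm referenced above can be written in a few lines: it takes the median absolute value of a short, signal-cancelling combination of samples and rescales it to a Gaussian 1-sigma. The scaling constant below corresponds to one common parameter setting and is an assumption, not a transcription of the paper's procedure.

    ```python
    import numpy as np

    def der_snr_noise(flux):
        """Noise (1-sigma) estimate in the spirit of the DER_SNR algorithm.

        Uses the median absolute value of the combination 2*f[i] - f[i-2] - f[i+2],
        scaled so the estimate is unbiased for independent Gaussian noise; the
        underlying signal is assumed smooth on the scale of a few pixels.
        """
        f = np.asarray(flux, dtype=float)
        comb = 2.0 * f[2:-2] - f[:-4] - f[4:]
        return 1.482602 / np.sqrt(6.0) * np.median(np.abs(comb))

    rng = np.random.default_rng(7)
    x = np.linspace(0, 1, 2000)
    spectrum = 10.0 + np.sin(8 * np.pi * x) + rng.normal(0, 0.25, x.size)
    print(der_snr_noise(spectrum))   # should be close to the injected 0.25
    ```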

  3. Assessment of multireference approaches to explicitly correlated full configuration interaction quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersten, J. A. F., E-mail: jennifer.kersten@cantab.net; Alavi, Ali, E-mail: a.alavi@fkf.mpg.de; Max Planck Institute for Solid State Research, Heisenbergstraße 1, 70569 Stuttgart

    2016-08-07

    The Full Configuration Interaction Quantum Monte Carlo (FCIQMC) method has proved able to provide near-exact solutions to the electronic Schrödinger equation within a finite orbital basis set, without relying on an expansion about a reference state. However, a drawback to the approach is that being based on an expansion of Slater determinants, the FCIQMC method suffers from a basis set incompleteness error that decays very slowly with the size of the employed single particle basis. The FCIQMC results obtained in a small basis set can be improved significantly with explicitly correlated techniques. Here, we present a study that assesses and compares two contrasting “universal” explicitly correlated approaches that fit into the FCIQMC framework: the [2]_R12 method of Kong and Valeev [J. Chem. Phys. 135, 214105 (2011)] and the explicitly correlated canonical transcorrelation approach of Yanai and Shiozaki [J. Chem. Phys. 136, 084107 (2012)]. The former is an a posteriori internally contracted perturbative approach, while the latter transforms the Hamiltonian prior to the FCIQMC simulation. These comparisons are made across the 55 molecules of the G1 standard set. We found that both methods consistently reduce the basis set incompleteness, for accurate atomization energies in small basis sets, reducing the error from 28 mE_h to 3-4 mE_h. While many of the conclusions hold in general for any combination of multireference approaches with these methodologies, we also consider FCIQMC-specific advantages of each approach.

  4. Dietary patterns and risk of colorectal adenoma: a systematic review and meta-analysis of observational studies.

    PubMed

    Godos, J; Bella, F; Torrisi, A; Sciacca, S; Galvano, F; Grosso, G

    2016-12-01

    Current evidence suggests that dietary patterns may play an important role in colorectal cancer risk. The present study aimed to perform a systematic review and meta-analysis of observational studies exploring the association between dietary patterns and colorectal adenomas (a precancerous condition). The PubMed and EMBASE electronic databases were systematically searched to retrieve eligible studies. Only studies exploring the risk or association with colorectal adenomas for the highest versus lowest category of exposure to a posteriori dietary patterns were included in the quantitative analysis. Random-effects models were applied to calculate relative risks (RRs) of colorectal adenomas for high adherence to healthy or unhealthy dietary patterns. Statistical heterogeneity and publication bias were explored. Twelve studies were reviewed. Three studies explored a priori dietary patterns using scores identifying adherence to the Mediterranean, Paleolithic and Dietary Approaches to Stop Hypertension (DASH) diets and reported an association with decreased colorectal adenoma risk. Two studies tested the association between a posteriori dietary patterns and colorectal adenomas, showing lower odds of disease for plant-based compared with meat-based dietary patterns. Seven studies identified 23 a posteriori dietary patterns, and the analysis revealed that higher adherence to healthy and unhealthy dietary patterns was significantly associated with the risk of colorectal adenomas (RR = 0.81, 95% confidence interval = 0.71-0.94 and RR = 1.24, 95% confidence interval = 1.13-1.35, respectively), with no evidence of heterogeneity or publication bias. The results of this systematic review and meta-analysis indicate that dietary patterns may be associated with the risk of colorectal adenomas. © 2016 The British Dietetic Association Ltd.
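
    The random-effects pooling referred to above is commonly done with the DerSimonian-Laird estimator; a minimal sketch is given below. The input studies are invented for illustration and are not the studies included in this meta-analysis.

    ```python
    import numpy as np

    def random_effects_rr(rr, ci_low, ci_high):
        """DerSimonian-Laird random-effects pooled relative risk.

        rr, ci_low, ci_high are per-study relative risks with 95% confidence limits;
        standard errors are recovered on the log scale from the CI width.
        """
        y = np.log(np.asarray(rr, float))                       # study effects (log RR)
        se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)    # log-scale standard errors
        w = 1.0 / se**2                                          # fixed-effect weights
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed)**2)                         # Cochran's Q
        k = y.size
        tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_star = 1.0 / (se**2 + tau2)                            # random-effects weights
        pooled = np.sum(w_star * y) / np.sum(w_star)
        se_pooled = np.sqrt(1.0 / np.sum(w_star))
        return (np.exp(pooled),
                np.exp(pooled - 1.96 * se_pooled),
                np.exp(pooled + 1.96 * se_pooled))

    # made-up example studies (not those analysed in this record)
    print(random_effects_rr(rr=[0.75, 0.85, 0.80, 0.95],
                            ci_low=[0.60, 0.70, 0.62, 0.78],
                            ci_high=[0.94, 1.03, 1.03, 1.16]))
    ```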

  5. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  6. Shape Optimization by Bayesian-Validated Computer-Simulation Surrogates

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1997-01-01

    A nonparametric-validated, surrogate approach to optimization has been applied to the computational optimization of eddy-promoter heat exchangers and to the experimental optimization of a multielement airfoil. In addition to the baseline surrogate framework, a surrogate-Pareto framework has been applied to the two-criteria, eddy-promoter design problem. The Pareto analysis improves the predictability of the surrogate results, preserves generality, and provides a means to rapidly determine design trade-offs. Significant contributions have been made in the geometric description used for the eddy-promoter inclusions as well as to the surrogate framework itself. A level-set based, geometric description has been developed to define the shape of the eddy-promoter inclusions. The level-set technique allows for topology changes (from single-body,eddy-promoter configurations to two-body configurations) without requiring any additional logic. The continuity of the output responses for input variations that cross the boundary between topologies has been demonstrated. Input-output continuity is required for the straightforward application of surrogate techniques in which simplified, interpolative models are fitted through a construction set of data. The surrogate framework developed previously has been extended in a number of ways. First, the formulation for a general, two-output, two-performance metric problem is presented. Surrogates are constructed and validated for the outputs. The performance metrics can be functions of both outputs, as well as explicitly of the inputs, and serve to characterize the design preferences. By segregating the outputs and the performance metrics, an additional level of flexibility is provided to the designer. The validated outputs can be used in future design studies and the error estimates provided by the output validation step still apply, and require no additional appeals to the expensive analysis. Second, a candidate-based a posteriori error analysis capability has been developed which provides probabilistic error estimates on the true performance for a design randomly selected near the surrogate-predicted optimal design.

  7. Udzawa-type iterative method with parareal preconditioner for a parabolic optimal control problem

    NASA Astrophysics Data System (ADS)

    Lapin, A.; Romanenko, A.

    2016-11-01

    The article deals with an optimal control problem whose state problem is a parabolic equation. There are point-wise constraints on the state and control functions. The objective functional involves observations given in the domain at each moment in time. Conditions for the convergence of the Udzawa-type iterative method are given, and a parareal method is used to construct the inverse of the preconditioner. The results of calculations are presented.

  8. Chaos in the brain: imaging via chaoticity of EEG/MEG signals

    NASA Astrophysics Data System (ADS)

    Kowalik, Zbigniew J.; Elbert, Thomas; Rockstroh, Brigitte; Hoke, Manfried

    1995-03-01

    The brain's electroencephalogram (EEG) or magnetoencephalogram (MEG) can be analyzed using methods of nonlinear system theory. We show that even for very short and nonstationary time series it is possible to functionally differentiate various brain activities. Usually such analysis assumes that the signals are both long and stationary, so that classic spectral methods can be used; under these circumstances even more convincing results can be obtained from dimensional analysis or estimation of the Kolmogorov entropy or the Lyapunov exponent. When measuring the spontaneous activity of a human brain, however, the assumption of stationarity is questionable, and 'static' methods (correlation dimension, entropy, etc.) are then not adequate. In this case 'dynamic' methods such as the pointwise-D2 dimension or chaoticity measures should be applied. Predictability measures in the form of local Lyapunov exponents can reveal the chaoticity of a given process directly and can be applied in practice for functional differentiation of brain activity. We exemplify this in cases of apallic syndrome, tinnitus and schizophrenia, showing that the average chaoticity differentiates brain states both in space and time in apallic syndrome, that chaoticity changes temporally (critical jumps of chaoticity) in schizophrenia, and that chaoticity changes locally across the cortex in tinnitus.

  9. Mixed mimetic spectral element method for Stokes flow: A pointwise divergence-free solution

    NASA Astrophysics Data System (ADS)

    Kreeft, Jasper; Gerritsma, Marc

    2013-05-01

    In this paper we apply the recently developed mimetic discretization method to the mixed formulation of the Stokes problem in terms of vorticity, velocity and pressure. The mimetic discretization presented in this paper and in Kreeft et al. [51] is a higher-order method for curvilinear quadrilaterals and hexahedrals. Fundamental is the underlying structure of oriented geometric objects, the relation between these objects through the boundary operator and how this defines the exterior derivative, representing the grad, curl and div, through the generalized Stokes theorem. The mimetic method presented here uses the language of differential k-forms with k-cochains as their discrete counterpart, and the relations between them in terms of the mimetic operators: reduction, reconstruction and projection. The reconstruction consists of the recently developed mimetic spectral interpolation functions. The most important result of the mimetic framework is the commutation between differentiation at the continuous level with that on the finite dimensional and discrete level. As a result operators like gradient, curl and divergence are discretized exactly. For Stokes flow, this implies a pointwise divergence-free solution. This is confirmed using a set of test cases on both Cartesian and curvilinear meshes. It will be shown that the method converges optimally for all admissible boundary conditions.

  10. Linkage Analysis of a Model Quantitative Trait in Humans: Finger Ridge Count Shows Significant Multivariate Linkage to 5q14.1

    PubMed Central

    Medland, Sarah E; Loesch, Danuta Z; Mdzewski, Bogdan; Zhu, Gu; Montgomery, Grant W; Martin, Nicholas G

    2007-01-01

    The finger ridge count (a measure of pattern size) is one of the most heritable complex traits studied in humans and has been considered a model human polygenic trait in quantitative genetic analysis. Here, we report the results of the first genome-wide linkage scan for finger ridge count in a sample of 2,114 offspring from 922 nuclear families. Both univariate linkage to the absolute ridge count (a sum of all the ridge counts on all ten fingers), and multivariate linkage analyses of the counts on individual fingers, were conducted. The multivariate analyses yielded significant linkage to 5q14.1 (Logarithm of odds [LOD] = 3.34, pointwise-empirical p-value = 0.00025) that was predominantly driven by linkage to the ring, index, and middle fingers. The strongest univariate linkage was to 1q42.2 (LOD = 2.04, point-wise p-value = 0.002, genome-wide p-value = 0.29). In summary, the combination of univariate and multivariate results was more informative than simple univariate analyses alone. Patterns of quantitative trait loci factor loadings consistent with developmental fields were observed, and the simple pleiotropic model underlying the absolute ridge count was not sufficient to characterize the interrelationships between the ridge counts of individual fingers. PMID:17907812

  11. A guide to multi-objective optimization for ecological problems with an application to cackling goose management

    USGS Publications Warehouse

    Williams, Perry J.; Kendall, William L.

    2017-01-01

    Choices in ecological research and management are the result of balancing multiple, often competing, objectives. Multi-objective optimization (MOO) is a formal decision-theoretic framework for solving multiple objective problems. MOO is used extensively in other fields including engineering, economics, and operations research. However, its application for solving ecological problems has been sparse, perhaps due to a lack of widespread understanding. Thus, our objective was to provide an accessible primer on MOO, including a review of methods common in other fields, a review of their application in ecology, and a demonstration on an applied resource management problem. A large class of methods for solving MOO problems can be separated into two strategies: modelling preferences pre-optimization (the a priori strategy), or modelling preferences post-optimization (the a posteriori strategy). The a priori strategy requires describing preferences among objectives without knowledge of how preferences affect the resulting decision. In the a posteriori strategy, the decision maker simultaneously considers a set of solutions (the Pareto optimal set) and makes a choice based on the trade-offs observed in the set. We describe several methods for modelling preferences pre-optimization, including: the bounded objective function method, the lexicographic method, and the weighted-sum method. We discuss modelling preferences post-optimization through examination of the Pareto optimal set. We applied each MOO strategy to the natural resource management problem of selecting a population target for cackling goose (Branta hutchinsii minima) abundance. Cackling geese provide food security to Native Alaskan subsistence hunters in the goose's nesting area, but depredate crops on private agricultural fields in wintering areas. We developed objective functions to represent the competing objectives related to the cackling goose population target and identified an optimal solution first using the a priori strategy, and then by examining trade-offs in the Pareto set using the a posteriori strategy. We used four approaches for selecting a final solution within the a posteriori strategy: the most common optimal solution, the most robust optimal solution, and two solutions based on maximizing a restricted portion of the Pareto set. We discuss MOO with respect to natural resource management, but MOO is sufficiently general to cover any ecological problem that contains multiple competing objectives that can be quantified using objective functions.
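
    The contrast between the a priori (weighted-sum) and a posteriori (Pareto-set) strategies can be seen in a small numerical example. The sketch below uses two invented objective functions loosely inspired by the cackling goose problem (subsistence-harvest shortfall versus crop damage); the functional forms, weights, and numbers are assumptions, not the authors' model.

    ```python
    import numpy as np

    def pareto_front(objectives):
        """Boolean mask of non-dominated rows (both objectives are minimized)."""
        n = objectives.shape[0]
        dominated = np.zeros(n, dtype=bool)
        for i in range(n):
            others = np.delete(objectives, i, axis=0)
            dominated[i] = np.any(np.all(others <= objectives[i], axis=1) &
                                  np.any(others < objectives[i], axis=1))
        return ~dominated

    # hypothetical candidate population targets (thousands of geese) and two
    # objective functions: subsistence-harvest shortfall and crop-damage cost
    targets = np.linspace(50, 300, 26)
    f_subsistence = (260 - targets).clip(min=0) ** 2 / 1e4   # worse at low abundance
    f_crop_damage = (targets - 80).clip(min=0) ** 1.5 / 1e2  # worse at high abundance
    F = np.column_stack([f_subsistence, f_crop_damage])

    # a priori strategy: scalarize with weights chosen before optimization
    w = np.array([0.7, 0.3])
    best_a_priori = targets[np.argmin(F @ w)]

    # a posteriori strategy: compute the Pareto-optimal set and choose afterwards
    pareto_targets = targets[pareto_front(F)]
    print("weighted-sum choice:", best_a_priori)
    print("Pareto-optimal targets:", pareto_targets)
    ```

    The weighted-sum approach commits to a single answer once the weights are fixed, whereas the Pareto set exposes the full range of defensible targets and the trade-offs among them, which is the practical distinction the record emphasizes.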

  12. Adaptive time stepping for fluid-structure interaction solvers

    DOE PAGES

    Mayr, M.; Wall, W. A.; Gee, M. W.

    2017-12-22

    In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost make this algorithm very appealing in all kinds of FSI applications.
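
    The abstract describes selecting the time step size from an a posteriori error estimate. The sketch below shows a standard error-based step-size controller built on that idea; the scalar estimate, the tolerance, and the clipping factors are placeholders, not the paper's fluid-structure-interface estimator.

```python
# Sketch of a standard error-based step-size controller, assuming a scalar
# a posteriori local error estimate `err` per step and a user tolerance `tol`.
# The actual paper builds its estimate from fluid, structure, and interface
# contributions; here the estimator value is a placeholder.
def new_step_size(dt, err, tol, order, safety=0.9, fmin=0.3, fmax=2.5):
    """Classic controller: scale dt by (tol/err)^(1/(order+1)), clipped."""
    if err == 0.0:
        return fmax * dt
    factor = safety * (tol / err) ** (1.0 / (order + 1))
    return dt * min(fmax, max(fmin, factor))

# Usage: accept the step if err <= tol, otherwise redo it with the smaller dt.
dt = 1e-3
err_estimate = 4e-5   # placeholder value for the a posteriori estimate
dt = new_step_size(dt, err_estimate, tol=1e-4, order=2)
print(f"next time step size: {dt:.2e} s")
```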

  14. Adaptive finite element modelling of three-dimensional magnetotelluric fields in general anisotropic media

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Xu, Zhenhuan; Li, Yuguo

    2018-04-01

    We present a goal-oriented adaptive finite element (FE) modelling algorithm for 3-D magnetotelluric fields in generally anisotropic conductivity media. The model consists of a background layered structure containing anisotropic blocks. Each block and layer can be made anisotropic by assigning to it a 3 × 3 conductivity tensor. The second-order partial differential equations are solved using the adaptive finite element method (FEM). The computational domain is subdivided into unstructured tetrahedral elements, which allow for complex geometries including bathymetry and dipping interfaces. The grid refinement process is guided by a global a posteriori error estimator and is performed iteratively. The system of linear FE equations for the electric field E is solved with the direct solver MUMPS. The magnetic field H can then be found, with the required derivatives computed numerically using cubic spline interpolation. The 3-D FE algorithm has been validated by comparisons with both a 3-D finite-difference solution and 2-D FE results. Two model types are used to demonstrate the effects of anisotropy upon 3-D magnetotelluric responses: horizontal and dipping anisotropy. Finally, a 3-D sea-hill model is simulated to study the effect of oblique interfaces and dipping anisotropy.

  15. H-P adaptive methods for finite element analysis of aerothermal loads in high-speed flows

    NASA Technical Reports Server (NTRS)

    Chang, H. J.; Bass, J. M.; Tworzydlo, W.; Oden, J. T.

    1993-01-01

    The commitment to develop the National Aerospace Plane and Maneuvering Reentry Vehicles has generated resurgent interest in the technology required to design structures for hypersonic flight. The principal objective of this research and development effort has been to formulate and implement a new class of computational methodologies for accurately predicting fine scale phenomena associated with this class of problems. The initial focus of this effort was to develop optimal h-refinement and p-enrichment adaptive finite element methods which utilize a-posteriori estimates of the local errors to drive the adaptive methodology. Over the past year this work has specifically focused on two issues which are related to the overall performance of a flow solver. These issues include the formulation and implementation (in two dimensions) of an implicit/explicit flow solver compatible with the hp-adaptive methodology, and the design and implementation of a computational algorithm for automatically selecting optimal directions in which to enrich the mesh. These concepts and algorithms have been implemented in a two-dimensional finite element code and used to solve three hypersonic flow benchmark problems (Holden Mach 14.1, Edney shock on shock interaction Mach 8.03, and the viscous backstep Mach 4.08).

  16. Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case

    NASA Astrophysics Data System (ADS)

    Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann

    2017-04-01

    Short-term ocean analyses for Sea Surface Temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique, called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is capable of preventing overfitting, although the best performance is achieved when we add correlation to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST. Lower RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the highest-quality ensemble members), and the least-squares solution being filtered a posteriori.
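
    A minimal sketch of the super-ensemble combination step follows: multi-linear regression weights are fit for the ensemble members against observations over a short training window and then applied to new forecasts. The EOF filtering and the a posteriori spatial filter mentioned above are omitted, and all data are synthetic.

```python
# Minimal sketch of the multi-linear regression super-ensemble idea.
# EOF filtering and the spatial post-filter are omitted; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_members = 15, 5           # 15-day training period, 5 members
truth = rng.normal(20.0, 1.0, n_train)
members = truth[:, None] + rng.normal(0.0, 0.5, (n_train, n_members))

X = np.column_stack([np.ones(n_train), members])      # intercept + members
coef, *_ = np.linalg.lstsq(X, truth, rcond=None)      # least-squares weights

new_forecasts = truth[-1] + rng.normal(0.0, 0.5, n_members)
sse_analysis = coef[0] + new_forecasts @ coef[1:]
print(f"super-ensemble SST analysis: {sse_analysis:.2f} degC")
```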

  17. Comparison of soft-input-soft-output detection methods for dual-polarized quadrature duobinary system

    NASA Astrophysics Data System (ADS)

    Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan

    2018-02-01

    Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Although Max-log-MAP-based detection has the best performance, it requires the returning-to-initial-states (RTIS) process. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the proposed SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at a BER of 10^-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems offers a good trade-off among transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.

  18. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves)

    NASA Astrophysics Data System (ADS)

    Harmening, Corinna; Neuner, Hans

    2016-09-01

    Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy change from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
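
    The sketch below illustrates model selection with the Akaike and Bayesian Information Criteria under Gaussian errors, as discussed above. Polynomial fits stand in for B-spline fits, the data are synthetic, and for an actual B-spline curve the parameter count k would be tied to the number of control points.

```python
# Sketch of choosing model complexity with AIC/BIC under Gaussian errors.
# Polynomial fits stand in for B-spline fits; data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)
n = x.size

for k in (3, 5, 8, 10, 12):                       # candidate parameter counts
    coeffs = np.polyfit(x, y, deg=k - 1)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    print(f"k={k:2d}  AIC={aic:8.1f}  BIC={bic:8.1f}")
# The k minimizing AIC or BIC is the "optimal" complexity under that criterion.
```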

  19. Energy balance and mass conservation in reduced order models of fluid flows

    NASA Astrophysics Data System (ADS)

    Mohebujjaman, Muhammad; Rebholz, Leo G.; Xie, Xuping; Iliescu, Traian

    2017-10-01

    In this paper, we investigate theoretically and computationally the conservation properties of reduced order models (ROMs) for fluid flows. Specifically, we investigate whether the ROMs satisfy the same (or similar) energy balance and mass conservation as those satisfied by the Navier-Stokes equations. All of our theoretical findings are illustrated and tested in numerical simulations of a 2D flow past a circular cylinder at a Reynolds number Re = 100. First, we investigate the ROM energy balance. We show that using the snapshot average for the centering trajectory (which is a popular treatment of nonhomogeneous boundary conditions in ROMs) yields an incorrect energy balance. Then, we propose a new approach, in which we replace the snapshot average with the Stokes extension. Theoretically, the Stokes extension produces an accurate energy balance. Numerically, the Stokes extension yields more accurate results than the standard snapshot average, especially for longer time intervals. Our second contribution centers around ROM mass conservation. We consider ROMs created using two types of finite elements: the standard Taylor-Hood (TH) element, which satisfies the mass conservation weakly, and the Scott-Vogelius (SV) element, which satisfies the mass conservation pointwise. Theoretically, the error estimates for the SV-ROM are sharper than those for the TH-ROM. Numerically, the SV-ROM yields significantly more accurate results, especially for coarser meshes and longer time intervals.

  20. Multispectral multisensor image fusion using wavelet transforms

    USGS Publications Warehouse

    Lemeshewsky, George P.

    1999-01-01

    Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift-invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift-variant discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensors' point spread function. The SIDWT-based technique produced imagery with fewer artifacts and lower error between the fused images and the full-resolution reference. Image examples with TM and SPOT 10-m panchromatic imagery illustrate the reduction in artifacts due to SIDWT-based fusion.

  1. Verification test of the SURF and SURFplus models in xRage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menikoff, Ralph

    2016-05-18

    As a verification test of the SURF and SURFplus models in the xRage code we use a propagating underdriven detonation wave in 1-D. This is one of the only test cases for which an accurate solution can be determined based on the theoretical structure of the solution. The solution consists of a steady ZND reaction zone profile joined with a scale-invariant rarefaction or Taylor wave and followed by a constant state. The end of the reaction profile and the head of the rarefaction coincide with the sonic CJ state of the detonation wave. The constant state is required to match a rigid wall boundary condition. For a test case, we use PBX 9502 with the same EOS and burn rate as previously used to test the shock detector algorithm utilized by the SURF model. The detonation wave is propagated for 10 μs (slightly under 80 mm). As expected, the pointwise errors are largest in the neighborhood of discontinuities; the pressure discontinuity at the lead shock front and the pressure derivative discontinuities at the head and tail of the rarefaction. As a quantitative measure of the overall accuracy, the L2 norm of the difference between the numerical pressure and the exact solution is used. Results are presented for simulations using both a uniform grid and an adaptive grid that refines the reaction zone.
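
    The quantitative error measure mentioned above is the L2 norm of the difference between the numerical and exact pressure. A minimal sketch on a uniform grid follows; the pressure profiles are placeholders, not the PBX 9502 solution.

```python
# Sketch of a discrete L2 error norm between numerical and exact pressure
# profiles on a uniform 1-D grid. The profiles below are placeholders.
import numpy as np

x = np.linspace(0.0, 0.08, 801)          # 80 mm domain, uniform grid
dx = x[1] - x[0]
p_exact = np.where(x < 0.05, 30.0, 10.0)                  # placeholder profile
p_num = p_exact + 0.5 * np.exp(-((x - 0.05) / 1e-3) ** 2)  # error near the shock

l2_error = np.sqrt(np.sum((p_num - p_exact) ** 2) * dx)
print(f"L2 pressure error: {l2_error:.3e}")
```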

  2. Language Recognition via Sparse Coding

    DTIC Science & Technology

    2016-09-08

    a posteriori (MAP) adaptation scheme that further optimizes the discriminative quality of sparse-coded speech features. We empirically validate the...significantly improve the discriminative quality of sparse-coded speech features. In Section 4, we evaluate the proposed approaches against an i-vector

  3. Global a priori estimates for the inhomogeneous Landau equation with moderately soft potentials

    NASA Astrophysics Data System (ADS)

    Cameron, Stephen; Silvestre, Luis; Snelson, Stanley

    2018-05-01

    We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.

  4. Pointwise approximation of modified conjugate functions by matrix operators of conjugate Fourier series of [Formula: see text]-periodic functions.

    PubMed

    Kubiak, Mateusz; Łenski, Włodzimierz; Szal, Bogdan

    2018-01-01

    We extend the results of Xh. Z. Krasniqi (Acta Comment. Univ. Tartu Math. 17:89-101, 2013) and the authors (Acta Comment. Univ. Tartu Math. 13:11-24, 2009; Proc. Est. Acad. Sci. 67:50-60, 2018) to the case when the considered function is [Formula: see text]-periodic and the measure of approximation depends on the r-differences of the entries of the considered matrices.

  5. Towards an Optimal Noise Versus Resolution Trade-Off in Wind Scatterometry

    NASA Technical Reports Server (NTRS)

    Williams, Brent A.

    2011-01-01

    A scatterometer is a radar that measures the normalized radar cross section sigma(sup 0) of the Earth's surface. Over the ocean this signal is related to the wind via the geophysical model function (GMF). The objective of wind scatterometry is to estimate the wind vector field from sigma(sup 0) measurements; however, there are many subtleties that complicate this problem, making it difficult to obtain a unique wind field estimate. Conventionally, wind estimation is split into two stages: a wind retrieval stage in which several ambiguous solutions are obtained, and an ambiguity removal stage in which ambiguities are chosen to produce an appropriate wind vector field estimate. The most common approach to wind field estimation is to grid the scatterometer swath into wind vector cells and estimate wind vector ambiguities independently for each cell. Then, fieldwise structure is imposed on the solution by an ambiguity selection routine. Although this approach is simple and practical, it neglects fieldwise structure in the retrieval step and does not account for the spatial correlation imposed by the sampling. This makes it difficult to develop a theoretically appropriate noise versus resolution trade-off using pointwise retrieval. Fieldwise structure may be imposed in the retrieval step using a model-based approach. However, this approach is generally only practical if a low order wind field model is applied, which may discard more information than is desired. Furthermore, model-based approaches do not account for the structure imposed by the sampling. A more general fieldwise approach is to estimate all the wind vectors for all the WVCs simultaneously from all the measurements. This approach can account for structure of the wind field as well as structure imposed by the sampling in the wind retrieval step. Williams and Long in 2010 developed a fieldwise retrieval method based on maximum a posteriori (MAP) estimation. This MAP approach can be extended to perform a noise versus resolution trade-off and to deal with ambiguity selection. This paper extends the fieldwise MAP estimation approach and investigates both the noise versus resolution trade-off as well as ambiguity removal in the fieldwise wind retrieval step. The method is then applied to the SeaWinds scatterometer and the results are analyzed.
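
    The fieldwise MAP estimate discussed above can be written generically as follows; the notation is introduced here and is not taken from the paper.

```latex
\hat{\mathbf{w}}_{\mathrm{MAP}}
  = \arg\max_{\mathbf{w}} \, p(\mathbf{w} \mid \mathbf{z})
  = \arg\max_{\mathbf{w}} \left[ \log p(\mathbf{z} \mid \mathbf{w})
                               + \log p(\mathbf{w}) \right],
```

    where w stacks the wind vectors of all wind vector cells and z collects the sigma(sup 0) measurements; the likelihood follows from the GMF and the measurement noise model, and the prior p(w) encodes fieldwise spatial structure. The noise versus resolution trade-off enters through how strongly the prior constrains fine-scale variability.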

  6. Pharmacists' role in handling problems with prescriptions for antithrombotic medication in Belgian community pharmacies.

    PubMed

    Desmaele, S; De Wulf, I; Dupont, A G; Steurbaut, S

    2015-08-01

    Community pharmacists have an important task in the follow-up of patients treated with antithrombotics. When delivering these medicines, pharmacists can encounter drug-related problems (DRPs) with substantial clinical and economic impact. To investigate the amount and type of antithrombotic-related DRPs as well as how community pharmacists handled these DRPs. Belgian community pharmacies. MSc pharmacy students of six Belgian universities collected data about all DRPs encountered by a pharmacist during ten half days of their pharmacy internship. Data were registered about DRPs detected at delivery and in an a posteriori setting, when consulting the medical history of the patient. Classification of the DRP, cause of the DRP, intervention and result of the intervention were registered. Amount and type of antithrombotic-related DRPs occurring in community pharmacies, as well as how community pharmacists handled these DRPs. 3.1 % of the 15,952 registered DRPs concerned antithrombotics. 79.3 % of these DRPs were detected at delivery and 20.7 % were detected a posteriori. Most antithrombotic-related DRPs concerned problems with the choice of the drug (mainly because of drug-drug interactions) or concerned logistic problems. Almost 80 % of the antithrombotic-related DRPs were followed by an intervention of the pharmacist, mainly at the patient's level, resulting in 90.1 % of these DRPs being partially or totally solved. Different DRPs with antithrombotic medication occurred in Belgian community pharmacies. About 20 % were detected in an a posteriori setting, showing the benefit of medication review. Many of the encountered DRPs were of a technical nature (60.7 %). These DRPs were time-consuming for the pharmacist to resolve and should be prevented. Most of the DRPs could be solved, demonstrating the added value of the community pharmacist as a first-line healthcare provider.

  7. Visual field progression with frequency-doubling matrix perimetry and standard automated perimetry in patients with glaucoma and in healthy controls.

    PubMed

    Redmond, Tony; O'Leary, Neil; Hutchison, Donna M; Nicolela, Marcelo T; Artes, Paul H; Chauhan, Balwantray C

    2013-12-01

    A new analysis method called permutation of pointwise linear regression measures the significance of deterioration over time at each visual field location, combines the significance values into an overall statistic, and then determines the likelihood of change in the visual field. Because the outcome is a single P value, individualized to that specific visual field and independent of the scale of the original measurement, the method is well suited for comparing techniques with different stimuli and scales. To test the hypothesis that frequency-doubling matrix perimetry (FDT2) is more sensitive than standard automated perimetry (SAP) in identifying visual field progression in glaucoma. Patients with open-angle glaucoma and healthy controls were examined by FDT2 and SAP, both with the 24-2 test pattern, on the same day at 6-month intervals in a longitudinal prospective study conducted in a hospital-based setting. Only participants with at least 5 examinations were included. Data were analyzed with permutation of pointwise linear regression. Permutation of pointwise linear regression is individualized to each participant, in contrast to current analyses in which the statistical significance is inferred from population-based approaches. Analyses were performed with both total deviation and pattern deviation. Sixty-four patients and 36 controls were included in the study. The median age, SAP mean deviation, and follow-up period were 65 years, -2.6 dB, and 5.4 years, respectively, in patients and 62 years, +0.4 dB, and 5.2 years, respectively, in controls. Using total deviation analyses, statistically significant deterioration was identified in 17% of patients with FDT2, in 34% of patients with SAP, and in 14% of patients with both techniques; in controls these percentages were 8% with FDT2, 31% with SAP, and 8% with both. Using pattern deviation analyses, statistically significant deterioration was identified in 16% of patients with FDT2, in 17% of patients with SAP, and in 3% of patients with both techniques; in controls these values were 3% with FDT2 and none with SAP. No evidence was found that FDT2 is more sensitive than SAP in identifying visual field deterioration. In about one-third of healthy controls, age-related deterioration with SAP reached statistical significance.
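
    A rough sketch of the permutation-of-pointwise-linear-regression idea follows: fit a linear regression at each visual field location, combine the per-location significance values into one statistic, and assess it against a permutation null. The combination statistic and permutation scheme here are illustrative choices and the data are synthetic; the published method's details may differ.

```python
# Illustrative sketch: pointwise linear regression per location, a combined
# statistic (sum of -log one-sided p-values for worsening slopes), and a
# permutation null obtained by shuffling the visit order. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
t = np.arange(6, dtype=float)                       # 6 visits
series = 30 + rng.normal(0, 1.5, (6, 52))           # synthetic 24-2 field

def combined_stat(times, data):
    s = 0.0
    for loc in range(data.shape[1]):
        res = stats.linregress(times, data[:, loc])
        p_one_sided = res.pvalue / 2 if res.slope < 0 else 1 - res.pvalue / 2
        s += -np.log(max(p_one_sided, 1e-12))
    return s

observed = combined_stat(t, series)
null = [combined_stat(rng.permutation(t), series) for _ in range(500)]
p_overall = (1 + sum(v >= observed for v in null)) / (1 + len(null))
print(f"overall progression p-value: {p_overall:.3f}")
```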

  8. Aporrectodea caliginosa, a relevant earthworm species for a posteriori pesticide risk assessment: current knowledge and recommendations for culture and experimental design.

    PubMed

    Bart, Sylvain; Amossé, Joël; Lowe, Christopher N; Mougin, Christian; Péry, Alexandre R R; Pelosi, Céline

    2018-06-21

    Ecotoxicological tests with earthworms are widely used and are mandatory for the risk assessment of pesticides prior to registration and commercial use. The current model species for standardized tests is Eisenia fetida or Eisenia andrei. However, these species are absent from agricultural soils and often less sensitive to pesticides than other earthworm species found in mineral soils. To move towards a better assessment of pesticide effects on non-target organisms, there is a need to perform a posteriori tests using relevant species. The endogeic species Aporrectodea caliginosa (Savigny, 1826) is representative of cultivated fields in temperate regions and is suggested as a relevant model test species. After providing information on its taxonomy, biology, and ecology, we reviewed current knowledge concerning its sensitivity towards pesticides. Moreover, we highlighted research gaps and promising perspectives. Finally, advice and recommendations are given for the establishment of laboratory cultures and experiments using this soil-dwelling earthworm species.

  9. Automatic lung lobe segmentation using particles, thin plate splines, and maximum a posteriori estimation.

    PubMed

    Ross, James C; San José Estépar, Raúl; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K; Washko, George R

    2010-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases.

  10. Automatic Lung Lobe Segmentation Using Particles, Thin Plate Splines, and Maximum a Posteriori Estimation

    PubMed Central

    Ross, James C.; Estépar, Raúl San José; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K.; Washko, George R.

    2011-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases. PMID:20879396

  11. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    PubMed

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show the desirable performance of our method.

  12. Effects of two classification strategies on a Benthic Community Index for streams in the Northern Lakes and Forests Ecoregion

    USGS Publications Warehouse

    Butcher, Jason T.; Stewart, Paul M.; Simon, Thomas P.

    2003-01-01

    Ninety-four sites were used to analyze the effects of two different classification strategies on the Benthic Community Index (BCI). The first, a priori classification, reflected the wetland status of the streams; the second, a posteriori classification, used a bio-environmental analysis to select classification variables. Both classifications were examined by measuring classification strength and testing differences in metric values with respect to group membership. The a priori (wetland) classification strength (83.3%) was greater than the a posteriori (bio-environmental) classification strength (76.8%). Both classifications found one metric that had significant differences between groups. The original index was modified to reflect the wetland classification by re-calibrating the scoring criteria for percent Crustacea and Mollusca. A proposed refinement to the original Benthic Community Index is suggested. This study shows the importance of using hypothesis-driven classifications, as well as exploratory statistical analysis, to evaluate alternative ways to reveal environmental variability in biological assessment tools.

  13. Deformation quantizations with separation of variables on a Kähler manifold

    NASA Astrophysics Data System (ADS)

    Karabegov, Alexander V.

    1996-10-01

    We give a simple geometric description of all formal differentiable deformation quantizations on a Kähler manifold M such that for each open subset U⊂ M ⋆-multiplication from the left by a holomorphic function and from the right by an antiholomorphic function on U coincides with the pointwise multiplication by these functions. We show that these quantizations are in 1-1 correspondence with the formal deformations of the original Kähler metrics on M.

  14. More powerful haplotype sharing by accounting for the mode of inheritance.

    PubMed

    Ziegler, Andreas; Ewhida, Adel; Brendel, Michael; Kleensang, André

    2009-04-01

    The concept of haplotype sharing (HS) has received considerable attention recently, and several haplotype association methods have been proposed. Here, we extend the work of Beckmann and colleagues [2005 Hum. Hered. 59:67-78] who derived an HS statistic (BHS) as special case of Mantel's space-time clustering approach. The Mantel-type HS statistic correlates genetic similarity with phenotypic similarity across pairs of individuals. While phenotypic similarity is measured as the mean-corrected cross product of phenotypes, we propose to incorporate information of the underlying genetic model in the measurement of the genetic similarity. Specifically, for the recessive and dominant modes of inheritance we suggest the use of the minimum and maximum of shared length of haplotypes around a marker locus for pairs of individuals. If the underlying genetic model is unknown, we propose a model-free HS Mantel statistic using the max-test approach. We compare our novel HS statistics to BHS using simulated case-control data and illustrate its use by re-analyzing data from a candidate region of chromosome 18q from the Rheumatoid Arthritis (RA) Consortium. We demonstrate that our approach is point-wise valid and superior to BHS. In the re-analysis of the RA data, we identified three regions with point-wise P-values<0.005 containing six known genes (PMIP1, MC4R, PIGN, KIAA1468, TNFRSF11A and ZCCHC2) which might be worth follow-up.
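
    The Mantel-type statistic described above correlates pairwise phenotypic similarity with pairwise genetic similarity. The sketch below shows the mode-of-inheritance idea, taking the maximum (dominant) or minimum (recessive) of candidate shared haplotype lengths for each pair; the phenotypes and the sharing values are synthetic placeholders rather than real haplotype data.

```python
# Sketch of a Mantel-type haplotype-sharing statistic: correlate pairwise
# phenotypic similarity (mean-corrected cross products) with pairwise genetic
# similarity, where the pair's similarity is the max (dominant model) or min
# (recessive model) of candidate shared haplotype lengths. All data synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 40
y = rng.normal(size=n)                      # phenotypes (e.g., trait values)
# share[i, j, :]: placeholder candidate sharing lengths for the pair (i, j)
share = rng.gamma(2.0, 1.0, size=(n, n, 2))

yc = y - y.mean()
stat_dom = stat_rec = 0.0
for i in range(n):
    for j in range(i + 1, n):
        pheno_sim = yc[i] * yc[j]
        stat_dom += pheno_sim * share[i, j].max()   # dominant-model similarity
        stat_rec += pheno_sim * share[i, j].min()   # recessive-model similarity
print(f"dominant-model statistic: {stat_dom:.2f}, recessive: {stat_rec:.2f}")
```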

  15. Ground-based remote sensing of tropospheric water vapour isotopologues within the project MUSICA

    NASA Astrophysics Data System (ADS)

    Schneider, M.; Barthlott, S.; Hase, F.; González, Y.; Yoshimura, K.; García, O. E.; Sepúlveda, E.; Gomez-Pelaez, A.; Gisi, M.; Kohlhepp, R.; Dohe, S.; Blumenstock, T.; Wiegele, A.; Christner, E.; Strong, K.; Weaver, D.; Palm, M.; Deutscher, N. M.; Warneke, T.; Notholt, J.; Lejeune, B.; Demoulin, P.; Jones, N.; Griffith, D. W. T.; Smale, D.; Robinson, J.

    2012-12-01

    Within the project MUSICA (MUlti-platform remote Sensing of Isotopologues for investigating the Cycle of Atmospheric water), long-term tropospheric water vapour isotopologue data records are provided for ten globally distributed ground-based mid-infrared remote sensing stations of the NDACC (Network for the Detection of Atmospheric Composition Change). We present a new method allowing for an extensive and straightforward characterisation of the complex nature of such isotopologue remote sensing datasets. We demonstrate that the MUSICA humidity profiles are representative for most of the troposphere with a vertical resolution ranging from about 2 km (in the lower troposphere) to 8 km (in the upper troposphere) and with an estimated precision of better than 10%. We find that the sensitivity with respect to the isotopologue composition is limited to the lower and middle troposphere, whereby we estimate a precision of about 30‰ for the ratio between the two isotopologues HD16O and H216O. The measurement noise, the applied atmospheric temperature profiles, the uncertainty in the spectral baseline, and the cross-dependence on humidity are the leading error sources. We introduce an a posteriori correction method of the cross-dependence on humidity, and we recommend applying it to isotopologue ratio remote sensing datasets in general. In addition, we present mid-infrared CO2 retrievals and use them for demonstrating the MUSICA network-wide data consistency. In order to indicate the potential of long-term isotopologue remote sensing data if provided with a well-documented quality, we present a climatology and compare it to simulations of an isotope incorporated AGCM (Atmospheric General Circulation Model). We identify differences in the multi-year mean and seasonal cycles that significantly exceed the estimated errors, thereby indicating deficits in the modeled atmospheric water cycle.

  16. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    NASA Astrophysics Data System (ADS)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, the development in digital photography, micro-lens fabrication technology and computer hardware has boosted the development and led to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total focus image generation, their basic ability to generate 3D information from single camera imagery represents a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.

  17. A Modulated-Gradient Parametrization for the Large-Eddy Simulation of the Atmospheric Boundary Layer Using the Weather Research and Forecasting Model

    NASA Astrophysics Data System (ADS)

    Khani, Sina; Porté-Agel, Fernando

    2017-12-01

    The performance of the modulated-gradient subgrid-scale (SGS) model is investigated using large-eddy simulation (LES) of the neutral atmospheric boundary layer within the weather research and forecasting model. Since the model includes a finite-difference scheme for spatial derivatives, the discretization errors may affect the simulation results. We focus here on understanding the effects of finite-difference schemes on the momentum balance and the mean velocity distribution, and the requirement (or not) of the ad hoc canopy model. We find that, unlike the Smagorinsky and turbulent kinetic energy (TKE) models, the calculated mean velocity and vertical shear using the modulated-gradient model are in good agreement with Monin-Obukhov similarity theory, without the need for an extra near-wall canopy model. The structure of the near-wall turbulent eddies is better resolved using the modulated-gradient model in comparison with the classical Smagorinsky and TKE models, which are too dissipative and yield unrealistic smoothing of the smallest resolved scales. Moreover, the SGS fluxes obtained from the modulated-gradient model are much smaller near the wall in comparison with those obtained from the regular Smagorinsky and TKE models. The apparent inability of the LES model to reproduce the mean streamwise component of the momentum balance using the total (resolved plus SGS) stress near the surface is probably due to the effect of the discretization errors, which can be calculated a posteriori using the Taylor-series expansion of the resolved velocity field. Overall, we demonstrate that the modulated-gradient model is less dissipative and yields more accurate results in comparison with the classical Smagorinsky model, with similar computational costs.

  18. The effect of regularization in motion compensated PET image reconstruction: a realistic numerical 4D simulation study.

    PubMed

    Tsoumpas, C; Polycarpou, I; Thielemans, K; Buerger, C; King, A P; Schaeffter, T; Marsden, P K

    2013-03-21

    Following continuous improvement in PET spatial resolution, respiratory motion correction has become an important task. Two of the most common approaches that utilize all detected PET events to motion-correct PET data are the reconstruct-transform-average method (RTA) and motion-compensated image reconstruction (MCIR). In RTA, separate images are reconstructed for each respiratory frame, subsequently transformed to one reference frame and finally averaged to produce a motion-corrected image. In MCIR, the projection data from all frames are reconstructed by including motion information in the system matrix so that a motion-corrected image is reconstructed directly. Previous theoretical analyses have explained why MCIR is expected to outperform RTA. It has been suggested that MCIR creates less noise than RTA because the images for each separate respiratory frame will be severely affected by noise. However, recent investigations have shown that in the unregularized case RTA images can have fewer noise artefacts, while MCIR images are more quantitatively accurate but have the common salt-and-pepper noise. In this paper, we perform a realistic numerical 4D simulation study to compare the advantages gained by including regularization within reconstruction for RTA and MCIR, in particular using the median root prior incorporated in the ordered-subsets maximum a posteriori one-step-late algorithm. In this investigation we have demonstrated that MCIR with proper regularization parameters reconstructs lesions with less bias and lower root mean square error, and with CNR and standard deviation similar to regularized RTA. This finding is reproducible for a variety of noise levels (25, 50, 100 million counts), lesion sizes (8 mm, 14 mm diameter) and iterations. Nevertheless, regularized RTA can also be a practical solution for motion compensation as a proper level of regularization reduces both bias and mean square error.

  19. Special Issue: Tenth International Conference on Finite Elements in Fluids, Tucson, Arizona (Preface)

    NASA Astrophysics Data System (ADS)

    Oden, J. T.; Prudhomme, S.

    1999-09-01

    We present a new approach to deliver reliable approximations of the norm of the residuals resulting from finite element solutions to the Stokes and Oseen equations. The method is based upon a global solve in a bubble space using iterative techniques. This provides an alternative to the classical equilibrated element residual methods for which it is necessary to construct proper boundary conditions for each local problem. The method is first used to develop a global a posteriori error estimator. It is then applied in a strategy to control the numerical error in specific outputs or quantities of interest which are functions of the solutions to the Stokes and Oseen equations.

  20. Using Symbolic-Logic Matrices To Improve Confirmatory Factor Analysis Techniques.

    ERIC Educational Resources Information Center

    Creighton, Theodore B.; Coleman, Donald G.; Adams, R. C.

    A continuing and vexing problem associated with survey instrument development is the creation of items, initially, that correlate favorably a posteriori with constructs being measured. This study tests the use of symbolic-logic matrices developed by D. G. Coleman (1979) in creating factorially "pure" statistically discrete constructs in…

  1. Education, Markets and the Pedagogy of Personalisation

    ERIC Educational Resources Information Center

    Hartley, David

    2008-01-01

    The marketisation of education in England began in the 1980s. It was facilitated by national testing (which gave objective and comparable information to parents), and by the New Public Management (which introduced a posteriori funding and competition among providers). Now a new complementary phase of marketisation is being introduced:…

  2. Mind Your p's and Alphas.

    ERIC Educational Resources Information Center

    Stallings, William M.

    In the educational research literature alpha, the a priori level of significance, and p, the a posteriori probability of obtaining a test statistic of at least a certain value when the null hypothesis is true, are often confused. Explanations for this confusion are offered. Paradoxically, alpha retains a prominent place in textbook discussions of…

  3. Revisiting the “a posteriori” granddaughter design

    USDA-ARS?s Scientific Manuscript database

    An updated search for quantitative trait loci (QTLs) in the Holstein genome was conducted using the a posteriori granddaughter design. The number of Holstein sires with 100 or more genotyped and progeny-tested sons has increased from the previous 52 to 71 for a total of 14,246 sons. The bovine genom...

  4. Seismic Discrimination of Earthquakes and Explosions, with Application to the Southwestern United States

    DTIC Science & Technology

    1979-03-22

    multi-station discriminants than by those based on network averages. In spite of this situation, average a posteriori probabilities were sometimes...Technology, Pasadena, California. Allen, C. R., L. T. Silver, and F. G. Stehi (1960). Agua Blanca fault - a major transverse structure of northern Baja

  5. Using the Pearson Distribution for Synthesis of the Suboptimal Algorithms for Filtering Multi-Dimensional Markov Processes

    NASA Astrophysics Data System (ADS)

    Mit'kin, A. S.; Pogorelov, V. A.; Chub, E. G.

    2015-08-01

    We consider a method of constructing a suboptimal filter based on approximating the a posteriori probability density of a multidimensional Markov process by Pearson distributions. The proposed method can be used efficiently for approximating asymmetric, excessive, and finite densities.

  6. Lexical Diversity in Writing and Speaking Task Performances

    ERIC Educational Resources Information Center

    Yu, Guoxing

    2010-01-01

    In the rating scales of major international language tests, as well as in automated evaluation systems (e.g. e-rater), a positive relationship is often claimed between lexical diversity, holistic quality of written or spoken discourses, and language proficiency of candidates. This article reports an a posteriori validation study that analysed a…

  7. A Regional CO2 Observing System Simulation Experiment for the ASCENDS Satellite Mission

    NASA Technical Reports Server (NTRS)

    Wang, J. S.; Kawa, S. R.; Eluszkiewicz, J.; Baker, D. F.; Mountain, M.; Henderson, J.; Nehrkorn, T.; Zaccheo, T. S.

    2014-01-01

    Top-down estimates of the spatiotemporal variations in emissions and uptake of CO2 will benefit from the increasing measurement density brought by recent and future additions to the suite of in situ and remote CO2 measurement platforms. In particular, the planned NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) satellite mission will provide greater coverage in cloudy regions, at high latitudes, and at night than passive satellite systems, as well as high precision and accuracy. In a novel approach to quantifying the ability of satellite column measurements to constrain CO2 fluxes, we use a portable library of footprints (surface influence functions) generated by the WRF-STILT Lagrangian transport model in a regional Bayesian synthesis inversion. The regional Lagrangian framework is well suited to make use of ASCENDS observations to constrain fluxes at high resolution, in this case at 1 degree latitude x 1 degree longitude and weekly for North America. We consider random measurement errors only, modeled as a function of mission and instrument design specifications along with realistic atmospheric and surface conditions. We find that the ASCENDS observations could potentially reduce flux uncertainties substantially at biome and finer scales. At the 1 degree x 1 degree, weekly scale, the largest uncertainty reductions, on the order of 50 percent, occur where and when there is good coverage by observations with low measurement errors and the a priori uncertainties are large. Uncertainty reductions are smaller for a 1.57 micron candidate wavelength than for a 2.05 micron wavelength, and are smaller for the higher of the two measurement error levels that we consider (1.0 ppm vs. 0.5 ppm clear-sky error at Railroad Valley, Nevada). Uncertainty reductions at the annual, biome scale range from 40 percent to 75 percent across our four instrument design cases, and from 65 percent to 85 percent for the continent as a whole. Our uncertainty reductions at various scales are substantially smaller than those from a global ASCENDS inversion on a coarser grid, demonstrating how quantitative results can depend on inversion methodology. The a posteriori flux uncertainties we obtain, ranging from 0.01 to 0.06 Pg C yr-1 across the biomes, would meet requirements for improved understanding of long-term carbon sinks suggested by a previous study.
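
    The a posteriori flux uncertainties quoted above come from a Bayesian synthesis inversion; in standard form (notation introduced here, not the paper's) the posterior covariance and the per-element uncertainty reduction are

```latex
\hat{\mathbf{S}} = \left( \mathbf{H}^{\mathsf{T}} \mathbf{R}^{-1} \mathbf{H}
                 + \mathbf{S}_{\mathrm{prior}}^{-1} \right)^{-1},
\qquad
\mathrm{UR}_i = 1 - \sqrt{\hat{S}_{ii} / S_{\mathrm{prior},ii}},
```

    where H maps surface fluxes to column CO2 observations (here built from the WRF-STILT footprints), R is the measurement-error covariance and S_prior the prior flux covariance.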

  8. A Hybrid Approach to Composite Damage and Failure Analysis Combining Synergistic Damage Mechanics and Peridynamics

    DTIC Science & Technology

    2017-06-30

    along the intermetallic component or at the interface between the two components of the composite. The availability of microscale experimental data in...obtained with the PD model; (c) map of strain energy density; (d) the new quasi-damage index is a predictor of failure. As in the case of FRCs, one...which points are most likely to fail, before actual failure happens. The "quasi-damage index", shown in the formula below, is a point-wise measure

  9. Robust Controller Design: A Bounded-Input-Bounded-Output Worst-Case Approach

    DTIC Science & Technology

    1992-03-01

    show that 2 implies 1, suppose 1 does not hold, i.e., that ρ(M) > 1. The Perron-Frobenius theory for nonnegative matrices states that ρ(M) is itself an...P_Z denote the positive cones inside X, Z consisting of elements with nonnegative pointwise components. Define the operator A : X → Z, decomposed...topology.) The dual cone P* again consists of the nonnegative elements in Z*. The Lagrangian can be defined as L(x, z*) = <x, c*> + <Ax - b, z*>...

  10. Evaluation of two methods for using MR information in PET reconstruction

    NASA Astrophysics Data System (ADS)

    Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.

    2013-02-01

    Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods to introduce this information have been evaluated and the Bowsher prior was considered the best. Its main advantage is that it does not require image segmentation. Another method that has been widely used for incorporating MR information is using boundaries obtained by segmentation. This method has also shown improvements in image quality. In this paper, two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with Bowsher than with boundaries. CV values are 10% lower with Bowsher than with boundaries. Both methods performed better in terms of MSE and CV than using no prior, that is, maximum likelihood expectation maximization (MLEM) or MAP without anatomic information. In conclusion, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms were again shown to be effective in noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image, however, still has to be evaluated.

  11. Probabilistic models in human sensorimotor control

    PubMed Central

    Wolpert, Daniel M.

    2009-01-01

    Sensory and motor uncertainty form a fundamental constraint on human sensorimotor control. Bayesian decision theory (BDT) has emerged as a unifying framework to understand how the central nervous system performs optimal estimation and control in the face of such uncertainty. BDT has two components: Bayesian statistics and decision theory. Here we review Bayesian statistics and show how it applies to estimating the state of the world and our own body. Recent results suggest that when learning novel tasks we are able to learn the statistical properties of both the world and our own sensory apparatus so as to perform estimation using Bayesian statistics. We review studies which suggest that humans can combine multiple sources of information to form maximum likelihood estimates, can incorporate prior beliefs about possible states of the world so as to generate maximum a posteriori estimates and can use Kalman filter-based processes to estimate time-varying states. Finally, we review Bayesian decision theory in motor control and how the central nervous system processes errors to determine loss functions and optimal actions. We review results that suggest we plan movements based on statistics of our actions that result from signal-dependent noise on our motor outputs. Taken together these studies provide a statistical framework for how the motor system performs in the presence of uncertainty. PMID:17628731
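
    For the Gaussian case mentioned above (a Gaussian prior over a state combined with a Gaussian sensory likelihood), the maximum a posteriori estimate reduces to a precision-weighted average; the symbols below are introduced here for illustration.

```latex
z \mid x \sim \mathcal{N}(x, \sigma_s^2), \qquad
x \sim \mathcal{N}(\mu_p, \sigma_p^2)
\;\Longrightarrow\;
\hat{x}_{\mathrm{MAP}}
  = \frac{\sigma_s^{-2}\, z + \sigma_p^{-2}\, \mu_p}
         {\sigma_s^{-2} + \sigma_p^{-2}}.
```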

  12. Performance Enhancement of MC-CDMA System through Novel Sensitive Bit Algorithm Aided Turbo Multi User Detection

    PubMed Central

    Kumaravel, Rasadurai; Narayanaswamy, Kumaratharan

    2015-01-01

    Multi carrier code division multiple access (MC-CDMA) system is a promising multi carrier modulation (MCM) technique for high data rate wireless communication over frequency selective fading channels. The MC-CDMA system is a combination of code division multiple access (CDMA) and orthogonal frequency division multiplexing (OFDM). The OFDM part reduces multipath fading and inter symbol interference (ISI), and the CDMA part increases spectrum utilization. Advantages of this technique are its robustness in the case of multipath propagation and improved security together with minimized ISI. Nevertheless, due to the loss of orthogonality at the receiver in a mobile environment, multiple access interference (MAI) appears. The MAI is one of the factors that degrade the bit error rate (BER) performance of the MC-CDMA system. Multiuser detection (MUD) and turbo coding are the two dominant techniques for enhancing the performance of MC-CDMA systems in terms of BER, as solutions to overcome the effects of MAI. In this paper, a low-complexity iterative soft sensitive bits algorithm (SBA) aided logarithmic maximum a posteriori algorithm (Log-MAP) based turbo MUD is proposed. Simulation results show that the proposed method provides better BER performance with low-complexity decoding, by mitigating the detrimental effects of MAI. PMID:25714917

  13. Bayesian Recurrent Neural Network for Language Modeling.

    PubMed

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) is calculated as the probability of a word sequence that provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We aim to penalize the overly complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
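
    The regularized cross-entropy objective implied by a zero-mean Gaussian prior on the RNN-LM weights has the generic form below; theta denotes the network weights and lambda the regularization constant set by the prior variance (symbols introduced here, not the paper's notation).

```latex
E(\boldsymbol{\theta}) =
  -\sum_{t} \log p\!\left(w_t \mid w_{1:t-1}, \boldsymbol{\theta}\right)
  + \frac{\lambda}{2}\, \lVert \boldsymbol{\theta} \rVert^{2}.
```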

  14. Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.

    PubMed

    Rideout, Brendan P; Dosso, Stan E; Hannay, David E

    2013-09-01

    This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.

  15. Multiple-hypothesis multiple-model line tracking

    NASA Astrophysics Data System (ADS)

    Pace, Donald W.; Owen, Mark W.; Cox, Henry

    2000-07-01

    Passive sonar signal processing generally includes tracking of narrowband and/or broadband signature components observed on a Lofargram or on a Bearing-Time-Record (BTR) display. Fielded line tracking approaches to date have been recursive, single-hypothesis-oriented Kalman or alpha-beta filters, with no mechanism for considering tracking alternatives beyond the most recent scan of measurements. While adaptivity is often built into the filter to handle changing track dynamics, these approaches are still extensions of single target tracking solutions to a multiple target tracking environment. This paper describes an application of multiple-hypothesis, multiple target tracking technology to the sonar line tracking problem. A Multiple Hypothesis Line Tracker (MHLT) is developed which retains the recursive minimum-mean-square-error tracking behavior of a Kalman Filter in a maximum-a-posteriori delayed-decision multiple hypothesis context. Multiple line track filter states are developed and maintained using the interacting multiple model (IMM) state representation. Further, the data association and assignment problem is enhanced by considering line attribute information (line bandwidth and SNR) in addition to beam/bearing and frequency fit. MHLT results on real sonar data are presented to demonstrate the benefits of the multiple hypothesis approach. The utility of the system in cluttered environments and particularly in crossing line situations is shown.
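
    For reference, a minimal Kalman filter predict/update cycle of the kind that underlies each hypothesis in such a line tracker is sketched below (state = line frequency and drift rate per scan; the dynamics, noise levels, and measurements are hypothetical and not taken from the paper):

```python
import numpy as np

# State: [frequency (Hz), frequency drift (Hz/scan)]; measurement: frequency only.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # constant-drift dynamics per scan
H = np.array([[1.0, 0.0]])          # we observe only the frequency bin
Q = np.diag([1e-4, 1e-5])           # process noise (hypothetical)
R = np.array([[0.25]])              # measurement noise variance (hypothetical)

def kalman_step(x, P, z):
    """One predict/update cycle; returns the MMSE state estimate and covariance."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([100.0, 0.0]), np.eye(2)
for z in [100.2, 100.5, 100.9, 101.1]:       # simulated frequency measurements
    x, P = kalman_step(x, P, np.array([z]))
print(x)  # tracked frequency and estimated drift
```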

  16. Use of the ventricular propagated excitation model in the magnetocardiographic inverse problem for reconstruction of electrophysiological properties.

    PubMed

    Ohyu, Shigeharu; Okamoto, Yoshiwo; Kuriki, Shinya

    2002-06-01

    A novel magnetocardiographic inverse method for reconstructing the action potential amplitude (APA) and the activation time (AT) on the ventricular myocardium is proposed. This method is based on the propagated excitation model, in which the excitation propagates through the ventricle with a nonuniform action potential height. A stepwise waveform assumption for the transmembrane potential was introduced in the model. The spatial gradient of the transmembrane potential, which is defined by the APA and AT distributed in the ventricular wall, is used to compute the current source distribution. Based on this source model, the distributions of APA and AT are inversely reconstructed from the QRS interval of the magnetocardiogram (MCG) utilizing a maximum a posteriori approach. The proposed reconstruction method was tested through computer simulations. Stability of the method with respect to measurement noise was demonstrated. When reference APA was provided as a uniform distribution, root-mean-square errors of estimated APA were below 10 mV for MCG signal-to-noise ratios greater than, or equal to, 20 dB. Low-amplitude regions located at several sites in reference APA distributions were correctly reproduced in reconstructed APA distributions. The goal of our study is to develop a method for detecting myocardial ischemia through the depression of reconstructed APA distributions.

  17. Jacobian projection reduced-order models for dynamic systems with contact nonlinearities

    NASA Astrophysics Data System (ADS)

    Gastaldi, Chiara; Zucca, Stefano; Epureanu, Bogdan I.

    2018-02-01

    In structural dynamics, the prediction of the response of systems with localized nonlinearities, such as friction dampers, is of particular interest. This task becomes especially cumbersome when high-resolution finite element models are used. While state-of-the-art techniques such as Craig-Bampton component mode synthesis are employed to generate reduced order models, the interface (nonlinear) degrees of freedom must still be solved in-full. For this reason, a new generation of specialized techniques capable of reducing linear and nonlinear degrees of freedom alike is emerging. This paper proposes a new technique that exploits spatial correlations in the dynamics to compute a reduction basis. The basis is composed of a set of vectors obtained using the Jacobian of partial derivatives of the contact forces with respect to nodal displacements. These basis vectors correspond to specifically chosen boundary conditions at the contacts over one cycle of vibration. The technique is shown to be effective in the reduction of several models studied using multiple harmonics with a coupled static solution. In addition, this paper addresses another challenge common to all reduction techniques: it presents and validates a novel a posteriori error estimate capable of evaluating the quality of the reduced-order solution without involving a comparison with the full-order solution.

  18. Comparison of Grouping Schemes for Exposure to Total Dust in Cement Factories in Korea.

    PubMed

    Koh, Dong-Hee; Kim, Tae-Woo; Jang, Seung Hee; Ryu, Hyang-Woo; Park, Donguk

    2015-08-01

    The purpose of this study was to evaluate grouping schemes for exposure to total dust in cement industry workers using non-repeated measurement data. In total, 2370 total dust measurements taken from nine Portland cement factories in 1995-2009 were analyzed. Various grouping schemes were generated based on work process, job, factory, or average exposure. To characterize variance components of each grouping scheme, we developed mixed-effects models with a B-spline time trend incorporated as fixed effects and a grouping variable incorporated as a random effect. Using the estimated variance components, elasticity was calculated. To compare the prediction performances of different grouping schemes, 10-fold cross-validation tests were conducted, and root mean squared errors and pooled correlation coefficients were calculated for each grouping scheme. The five exposure groups created a posteriori by ranking job and factory combinations according to average dust exposure showed the best prediction performance and highest elasticity among various grouping schemes. Our findings suggest that a grouping method based on ranking of job and factory combinations would be the optimal choice in this population. Our grouping method may aid exposure assessment efforts in similar occupational settings, minimizing the misclassification of exposures. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  19. Bayesian Orbit Computation Tools for Objects on Geocentric Orbits

    NASA Astrophysics Data System (ADS)

    Virtanen, J.; Granvik, M.; Muinonen, K.; Oszkiewicz, D.

    2013-08-01

    We consider the space-debris orbital inversion problem via the concept of Bayesian inference. The methodology was put forward for the orbital analysis of solar system small bodies in the early 1990s [7] and results in a full solution of the statistical inverse problem given in terms of the a posteriori probability density function (PDF) for the orbital parameters. We demonstrate the applicability of our statistical orbital analysis software to Earth orbiting objects, using both well-established Monte Carlo (MC) techniques (for a review, see e.g. [13]) and recently developed Markov-chain MC (MCMC) techniques (e.g., [9]). In particular, we exploit the novel virtual observation MCMC method [8], which is based on the characterization of the phase-space volume of orbital solutions before the actual MCMC sampling. Our statistical methods and the resulting PDFs immediately enable probabilistic impact predictions to be carried out. Furthermore, this can also be done readily for very sparse data sets and data sets of poor quality, provided that some a priori information on the observational uncertainty is available. For asteroids, impact probabilities with the Earth from the discovery night onwards have been provided, e.g., by [11] and [10]; the latter study includes the sampling of the observational-error standard deviation as a random variable.
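
    The MCMC sampling of an orbital-parameter posterior referred to above can be illustrated with a generic random-walk Metropolis-Hastings sketch (a toy one-dimensional posterior stands in for the full orbital-element PDF; this is not the authors' virtual-observation method):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta):
    """Toy unnormalized log-posterior (stand-in for an orbital-element PDF)."""
    return -0.5 * ((theta - 1.5) / 0.3) ** 2

def metropolis(n_samples, step=0.2, theta0=0.0):
    """Random-walk Metropolis-Hastings sampler."""
    samples = np.empty(n_samples)
    theta, logp = theta0, log_posterior(theta0)
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal()
        logp_prop = log_posterior(prop)
        if np.log(rng.random()) < logp_prop - logp:   # accept/reject step
            theta, logp = prop, logp_prop
        samples[i] = theta
    return samples

chain = metropolis(20000)
print(chain[5000:].mean(), chain[5000:].std())  # should approach 1.5 and 0.3
```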

  20. BAYESIAN PROTEIN STRUCTURE ALIGNMENT.

    PubMed

    Rodriguez, Abel; Schmidler, Scott C

    The analysis of the three-dimensional structure of proteins is an important topic in molecular biochemistry. Structure plays a critical role in defining the function of proteins and is more strongly conserved than amino acid sequence over evolutionary timescales. A key challenge is the identification and evaluation of structural similarity between proteins; such analysis can aid in understanding the role of newly discovered proteins and help elucidate evolutionary relationships between organisms. Computational biologists have developed many clever algorithmic techniques for comparing protein structures, however, all are based on heuristic optimization criteria, making statistical interpretation somewhat difficult. Here we present a fully probabilistic framework for pairwise structural alignment of proteins. Our approach has several advantages, including the ability to capture alignment uncertainty and to estimate key "gap" parameters which critically affect the quality of the alignment. We show that several existing alignment methods arise as maximum a posteriori estimates under specific choices of prior distributions and error models. Our probabilistic framework is also easily extended to incorporate additional information, which we demonstrate by including primary sequence information to generate simultaneous sequence-structure alignments that can resolve ambiguities obtained using structure alone. This combined model also provides a natural approach for the difficult task of estimating evolutionary distance based on structural alignments. The model is illustrated by comparison with well-established methods on several challenging protein alignment examples.

  1. Calibration methods influence quantitative material decomposition in photon-counting spectral CT

    NASA Astrophysics Data System (ADS)

    Curtis, Tyler E.; Roeder, Ryan K.

    2017-03-01

    Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
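
    The image-domain decomposition step described above amounts to solving, for each voxel, a small linear system built from the calibrated material basis matrix. A least-squares sketch follows (basis values are hypothetical, and a simple Gaussian-noise least-squares solve stands in for the full maximum a posteriori estimator used in the study):

```python
import numpy as np

# Rows: energy bins; columns: basis materials (e.g., contrast agent, calcium, water).
# Values are hypothetical attenuation coefficients obtained from a calibration fit.
M = np.array([[4.0, 1.2, 0.30],
              [3.1, 1.0, 0.28],
              [2.2, 0.8, 0.25],
              [5.5, 0.7, 0.22],   # bin straddling the contrast agent k-edge
              [4.6, 0.6, 0.20]])

def decompose(voxel_mu, basis=M):
    """Per-voxel material concentrations by linear least squares."""
    c, *_ = np.linalg.lstsq(basis, voxel_mu, rcond=None)
    return c

true_c = np.array([0.6, 0.1, 1.0])                    # hypothetical ground truth
voxel = M @ true_c + 0.01 * np.random.default_rng(1).standard_normal(5)
print(decompose(voxel))                               # ~ [0.6, 0.1, 1.0]
```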

  2. A reconstruction method of intra-ventricular blood flow using color flow ultrasound: a simulation study

    NASA Astrophysics Data System (ADS)

    Jang, Jaeseong; Ahn, Chi Young; Jeon, Kiwan; Choi, Jung-il; Lee, Changhoon; Seo, Jin Keun

    2015-03-01

    A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color Doppler echocardiography measurement. From the 3D incompressible Navier-Stokes equation, a 2D incompressible Navier-Stokes equation with a mass source term is derived to utilize the measurable color flow ultrasound data in a plane along with the moving boundary condition. The proposed model reflects out-of-plane blood flows on the imaging plane through the mass source term. To demonstrate the feasibility of the proposed method, we have performed numerical simulations of the forward problem and numerical analysis of the reconstruction method. First, we construct a 3D moving LV region having a specific stroke volume. To obtain synthetic intra-ventricular flows, we performed a numerical simulation of the forward problem of the Navier-Stokes equation inside the 3D moving LV, computed 3D intra-ventricular velocity fields as a solution of the forward problem, projected the 3D velocity fields onto the imaging plane, and took the inner product of the 2D velocity fields on the imaging plane with the scanline directional velocity fields to obtain synthetic scanline directional projected velocities at each position. The proposed method utilized the 2D synthetic projected velocity data for reconstructing LV blood flow. By computing the difference between the synthetic and reconstructed flow fields, we obtained averaged point-wise errors of 0.06 m/s and 0.02 m/s for the u- and v-components, respectively.

  3. Automatic selection of optimal Savitzky-Golay filter parameters for Coronary Wave Intensity Analysis.

    PubMed

    Rivolo, Simone; Nagel, Eike; Smith, Nicolas P; Lee, Jack

    2014-01-01

    Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. The ability of cWIA to establish a mechanistic link between coronary haemodynamics measurements and the underlying pathophysiology has been widely demonstrated. Moreover, the prognostic value of a cWIA-derived metric has recently been proved. However, the clinical application of cWIA has been hindered by its strong dependence on the practitioner, mainly ascribable to the sensitivity of the cWIA-derived indices to the pre-processing parameters. Specifically, as recently demonstrated, the cWIA-derived metrics are strongly sensitive to the Savitzky-Golay (S-G) filter, typically used to smooth the acquired traces. This is mainly due to the inability of the S-G filter to deal with the different timescale features present in the measured waveforms. Therefore, we propose to apply an adaptive S-G algorithm that automatically selects the optimal filter parameters pointwise. The accuracy of the newly proposed algorithm is assessed against a cWIA gold standard, provided by a newly developed in-silico cWIA modelling framework, when physiological noise is added to the simulated traces. The adaptive S-G algorithm, when used to automatically select the polynomial degree of the S-G filter, provides satisfactory results with ≤ 10% error for all the metrics across all the noise levels tested. Therefore, the newly proposed method makes cWIA fully automatic and independent of the practitioner, opening the possibility of multi-centre trials.
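
    A non-adaptive baseline for the filtering step discussed above is SciPy's standard Savitzky-Golay filter; the adaptive algorithm in the paper would instead select the window length and polynomial order pointwise. The sketch below uses hypothetical parameters and a synthetic trace, and is not the authors' adaptive scheme:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic trace: slow oscillation plus a sharp transient plus noise.
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 2 * t) + np.exp(-((t - 0.5) / 0.01) ** 2)
noisy = clean + 0.05 * np.random.default_rng(0).standard_normal(t.size)

# Fixed S-G parameters smooth the noise but blur the sharp feature when the
# polynomial order is too low for the chosen window length.
smooth_lo = savgol_filter(noisy, window_length=51, polyorder=2)
smooth_hi = savgol_filter(noisy, window_length=51, polyorder=5)

for name, y in [("order 2", smooth_lo), ("order 5", smooth_hi)]:
    print(name, "RMS error:", np.sqrt(np.mean((y - clean) ** 2)))
```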

  4. Flexibly imposing periodicity in kernel independent FMM: A multipole-to-local operator approach

    NASA Astrophysics Data System (ADS)

    Yan, Wen; Shelley, Michael

    2018-02-01

    An important but missing component in the application of the kernel independent fast multipole method (KIFMM) is the capability for flexibly and efficiently imposing singly, doubly, and triply periodic boundary conditions. In most popular packages such periodicities are imposed with the hierarchical repetition of periodic boxes, which may give an incorrect answer due to the conditional convergence of some kernel sums. Here we present an efficient method to properly impose periodic boundary conditions using a near-far splitting scheme. The near-field contribution is directly calculated with the KIFMM method, while the far-field contribution is calculated with a multipole-to-local (M2L) operator which is independent of the source and target point distribution. The M2L operator is constructed with the far-field portion of the kernel function to generate the far-field contribution with the downward equivalent source points in KIFMM. This method guarantees that the sum of the near-field and far-field contributions converges pointwise to results satisfying the periodicity and compatibility conditions. The computational cost of the far-field calculation has the same O(N) complexity as FMM and is designed to be small by reusing the data computed by KIFMM for the near field. The far-field calculations require no additional control parameters and observe the same theoretical error bound as KIFMM. We present accuracy and timing test results for the Laplace kernel in singly periodic domains and the Stokes velocity kernel in doubly and triply periodic domains.

  5. Planned Comparisons as Better Alternatives to ANOVA Omnibus Tests.

    ERIC Educational Resources Information Center

    Benton, Roberta L.

    Analyses of data are presented to illustrate the advantages of using a priori or planned comparisons rather than omnibus analysis of variance (ANOVA) tests followed by post hoc or posteriori testing. The two types of planned comparisons considered are planned orthogonal non-trend coding contrasts and orthogonal polynomial or trend contrast coding.…

  6. How important is self-consistency for the dDsC density dependent dispersion correction?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brémond, Éric; Corminboeuf, Clémence, E-mail: clemence.corminboeuf@epfl.ch; Golubev, Nikolay

    2014-05-14

    The treatment of dispersion interactions is ubiquitous but computationally demanding for seamless ab initio approaches. A highly popular and simple remedy consists in correcting for the missing interactions a posteriori by adding an attractive energy term summed over all atom pairs to standard density functional approximations. These corrections were originally based on atom pairwise parameters and, hence, had a strong touch of empiricism. To overcome such limitations, we recently proposed a robust system-dependent dispersion correction, dDsC, that is computed from the electron density and that provides a balanced description of both weak inter- and intramolecular interactions. From the theoretical point of view and for the sake of increasing reliability, we here verify whether the self-consistent implementation of dDsC impacts ground-state properties such as interaction energies, electron density, dipole moments, geometries, and harmonic frequencies. In addition, we investigate the suitability of the a posteriori scheme for molecular dynamics simulations, for which the analysis of energy conservation constitutes a challenging test. Our study demonstrates that the post-SCF approach is an excellent approximation.

  7. A priori and a posteriori analysis of the flow around a rectangular cylinder

    NASA Astrophysics Data System (ADS)

    Cimarelli, A.; Leonforte, A.; Franciolini, M.; De Angelis, E.; Angeli, D.; Crivellini, A.

    2017-11-01

    The definition of a correct mesh resolution and modelling approach for the Large Eddy Simulation (LES) of the flow around a rectangular cylinder is recognized to be a rather elusive problem, as shown by the large scatter of LES results present in the literature. In the present work, we aim to assess this issue by performing an a priori analysis of Direct Numerical Simulation (DNS) data of the flow. This approach allows us to measure the ability of the LES field to reproduce the main flow features as a function of the resolution employed. Based on these results, we define a mesh resolution which balances the competing needs of reducing the computational cost and of adequately resolving the flow dynamics. The effectiveness of the proposed resolution is then verified by means of an a posteriori analysis of actual LES data obtained with the implicit LES approach given by the numerical properties of the Discontinuous Galerkin spatial discretization technique. The present work represents a first step towards a best practice for LES of separating and reattaching flows.

  8. Research on adaptive optics image restoration algorithm based on improved joint maximum a posteriori method

    NASA Astrophysics Data System (ADS)

    Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying

    2018-03-01

    In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log likelihood function of multi-frame adaptive optics (AO) images based on Gaussian image noise models. To begin with, combining the observing conditions and AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we build up iterative solution formulas of the AO image based on our proposed algorithm, addressing the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate our proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration results, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The results have practical value for actual AO image restoration.
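
    The PSNR metric used above for comparing restorations can be computed as follows (a generic definition with hypothetical data, not tied to the authors' experiments):

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a restored image."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical example: a restored image that differs slightly from the reference.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
rest = np.clip(ref + rng.normal(0, 2.0, ref.shape), 0, 255)
print(round(psnr(ref, rest), 2), "dB")
```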

  9. Dynamic Sensing Performance of a Point-Wise Fiber Bragg Grating Displacement Measurement System Integrated in an Active Structural Control System

    PubMed Central

    Chuang, Kuo-Chih; Liao, Heng-Tseng; Ma, Chien-Ching

    2011-01-01

    In this work, a fiber Bragg grating (FBG) sensing system which can measure the transient response of out-of-plane point-wise displacement responses is set up on a smart cantilever beam and the feasibility of its use as a feedback sensor in an active structural control system is studied experimentally. An FBG filter is employed in the proposed fiber sensing system to dynamically demodulate the responses obtained by the FBG displacement sensor with high sensitivity. For comparison, a laser Doppler vibrometer (LDV) is utilized simultaneously to verify displacement detection ability of the FBG sensing system. An optical full-field measurement technique called amplitude-fluctuation electronic speckle pattern interferometry (AF-ESPI) is used to provide full-field vibration mode shapes and resonant frequencies. To verify the dynamic demodulation performance of the FBG filter, a traditional FBG strain sensor calibrated with a strain gauge is first employed to measure the dynamic strain of impact-induced vibrations. Then, system identification of the smart cantilever beam is performed by FBG strain and displacement sensors. Finally, by employing a velocity feedback control algorithm, the feasibility of integrating the proposed FBG displacement sensing system in a collocated feedback system is investigated and excellent dynamic feedback performance is demonstrated. In conclusion, our experiments show that the FBG sensor is capable of performing dynamic displacement feedback and/or strain measurements with high sensitivity and resolution. PMID:22247683

  10. Automatic Railway Traffic Object Detection System Using Feature Fusion Refine Neural Network under Shunting Mode.

    PubMed

    Ye, Tao; Wang, Baocheng; Song, Ping; Li, Juan

    2018-06-12

    Many accidents happen under shunting mode when the speed of a train is below 45 km/h. In this mode, train attendants observe the railway condition ahead using the traditional manual method and report the observations to the driver in order to avoid danger. To address this problem, an automatic object detection system based on a convolutional neural network (CNN), called the Feature Fusion Refine neural network (FR-Net), is proposed to detect objects ahead in shunting mode. It consists of three connected modules, i.e., the depthwise-pointwise convolution, the coarse detection module, and the object detection module. Depthwise-pointwise convolutions are used to improve real-time detection. The coarse detection module coarsely refines the locations and sizes of prior anchors to provide better initialization for the subsequent module and also reduces the search space for classification, whereas the object detection module aims to regress accurate object locations and predict class labels for the prior anchors. The experimental results on the railway traffic dataset show that FR-Net achieves 0.8953 mAP at 72.3 FPS on a machine with a GeForce GTX1080Ti with an input size of 320 × 320 pixels. The results imply that FR-Net achieves a good tradeoff between effectiveness and real-time performance. The proposed method can meet the needs of practical application in shunting mode.
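
    The depthwise-pointwise (depthwise separable) convolution used in FR-Net to cut computation can be expressed as a grouped convolution followed by a 1x1 convolution. The sketch below uses PyTorch with hypothetical layer sizes; it illustrates the operation generically and is not the FR-Net architecture:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution (one filter per input channel) + pointwise 1x1 mixing."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter count drops sharply relative to a standard convolution.
standard = nn.Conv2d(64, 128, kernel_size=3, padding=1)
separable = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), "vs", count(separable))   # 73856 vs 8960
```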

  11. SU-F-J-23: Field-Of-View Expansion in Cone-Beam CT Reconstruction by Use of Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haga, A; Magome, T; Nakano, M

    Purpose: Cone-beam CT (CBCT) has become an integral part of online patient setup in image-guided radiation therapy (IGRT). In addition, the utility of CBCT for dose calculation has been actively investigated. However, the limited size of the field-of-view (FOV) and the resulting CBCT image lacking the peripheral area of the patient body reduce the reliability of dose calculation. In this study, we aim to develop an FOV-expanded CBCT in an IGRT system to allow dose calculation. Methods: Three lung cancer patients were selected in this study. We collected the cone-beam projection images in the CBCT-based IGRT system (X-ray volume imaging unit, ELEKTA), where the FOV size of the CBCT provided with these projections was 410 × 410 mm² (normal FOV). Using these projections, a CBCT with a size of 728 × 728 mm² was reconstructed by an a posteriori estimation algorithm including prior image constrained compressed sensing (PICCS). The treatment planning CT was used as the prior image. To assess the effectiveness of FOV expansion, a dose calculation was performed on the expanded CBCT image with a region-of-interest (ROI) density mapping method, and it was compared with that of the treatment planning CT as well as that of the CBCT reconstructed by the filtered back projection (FBP) algorithm. Results: The a posteriori estimation algorithm with PICCS clearly visualized the area outside the normal FOV, whereas the FBP algorithm yielded severe streak artifacts outside the normal FOV due to under-sampling. The dose calculation using the expanded CBCT agreed very well with that using the treatment planning CT; the maximum dose difference was 1.3% for gross tumor volumes. Conclusion: With the a posteriori estimation algorithm, the FOV in CBCT can be expanded. Dose comparison results suggested that the use of expanded CBCTs is acceptable for dose calculation in adaptive radiation therapy. This study has been supported by KAKENHI (15K08691).

  12. Evaluating a 3-D transport model of atmospheric CO2 using ground-based, aircraft, and space-borne data

    NASA Astrophysics Data System (ADS)

    Feng, L.; Palmer, P. I.; Yang, Y.; Yantosca, R. M.; Kawa, S. R.; Paris, J.-D.; Matsueda, H.; Machida, T.

    2010-07-01

    We evaluate the GEOS-Chem atmospheric transport model (v8-02-01) of CO2 over 2003-2006, driven by GEOS-4 and GEOS-5 meteorology from the NASA Goddard Global Modelling and Assimilation Office, using surface, aircraft and space-borne concentration measurements of CO2. We use an established ensemble Kalman filter to estimate a posteriori biospheric+biomass burning (BS+BB) and oceanic (OC) CO2 fluxes from 22 geographical regions, following the TransCom 3 protocol, using boundary layer CO2 data from a subset of GLOBALVIEW surface sites. Global annual net BS+BB+OC CO2 fluxes over 2004-2006 for GEOS-4 (GEOS-5) meteorology are -4.4±0.9 (-4.2±0.9), -3.9±0.9 (-4.5±0.9), and -5.2±0.9 (-4.9±0.9) Pg C yr⁻¹, respectively. The regional a posteriori fluxes are broadly consistent in the sign and magnitude of the TransCom-3 study for 1992-1996, but we find larger net sinks over northern and southern continents. We find large departures from our a priori over Europe during summer 2003, over temperate Eurasia during 2004, and over North America during 2005, reflecting an incomplete description of terrestrial carbon dynamics. We find GEOS-4 (GEOS-5) a posteriori CO2 concentrations reproduce the observed surface trend of 1.91-2.43 ppm yr⁻¹, depending on latitude, within 0.15 ppm yr⁻¹ (0.2 ppm yr⁻¹) and the seasonal cycle within 0.2 ppm (0.2 ppm) at all latitudes. We find the a posteriori model reproduces the aircraft vertical profile measurements of CO2 over North America and Siberia generally within 1.5 ppm in the free and upper troposphere but can be biased by up to 4-5 ppm in the boundary layer at the start and end of the growing season. The model has a small negative bias in the free troposphere CO2 trend (1.95-2.19 ppm yr⁻¹) compared to AIRS data which has a trend of 2.21-2.63 ppm yr⁻¹ during 2004-2006, consistent with surface data. Model CO2 concentrations in the upper troposphere, evaluated using CONTRAIL (Comprehensive Observation Network for TRace gases by AIrLiner) aircraft measurements, reproduce the magnitude and phase of the seasonal cycle of CO2 in both hemispheres. We generally find that the GEOS meteorology reproduces much of the observed tropospheric CO2 variability, suggesting that these meteorological fields will help make significant progress in understanding carbon fluxes as more data become available.

  13. VizieR Online Data Catalog: Stellar surface gravity measures of KIC stars (Bastien+, 2016)

    NASA Astrophysics Data System (ADS)

    Bastien, F. A.; Stassun, K. G.; Basri, G.; Pepper, J.

    2016-04-01

    In our analysis we use all quarters from the Kepler mission except for Q0, and we only use the long-cadence light curves. Additionally, we only use the Pre-search Data Conditioning, Maximum A Posteriori (PDC-MAP) light curves, as further discussed in Section 3.4.1. (1 data file).

  14. The Advantages of Using Planned Comparisons over Post Hoc Tests.

    ERIC Educational Resources Information Center

    Kuehne, Carolyn C.

    There are advantages to using a priori or planned comparisons rather than omnibus multivariate analysis of variance (MANOVA) tests followed by post hoc or a posteriori testing. A small heuristic data set is used to illustrate these advantages. An omnibus MANOVA test was performed on the data followed by a post hoc test (discriminant analysis). A…

  15. A reconstruction algorithm for helical CT imaging on PI-planes.

    PubMed

    Liang, Hongzhu; Zhang, Cishen; Yan, Ming

    2006-01-01

    In this paper, a Feldkamp type approximate reconstruction algorithm is presented for helical cone-beam computed tomography. To effectively suppress artifacts due to large cone angle scanning, it is proposed to reconstruct the object pointwise on unique customized tilted PI-planes which are close to the data-collecting helices of the corresponding points. Such a reconstruction scheme can considerably suppress the artifacts in cone-angle scanning. Computer simulations show that the proposed algorithm can provide improved imaging performance compared with existing approximate cone-beam reconstruction algorithms.

  16. Beyond Worst-Case Analysis in Privacy and Clustering: Exploiting Explicit and Implicit Assumptions

    DTIC Science & Technology

    2013-08-01

    Dwork et al. [63]. Given a query function f, the curator first estimates the global sensitivity of f, denoted GS(f) = max_{D,D′} f(D) − f(D′), then outputs f... Ostrovsky et al. [121]. Ostrovsky et al. study instances in which the ratio between the cost of the optimal (k − 1)-means solution and the cost of the... k-median objective. We also build on the work of Balcan et al. [25] that investigates the connection between point-wise approximations of the target

  17. Engineering Design Handbook. Maintainability Engineering Theory and Practice

    DTIC Science & Technology

    1976-01-01

    Table-of-contents and text excerpts: 5-8.4.1.1 Human Body Measurement (Anthropometry); 5-8.4.1.2 Man's Sensory Capability and Psychological Makeup; 5-8.4.1.3 ... Availability of System With Maintenance Time Ratio 1:4; 2-9 Average and Pointwise Availability; 2-10 Hypothetical ... The probability density function (pdf) of the normal distribution (Ref. 22, Chapter 10, and Ref. 23, Chapter 1) has the equation where σ is the standard deviation of

  18. Vibration suppression with approximate finite dimensional compensators for distributed systems: Computational methods and experimental results

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Smith, Ralph C.; Wang, Yun

    1994-01-01

    Based on a distributed parameter model for vibrations, an approximate finite dimensional dynamic compensator is designed to suppress vibrations (multiple modes with a broad band of frequencies) of a circular plate with Kelvin-Voigt damping and clamped boundary conditions. The control is realized via piezoceramic patches bonded to the plate and is calculated from information available from several pointwise observed state variables. Examples from computational studies as well as use in laboratory experiments are presented to demonstrate the effectiveness of this design.

  19. The τq-Fourier transform: Covariance and uniqueness

    NASA Astrophysics Data System (ADS)

    Kalogeropoulos, Nikolaos

    2018-05-01

    We propose an alternative definition for a Tsallis entropy composition-inspired Fourier transform, which we call the “τq-Fourier transform”. We comment on the underlying “covariance” on the set of algebraic fields that motivates its introduction. We see that the definition of the τq-Fourier transform is automatically invertible in the proper context. Based on recent results in Fourier analysis, it turns out that the τq-Fourier transform is essentially unique under the assumption of the exchange of the point-wise product of functions with their convolution.

  20. Necessary optimality conditions for infinite dimensional state constrained control problems

    NASA Astrophysics Data System (ADS)

    Frankowska, H.; Marchini, E. M.; Mazzola, M.

    2018-06-01

    This paper is concerned with first order necessary optimality conditions for state constrained control problems in separable Banach spaces. Assuming inward pointing conditions on the constraint, we give a simple proof of Pontryagin maximum principle, relying on infinite dimensional neighboring feasible trajectories theorems proved in [20]. Further, we provide sufficient conditions guaranteeing normality of the maximum principle. We work in the abstract semigroup setting, but nevertheless we apply our results to several concrete models involving controlled PDEs. Pointwise state constraints (as positivity of the solutions) are allowed.

  1. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    PubMed

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.

  2. Computational Aerodynamic Analysis of Three-Dimensional Ice Shapes on a NACA 23012 Airfoil

    NASA Technical Reports Server (NTRS)

    Jun, GaRam; Oliden, Daniel; Potapczuk, Mark G.; Tsao, Jen-Ching

    2014-01-01

    The present study identifies a process for performing computational fluid dynamic calculations of the flow over full three-dimensional (3D) representations of complex ice shapes deposited on aircraft surfaces. Rime and glaze icing geometries formed on a NACA23012 airfoil were obtained during testing in the NASA Glenn Research Center's Icing Research Tunnel (IRT). The ice shape geometries were scanned as a cloud of data points using a 3D laser scanner. The data point clouds were meshed using Geomagic software to create highly accurate models of the ice surface. The surface data was imported into Pointwise grid generation software to create the CFD surface and volume grids. It was determined that generating grids in Pointwise for complex 3D icing geometries was possible using various techniques that depended on the ice shape. Computations of the flow fields over these ice shapes were performed using the NASA National Combustion Code (NCC). Results for a rime ice shape for angle of attack conditions ranging from 0 to 10 degrees and for freestream Mach numbers of 0.10 and 0.18 are presented. For validation of the computational results, comparisons were made to test results from rapid-prototype models of the selected ice accretion shapes, obtained from a separate study in a subsonic wind tunnel at the University of Illinois at Urbana-Champaign. The computational and experimental results were compared for values of pressure coefficient and lift. Initial results show fairly good agreement for rime ice accretion simulations across the range of conditions examined. The glaze ice results are promising but require some further examination.

  3. Golay Complementary Waveforms in Reed–Müller Sequences for Radar Detection of Nonzero Doppler Targets

    PubMed Central

    Wang, Xuezhi; Huang, Xiaotao; Suvorova, Sofia; Moran, Bill

    2018-01-01

    Golay complementary waveforms can, in theory, yield radar returns of high range resolution with essentially zero sidelobes. In practice, when deployed conventionally, while high signal-to-noise ratios can be achieved for static target detection, significant range sidelobes are generated by target returns of nonzero Doppler causing unreliable detection. We consider signal processing techniques using Golay complementary waveforms to improve radar detection performance in scenarios involving multiple nonzero Doppler targets. A signal processing procedure based on an existing, so called, Binomial Design algorithm that alters the transmission order of Golay complementary waveforms and weights the returns is proposed in an attempt to achieve an enhanced illumination performance. The procedure applies one of three proposed waveform transmission ordering algorithms, followed by a pointwise nonlinear processor combining the outputs of the Binomial Design algorithm and one of the ordering algorithms. The computational complexity of the Binomial Design algorithm and the three ordering algorithms are compared, and a statistical analysis of the performance of the pointwise nonlinear processing is given. Estimation of the areas in the Delay–Doppler map occupied by significant range sidelobes for given targets are also discussed. Numerical simulations for the comparison of the performances of the Binomial Design algorithm and the three ordering algorithms are presented for both fixed and randomized target locations. The simulation results demonstrate that the proposed signal processing procedure has a better detection performance in terms of lower sidelobes and higher Doppler resolution in the presence of multiple nonzero Doppler targets compared to existing methods. PMID:29324708
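
    The zero-sidelobe property of Golay complementary pairs quoted above can be checked numerically: the aperiodic autocorrelations of the two sequences sum to a delta function. A length-4 pair is used below purely for illustration:

```python
import numpy as np

# A length-4 Golay complementary pair.
a = np.array([1, 1, 1, -1])
b = np.array([1, 1, -1, 1])

acf = lambda s: np.correlate(s, s, mode="full")  # aperiodic autocorrelation

total = acf(a) + acf(b)
print(acf(a))   # individual sidelobes are nonzero
print(acf(b))
print(total)    # [0 0 0 8 0 0 0]: all sidelobes cancel, peak = 2N
```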

  5. Walking Ahead: The Headed Social Force Model.

    PubMed

    Farina, Francesco; Fontanelli, Daniele; Garulli, Andrea; Giannitrapani, Antonio; Prattichizzo, Domenico

    2017-01-01

    Human motion models are finding an increasing number of novel applications in many different fields, such as building design, computer graphics and robot motion planning. The Social Force Model is one of the most popular alternatives to describe the motion of pedestrians. By resorting to a physical analogy, individuals are assimilated to point-wise particles subject to social forces which drive their dynamics. Such a model implicitly assumes that humans move isotropically. On the contrary, empirical evidence shows that people do have a preferred direction of motion, walking forward most of the time. Lateral motions are observed only in specific circumstances, such as when navigating in overcrowded environments or avoiding unexpected obstacles. In this paper, the Headed Social Force Model is introduced in order to improve the realism of the trajectories generated by the classical Social Force Model. The key feature of the proposed approach is the inclusion of the pedestrians' heading into the dynamic model used to describe the motion of each individual. The force and torque representing the model inputs are computed as suitable functions of the force terms resulting from the traditional Social Force Model. Moreover, a new force contribution is introduced in order to model the behavior of people walking together as a single group. The proposed model features high versatility, being able to reproduce both the unicycle-like trajectories typical of people moving in open spaces and the point-wise motion patterns occurring in high density scenarios. Extensive numerical simulations show an increased regularity of the resulting trajectories and confirm a general improvement of the model realism.

  6. A prospective study of medical students' perspective of teaching-learning media: reiterating the importance of feedback.

    PubMed

    Dhaliwal, Upreet

    2007-11-01

    To enhance successful communication, medical teachers are increasingly using teaching-learning media. To determine medical students' perception of three such media (blackboard, overhead projector, and slides), and to generate recommendations for their optimal use, a prospective questionnaire-based study was carried out among 7th semester medical students of the University College of Medical Sciences and Guru Teg Bahadur Hospital, Delhi. Students made a forced choice between: (1) The three media on 8 questions regarding their advantages and disadvantages and (2) four aspects of a lecture (teaching-learning media, topic, teacher and time of day) regarding which made the lecture most engaging. Resulting data was analysed by Chi-square and Fisher's exact tests. Chalk and blackboard was rated as best in allowing interaction and helping recall (p<0.001 each). The overhead projector was best in providing information pointwise (p<0.001; 67 students, 89.3%, considered this an advantage). More subject matter could be covered per lecture (p=0.001; 58 students, 77.3%, considered this a disadvantage). Slides were best in imparting clinical details (p=0.004), but were sleep inducing (p<0.001). The teacher's style of instruction was most important in making the lecture engaging (p<0.001). The teacher's role in the learning process is important. Students enjoy the slow pace and interaction allowed by blackboard, pointwise information presented by the overhead projector, and the clinical details a slide can provide. The results suggest that the lecture could best be a combination of two or more teaching-learning media. Students' interaction should be encouraged whatever the media used.

  7. Using the Coefficient of Confidence to Make the Philosophical Switch from a Posteriori to a Priori Inferential Statistics

    ERIC Educational Resources Information Center

    Trafimow, David

    2017-01-01

    There has been much controversy over the null hypothesis significance testing procedure, with much of the criticism centered on the problem of inverse inference. Specifically, p gives the probability of the finding (or one more extreme) given the null hypothesis, whereas the null hypothesis significance testing procedure involves drawing a…

  8. An analysis of the multiple model adaptive control algorithm. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, C. S.

    1978-01-01

    Qualitative and quantitative aspects of the multiple model adaptive control method are detailed. The method represents a cascade of what resembles a maximum a posteriori probability identifier (basically a bank of Kalman filters) and a bank of linear quadratic regulators. Major qualitative properties of the MMAC method are examined and principal reasons for unacceptable behavior are explored.

  9. Accuracy and Variability of Item Parameter Estimates from Marginal Maximum a Posteriori Estimation and Bayesian Inference via Gibbs Samplers

    ERIC Educational Resources Information Center

    Wu, Yi-Fang

    2015-01-01

    Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and…

  10. Software requirements for the study of contextual classifiers and label imperfections

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The software requirements for the study of contextual classifiers and imperfections in the labels are presented. In particular, the requirements are described for updating the a posteriori probability of the picture element under consideration using information from its local neighborhood, designing the Fisher classifier, and other required routines. Only the necessary equations are given for the development of software.

  11. Validating Affordances as an Instrument for Design and a Priori Analysis of Didactical Situations in Mathematics

    ERIC Educational Resources Information Center

    Sollervall, Håkan; Stadler, Erika

    2015-01-01

    The aim of the presented case study is to investigate how coherent analytical instruments may guide the a priori and a posteriori analyses of a didactical situation. In the a priori analysis we draw on the notion of affordances, as artefact-mediated opportunities for action, to construct hypothetical trajectories of goal-oriented actions that have…

  12. Graph edit distance from spectral seriation.

    PubMed

    Robles-Kelly, Antonio; Hancock, Edwin R

    2005-03-01

    This paper is concerned with computing graph edit distance. One of the criticisms that can be leveled at existing methods for computing graph edit distance is that they lack some of the formality and rigor of the computation of string edit distance. Hence, our aim is to convert graphs to string sequences so that string matching techniques can be used. To do this, we use a graph spectral seriation method to convert the adjacency matrix into a string or sequence order. We show how the serial ordering can be established using the leading eigenvector of the graph adjacency matrix. We pose the problem of graph-matching as a maximum a posteriori probability (MAP) alignment of the seriation sequences for pairs of graphs. This treatment leads to an expression in which the edit cost is the negative logarithm of the a posteriori sequence alignment probability. We compute the edit distance by finding the sequence of string edit operations which minimizes the cost of the path traversing the edit lattice. The edit costs are determined by the components of the leading eigenvectors of the adjacency matrix and by the edge densities of the graphs being matched. We demonstrate the utility of the edit distance on a number of graph clustering problems.
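
    The seriation step described above, converting a graph to a string via the leading eigenvector of its adjacency matrix, can be sketched as follows (toy undirected graph; the subsequent MAP string alignment and edit-lattice search are not shown):

```python
import numpy as np

def spectral_seriation(adjacency: np.ndarray) -> np.ndarray:
    """Order graph nodes by the components of the leading adjacency eigenvector."""
    eigvals, eigvecs = np.linalg.eigh(adjacency)     # symmetric adjacency assumed
    leading = eigvecs[:, np.argmax(eigvals)]
    leading = np.abs(leading)                        # resolve the sign ambiguity
    return np.argsort(-leading)                      # node sequence, i.e. "string" order

# Toy undirected graph: a short path with a branch.
A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 1, 0],
              [0, 1, 0, 0, 0],
              [0, 1, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(spectral_seriation(A))   # serial node order used for string matching
```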

  13. Maximum a posteriori classification of multifrequency, multilook, synthetic aperture radar intensity data

    NASA Technical Reports Server (NTRS)

    Rignot, E.; Chellappa, R.

    1993-01-01

    We present a maximum a posteriori (MAP) classifier for classifying multifrequency, multilook, single polarization SAR intensity data into regions or ensembles of pixels of homogeneous and similar radar backscatter characteristics. A model for the prior joint distribution of the multifrequency SAR intensity data is combined with a Markov random field for representing the interactions between region labels to obtain an expression for the posterior distribution of the region labels given the multifrequency SAR observations. The maximization of the posterior distribution yields Bayes's optimum region labeling or classification of the SAR data or its MAP estimate. The performance of the MAP classifier is evaluated by using computer-simulated multilook SAR intensity data as a function of the parameters in the classification process. Multilook SAR intensity data are shown to yield higher classification accuracies than one-look SAR complex amplitude data. The MAP classifier is extended to the case in which the radar backscatter from the remotely sensed surface varies within the SAR image because of incidence angle effects. The results obtained illustrate the practicality of the method for combining SAR intensity observations acquired at two different frequencies and for improving classification accuracy of SAR data.

  14. Person authentication using brainwaves (EEG) and maximum a posteriori model adaptation.

    PubMed

    Marcel, Sébastien; Millán, José Del R

    2007-04-01

    In this paper, we investigate the use of brain activity for person authentication. It has been shown in previous studies that the brain-wave pattern of every individual is unique and that the electroencephalogram (EEG) can be used for biometric identification. EEG-based biometry is an emerging research topic and we believe that it may open new research directions and applications in the future. However, very little work has been done in this area and was focusing mainly on person identification but not on person authentication. Person authentication aims to accept or to reject a person claiming an identity, i.e., comparing a biometric data to one template, while the goal of person identification is to match the biometric data against all the records in a database. We propose the use of a statistical framework based on Gaussian Mixture Models and Maximum A Posteriori model adaptation, successfully applied to speaker and face authentication, which can deal with only one training session. We perform intensive experimental simulations using several strict train/test protocols to show the potential of our method. We also show that there are some mental tasks that are more appropriate for person authentication than others.
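
    The maximum a posteriori model adaptation referred to above typically shifts the Gaussian means of a background model toward the claimant's enrollment data. A single-component sketch of the standard relevance-factor update follows (a generic formula with hypothetical feature data, not the authors' exact configuration):

```python
import numpy as np

def map_adapt_mean(prior_mean, data, responsibilities, relevance=16.0):
    """MAP update of one Gaussian mean given soft counts from enrollment data.

    new_mean = (n * xbar + r * prior_mean) / (n + r), where n is the soft count,
    xbar the responsibility-weighted data mean, and r the relevance factor.
    """
    n = responsibilities.sum()
    xbar = (responsibilities[:, None] * data).sum(axis=0) / max(n, 1e-12)
    alpha = n / (n + relevance)
    return alpha * xbar + (1.0 - alpha) * prior_mean

rng = np.random.default_rng(0)
enrollment = rng.normal(loc=[1.0, -0.5], scale=0.2, size=(50, 2))  # hypothetical features
resp = np.ones(50)                      # assume this component owns all frames
print(map_adapt_mean(np.zeros(2), enrollment, resp))
```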

  15. Evidential analysis of difference images for change detection of multitemporal remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Yin; Peng, Lijuan; Cremers, Armin B.

    2018-03-01

    In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer's theory of evidence (DST). In most unsupervised change detection methods, the probability distribution of the difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we first develop an evidence theory based EM method (EEM) which incorporates spatial contextual information in EM by iteratively fusing the belief assignments of neighboring pixels to the central pixel. Second, an evidential labeling method in the sense of maximizing a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of a difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.

  16. Information criteria for quantifying loss of reversibility in parallelized KMC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gourgoulias, Konstantinos, E-mail: gourgoul@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Rey-Bellet, Luc, E-mail: luc@math.umass.edu

    Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.

  17. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2015-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
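
    One possible reading of the exhaustive-search step, with a synthetic sensitivity matrix standing in for the linear engine model, is sketched below: for every admissible combination of optional sensors added to the baseline suite, a MAP-style a posteriori error covariance is formed and its trace (a proxy for the sum of squared estimation errors) is minimized. The matrices, sensor indexing, and budget are hypothetical, not those of the paper.

    import numpy as np
    from itertools import combinations

    def posterior_error_trace(H, R, P0):
        """Trace of the a posteriori error covariance of a linear MAP estimator with
        prior covariance P0, sensitivity matrix H, and measurement noise covariance R;
        used here as a proxy for the sum of squared estimation errors."""
        return np.trace(np.linalg.inv(np.linalg.inv(P0) + H.T @ np.linalg.inv(R) @ H))

    rng = np.random.default_rng(1)
    n_params, n_sensors = 4, 8
    H_full = rng.normal(size=(n_sensors, n_params))        # stand-in sensitivity matrix
    R_full = np.diag(rng.uniform(0.5, 2.0, n_sensors))     # stand-in sensor noise variances
    P0 = np.eye(n_params)                                  # prior covariance of health parameters

    baseline = [0, 1, 2]                                   # sensors always present (hypothetical)
    optional = [3, 4, 5, 6, 7]
    budget = 2                                             # optional sensors that may be added

    results = []
    for extra in combinations(optional, budget):
        idx = baseline + list(extra)
        metric = posterior_error_trace(H_full[idx], R_full[np.ix_(idx, idx)], P0)
        results.append((metric, extra))
    best_metric, best_extra = min(results)
    print("best optional sensors:", best_extra, "metric:", round(best_metric, 4))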

  18. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2016-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.

  19. Protein-ligand binding free energy estimation using molecular mechanics and continuum electrostatics. Application to HIV-1 protease inhibitors

    NASA Astrophysics Data System (ADS)

    Zoete, V.; Michielin, O.; Karplus, M.

    2003-12-01

    A method is proposed for the estimation of absolute binding free energy of interaction between proteins and ligands. Conformational sampling of the protein-ligand complex is performed by molecular dynamics (MD) in vacuo and the solvent effect is calculated a posteriori by solving the Poisson or the Poisson-Boltzmann equation for selected frames of the trajectory. The binding free energy is written as a linear combination of the buried surface upon complexation, SAS_bur, the electrostatic interaction energy between the ligand and the protein, E_elec, and the difference of the solvation free energies of the complex and the isolated ligand and protein, ΔG_solv. The method uses the buried surface upon complexation to account for the non-polar contribution to the binding free energy because it is less sensitive to the details of the structure than the van der Waals interaction energy. The parameters of the method are developed for a training set of 16 HIV-1 protease-inhibitor complexes of known 3D structure. A correlation coefficient of 0.91 was obtained with an unsigned mean error of 0.8 kcal/mol. When applied to a set of 25 HIV-1 protease-inhibitor complexes of unknown 3D structures, the method provides a satisfactory correlation between the calculated binding free energy and the experimental pIC50 without reparametrization.
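
    The abstract describes the binding free energy as a linear combination of SAS_bur, E_elec, and ΔG_solv whose coefficients are fitted on a training set, i.e. roughly ΔG_bind ≈ α·SAS_bur + β·E_elec + γ·ΔG_solv + δ. A minimal least-squares fitting sketch with synthetic numbers (not the published parameters or data) is shown below.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 16                                                  # size of a hypothetical training set
    SAS_bur = rng.uniform(400.0, 900.0, n)                  # buried surface area (arbitrary units)
    E_elec = rng.uniform(-60.0, -20.0, n)                   # ligand-protein electrostatic energy
    dG_solv = rng.uniform(10.0, 40.0, n)                    # solvation free energy difference
    X = np.column_stack([SAS_bur, E_elec, dG_solv])
    dG_exp = X @ np.array([-0.01, 0.3, 0.2]) + rng.normal(0.0, 0.5, n)  # synthetic "experimental" values

    A = np.column_stack([X, np.ones(n)])                    # design matrix with an intercept
    coef, *_ = np.linalg.lstsq(A, dG_exp, rcond=None)       # alpha, beta, gamma, constant
    pred = A @ coef
    print("correlation:", round(np.corrcoef(pred, dG_exp)[0, 1], 3))
    print("unsigned mean error:", round(np.mean(np.abs(pred - dG_exp)), 3))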

  20. Information criteria for quantifying loss of reversibility in parallelized KMC

    NASA Astrophysics Data System (ADS)

    Gourgoulias, Konstantinos; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2017-01-01

    Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.

  1. On the design of turbo codes

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1995-01-01

    In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, with 2 to 32 states, are designed by using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10^-6 for throughputs from 1/15 up to 4 bits/s/Hz.

  2. Finite Volume Methods: Foundation and Analysis

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Ohlberger, Mario

    2003-01-01

    Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semi-conductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the obtention of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
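
    To make the "monotone flux" building block concrete, here is a minimal first-order finite-volume scheme for a scalar conservation law using the Lax-Friedrichs flux with a CFL-limited time step; the Burgers flux, grid, and initial data are chosen only for illustration. Under the CFL condition the update is monotone, so the discrete solution respects the maximum principle discussed in the article.

    import numpy as np

    def lax_friedrichs_flux(ul, ur, f, alpha):
        """Monotone (E-)flux for a scalar conservation law u_t + f(u)_x = 0."""
        return 0.5 * (f(ul) + f(ur)) - 0.5 * alpha * (ur - ul)

    def fv_step(u, dx, dt, f, alpha):
        """One explicit first-order finite-volume update with periodic boundaries."""
        F = lax_friedrichs_flux(u, np.roll(u, -1), f, alpha)   # F[i] = flux at interface i+1/2
        return u - dt / dx * (F - np.roll(F, 1))

    # usage: Burgers' equation, f(u) = u^2/2, smooth initial data steepening into a shock
    f = lambda u: 0.5 * u**2
    N, T = 200, 0.5
    x = np.linspace(0.0, 1.0, N, endpoint=False)
    dx = 1.0 / N
    u = np.sin(2.0 * np.pi * x) + 0.5
    t = 0.0
    while t < T:
        alpha = np.max(np.abs(u))          # bound on |f'(u)| for this flux
        dt = min(0.4 * dx / alpha, T - t)  # CFL condition keeps the scheme monotone
        u = fv_step(u, dx, dt, f, alpha)
        t += dt
    print(u.min(), u.max())                # discrete maximum principle: stays within initial bounds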

  3. Constraining the Sulfur Dioxide Degassing Flux from Turrialba Volcano, Costa Rica Using Unmanned Aerial System Measurements

    NASA Technical Reports Server (NTRS)

    Xi, Xin; Johnson, Matthew S.; Jeong, Seongeun; Fladeland, Matthew; Pieri, David; Diaz, Jorge Andres; Bland, Geoffrey L.

    2016-01-01

    Observed sulfur dioxide (SO2) mixing ratios onboard unmanned aerial systems (UAS) during March 11-13, 2013 are used to constrain the three-day averaged SO2 degassing flux from Turrialba volcano within a Bayesian inverse modeling framework. A mesoscale model coupled with Lagrangian stochastic particle backward trajectories is used to quantify the source-receptor relationships at very high spatial resolutions (i.e., <1 km). The model shows better performance in reproducing the near-surface meteorological properties and observed SO2 variations when using a first-order closure non-local planetary boundary layer (PBL) scheme. The optimized SO2 degassing fluxes vary from 0.59 +/- 0.37 to 0.83 +/- 0.33 kt d^-1 depending on the PBL scheme used. These fluxes are in good agreement with ground-based gas flux measurements, and correspond to corrective scale factors of 8-12 to the posteruptive SO2 degassing rate in the AeroCom emission inventory. The maximum a posteriori solution for the SO2 flux is highly sensitive to the specification of prior and observational errors, and relatively insensitive to the SO2 loss term and temporal averaging of observations. Our results indicate relatively low degassing activity but sustained sulfur emissions from Turrialba volcano to the troposphere during March 2013. This study demonstrates the utility of low-cost small UAS platforms for volcanic gas composition and flux analysis.
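
    A generic Gaussian Bayesian linear inversion of the kind referred to here (prior flux estimate, source-receptor matrix, and prior/observational error covariances) can be written in a few lines; the matrices and error settings below are made-up stand-ins, not the Turrialba footprints or the AeroCom prior.

    import numpy as np

    def bayesian_linear_inversion(K, y, x_a, S_a, S_e):
        """Gaussian a posteriori mean and covariance for y = K x + noise,
        with prior N(x_a, S_a) and observation error covariance S_e."""
        G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)   # gain matrix
        x_hat = x_a + G @ (y - K @ x_a)                      # maximum a posteriori fluxes
        S_hat = S_a - G @ K @ S_a                            # a posteriori error covariance
        return x_hat, S_hat

    # toy usage with made-up numbers
    rng = np.random.default_rng(3)
    K = rng.uniform(0.0, 1.0, size=(12, 3))       # stand-in source-receptor relationships
    x_true = np.array([0.7, 0.3, 0.1])            # "true" fluxes, arbitrary units
    y = K @ x_true + rng.normal(0.0, 0.05, 12)    # synthetic observations
    x_a = np.full(3, 0.2)                         # a priori fluxes
    S_a = np.diag([0.5**2] * 3)                   # prior error covariance
    S_e = np.diag([0.05**2] * 12)                 # observational error covariance
    x_hat, S_hat = bayesian_linear_inversion(K, y, x_a, S_a, S_e)
    print(x_hat, np.sqrt(np.diag(S_hat)))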

  4. Adaptive Mesh Refinement for Microelectronic Device Design

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Lou, John; Norton, Charles

    1999-01-01

    Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of the spectrum as well as active device simulations that model charge transport and Maxwell's equations will be presented.

  5. Mass-conservative reconstruction of Galerkin velocity fields for transport simulations

    NASA Astrophysics Data System (ADS)

    Scudeler, C.; Putti, M.; Paniconi, C.

    2016-08-01

    Accurate calculation of mass-conservative velocity fields from numerical solutions of Richards' equation is central to reliable surface-subsurface flow and transport modeling, for example in long-term tracer simulations to determine catchment residence time distributions. In this study we assess the performance of a local Larson-Niklasson (LN) post-processing procedure for reconstructing mass-conservative velocities from a linear (P1) Galerkin finite element solution of Richards' equation. This approach, originally proposed for a-posteriori error estimation, modifies the standard finite element velocities by imposing local conservation on element patches. The resulting reconstructed flow field is characterized by continuous fluxes on element edges that can be efficiently used to drive a second order finite volume advective transport model. Through a series of tests of increasing complexity that compare results from the LN scheme to those using velocity fields derived directly from the P1 Galerkin solution, we show that a locally mass-conservative velocity field is necessary to obtain accurate transport results. We also show that the accuracy of the LN reconstruction procedure is comparable to that of the inherently conservative mixed finite element approach, taken as a reference solution, but that the LN scheme has much lower computational costs. The numerical tests examine steady and unsteady, saturated and variably saturated, and homogeneous and heterogeneous cases along with initial and boundary conditions that include dry soil infiltration, alternating solute and water injection, and seepage face outflow. Typical problems that arise with velocities derived from P1 Galerkin solutions include outgoing solute flux from no-flow boundaries, solute entrapment in zones of low hydraulic conductivity, and occurrences of anomalous sources and sinks. In addition to inducing significant mass balance errors, such manifestations often lead to oscillations in concentration values that can moreover cause the numerical solution to explode. These problems do not occur when using LN post-processed velocities.

  6. Bayesian inversion of a CRN depth profile to infer Quaternary erosion of the northwestern Campine Plateau (NE Belgium)

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Beerten, Koen; Vanacker, Veerle; Christl, Marcus; Rogiers, Bart; Wouters, Laurent

    2017-07-01

    The rate at which low-lying sandy areas in temperate regions, such as the Campine Plateau (NE Belgium), have been eroding during the Quaternary is a matter of debate. Current knowledge on the average pace of landscape evolution in the Campine area is largely based on geological inferences and modern analogies. We performed a Bayesian inversion of an in situ-produced 10Be concentration depth profile to infer the average long-term erosion rate together with two other parameters: the surface exposure age and the inherited 10Be concentration. Compared to the latest advances in probabilistic inversion of cosmogenic radionuclide (CRN) data, our approach has the following two innovative components: it (1) uses Markov chain Monte Carlo (MCMC) sampling and (2) accounts (under certain assumptions) for the contribution of model errors to posterior uncertainty. To investigate to what extent our approach differs from the state of the art in practice, a comparison against the Bayesian inversion method implemented in the CRONUScalc program is made. Both approaches identify similar maximum a posteriori (MAP) parameter values, but posterior parameter and predictive uncertainty derived using the method taken in CRONUScalc is moderately underestimated. A simple way for producing more consistent uncertainty estimates with the CRONUScalc-like method in the presence of model errors is therefore suggested. Our inferred erosion rate of 39 ± 8.9 mm kyr^-1 (1σ) is relatively large in comparison with landforms that erode under comparable (paleo-)climates elsewhere in the world. We evaluate this value in the light of the erodibility of the substrate and sudden base level lowering during the Middle Pleistocene. A denser sampling scheme of a two-nuclide concentration depth profile would allow for better inferred erosion rate resolution and for including more uncertain parameters in the MCMC inversion.
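
    A stripped-down sketch of an MCMC inversion of a nuclide depth profile for the three parameters (exposure age, erosion rate, inheritance) is given below. The spallation-only production model, surface production rate, density, attenuation length, flat priors, and purely Gaussian likelihood are all simplifying assumptions; the paper's approach additionally folds model error into the posterior and uses CRONUScalc-compatible production systematics.

    import numpy as np

    LAMBDA_10BE = 4.99e-7      # 10Be decay constant (1/yr)
    ATT_LENGTH = 160.0         # attenuation length (g/cm^2), spallation only (assumed)
    RHO = 1.8                  # bulk density (g/cm^3), assumed

    def forward_model(depths_cm, exposure_age, erosion_cm_yr, c_inherited, surface_prod=5.0):
        """Simplified spallation-only 10Be depth profile (at/g); a stand-in for the
        full production model used in the paper."""
        mu = RHO / ATT_LENGTH
        k = LAMBDA_10BE + mu * erosion_cm_yr
        return c_inherited + surface_prod * np.exp(-mu * depths_cm) / k * (1.0 - np.exp(-k * exposure_age))

    def log_posterior(theta, depths, obs, sigma):
        age, erosion, c_inh = theta
        if age <= 0 or erosion < 0 or c_inh < 0:     # flat priors on physically admissible values
            return -np.inf
        resid = obs - forward_model(depths, age, erosion, c_inh)
        return -0.5 * np.sum((resid / sigma) ** 2)

    def metropolis(log_post, theta0, steps, widths, args, seed=0):
        """Random-walk Metropolis sampler for a small parameter vector."""
        rng = np.random.default_rng(seed)
        chain, theta, lp = [], np.array(theta0, float), log_post(theta0, *args)
        for _ in range(steps):
            prop = theta + rng.normal(0.0, widths)
            lp_prop = log_post(prop, *args)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        return np.array(chain)

    # synthetic usage: sample age (yr), erosion rate (cm/yr), and inheritance (at/g)
    depths = np.array([50.0, 100.0, 150.0, 200.0, 300.0])
    obs = forward_model(depths, 5e5, 4e-3, 2e4) * (1 + 0.05 * np.random.default_rng(4).normal(size=5))
    chain = metropolis(log_posterior, [3e5, 1e-3, 1e4], 20000,
                       widths=[2e4, 2e-4, 2e3], args=(depths, obs, 0.05 * obs))
    print(chain[len(chain) // 2:].mean(axis=0))      # crude posterior means after burn-in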

  7. Performance and precision of double digestion RAD (ddRAD) genotyping in large multiplexed datasets of marine fish species.

    PubMed

    Maroso, F; Hillen, J E J; Pardo, B G; Gkagkavouzis, K; Coscia, I; Hermida, M; Franch, R; Hellemans, B; Van Houdt, J; Simionati, B; Taggart, J B; Nielsen, E E; Maes, G; Ciavaglia, S A; Webster, L M I; Volckaert, F A M; Martinez, P; Bargelloni, L; Ogden, R

    2018-06-01

    The development of Genotyping-By-Sequencing (GBS) technologies enables cost-effective analysis of large numbers of Single Nucleotide Polymorphisms (SNPs), especially in "non-model" species. Nevertheless, as such technologies enter a mature phase, biases and errors inherent to GBS are becoming evident. Here, we evaluated the performance of double digest Restriction enzyme Associated DNA (ddRAD) sequencing in SNP genotyping studies including high number of samples. Datasets of sequence data were generated from three marine teleost species (>5500 samples, >2.5 × 10 12 bases in total), using a standardized protocol. A common bioinformatics pipeline based on STACKS was established, with and without the use of a reference genome. We performed analyses throughout the production and analysis of ddRAD data in order to explore (i) the loss of information due to heterogeneous raw read number across samples; (ii) the discrepancy between expected and observed tag length and coverage; (iii) the performances of reference based vs. de novo approaches; (iv) the sources of potential genotyping errors of the library preparation/bioinformatics protocol, by comparing technical replicates. Our results showed use of a reference genome and a posteriori genotype correction improved genotyping precision. Individual read coverage was a key variable for reproducibility; variance in sequencing depth between loci in the same individual was also identified as an important factor and found to correlate to tag length. A comparison of downstream analysis carried out with ddRAD vs single SNP allele specific assay genotypes provided information about the levels of genotyping imprecision that can have a significant impact on allele frequency estimations and population assignment. The results and insights presented here will help to select and improve approaches to the analysis of large datasets based on RAD-like methodologies. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.

  8. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of possible applications and show the performance of the space discretization scheme, non-linear solver, adaptive refinement process and time integration.
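
    The primal-dual active set strategy mentioned above can be illustrated on the simplest constrained model problem, a one-dimensional obstacle problem discretized with a standard Laplacian stencil; the obstacle, load, and stopping rule below are assumptions for the sketch, not the contact formulation of the paper.

    import numpy as np

    def primal_dual_active_set(A, f, psi, c=1.0, max_iter=50):
        """Primal-dual active set (semismooth Newton) iteration for the obstacle problem
        min 1/2 u^T A u - f^T u  subject to  u <= psi, with multiplier lam >= 0."""
        n = len(f)
        u, lam = np.zeros(n), np.zeros(n)
        active_prev = None
        for _ in range(max_iter):
            active = lam + c * (u - psi) > 0             # current guess of the contact set
            if active_prev is not None and np.array_equal(active, active_prev):
                break                                    # active set settled: solution found
            inactive = ~active
            u = np.empty(n)
            lam = np.zeros(n)
            u[active] = psi[active]                      # constrained nodes sit on the obstacle
            if inactive.any():                           # solve the reduced linear system
                AII = A[np.ix_(inactive, inactive)]
                rhs = f[inactive] - A[np.ix_(inactive, active)] @ psi[active]
                u[inactive] = np.linalg.solve(AII, rhs)
            lam[active] = f[active] - (A @ u)[active]    # multiplier = residual on the contact set
            active_prev = active
        return u, lam

    # usage: -u'' = f with u <= psi on (0, 1), homogeneous Dirichlet data
    n, h = 50, 1.0 / 51
    A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)) / h**2
    f = 50.0 * np.ones(n)
    x = np.linspace(h, 1.0 - h, n)
    psi = 0.05 + 0.3 * np.abs(x - 0.5)                   # hypothetical obstacle from above
    u, lam = primal_dual_active_set(A, f, psi)
    print("contact nodes:", int((lam > 1e-8).sum()), "max(u - psi) =", float((u - psi).max()))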

  9. Spectroscopic properties of Arx-Zn and Arx-Ag+ (x = 1,2) van der Waals complexes

    NASA Astrophysics Data System (ADS)

    Oyedepo, Gbenga A.; Peterson, Charles; Schoendorff, George; Wilson, Angela K.

    2013-03-01

    Potential energy curves have been constructed using coupled cluster with singles, doubles, and perturbative triple excitations (CCSD(T)) in combination with all-electron and pseudopotential-based multiply augmented correlation consistent basis sets [m-aug-cc-pV(n + d)Z; m = singly, doubly, triply, n = D,T,Q,5]. The effect of basis set superposition error on the spectroscopic properties of Ar-Zn, Ar2-Zn, Ar-Ag+, and Ar2-Ag+ van der Waals complexes was examined. The diffuse functions of the doubly and triply augmented basis sets have been constructed using the even-tempered expansion. The a posteriori counterpoise scheme of Boys and Bernardi and its generalized variant by Valiron and Mayer has been utilized to correct for basis set superposition error (BSSE) in the calculated spectroscopic properties for diatomic and triatomic species. It is found that even at the extrapolated complete basis set limit for the energetic properties, the pseudopotential-based calculations still suffer from significant BSSE effects unlike the all-electron basis sets. This indicates that the quality of the approximations used in the design of pseudopotentials could have major impact on a seemingly valence-exclusive effect like BSSE. We confirm the experimentally determined equilibrium internuclear distance (re), binding energy (De), harmonic vibrational frequency (ωe), and C1Π ← X1Σ transition energy for ArZn and also predict the spectroscopic properties for the low-lying excited states of linear Ar2-Zn (X1Σg, 3Πg, 1Πg), Ar-Ag+ (X1Σ, 3Σ, 3Π, 3Δ, 1Σ, 1Π, 1Δ), and Ar2-Ag+ (X1Σg, 3Σg, 3Πg, 3Δg, 1Σg, 1Πg, 1Δg) complexes, using the CCSD(T) and MR-CISD + Q methods, to aid in their experimental characterizations.

  10. Rigidity of complete generic shrinking Ricci solitons

    NASA Astrophysics Data System (ADS)

    Chu, Yawei; Zhou, Jundong; Wang, Xue

    2018-01-01

    Let (M^n, g, X) be a complete generic shrinking Ricci soliton of dimension n ≥ 3. In this paper, by employing curvature inequalities, the formula of the X-Laplacian for the norm square of the trace-free curvature tensor, the weak maximum principle and the estimate of the scalar curvature of (M^n, g), we prove some rigidity results for (M^n, g, X). In particular, it is shown that (M^n, g, X) is isometric to R^n or a finite quotient of S^n under a pointwise pinching condition. Moreover, we establish several optimal inequalities and classify the shrinking solitons for which equality holds.

  11. Low-drag events in transitional wall-bounded turbulence

    NASA Astrophysics Data System (ADS)

    Whalley, Richard D.; Park, Jae Sung; Kushwaha, Anubhav; Dennis, David J. C.; Graham, Michael D.; Poole, Robert J.

    2017-03-01

    Intermittency of low-drag pointwise wall shear stress measurements within Newtonian turbulent channel flow at transitional Reynolds numbers (friction Reynolds numbers 70 - 130) is characterized using experiments and simulations. Conditional mean velocity profiles during low-drag events closely approach that of a recently discovered nonlinear traveling wave solution; both profiles are near the so-called maximum drag reduction profile, a general feature of turbulent flow of liquids containing polymer additives (despite the fact that all results presented are for Newtonian fluids only). A similarity between temporal intermittency in small domains and spatiotemporal intermittency in large domains is thereby found.

  12. Revisiting and Extending Interface Penalties for Multi-Domain Summation-by-Parts Operators

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Nordstrom, Jan; Gottlieb, David

    2007-01-01

    General interface coupling conditions are presented for multi-domain collocation methods, which satisfy the summation-by-parts (SBP) spatial discretization convention. The combined interior/interface operators are proven to be L2 stable, pointwise stable, and conservative, while maintaining the underlying accuracy of the interior SBP operator. The new interface conditions resemble (and were motivated by) those used in the discontinuous Galerkin finite element community, and maintain many of the same properties. Extensive validation studies are presented using two classes of high-order SBP operators: 1) central finite difference, and 2) Legendre spectral collocation.
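
    For readers unfamiliar with the SBP convention, the following sketch builds the standard second-order SBP first-derivative pair (P, Q) with D = P^{-1} Q and verifies the summation-by-parts property Q + Q^T = diag(-1, 0, ..., 0, 1), which is what drives the energy-method stability arguments; the grid size and spacing are arbitrary and the interface penalty terms of the paper are not included.

    import numpy as np

    def sbp_first_derivative(n, h):
        """Standard second-order SBP first-derivative pair: D = P^{-1} Q with
        Q + Q^T = B = diag(-1, 0, ..., 0, 1), the discrete analogue of integration by parts."""
        P = h * np.eye(n)
        P[0, 0] = P[-1, -1] = h / 2.0                          # boundary-modified norm (quadrature)
        Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
        Q[0, 0], Q[-1, -1] = -0.5, 0.5
        return np.linalg.solve(P, Q), P, Q

    n, h = 11, 0.1
    D, P, Q = sbp_first_derivative(n, h)
    B = np.zeros((n, n))
    B[0, 0], B[-1, -1] = -1.0, 1.0
    print("SBP property Q + Q^T = B:", np.allclose(Q + Q.T, B))
    x = np.linspace(0.0, 1.0, n)
    print("max error on a linear function:", np.max(np.abs(D @ x - 1.0)))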

  13. Semismooth Newton method for gradient constrained minimization problem

    NASA Astrophysics Data System (ADS)

    Anyyeva, Serbiniyaz; Kunisch, Karl

    2012-08-01

    In this paper we treat a gradient constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. To obtain a numerical approximation to the solution, we have developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization is applied to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using a semismooth Newton method, a continuation method is developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.

  14. A fast isogeometric BEM for the three dimensional Laplace- and Helmholtz problems

    NASA Astrophysics Data System (ADS)

    Dölz, Jürgen; Harbrecht, Helmut; Kurz, Stefan; Schöps, Sebastian; Wolf, Felix

    2018-03-01

    We present an indirect higher order boundary element method utilising NURBS mappings for exact geometry representation and an interpolation-based fast multipole method for compression and reduction of computational complexity, to counteract the problems arising due to the dense matrices produced by boundary element methods. By solving Laplace and Helmholtz problems via a single layer approach we show, through a series of numerical examples suitable for easy comparison with other numerical schemes, that one can indeed achieve extremely high rates of convergence of the pointwise potential through the utilisation of higher order B-spline-based ansatz functions.

  15. An iterative solver for the 3D Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Belonosov, Mikhail; Dmitriev, Maxim; Kostin, Victor; Neklyudov, Dmitry; Tcheverda, Vladimir

    2017-09-01

    We develop a frequency-domain iterative solver for numerical simulation of acoustic waves in 3D heterogeneous media. It is based on the application of a unique preconditioner to the Helmholtz equation that ensures convergence for Krylov subspace iteration methods. Effective inversion of the preconditioner involves the Fast Fourier Transform (FFT) and numerical solution of a series of boundary value problems for ordinary differential equations. Matrix-by-vector multiplication for iterative inversion of the preconditioned matrix involves inversion of the preconditioner and pointwise multiplication of grid functions. Our solver has been verified by benchmarking against exact solutions and a time-domain solver.

  16. Multi-scale kinetic description of granular clusters: invariance, balance, and temperature

    NASA Astrophysics Data System (ADS)

    Capriz, Gianfranco; Mariano, Paolo Maria

    2017-12-01

    We discuss a multi-scale continuum representation of bodies made of several mass particles flowing independently of each other. From an invariance procedure and a nonstandard balance of inertial actions, we derive the balance equations introduced in earlier work directly in pointwise form, essentially on the basis of physical plausibility. In this way, we analyze their foundations. Then, we propose a Boltzmann-type equation for the distribution of kinetic energies within control volumes in space and indicate how such a distribution allows us to propose a definition of (granular) temperature along processes far from equilibrium.

  17. On Building an A-Posteriori Index from Survey Data: A Case for Educational Planners' Assessment of Attitudes towards an Educational Innovation.

    ERIC Educational Resources Information Center

    Vazquez-Abad, Jesus; DePauw, Karen

    To simplify data from a large survey, it is desirable to classify subjects according to their attitudes toward certain issues, as measured by questions in the survey. Responses to 12 questions were identified as indicative of attitudes toward deschooling education. These attitudes were explained by means of patterns exhibited within the responses…

  18. The Effect of Substituting p for alpha on the Unconditional and Conditional Powers of a Null Hypothesis Test.

    ERIC Educational Resources Information Center

    Martuza, Victor R.; Engel, John D.

    Results from classical power analysis (Brewer, 1972) suggest that a researcher should not set alpha = p (when p is less than alpha) in a posteriori fashion when a study yields statistically significant results because of a resulting decrease in power. The purpose of the present report is to use Bayesian theory in examining the validity of this…

  19. At the origins of the Trojan Horse Method

    NASA Astrophysics Data System (ADS)

    Lattuada, Marcello

    2018-01-01

    During the seventies and eighties a long experimental research program on the quasi-free reactions at low energy was carried out by a small group of nuclear physicists, where Claudio Spitaleri was one of the main protagonists. Nowadays, a posteriori, the results of these studies can be considered an essential step preparatory to the application of the Trojan Horse Method (THM) in Nuclear Astrophysics.

  20. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    NASA Astrophysics Data System (ADS)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with bandwidth over-constrained and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth constrained to be less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.

  1. Leukocyte Recognition Using EM-Algorithm

    NASA Astrophysics Data System (ADS)

    Colunga, Mario Chirinos; Siordia, Oscar Sánchez; Maybank, Stephen J.

    This document describes a method for classifying images of blood cells. Three different classes of cells are used: Band Neutrophils, Eosinophils and Lymphocytes. The image pattern is projected down to a lower-dimensional subspace using PCA; the probability density function for each class is modeled with a Gaussian mixture using the EM-Algorithm. A new cell image is classified using the maximum a posteriori decision rule.
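
    A compact sketch of the described pipeline (PCA projection, one EM-fitted Gaussian mixture per class, MAP decision rule) is given below using scikit-learn; the random arrays stand in for flattened cell images, and the numbers of components and mixtures are arbitrary choices, not those of the paper.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    class MapGmmClassifier:
        """Project to a PCA subspace, fit one Gaussian mixture per class with EM,
        then classify with the maximum a posteriori decision rule."""
        def __init__(self, n_components=10, n_mixtures=2):
            self.pca = PCA(n_components=n_components)
            self.n_mixtures = n_mixtures

        def fit(self, X, y):
            Z = self.pca.fit_transform(X)
            self.classes_ = np.unique(y)
            self.priors_ = np.array([(y == c).mean() for c in self.classes_])
            self.gmms_ = [GaussianMixture(self.n_mixtures, covariance_type="full",
                                          random_state=0).fit(Z[y == c])
                          for c in self.classes_]
            return self

        def predict(self, X):
            Z = self.pca.transform(X)
            log_post = np.column_stack([g.score_samples(Z) + np.log(p)       # log p(x|c) + log p(c)
                                        for g, p in zip(self.gmms_, self.priors_)])
            return self.classes_[np.argmax(log_post, axis=1)]

    # usage with random stand-ins for flattened cell images of the three classes
    rng = np.random.default_rng(5)
    X = rng.normal(size=(300, 400))                       # 300 images, 400 pixels each
    y = np.repeat(["band_neutrophil", "eosinophil", "lymphocyte"], 100)
    model = MapGmmClassifier().fit(X, y)
    print(model.predict(X[:5]))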

  2. Automatic Modulation Classification of Common Communication and Pulse Compression Radar Waveforms using Cyclic Features

    DTIC Science & Technology

    2013-03-01

    intermediate frequency; LFM, linear frequency modulation; MAP, maximum a posteriori; MATLAB®, matrix laboratory; ML, maximum likelihood; OFDM, orthogonal frequency...spectrum, frequency hopping, and orthogonal frequency division multiplexing (OFDM) modulations. Feature analysis would be a good research thrust to...determine feature relevance and decide if removing any features improves performance. Also, extending the system for simulations using a MIMO receiver or

  3. Total protein measurement in canine cerebrospinal fluid: agreement between a turbidimetric assay and 2 dye-binding methods and determination of reference intervals using an indirect a posteriori method.

    PubMed

    Riond, B; Steffen, F; Schmied, O; Hofmann-Lehmann, R; Lutz, H

    2014-03-01

    In veterinary clinical laboratories, qualitative tests for total protein measurement in canine cerebrospinal fluid (CSF) have been replaced by quantitative methods, which can be divided into dye-binding assays and turbidimetric methods. There is a lack of validation data and reference intervals (RIs) for these assays. The aim of the present study was to assess agreement between the turbidimetric benzethonium chloride method and 2 dye-binding methods (Pyrogallol Red-Molybdate method [PRM], Coomassie Brilliant Blue [CBB] technique) for measurement of total protein concentration in canine CSF. Furthermore, RIs were determined for all 3 methods using an indirect a posteriori method. For assay comparison, a total of 118 canine CSF specimens were analyzed. For RI calculation, clinical records of 401 canine patients with normal CSF analysis were studied and classified according to their final diagnosis into pathologic and nonpathologic values. The turbidimetric assay showed excellent agreement with the PRM assay (mean bias 0.003 g/L [-0.26-0.27]). The CBB method generally showed higher total protein values than the turbidimetric assay and the PRM assay (mean bias -0.14 g/L for turbidimetric and PRM assay). From 90 of 401 canine patients, nonparametric reference intervals (2.5%, 97.5% quantile) were calculated (turbidimetric assay and PRM method: 0.08-0.35 g/L (90% CI: 0.07-0.08/0.33-0.39); CBB method: 0.17-0.55 g/L (90% CI: 0.16-0.18/0.52-0.61)). Total protein concentration in canine CSF specimens remained stable for up to 6 months of storage at -80°C. Due to variations among methods, RIs for total protein concentration in canine CSF have to be calculated for each method. The a posteriori method of RI calculation described here should encourage other veterinary laboratories to establish RIs that are laboratory-specific. ©2014 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.
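
    The nonparametric reference-interval computation itself reduces to the 2.5% and 97.5% quantiles of the a posteriori (indirectly selected) non-pathologic sample, optionally with bootstrap confidence limits on the interval bounds; a sketch with synthetic protein values follows, and the distributional assumptions are illustrative only.

    import numpy as np

    def nonparametric_reference_interval(values, low=2.5, high=97.5):
        """Nonparametric reference interval: empirical quantiles of the values retained
        a posteriori as non-pathologic."""
        return np.percentile(np.asarray(values, float), [low, high])

    rng = np.random.default_rng(6)
    csf_tp = rng.lognormal(mean=np.log(0.18), sigma=0.35, size=90)   # synthetic CSF total protein (g/L)
    lower, upper = nonparametric_reference_interval(csf_tp)
    print("reference interval: %.2f-%.2f g/L" % (lower, upper))

    # bootstrap confidence limits on the interval bounds (here 90% CIs, as reported per method)
    boot = np.percentile(rng.choice(csf_tp, size=(2000, csf_tp.size), replace=True), [2.5, 97.5], axis=1)
    print("90% CI of the lower limit:", np.round(np.percentile(boot[0], [5, 95]), 2))
    print("90% CI of the upper limit:", np.round(np.percentile(boot[1], [5, 95]), 2))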

  4. Image-based topology for sensor gridlocking and association

    NASA Astrophysics Data System (ADS)

    Stanek, Clay J.; Javidi, Bahram; Yanni, Philip

    2002-07-01

    Correlation engines have been evolving since the implementation of radar. In modern sensor fusion architectures, correlation and gridlock filtering are required to produce common, continuous, and unambiguous tracks of all objects in the surveillance area. The objective is to provide a unified picture of the theatre or area of interest to battlefield decision makers, ultimately enabling them to make better inferences for future action and eliminate fratricide by reducing ambiguities. Here, correlation refers to association, which in this context is track-to-track association. A related process, gridlock filtering or gridlocking, refers to the reduction in navigation errors and sensor misalignment errors so that one sensor's track data can be accurately transformed into another sensor's coordinate system. As platforms gain multiple sensors, the correlation and gridlocking of tracks become significantly more difficult. Much of the existing correlation technology revolves around various interpretations of the generalized Bayesian decision rule: choose the action that minimizes conditional risk. One implementation of this principle equates the risk minimization statement to the comparison of ratios of a priori probability distributions to thresholds. The binary decision problem phrased in terms of likelihood ratios is also known as the famed Neyman-Pearson hypothesis test. Using another restatement of the principle for a symmetric loss function, risk minimization leads to a decision that maximizes the a posteriori probability distribution. Even for deterministic decision rules, situations can arise in correlation where there are ambiguities. For these situations, a common algorithm used is a sparse assignment technique such as the Munkres or JVC algorithm. Furthermore, associated tracks may be combined with the hope of reducing the positional uncertainty of a target or object identified by an existing track from the information of several fused/correlated tracks. Gridlocking is typically accomplished with some type of least-squares algorithm, such as the Kalman filtering technique, which attempts to locate the best bias error vector estimate from a set of correlated/fused track pairs. Here, we will introduce a new approach to this longstanding problem by adapting many of the familiar concepts from pattern recognition, ones certainly familiar to target recognition applications. Furthermore, we will show how this technique can lend itself to specialized processing, such as that available through an optical or hybrid correlator.

  5. Pointwise mutual information quantifies intratumor heterogeneity in tissue sections labeled with multiple fluorescent biomarkers.

    PubMed

    Spagnolo, Daniel M; Gyanchandani, Rekha; Al-Kofahi, Yousef; Stern, Andrew M; Lezon, Timothy R; Gough, Albert; Meyer, Dan E; Ginty, Fiona; Sarachan, Brion; Fine, Jeffrey; Lee, Adrian V; Taylor, D Lansing; Chennubhotla, S Chakra

    2016-01-01

    Measures of spatial intratumor heterogeneity are potentially important diagnostic biomarkers for cancer progression, proliferation, and response to therapy. Spatial relationships among cells including cancer and stromal cells in the tumor microenvironment (TME) are key contributors to heterogeneity. We demonstrate how to quantify spatial heterogeneity from immunofluorescence pathology samples, using a set of 3 basic breast cancer biomarkers as a test case. We learn a set of dominant biomarker intensity patterns and map the spatial distribution of the biomarker patterns with a network. We then describe the pairwise association statistics for each pattern within the network using pointwise mutual information (PMI) and visually represent heterogeneity with a two-dimensional map. We found a salient set of 8 biomarker patterns to describe cellular phenotypes from a tissue microarray cohort containing 4 different breast cancer subtypes. After computing PMI for each pair of biomarker patterns in each patient and tumor replicate, we visualize the interactions that contribute to the resulting association statistics. Then, we demonstrate the potential for using PMI as a diagnostic biomarker, by comparing PMI maps and heterogeneity scores from patients across the 4 different cancer subtypes. Estrogen receptor positive invasive lobular carcinoma patient, AL13-6, exhibited the highest heterogeneity score among those tested, while estrogen receptor negative invasive ductal carcinoma patient, AL13-14, exhibited the lowest heterogeneity score. This paper presents an approach for describing intratumor heterogeneity, in a quantitative fashion (via PMI), which departs from the purely qualitative approaches currently used in the clinic. PMI is generalizable to highly multiplexed/hyperplexed immunofluorescence images, as well as spatial data from complementary in situ methods including FISSEQ and CyTOF, sampling many different components within the TME. We hypothesize that PMI will uncover key spatial interactions in the TME that contribute to disease proliferation and progression.
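
    The core PMI statistic is simple to state: for each pair of biomarker patterns, compare the joint frequency of the pair on spatially linked cells with the product of the marginal frequencies. A minimal sketch with a made-up co-occurrence count matrix follows; the pattern learning, network construction, and visualization steps of the paper are omitted.

    import numpy as np

    def pointwise_mutual_information(pair_counts, eps=1e-12):
        """PMI matrix for co-occurring patterns: log p(i, j) / (p(i) p(j)), where
        pair_counts[i, j] counts pattern pairs observed on spatially linked cells
        (a symmetric co-occurrence matrix is assumed)."""
        joint = pair_counts / pair_counts.sum()
        marginal = joint.sum(axis=1)
        return np.log((joint + eps) / (np.outer(marginal, marginal) + eps))

    # toy usage with 3 patterns: positive entries mark pairs co-occurring more often than chance
    counts = np.array([[30.0, 5.0, 1.0],
                       [5.0, 20.0, 2.0],
                       [1.0, 2.0, 10.0]])
    print(np.round(pointwise_mutual_information(counts), 2))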

  6. Preictal dynamics of EEG complexity in intracranially recorded epileptic seizure: a case report.

    PubMed

    Bob, Petr; Roman, Robert; Svetlak, Miroslav; Kukleta, Miloslav; Chladek, Jan; Brazdil, Milan

    2014-11-01

    Recent findings suggest that neural complexity, reflecting a number of independent processes in the brain, may characterize typical changes during epileptic seizures and may enable description of preictal dynamics. With respect to previously reported findings suggesting specific changes in neural complexity during the preictal period, we have used the measure of pointwise correlation dimension (PD2) as a sensitive indicator of nonstationary changes in complexity of the electroencephalogram (EEG) signal. Although this measure of complexity in epileptic patients was previously reported by Feucht et al (Applications of correlation dimension and pointwise dimension for non-linear topographical analysis of focal onset seizures. Med Biol Comput. 1999;37:208-217), it was not used to study changes in preictal dynamics. With the aim to study preictal changes of EEG complexity, we examined signals from 11 multicontact depth (intracerebral) EEG electrodes located in 108 cortical and subcortical brain sites, and from 3 scalp EEG electrodes in a patient with intractable epilepsy, who underwent preoperative evaluation before epilepsy surgery. From those 108 EEG contacts, records related to 44 electrode contacts implanted into lesional structures and white matter were not included in the experimental analysis. The results show that in comparison to the interictal period (at about 8-6 minutes before seizure onset), there was a statistically significant decrease in PD2 complexity in the preictal period at about 2 minutes before seizure onset in all 64 intracranial channels localized in various brain sites that were included in the analysis, and in 3 scalp EEG channels as well. The presented results suggest that using PD2 in EEG analysis may have significant implications for research on preictal dynamics and prediction of epileptic seizures.

  7. Frequency doubling technology perimetry for detection of visual field progression in glaucoma: a pointwise linear regression analysis.

    PubMed

    Liu, Shu; Yu, Marco; Weinreb, Robert N; Lai, Gilda; Lam, Dennis Shun-Chiu; Leung, Christopher Kai-Shun

    2014-05-02

    We compared the detection of visual field progression and its rate of change between standard automated perimetry (SAP) and Matrix frequency doubling technology perimetry (FDTP) in glaucoma. We followed prospectively 217 eyes (179 glaucoma and 38 normal eyes) for SAP and FDTP testing at 4-month intervals for ≥36 months. Pointwise linear regression analysis was performed. A test location was considered progressing when the rate of change of visual sensitivity was ≤-1 dB/y for nonedge and ≤-2 dB/y for edge locations. Three criteria were used to define progression in an eye: ≥3 adjacent nonedge test locations (conservative), any three locations (moderate), and any two locations (liberal) progressed. The rate of change of visual sensitivity was calculated with linear mixed models. Of the 217 eyes, 6.1% and 3.9% progressed with the conservative criteria, 14.5% and 5.6% of eyes progressed with the moderate criteria, and 20.1% and 11.7% of eyes progressed with the liberal criteria by FDTP and SAP, respectively. Taking all test locations into consideration (total, 54 × 179 locations), FDTP detected more progressing locations (176) than SAP (103, P < 0.001). The rate of change of visual field mean deviation (MD) was significantly faster for FDTP (all with P < 0.001). No eyes showed progression in the normal group using the conservative and the moderate criteria. With a faster rate of change of visual sensitivity, FDTP detected more progressing eyes than SAP at a comparable level of specificity. Frequency doubling technology perimetry can provide a useful alternative to monitor glaucoma progression.
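
    The pointwise linear regression criterion amounts to fitting an ordinary least-squares slope of sensitivity versus follow-up time at every test location and flagging locations whose slope falls at or below -1 dB/y (non-edge) or -2 dB/y (edge); the sketch below uses synthetic visual-field series and a hypothetical edge-location mask, not the study data.

    import numpy as np

    def progressing_locations(years, sensitivities, edge_mask, slope_edge=-2.0, slope_nonedge=-1.0):
        """Per-location ordinary least-squares slope of visual sensitivity (dB) versus time (years).
        sensitivities has shape (n_visits, n_locations); a location is flagged when its slope is
        at or below the threshold for its type (edge or non-edge)."""
        t = np.asarray(years, float)
        X = np.column_stack([t, np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, sensitivities, rcond=None)   # row 0: slopes, row 1: intercepts
        slopes = coef[0]
        thresholds = np.where(edge_mask, slope_edge, slope_nonedge)
        return slopes <= thresholds, slopes

    # toy usage: 10 four-monthly visits, 54 locations, two non-edge locations declining at -1.5 dB/y
    rng = np.random.default_rng(7)
    years = np.arange(10) / 3.0
    sens = 30.0 + rng.normal(0.0, 0.8, size=(10, 54))
    sens[:, [20, 40]] -= 1.5 * years[:, None]
    edge = np.zeros(54, bool)
    edge[:16] = True                                               # hypothetical edge-location mask
    flags, slopes = progressing_locations(years, sens, edge)
    print("progressing locations:", np.flatnonzero(flags), "slopes:", np.round(slopes[[20, 40]], 2))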

  8. Geophysical approaches to inverse problems: A methodological comparison. Part 1: A Posteriori approach

    NASA Technical Reports Server (NTRS)

    Seidman, T. I.; Munteanu, M. J.

    1979-01-01

    The relationships of a variety of general computational methods (and variants) for treating ill-posed problems such as geophysical inverse problems are considered. Differences in approach and interpretation based on varying assumptions as to, e.g., the nature of measurement uncertainties are discussed, along with the factors to be considered in selecting an approach. The reliability of the results of such computations is addressed.

  9. Practical Considerations about Expected A Posteriori Estimation in Adaptive Testing: Adaptive A Priori, Adaptive Correction for Bias, and Adaptive Integration Interval.

    ERIC Educational Resources Information Center

    Raiche, Gilles; Blais, Jean-Guy

    In a computerized adaptive test (CAT), it would be desirable to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Decreasing the number of items is accompanied, however, by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. G. Raiche (2000) has…

  10. On the Least-Squares Fitting of Correlated Data: a Priori vs a Posteriori Weighting

    NASA Astrophysics Data System (ADS)

    Tellinghuisen, Joel

    1996-10-01

    One of the methods in common use for analyzing large data sets is a two-step procedure, in which subsets of the full data are first least-squares fitted to a preliminary set of parameters, and the latter are subsequently merged to yield the final parameters. The second step of this procedure is properly a correlated least-squares fit and requires the variance-covariance matrices from the first step to construct the weight matrix for the merge. There is, however, an ambiguity concerning the manner in which the first-step variance-covariance matrices are assessed, which leads to different statistical properties for the quantities determined in the merge. The issue is one of a priori vs a posteriori assessment of weights, which is an application of what was originally called internal vs external consistency by Birge [Phys. Rev. 40, 207-227 (1932)] and Deming ("Statistical Adjustment of Data." Dover, New York, 1964). In the present work the simplest case of a merge fit, that of an average as obtained from a global fit vs a two-step fit of partitioned data, is used to illustrate that only in the case of a priori weighting do the results have the usually expected and desired statistical properties: normal distributions for residuals, t distributions for parameters assessed a posteriori, and χ2 distributions for variances.
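
    The distinction can be demonstrated with the average-of-partitioned-data example the paper uses: merging subset means with a priori weights (propagated from the known data variance) reproduces the global fit, whereas a posteriori weights (estimated from each subset's own scatter) give a merged value with different statistical properties. The sketch below uses equal-size subsets and synthetic Gaussian data as assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(8)
    sigma = 2.0                                      # known measurement standard deviation
    data = rng.normal(10.0, sigma, size=400)
    subsets = np.split(data, 8)                      # the "first step": fit each subset separately

    means = np.array([s.mean() for s in subsets])
    sizes = np.array([s.size for s in subsets])

    # a priori weighting: subset-mean variances propagated from the known sigma
    w_prior = sizes / sigma**2
    merge_prior = np.sum(w_prior * means) / np.sum(w_prior)

    # a posteriori weighting: subset-mean variances estimated from each subset's own scatter
    w_post = sizes / np.array([s.var(ddof=1) for s in subsets])
    merge_post = np.sum(w_post * means) / np.sum(w_post)

    print(merge_prior, data.mean())    # a priori merge reproduces the global fit (equal-size subsets)
    print(merge_post)                  # a posteriori merge differs and has different statistics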

  11. Top-down Estimates of Biomass Burning Emissions of Black Carbon in the Western United States

    NASA Astrophysics Data System (ADS)

    Mao, Y.; Li, Q.; Randerson, J. T.; Liou, K.

    2011-12-01

    We apply a Bayesian linear inversion to derive top-down estimates of biomass burning emissions of black carbon (BC) in the western United States (WUS) for May-November 2006 by inverting surface BC concentrations from the IMPROVE network using the GEOS-Chem chemical transport model. Model simulations are conducted at both 2°×2.5° (globally) and 0.55°×0.66° (nested over North America) horizontal resolutions. We first improve the spatial distributions and seasonal and interannual variations of the BC emissions from the Global Fire Emissions Database (GFEDv2) using MODIS 8-day active fire counts from 2005-2007. The GFEDv2 emissions in N. America are adjusted for three zones: boreal N. America, temperate N. America, and Mexico plus Central America. The resulting emissions are then used as a priori for the inversion. The a posteriori emissions are 2-5 times higher than the a priori in California and the Rockies. Model surface BC concentrations using the a posteriori estimate provide better agreement with IMPROVE observations (~20% increase in the Taylor skill score), including improved ability to capture the observed variability especially during June-July. However, model surface BC concentrations are still biased low by ~30%. Comparisons with the Fire Locating and Modeling of Burning Emissions (FLAMBE) are included.

  12. Top-down Estimates of Biomass Burning Emissions of Black Carbon in the Western United States

    NASA Astrophysics Data System (ADS)

    Mao, Y.; Li, Q.; Randerson, J. T.; CHEN, D.; Zhang, L.; Liou, K.

    2012-12-01

    We apply a Bayesian linear inversion to derive top-down estimates of biomass burning emissions of black carbon (BC) in the western United States (WUS) for May-November 2006 by inverting surface BC concentrations from the IMPROVE network using the GEOS-Chem chemical transport model. Model simulations are conducted at both 2°×2.5° (globally) and 0.5°×0.667° (nested over North America) horizontal resolutions. We first improve the spatial distributions and seasonal and interannual variations of the BC emissions from the Global Fire Emissions Database (GFEDv2) using MODIS 8-day active fire counts from 2005-2007. The GFEDv2 emissions in N. America are adjusted for three zones: boreal N. America, temperate N. America, and Mexico plus Central America. The resulting emissions are then used as a priori for the inversion. The a posteriori emissions are 2-5 times higher than the a priori in California and the Rockies. Model surface BC concentrations using the a posteriori estimate provide better agreement with IMPROVE observations (~50% increase in the Taylor skill score), including improved ability to capture the observed variability especially during June-September. However, model surface BC concentrations are still biased low by ~30%. Comparisons with the Fire Locating and Modeling of Burning Emissions (FLAMBE) are included.

  13. A meta-learning system based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain

    2004-04-01

    The design of an efficient machine learning process through self-adaptation is a great challenge. The goal of meta-learning is to build a self-adaptive learning system that is constantly adapting to its specific (and dynamic) environment. To that end, the meta-learning mechanism must improve its bias dynamically by updating the current learning strategy in accordance with its available experiences or meta-knowledge. We suggest using genetic algorithms as the basis of an adaptive system. In this work, we propose a meta-learning system based on a combination of the a priori and a posteriori concepts. A priori refers to input information and knowledge available at the beginning in order to build and evolve one or more sets of parameters by exploiting the context of the system's information. The self-learning component is based on genetic algorithms and neural Darwinism. A posteriori refers to the implicit knowledge discovered by estimation of the future states of parameters and is also applied to the finding of optimal parameter values. The in-progress research presented here suggests a framework for the discovery of knowledge that can support human experts in their intelligence information assessment tasks. The conclusion presents avenues for further research in genetic algorithms and their capability to learn to learn.

  14. Sparsity-promoting and edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Agapiou, Sergios; Burger, Martin; Dashti, Masoumeh; Helin, Tapio

    2018-04-01

    We consider the inverse problem of recovering an unknown functional parameter u in a separable Banach space, from a noisy observation vector y of its image through a known, possibly non-linear map $\mathcal{G}$. We adopt a Bayesian approach to the problem and consider Besov space priors (see Lassas et al (2009 Inverse Problems Imaging 3 87-122)), which are well-known for their edge-preserving and sparsity-promoting properties and have recently attracted wide attention especially in the medical imaging community. Our key result is to show that in this non-parametric setup the maximum a posteriori (MAP) estimates are characterized by the minimizers of a generalized Onsager-Machlup functional of the posterior. This is done independently for the so-called weak and strong MAP estimates, which as we show coincide in our context. In addition, we prove a form of weak consistency for the MAP estimators in the infinitely informative data limit. Our results are remarkable for two reasons: first, the prior distribution is non-Gaussian and does not meet the smoothness conditions required in previous research on non-parametric MAP estimates. Second, the result analytically justifies existing uses of the MAP estimate in finite but high dimensional discretizations of Bayesian inverse problems with the considered Besov priors.
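
    For orientation only: with Gaussian observation noise of covariance Σ and a Besov B^s_{1,1} prior, the generalized Onsager-Machlup functional whose minimizers characterize the MAP estimates takes, schematically, a Tikhonov-type form. The paper's setting is more general; λ and the norms below are notational placeholders.

```latex
u_{\mathrm{MAP}} \;\in\; \arg\min_{u}\; I(u),
\qquad
I(u) \;=\; \tfrac{1}{2}\,\bigl\|\Sigma^{-1/2}\bigl(y-\mathcal{G}(u)\bigr)\bigr\|^{2}
\;+\; \lambda\,\|u\|_{B^{s}_{1,1}} .
```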

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    JOHNSON, A.R.

    Biological control is any activity taken to prevent, limit, clean up, or remediate potential environmental, health and safety, or workplace quality impacts from plants, animals, or microorganisms. At Hanford the principal emphasis of biological control is to prevent the transport of radioactive contamination by biological vectors (plants, animals, or microorganisms), and where necessary, control and clean up resulting contamination. Other aspects of biological control at Hanford include industrial weed control (e.g., tumbleweeds), noxious weed control (invasive, non-native plant species), and pest control (undesirable animals such as rodents and stinging insects, and microorganisms such as molds that adversely affect the quality of the workplace environment). Biological control activities may be either preventive (a priori) or in response to existing contamination spread (a posteriori). Surveillance activities, including ground, vegetation, flying insect, and other surveys, and a priori control actions, such as herbicide spraying and placing biological barriers, are important in preventing radioactive contamination spread. If surveillance discovers that biological vectors have spread radioactive contamination, a posteriori control measures, such as fixing contamination, followed by cleanup and removal of the contamination to an approved disposal location are typical response functions. In some cases remediation following the contamination cleanup and removal is necessary. Biological control activities for industrial weeds, noxious weeds and pests have similar modes of prevention and response.

  16. Pattern recognition for passive polarimetric data using nonparametric classifiers

    NASA Astrophysics Data System (ADS)

    Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.

    2005-08-01

    Passive polarization-based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization-based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of any given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), a nonparametric method that can approximate Bayes-optimal decision boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
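
    A minimal sketch of the MAP decision rule described above, with a Gaussian kernel density estimate standing in for the PNN/KNN density estimators; the class structure and data are synthetic, not the paper's polarimetric features.

```python
import numpy as np
from scipy.stats import gaussian_kde

class MAPClassifier:
    """Assign each sample to the class maximizing prior * likelihood."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: np.mean(y == c) for c in self.classes_}
        # class-conditional densities estimated nonparametrically (KDE)
        self.kdes_ = {c: gaussian_kde(X[y == c].T) for c in self.classes_}
        return self

    def predict(self, X):
        # posterior is proportional to prior * likelihood; the normalizing
        # constant is common to all classes and can be ignored for the argmax
        scores = np.column_stack(
            [self.priors_[c] * self.kdes_[c](X.T) for c in self.classes_])
        return self.classes_[np.argmax(scores, axis=1)]

# Toy two-class problem in a 2-D feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
pred = MAPClassifier().fit(X, y).predict(X)
```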

  17. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors. This enables ROMs to be rigorously incorporated in uncertainty-quantification settings, as the error model can be treated as a source of epistemic uncertainty. This work was completed as part of a Truman Fellowship appointment. We note that much additional work was performed as part of the Fellowship. One salient project is the development of the Trilinos-based model-reduction software module Razor, which is currently bundled with the Albany PDE code and currently allows nonlinear reduced-order models to be constructed for any application supported in Albany. Other important projects include the following: 1. ROMES-equipped ROMs for Bayesian inference: K. Carlberg, M. Drohmann, F. Lu (Lawrence Berkeley National Laboratory), M. Morzfeld (Lawrence Berkeley National Laboratory). 2. ROM-enabled Krylov-subspace recycling: K. Carlberg, V. Forstall (University of Maryland), P. Tsuji, R. Tuminaro. 3. A pseudo balanced POD method using only dual snapshots: K. Carlberg, M. Sarovar. 4. An analysis of discrete v. continuous optimality in nonlinear model reduction: K. Carlberg, M. Barone, H. Antil (George Mason University). Journal articles for these projects are in progress at the time of this writing.

  18. The effect of muscle contraction level on the cervical vestibular evoked myogenic potential (cVEMP): usefulness of amplitude normalization.

    PubMed

    Bogle, Jamie M; Zapala, David A; Criter, Robin; Burkard, Robert

    2013-02-01

    The cervical vestibular evoked myogenic potential (cVEMP) is a reflexive change in sternocleidomastoid (SCM) muscle contraction activity thought to be mediated by a saccular vestibulo-collic reflex. CVEMP amplitude varies with the state of the afferent (vestibular) limb of the vestibulo-collic reflex pathway, as well as with the level of SCM muscle contraction. It follows that in order for cVEMP amplitude to reflect the status of the afferent portion of the reflex pathway, muscle contraction level must be controlled. Historically, this has been accomplished by volitionally controlling muscle contraction level either with the aid of a biofeedback method, or by an a posteriori method that normalizes cVEMP amplitude by the level of muscle contraction. A posteriori normalization methods make the implicit assumption that mathematical normalization precisely removes the influence of the efferent limb of the vestibulo-collic pathway. With the cVEMP, however, we are violating basic assumptions of signal averaging: specifically, the background noise and the response are not independent. The influence of this signal-averaging violation on our ability to normalize cVEMP amplitude using a posteriori methods is not well understood. The aims of this investigation were to describe the effect of muscle contraction, as measured by a prestimulus electromyogenic estimate, on cVEMP amplitude and interaural amplitude asymmetry ratio, and to evaluate the benefit of using a commonly advocated a posteriori normalization method on cVEMP amplitude and asymmetry ratio variability. Prospective, repeated-measures design using a convenience sample. Ten healthy adult participants between 25 and 61 yr of age. cVEMP responses to 500 Hz tone bursts (120 dB pSPL) for three conditions describing maximum, moderate, and minimal muscle contraction. Mean (standard deviation) cVEMP amplitude and asymmetry ratios were calculated for each muscle-contraction condition. Repeated measures analysis of variance and t-tests compared the variability in cVEMP amplitude between sides and conditions. Linear regression analyses compared asymmetry ratios. Polynomial regression analyses described the corrected and uncorrected cVEMP amplitude growth functions. While cVEMP amplitude increased with increased muscle contraction, the relationship was not linear or even proportionate. In the majority of cases, once muscle contraction reached a certain "threshold" level, cVEMP amplitude increased rapidly and then saturated. Normalizing cVEMP amplitudes did not remove the relationship between cVEMP amplitude and muscle contraction level. As muscle contraction increased, the normalized amplitude increased, and then decreased, corresponding with the observed amplitude saturation. Abnormal asymmetry ratios (based on values reported in the literature) were noted for four instances of uncorrected amplitude asymmetry at less than maximum muscle contraction levels. Amplitude normalization did not substantially change the number of observed asymmetry ratios. Because cVEMP amplitude did not typically grow proportionally with muscle contraction level, amplitude normalization did not lead to stable cVEMP amplitudes or asymmetry ratios across varying muscle contraction levels. Until we better understand the relationships between muscle contraction level, surface electromyography (EMG) estimates of muscle contraction level, and cVEMP amplitude, the application of normalization methods to correct cVEMP amplitude appears unjustified. American Academy of Audiology.
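
    One common way the a posteriori normalization discussed above is implemented is to divide the raw p1-n1 amplitude by the mean rectified prestimulus EMG. The sketch below shows only that arithmetic; the analysis window, sampling rate, variable names, and signals are hypothetical and not the study's exact procedure.

```python
import numpy as np

def normalized_cvemp_amplitude(avg_waveform, prestim_emg, fs,
                               p1_n1_window=(0.013, 0.023)):
    """Divide the peak-to-peak p1-n1 amplitude by the mean rectified
    prestimulus EMG (illustrative a posteriori correction only)."""
    i0, i1 = (int(t * fs) for t in p1_n1_window)
    segment = avg_waveform[i0:i1]
    raw_amplitude = segment.max() - segment.min()   # peak-to-peak p1-n1
    emg_level = np.mean(np.abs(prestim_emg))        # mean rectified EMG
    return raw_amplitude / emg_level

# Illustrative use with synthetic signals sampled at 5 kHz.
fs = 5000
t = np.arange(0, 0.05, 1 / fs)
waveform = 20e-6 * np.sin(2 * np.pi * 60 * (t - 0.013)) * (t > 0.013)
prestim = 50e-6 * np.random.default_rng(0).standard_normal(int(0.02 * fs))
print(normalized_cvemp_amplitude(waveform, prestim, fs))
```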

  19. Exploring image data assimilation in the prospect of high-resolution satellite data

    NASA Astrophysics Data System (ADS)

    Verron, J. A.; Duran, M.; Gaultier, L.; Brankart, J. M.; Brasseur, P.

    2016-02-01

    Many recent works show the key importance of studying the ocean at fine scales, including the meso- and submesoscales. Satellite observations such as ocean color data provide information on a wide range of scales but do not directly provide information on ocean dynamics. Satellite altimetry provides information on the ocean dynamic topography (SSH), but so far with limited resolution in space and, even more so, in time. However, in the near future, high-resolution SSH data (e.g. SWOT) will give a vision of the dynamic topography at such fine spatial resolution. This raises some challenging issues for data assimilation in physical oceanography: develop reliable methodology to assimilate high-resolution data, make integrated use of various data sets including biogeochemical data, and, quite simply, handle large amounts of data and huge state vectors. In this work, we propose to consider structured information rather than pointwise data. First, we take an image data assimilation approach in studying the feasibility of inverting tracer observations from Sea Surface Temperature and/or Ocean Color datasets, to improve the description of mesoscale dynamics provided by altimetric observations. Finite Size Lyapunov Exponents are used as an image proxy. The inverse problem is formulated in a Bayesian framework and expressed in terms of a cost function measuring the misfits between the two images. Second, we explore the inversion of SWOT-like high-resolution SSH data and, in particular, the various possible proxies of the actual SSH that could be used to control the ocean circulation at various scales. One focus is on controlling the subsurface ocean from surface-only data. A key point lies in the errors and uncertainties that are associated with SWOT data.

  20. Uncertainties for two-dimensional models of solar rotation from helioseismic eigenfrequency splitting

    NASA Technical Reports Server (NTRS)

    Genovese, Christopher R.; Stark, Philip B.; Thompson, Michael J.

    1995-01-01

    Observed solar p-mode frequency splittings can be used to estimate angular velocity as a function of position in the solar interior. Formal uncertainties of such estimates depend on the method of estimation (e.g., least-squares), the distribution of errors in the observations, and the parameterization imposed on the angular velocity. We obtain lower bounds on the uncertainties that do not depend on the method of estimation; the bounds depend on an assumed parameterization, but the fact that they are lower bounds for the 'true' uncertainty does not. Ninety-five percent confidence intervals for estimates of the angular velocity from 1986 Big Bear Solar Observatory (BBSO) data, based on a 3659 element tensor-product cubic-spline parameterization, are everywhere wider than 120 nHz, and exceed 60,000 nHz near the core. When compared with estimates of the solar rotation, these bounds reveal that useful inferences based on pointwise estimates of the angular velocity using 1986 BBSO splitting data are not feasible over most of the Sun's volume. The discouraging size of the uncertainties is due principally to the fact that helioseismic measurements are insensitive to changes in the angular velocity at individual points, so estimates of point values based on splittings are extremely uncertain. Functionals that measure distributed 'smooth' properties are, in general, better constrained than estimates of the rotation at a point. For example, the uncertainties in estimated differences of average rotation between adjacent blocks of about 0.001 solar volumes across the base of the convective zone are much smaller, and one of several estimated differences we compute appears significant at the 95% level.

  1. Consistent regional fluxes of CH4 and CO2 inferred from GOSAT proxy XCH4 : XCO2 retrievals, 2010-2014

    NASA Astrophysics Data System (ADS)

    Feng, Liang; Palmer, Paul I.; Bösch, Hartmut; Parker, Robert J.; Webb, Alex J.; Correia, Caio S. C.; Deutscher, Nicholas M.; Domingues, Lucas G.; Feist, Dietrich G.; Gatti, Luciana V.; Gloor, Emanuel; Hase, Frank; Kivi, Rigel; Liu, Yi; Miller, John B.; Morino, Isamu; Sussmann, Ralf; Strong, Kimberly; Uchino, Osamu; Wang, Jing; Zahn, Andreas

    2017-04-01

    We use the GEOS-Chem global 3-D model of atmospheric chemistry and transport and an ensemble Kalman filter to simultaneously infer regional fluxes of methane (CH4) and carbon dioxide (CO2) directly from GOSAT retrievals of XCH4 : XCO2, using sparse ground-based CH4 and CO2 mole fraction data to anchor the ratio. This work builds on the previously reported theory that takes into account that (1) these ratios are less prone to systematic error than either the full-physics data products or the proxy CH4 data products; and (2) the resulting CH4 and CO2 fluxes are self-consistent. We show that a posteriori fluxes inferred from the GOSAT data generally outperform the fluxes inferred only from in situ data, as expected. GOSAT CH4 and CO2 fluxes are consistent with global growth rates for CO2 and CH4 reported by NOAA and with a range of independent data, including new profile measurements (0-7 km) over the Amazon Basin that were collected specifically to help validate GOSAT over this geographical region. We find that large-scale multi-year annual a posteriori CO2 fluxes inferred from GOSAT data are similar to those inferred from the in situ surface data but with smaller uncertainties, particularly over the tropics. GOSAT data are consistent with smaller peak-to-peak seasonal amplitudes of CO2 than either the a priori or in situ inversion, particularly over the tropics and the southern extratropics. Over the northern extratropics, GOSAT data show larger uptake than the a priori but less than the in situ inversion, resulting in small net emissions over the year. We also find evidence that the carbon balance of tropical South America was perturbed following the droughts of 2010 and 2012 with net annual fluxes not returning to an approximate annual balance until 2013. In contrast, GOSAT data significantly changed the a priori spatial distribution of CH4 emission with a 40 % increase over tropical South America and tropical Asia and a smaller decrease over Eurasia and temperate South America. We find no evidence from GOSAT that tropical South American CH4 fluxes were dramatically affected by the two large-scale Amazon droughts. However, we find that GOSAT data are consistent with double seasonal peaks in Amazonian fluxes that are reproduced over the 5 years we studied: a small peak from January to April and a larger peak from June to October, which are likely due to superimposed emissions from different geographical regions.
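
    The abstract refers to an ensemble Kalman filter. As a point of reference, a minimal stochastic (perturbed-observation) EnKF analysis step can be written as below; the dimensions, observation operator, and error statistics are placeholders, not those of the GEOS-Chem/GOSAT system.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic (perturbed-observation) EnKF analysis step.

    X : (n_state, n_members) ensemble of flux states
    y : (n_obs,) observation vector (e.g., XCH4:XCO2 retrievals)
    H : (n_obs, n_state) linear(ized) observation operator
    R : (n_obs, n_obs) observation error covariance
    """
    n_state, m = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # state anomalies
    P = A @ A.T / (m - 1)                          # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    Y_pert = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=m).T             # perturbed observations
    return X + K @ (Y_pert - H @ X)                # a posteriori ensemble

# Toy usage: 8 regional fluxes, 40 ensemble members, 20 observations.
rng = np.random.default_rng(2)
X = rng.normal(1.0, 0.3, size=(8, 40))
H = rng.normal(size=(20, 8))
R = 0.05 * np.eye(20)
y = H @ np.full(8, 1.2) + rng.multivariate_normal(np.zeros(20), R)
X_post = enkf_update(X, y, H, R, rng)
```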

  2. Point-Wise Phase Matching for Nonlinear Frequency Generation in Dielectric Resonators

    NASA Technical Reports Server (NTRS)

    Yu, Nan (Inventor); Strekalov, Dmitry V. (Inventor); Lin, Guoping (Inventor)

    2016-01-01

    An optical resonator fabricated from a uniaxial birefringent crystal, such as beta barium borate. The crystal is cut with the optical axis not perpendicular to a face of the cut crystal. In some cases the optical axis lies in the plane of the cut crystal face. An incident (input) electromagnetic signal (which can range from the infrared through the visible to the ultraviolet) is applied to the resonator. An output signal is recovered which has a frequency that is an integer multiple of the frequency of the input signal. In some cases a prism is used to evanescently couple the input and the output signals to the resonator.

  3. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    PubMed

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses.
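
    In symbols (notation introduced here for orientation, not taken from the paper): with event indicator δ_i, observed time t_i, risk set R(t_i), and covariate vector x_i, the estimator studied is the minimizer of the ℓ1-penalized negative log partial likelihood:

```latex
\hat{\beta} \;\in\; \arg\min_{\beta}\;
 -\frac{1}{n}\sum_{i=1}^{n}\delta_{i}
 \Bigl[\,x_{i}^{\top}\beta \;-\; \log\!\!\sum_{j\in R(t_{i})}\!\exp\bigl(x_{j}^{\top}\beta\bigr)\Bigr]
 \;+\; \lambda\,\lVert\beta\rVert_{1}.
```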

  4. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso

    PubMed Central

    Kong, Shengchun; Nan, Bin

    2013-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses. PMID:24516328

  5. On character amenability of Banach algebras

    NASA Astrophysics Data System (ADS)

    Kaniuth, E.; Lau, A. T.; Pym, J.

    2008-08-01

    We continue our work [E. Kaniuth, A.T. Lau, J. Pym, On φ-amenability of Banach algebras, Math. Proc. Cambridge Philos. Soc. 144 (2008) 85-96] in the study of amenability of a Banach algebra A defined with respect to a character φ of A. Various necessary and sufficient conditions of a global and a pointwise nature are found for a Banach algebra to possess a φ-mean of norm 1. We also completely determine the size of the set of φ-means for a separable weakly sequentially complete Banach algebra A with no φ-mean in A itself. A number of illustrative examples are discussed.

  6. Statistical considerations in the development of injury risk functions.

    PubMed

    McMurry, Timothy L; Poplin, Gerald S

    2015-01-01

    We address 4 frequently misunderstood and important statistical ideas in the construction of injury risk functions. These include the similarities of survival analysis and logistic regression, the correct scale on which to construct pointwise confidence intervals for injury risk, the ability to discern which form of injury risk function is optimal, and the handling of repeated tests on the same subject. The statistical models are explored through simulation and examination of the underlying mathematics. We provide recommendations for the statistically valid construction and correct interpretation of single-predictor injury risk functions. This article aims to provide useful and understandable statistical guidance to improve the practice in constructing injury risk functions.
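
    To illustrate the "correct scale" point for pointwise confidence intervals, the sketch below fits a single-predictor logistic risk curve and forms the interval on the linear-predictor (logit) scale before transforming back, which keeps the band inside [0, 1] and respects the asymmetry of the risk curve. The data, model, and numbers are simulated, not drawn from the paper.

```python
import numpy as np
import statsmodels.api as sm
from scipy.special import expit
from scipy.stats import norm

# Simulated single-predictor injury data: stimulus magnitude -> injury yes/no.
rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 300)
y = rng.binomial(1, expit(0.8 * (x - 5)))

X = sm.add_constant(x)
fit = sm.Logit(y, X).fit(disp=0)

grid = sm.add_constant(np.linspace(0, 10, 101))
eta = grid @ fit.params                 # linear predictor (logit scale)
se = np.sqrt(np.einsum('ij,jk,ik->i', grid, fit.cov_params(), grid))
z = norm.ppf(0.975)

# Pointwise 95% interval built on the logit scale, then transformed back.
risk = expit(eta)
lower, upper = expit(eta - z * se), expit(eta + z * se)
```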

  7. Clinical value of pointwise encoding time reduction with radial acquisition (PETRA) MR sequence in assessing internal derangement of knee.

    PubMed

    Kim, Sung Kwan; Kim, Donghyun; Lee, Sun Joo; Choo, Hye Jung; Oh, Minkyung; Son, Yohan; Paek, MunYoung

    2018-06-01

    The purpose was to evaluate the clinical value of PETRA sequence for the diagnosis of internal derangement of the knee. The major structures of the knee in 34 patients were evaluated and compared among conventional MRI findings, PETRA images, and arthroscopic findings. The specificities of PETRA with 2D FSE sequence were higher for meniscal lesions than those obtained when using 2D FSE alone. Using PETRA images along with conventional 2D FSE images can increase the accuracy of assessing internal derangements of the knee and, specifically, meniscal lesions. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Wang, Jian-Zhong

    1993-01-01

    We have designed a cubic spline wavelet decomposition for the Sobolev space H(sup 2)(sub 0)(I) where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This DWT transform will map discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for the initial value boundary problem of nonlinear PDE's. Then, we test the efficiency of the DWT transform and apply the collocation method to solve linear and nonlinear PDE's.

  9. Density of convex intersections and applications

    PubMed Central

    Rautenberg, C. N.; Rösel, S.

    2017-01-01

    In this paper, we address density properties of intersections of convex sets in several function spaces. Using the concept of Γ-convergence, it is shown in a general framework, how these density issues naturally arise from the regularization, discretization or dualization of constrained optimization problems and from perturbed variational inequalities. A variety of density results (and counterexamples) for pointwise constraints in Sobolev spaces are presented and the corresponding regularity requirements on the upper bound are identified. The results are further discussed in the context of finite-element discretizations of sets associated with convex constraints. Finally, two applications are provided, which include elasto-plasticity and image restoration problems. PMID:28989301

  10. Uncertainty Quantification of GEOS-5 L-band Radiative Transfer Model Parameters Using Bayesian Inference and SMOS Observations

    NASA Technical Reports Server (NTRS)

    DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.

    2013-01-01

    Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov Chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25% of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square differences in long-term Tb averages and standard deviations are found consistent with the respective estimated total simulation and observation error standard deviations of 3.1 K and 2.4 K. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).
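
    A minimal random-walk Metropolis sampler conveys the idea of the MCMC calibration described above (the actual study uses a more elaborate sampler). The toy "RTM" surrogate, parameter bounds, and observation statistics below are invented purely for illustration.

```python
import numpy as np

def metropolis(log_post, theta0, n_iter, step, rng):
    """Random-walk Metropolis sampler (minimal stand-in for the study's MCMC)."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + rng.normal(scale=step, size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Hypothetical misfit: match long-term mean and std of simulated vs observed Tb.
obs_mean, obs_std, sigma = 250.0, 12.0, 2.5        # illustrative values (K)

def log_post(theta):
    rough, albedo = theta
    if not (0.0 <= rough <= 3.0 and 0.0 <= albedo <= 0.3):
        return -np.inf                              # uniform prior bounds
    sim_mean = 270.0 - 15.0 * rough - 60.0 * albedo  # toy RTM surrogate
    sim_std = 10.0 + 2.0 * rough
    return -0.5 * (((sim_mean - obs_mean) / sigma) ** 2
                   + ((sim_std - obs_std) / sigma) ** 2)

chain = metropolis(log_post, [1.0, 0.1], 5000, 0.05, np.random.default_rng(4))
theta_map = chain[np.argmax([log_post(t) for t in chain])]   # MAP-like point
```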

  11. Whole vertebral bone segmentation method with a statistical intensity-shape model based approach

    NASA Astrophysics Data System (ADS)

    Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer

    2011-03-01

    An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focus on constructing and utilizing 4 different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic, and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as a pre-processing step to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge on both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed as fitting of this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and partially failed in 4 (2 in the cervical area and 2 in the lumbosacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic and lumbar vertebrae.

  12. RadVel: General toolkit for modeling Radial Velocities

    NASA Astrophysics Data System (ADS)

    Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan

    2018-01-01

    RadVel models Keplerian orbits in radial velocity (RV) time series. The code is written in Python with a fast Kepler's equation solver written in C. It provides a framework for fitting RVs using maximum a posteriori optimization and computing robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel can perform Bayesian model comparison and produces publication quality plots and LaTeX tables.

  13. CANDID: Companion Analysis and Non-Detection in Interferometric Data

    NASA Astrophysics Data System (ADS)

    Gallenne, A.; Mérand, A.; Kervella, P.; Monnier, J. D.; Schaefer, G. H.; Baron, F.; Breitfelder, J.; Le Bouquin, J. B.; Roettenbacher, R. M.; Gieren, W.; Pietrzynski, G.; McAlister, H.; ten Brummelaar, T.; Sturmann, J.; Sturmann, L.; Turner, N.; Ridgway, S.; Kraus, S.

    2015-05-01

    CANDID finds faint companions around stars in interferometric data in the OIFITS format. It allows systematic searches for faint companions in OIFITS data and, if none is found, estimates the detection limit. The tool is based on model fitting and Chi2 minimization, with a grid for the starting points of the companion position. It ensures all positions are explored by estimating a posteriori whether the grid is dense enough, and provides an estimate of the optimum grid density.

  14. Application of the quantum spin glass theory to image restoration.

    PubMed

    Inoue, J I

    2001-04-01

    Quantum fluctuation is introduced into the Markov random-field model for image restoration in the context of a Bayesian approach. We investigate the dependence of the quantum fluctuation on the quality of a black and white image restoration by making use of statistical mechanics. We find that the maximum posterior marginal (MPM) estimate based on the quantum fluctuation gives a fine restoration in comparison with the maximum a posteriori estimate or the thermal fluctuation based MPM estimate.

  15. Covariation bias for food-related control is associated with eating disorders symptoms in normal adolescents.

    PubMed

    Mayer, Birgit; Muris, Peter; Kramer Freher, Nancy; Stout, Janne; Polak, Marike

    2012-12-01

    Covariation bias refers to the phenomenon of overestimating the contingency between certain stimuli and negative outcomes, which is considered as a heuristic playing a role in the maintenance of certain types of psychopathology. In the present study, covariation bias was investigated within the context of eating pathology. In a sample of 148 adolescents (101 girls, 47 boys; mean age 15.3 years), a priori and a posteriori contingencies were measured between words referring to control and loss of control over eating behavior, on the one hand, and fear, disgust, positive and neutral outcomes, on the other hand. Results indicated that all adolescents displayed an a priori covariation bias reflecting an overestimation of the contingency of words referring to loss of control over eating behavior and fear- and disgust-relevant outcomes, while words referring to control over eating behavior were more often associated with positive and neutral outcomes. This bias was unrelated to level of eating disorder symptoms. In the case of a posteriori contingency estimates no overall bias could be observed, but some evidence was found indicating that girls with higher levels of eating disorder symptoms displayed a stronger covariation bias. These findings provide further support for the notion that covariation bias is involved in eating pathology, and also demonstrate that this type of cognitive distortion is already present in adolescents. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. [Population pharmacokinetics applied to optimising cisplatin doses in cancer patients].

    PubMed

    Ramón-López, A; Escudero-Ortiz, V; Carbonell, V; Pérez-Ruixo, J J; Valenzuela, B

    2012-01-01

    To develop and internally validate a population pharmacokinetics model for cisplatin and assess its prediction capacity for personalising doses in cancer patients. Cisplatin plasma concentrations in forty-six cancer patients were used to determine the pharmacokinetic parameters of a two-compartment pharmacokinetic model implemented in NONMEM VI software. Pharmacokinetic parameter identification capacity was assessed using the parametric bootstrap method and the model was validated using the nonparametric bootstrap method and standardised visual and numerical predictive checks. The final model's prediction capacity was evaluated in terms of accuracy and precision during the first (a priori) and second (a posteriori) chemotherapy cycles. Mean population cisplatin clearance is 1.03 L/h with an interpatient variability of 78.0%. Estimated distribution volume at steady state was 48.3 L, with inter- and intrapatient variabilities of 31.3% and 11.7%, respectively. Internal validation confirmed that the population pharmacokinetics model is appropriate to describe changes over time in cisplatin plasma concentrations, as well as its variability in the study population. The accuracy and precision of a posteriori prediction of cisplatin concentrations improved by 21% and 54% compared to a priori prediction. The population pharmacokinetic model developed adequately described the changes in cisplatin plasma concentrations in cancer patients and can be used to optimise cisplatin dosing regimes accurately and precisely. Copyright © 2011 SEFH. Published by Elsevier Espana. All rights reserved.

  17. Three-dimensional elliptic grid generation technique with application to turbomachinery cascades

    NASA Technical Reports Server (NTRS)

    Chen, S. C.; Schwab, J. R.

    1988-01-01

    Described is a numerical method for generating 3-D grids for turbomachinery computational fluid dynamic codes. The basic method is general and involves the solution of a quasi-linear elliptic partial differential equation via pointwise relaxation with a local relaxation factor. It allows specification of the grid point distribution on the boundary surfaces, the grid spacing off the boundary surfaces, and the grid orthogonality at the boundary surfaces. A geometry preprocessor constructs the grid point distributions on the boundary surfaces for general turbomachinery cascades. Representative results are shown for a C-grid and an H-grid for a turbine rotor. Two appendices serve as user's manuals for the basic solver and the geometry preprocessor.
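
    As a minimal 2-D analogue of the pointwise relaxation described above, the sketch below applies successive over-relaxation to Laplace's equation with a uniform relaxation factor (rather than the report's quasi-linear system with a local factor); the fixed boundary values stand in for the prescribed boundary grid-point distribution.

```python
import numpy as np

def pointwise_sor(phi, omega=1.6, tol=1e-6, max_iter=10_000):
    """Pointwise successive over-relaxation for Laplace's equation on a
    uniform grid; interior values are updated in place, boundaries are
    held fixed (Dirichlet data)."""
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, phi.shape[0] - 1):
            for j in range(1, phi.shape[1] - 1):
                gs = 0.25 * (phi[i + 1, j] + phi[i - 1, j]
                             + phi[i, j + 1] + phi[i, j - 1])
                new = (1.0 - omega) * phi[i, j] + omega * gs
                max_change = max(max_change, abs(new - phi[i, j]))
                phi[i, j] = new
        if max_change < tol:
            break
    return phi

# Boundary-driven toy problem on a 30x30 grid.
phi = np.zeros((30, 30))
phi[0, :] = 1.0            # prescribed boundary distribution
phi = pointwise_sor(phi)
```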

  18. Digital Processing Of Young's Fringes In Speckle Photography

    NASA Astrophysics Data System (ADS)

    Chen, D. J.; Chiang, F. P.

    1989-01-01

    A new technique for fully automatic diffraction fringe measurement in point-wise speckle photograph analysis is presented in this paper. The fringe orientation and spacing are initially estimated with the help of a 1-D FFT. A 2-D convolution filter is then applied to enhance the estimated image. A high signal-to-noise ratio (SNR) fringe pattern is achieved, which makes precise determination of the displacement components feasible. The halo effect is also optimally eliminated in a new way, with a computation time that compares favorably with those of the 2-D autocorrelation method and the iterative 2-D FFT method. High reliability and accurate determination of the displacement components are achieved over a wide range of fringe densities.

  19. A note on the accuracy of spectral method applied to nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    Fourier spectral method can achieve exponential accuracy both on the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.

  20. Evaluating a 3-D transport model of atmospheric CO2 using ground-based, aircraft, and space-borne data

    NASA Astrophysics Data System (ADS)

    Feng, L.; Palmer, P. I.; Yang, Y.; Yantosca, R. M.; Kawa, S. R.; Paris, J.-D.; Matsueda, H.; Machida, T.

    2011-03-01

    We evaluate the GEOS-Chem atmospheric transport model (v8-02-01) of CO2 over 2003-2006, driven by GEOS-4 and GEOS-5 meteorology from the NASA Goddard Global Modeling and Assimilation Office, using surface, aircraft and space-borne concentration measurements of CO2. We use an established ensemble Kalman Filter to estimate a posteriori biospheric+biomass burning (BS + BB) and oceanic (OC) CO2 fluxes from 22 geographical regions, following the TransCom-3 protocol, using boundary layer CO2 data from a subset of GLOBALVIEW surface sites. Global annual net BS + BB + OC CO2 fluxes over 2004-2006 for GEOS-4 (GEOS-5) meteorology are -4.4 ± 0.9 (-4.2 ± 0.9), -3.9 ± 0.9 (-4.5 ± 0.9), and -5.2 ± 0.9 (-4.9 ± 0.9) PgC yr-1, respectively. After taking into account anthropogenic fossil fuel and bio-fuel emissions, the global annual net CO2 emissions for 2004-2006 are estimated to be 4.0 ± 0.9 (4.2 ± 0.9), 4.8 ± 0.9 (4.2 ± 0.9), and 3.8 ± 0.9 (4.1 ± 0.9) PgC yr-1, respectively. The estimated 3-yr total net emission for GEOS-4 (GEOS-5) meteorology is equal to 12.5 (12.4) PgC, agreeing with other recent top-down estimates (12-13 PgC). The regional a posteriori fluxes are broadly consistent in the sign and magnitude of the TransCom-3 study for 1992-1996, but we find larger net sinks over northern and southern continents. We find large departures from our a priori over Europe during summer 2003, over temperate Eurasia during 2004, and over North America during 2005, reflecting an incomplete description of terrestrial carbon dynamics. We find GEOS-4 (GEOS-5) a posteriori CO2 concentrations reproduce the observed surface trend of 1.91-2.43 ppm yr-1 (parts per million per year), depending on latitude, within 0.15 ppm yr-1 (0.2 ppm yr-1) and the seasonal cycle within 0.2 ppm (0.2 ppm) at all latitudes. We find the a posteriori model reproduces the aircraft vertical profile measurements of CO2 over North America and Siberia generally within 1.5 ppm in the free and upper troposphere but can be biased by up to 4-5 ppm in the boundary layer at the start and end of the growing season. The model has a small negative bias in the free troposphere CO2 trend (1.95-2.19 ppm yr-1) compared to AIRS data which has a trend of 2.21-2.63 ppm yr-1 during 2004-2006, consistent with surface data. Model CO2 concentrations in the upper troposphere, evaluated using CONTRAIL (Comprehensive Observation Network for TRace gases by AIrLiner) aircraft measurements, reproduce the magnitude and phase of the seasonal cycle of CO2 in both hemispheres. We generally find that the GEOS meteorology reproduces much of the observed tropospheric CO2 variability, suggesting that these meteorological fields will help make significant progress in understanding carbon fluxes as more data become available.

  1. Shape-intensity prior level set combining probabilistic atlas and probability map constrains for automatic liver segmentation from abdominal CT images.

    PubMed

    Wang, Jinke; Cheng, Yuanzhi; Guo, Changyong; Wang, Yadong; Tamura, Shinichi

    2016-05-01

    We propose a fully automatic 3D segmentation framework to segment the liver from abdominal CT images in challenging cases with low contrast between adjacent organs and the presence of pathologies. First, all of the atlases are weighted in the selected training datasets by calculating the similarities between the atlases and the test image to dynamically generate a subject-specific probabilistic atlas for the test image. The most likely liver region of the test image is further determined based on the generated atlas. A rough segmentation is obtained by a maximum a posteriori classification of the probability map, and the final liver segmentation is produced by a shape-intensity prior level set in the most likely liver region. Our method is evaluated and demonstrated on 25 test CT datasets from our partner site, and its results are compared with two state-of-the-art liver segmentation methods. Moreover, our performance results on 10 MICCAI test datasets were submitted to the organizers for comparison with the other automatic algorithms. On the 25 test CT datasets, the average symmetric surface distance of our method is [Formula: see text] mm (range 0.62-2.12 mm), the root mean square symmetric surface distance error is [Formula: see text] mm (range 0.97-3.01 mm), and the maximum symmetric surface distance error is [Formula: see text] mm (range 12.73-26.67 mm). Our method on the 10 MICCAI test data sets ranks 10th among all 47 automatic algorithms on the site as of July 2015. Quantitative results, as well as qualitative comparisons of segmentations, indicate that our method is a promising tool to improve the efficiency of both techniques. The applicability of the proposed method to some challenging clinical problems and the segmentation of the liver is demonstrated with good results in both quantitative and qualitative experiments. This study suggests that the proposed framework can be good enough to replace the time-consuming and tedious slice-by-slice manual segmentation approach.

  2. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1998-01-01

    A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. By contrast, research on trellis representations of linear block codes remained inactive for a long period. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
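
    As a concrete reference for the trellis-based MLD discussed above, a generic log-domain Viterbi search over a small trellis is sketched below; it shows only the path-metric and survivor bookkeeping, not the block-code trellis constructions or sectionalization treated in the chapters. The example trellis and metrics are invented.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit, observations):
    """Maximum-likelihood path through a trellis (log-domain Viterbi).

    log_init  : (S,)   log metric of starting in each state
    log_trans : (S, S) log transition metrics between trellis states
    log_emit  : (S, V) log branch/emission metrics per observed symbol
    """
    S = len(log_init)
    T = len(observations)
    metric = log_init + log_emit[:, observations[0]]   # path metrics at t=0
    survivor = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = metric[:, None] + log_trans             # extend every branch
        survivor[t] = np.argmax(cand, axis=0)          # best predecessor
        metric = cand.max(axis=0) + log_emit[:, observations[t]]
    # Trace back the surviving path.
    path = [int(np.argmax(metric))]
    for t in range(T - 1, 0, -1):
        path.append(int(survivor[t, path[-1]]))
    return path[::-1]

# Toy 2-state trellis decoding a short observation sequence.
log_init = np.log([0.5, 0.5])
log_trans = np.log([[0.9, 0.1], [0.1, 0.9]])
log_emit = np.log([[0.8, 0.2], [0.2, 0.8]])
print(viterbi(log_init, log_trans, log_emit, [0, 0, 1, 1, 0]))
```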

  3. Color lensless digital holographic microscopy with micrometer resolution.

    PubMed

    Garcia-Sucerquia, Jorge

    2012-05-15

    Color digital lensless holographic microscopy with micrometer resolution is presented. Multiwavelength illumination of a biological sample and a posteriori color composition of the amplitude images individually reconstructed are used to obtain full-color representation of the microscopic specimen. To match the sizes of the reconstructed holograms for each wavelength, a reconstruction algorithm that allows for choosing the pixel size at the reconstruction plane independently of the wavelength and the reconstruction distance is used. The method is illustrated with experimental results.

  4. Approaches to the automatic generation and control of finite element meshes

    NASA Technical Reports Server (NTRS)

    Shephard, Mark S.

    1987-01-01

    The algorithmic approaches being taken to the development of finite element mesh generators capable of automatically discretizing general domains without the need for user intervention are discussed. It is demonstrated that because of the modeling demands placed on an automatic mesh generator, all the approaches taken to date produce unstructured meshes. Consideration is also given to both a priori and a posteriori mesh control devices for automatic mesh generators as well as their integration with geometric modeling and adaptive analysis procedures.

  5. Wealth dynamics in a sentiment-driven market

    NASA Astrophysics Data System (ADS)

    Goykhman, Mikhail

    2017-12-01

    We study dynamics of a simulated world with stock and money, driven by the externally given processes which we refer to as sentiments. The considered sentiments influence the buy/sell stock trading attitude, the perceived price uncertainty, and the trading intensity of all or a part of the market participants. We study how the wealth of market participants evolves in time in such an environment. We discuss the opposite perspective in which the parameters of the sentiment processes can be inferred a posteriori from the observed market behavior.

  6. Eulerian Time-Domain Filtering for Spatial LES

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    Eulerian time-domain filtering seems to be appropriate for LES (large eddy simulation) of flows whose large coherent structures convect approximately at a common characteristic velocity; e.g., mixing layers, jets, and wakes. For these flows, we develop an approach to LES based on an explicit second-order digital Butterworth filter, which is applied in the time domain in an Eulerian context. The approach is validated through a priori and a posteriori analyses of the simulated flow of a heated, subsonic, axisymmetric jet.
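
    A minimal sketch of the kind of filter involved: a second-order digital Butterworth low-pass designed with SciPy and applied causally along time at a single grid point. The cutoff, signal, and noise level are arbitrary illustrations, not the paper's settings (for a full field, the same filter would be applied along the time axis of every point).

```python
import numpy as np
from scipy.signal import butter, lfilter

# Second-order low-pass digital Butterworth filter; Wn is normalized to Nyquist.
b, a = butter(N=2, Wn=0.1)

# Synthetic time series at one grid point: a resolved oscillation plus
# small-scale "subgrid" fluctuations.
rng = np.random.default_rng(5)
t = np.arange(0, 2000)
u = np.sin(2 * np.pi * t / 200) + 0.3 * rng.standard_normal(t.size)

u_bar = lfilter(b, a, u)    # causally filtered (large-scale) signal
u_prime = u - u_bar         # residual that an SGS closure would model in an LES
```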

  7. A Posteriori Quantification of Rate-Controlling Effects from High-Intensity Turbulence-Flame Interactions Using 4D Measurements

    DTIC Science & Technology

    2016-11-22

    [Only fragments of the report abstract are recoverable from the documentation-page scan:] ... compact at all conditions tested, as indicated by the overlap of OH and CH2O distributions. We developed analytical techniques for pseudo-Lagrangian ... condition in a constant-density flow requires that the flow divergence is zero, ∇·u = 0. Three smoothing schemes were examined, including a moving average ...

  8. Modeling Tropical Cyclone Storm Surge and Wind Induced Risk Along the Bay of Bengal Coastline Using a Statistical Copula

    NASA Astrophysics Data System (ADS)

    Bushra, N.; Trepanier, J. C.; Rohli, R. V.

    2017-12-01

    High winds, torrential rain, and storm surges from tropical cyclones (TCs) cause massive destruction to property and cost the lives of many people. The coastline of the Bay of Bengal (BoB) ranks as one of the most susceptible to TC storm surges in the world due to low-lying elevation and a high frequency of occurrence. Bangladesh suffers the most due to its geographical setting and population density. Various models have been developed to predict storm surge in this region but none of them quantify statistical risk with empirical data. This study describes the relationship and dependency between empirical TC storm surge and peak reported wind speed at the BoB using a bivariate statistical copula and data from 1885-2011. An Archimedean, Gumbel copula with margins defined by the empirical distributions is specified as the most appropriate choice for the BoB. The model provides return periods for pairs of TC storm surge and peak wind along the BoB coastline. The BoB can expect a TC with peak reported winds of at least 24 m s-1 and surge heights of at least 4.0 m, on average, once every 3.2 years, with a quartile pointwise confidence interval of 2.7-3.8 years. In addition, the BoB can expect peak reported winds of 62 m s-1 and surge heights of at least 8.0 m, on average, once every 115.4 years, with a quartile pointwise confidence interval of 55.8-381.1 years. The purpose of the analysis is to increase the understanding of these dangerous TC characteristics to reduce fatalities and monetary losses into the future. Application of the copula will mitigate future threats of storm surge impacts on coastal communities of the BoB.
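
    For reference, the Gumbel copula and the joint exceedance return period it implies can be written compactly as below. The dependence parameter theta, the annual TC rate, and the marginal quantiles are placeholders, not the fitted BoB values; u and v would be the empirical non-exceedance probabilities of the chosen wind and surge thresholds (e.g., 24 m/s and 4.0 m).

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel (Archimedean) copula C(u, v) with dependence parameter theta >= 1."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period(u, v, theta, events_per_year):
    """Return period (years) of a TC exceeding both marginal quantiles u and v."""
    p_exceed = 1.0 - u - v + gumbel_copula(u, v, theta)   # P(U > u, V > v)
    return 1.0 / (events_per_year * p_exceed)

# Illustrative numbers only.
print(joint_return_period(u=0.60, v=0.70, theta=2.0, events_per_year=1.5))
```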

  9. Pointwise mutual information quantifies intratumor heterogeneity in tissue sections labeled with multiple fluorescent biomarkers

    PubMed Central

    Spagnolo, Daniel M.; Gyanchandani, Rekha; Al-Kofahi, Yousef; Stern, Andrew M.; Lezon, Timothy R.; Gough, Albert; Meyer, Dan E.; Ginty, Fiona; Sarachan, Brion; Fine, Jeffrey; Lee, Adrian V.; Taylor, D. Lansing; Chennubhotla, S. Chakra

    2016-01-01

    Background: Measures of spatial intratumor heterogeneity are potentially important diagnostic biomarkers for cancer progression, proliferation, and response to therapy. Spatial relationships among cells including cancer and stromal cells in the tumor microenvironment (TME) are key contributors to heterogeneity. Methods: We demonstrate how to quantify spatial heterogeneity from immunofluorescence pathology samples, using a set of 3 basic breast cancer biomarkers as a test case. We learn a set of dominant biomarker intensity patterns and map the spatial distribution of the biomarker patterns with a network. We then describe the pairwise association statistics for each pattern within the network using pointwise mutual information (PMI) and visually represent heterogeneity with a two-dimensional map. Results: We found a salient set of 8 biomarker patterns to describe cellular phenotypes from a tissue microarray cohort containing 4 different breast cancer subtypes. After computing PMI for each pair of biomarker patterns in each patient and tumor replicate, we visualize the interactions that contribute to the resulting association statistics. Then, we demonstrate the potential for using PMI as a diagnostic biomarker, by comparing PMI maps and heterogeneity scores from patients across the 4 different cancer subtypes. Estrogen receptor positive invasive lobular carcinoma patient, AL13-6, exhibited the highest heterogeneity score among those tested, while estrogen receptor negative invasive ductal carcinoma patient, AL13-14, exhibited the lowest heterogeneity score. Conclusions: This paper presents an approach for describing intratumor heterogeneity, in a quantitative fashion (via PMI), which departs from the purely qualitative approaches currently used in the clinic. PMI is generalizable to highly multiplexed/hyperplexed immunofluorescence images, as well as spatial data from complementary in situ methods including FISSEQ and CyTOF, sampling many different components within the TME. We hypothesize that PMI will uncover key spatial interactions in the TME that contribute to disease proliferation and progression. PMID:27994939
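
    The PMI statistic itself is simple to state; a sketch of computing it from a co-occurrence table of biomarker patterns follows. How co-occurrences are tallied from the spatial network is pipeline-specific and not shown, and the counts here are toy values.

```python
import numpy as np

def pmi_matrix(counts):
    """Pointwise mutual information for each pair of biomarker patterns.

    counts : (K, K) symmetric matrix of spatial co-occurrence counts between
             the K patterns.
    """
    p_xy = counts / counts.sum()              # joint probabilities
    p_x = p_xy.sum(axis=1, keepdims=True)     # marginal probabilities
    with np.errstate(divide="ignore"):
        pmi = np.log2(p_xy / (p_x * p_x.T))   # log2 p(x,y) / (p(x) p(y))
    pmi[np.isneginf(pmi)] = 0.0               # convention for unseen pairs
    return pmi

# Toy co-occurrence counts for 3 patterns.
counts = np.array([[40, 10, 2],
                   [10, 30, 8],
                   [ 2,  8, 20]], dtype=float)
print(pmi_matrix(counts))
```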

  10. Six-month Longitudinal Comparison of a Portable Tablet Perimeter With the Humphrey Field Analyzer.

    PubMed

    Prea, Selwyn Marc; Kong, Yu Xiang George; Mehta, Aditi; He, Mingguang; Crowston, Jonathan G; Gupta, Vinay; Martin, Keith R; Vingrys, Algis J

    2018-06-01

    To establish the medium-term repeatability of the iPad perimetry app Melbourne Rapid Fields (MRF) compared to Humphrey Field Analyzer (HFA) 24-2 SITA-standard and SITA-fast programs. Multicenter longitudinal observational clinical study. Sixty patients (stable glaucoma/ocular hypertension/glaucoma suspects) were recruited into a 6-month longitudinal clinical study with visits planned at baseline and at 2, 4, and 6 months. At each visit patients undertook visual field assessment using the MRF perimetry application and either HFA SITA-fast (n = 21) or SITA-standard (n = 39). The primary outcome measure was the association and repeatability of mean deviation (MD) for the MRF and HFA tests. Secondary measures were the point-wise threshold and repeatability for each test, as well as test time. MRF was similar to SITA-fast in speed and significantly faster than SITA-standard (MRF 4.6 ± 0.1 minutes vs SITA-fast 4.3 ± 0.2 minutes vs SITA-standard 6.2 ± 0.1 minutes, P < .001). Intraclass correlation coefficients (ICC) between MRF and SITA-fast for MD at the 4 visits ranged from 0.71 to 0.88. ICC values between MRF and SITA-standard for MD ranged from 0.81 to 0.90. Repeatability of MRF MD outcomes was excellent, with ICC for baseline and the 6-month visit being 0.98 (95% confidence interval: 0.96-0.99). In comparison, ICC at 6-month retest for SITA-fast was 0.95 and SITA-standard 0.93. Fewer points changed with the MRF, although for those that did, the MRF gave greater point-wise variability than did the SITA tests. MRF correlated strongly with HFA across 4 visits over a 6-month period, and has good test-retest reliability. MRF is suitable for monitoring visual fields in settings where conventional perimetry is not readily accessible. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Properties of perimetric threshold estimates from Full Threshold, SITA Standard, and SITA Fast strategies.

    PubMed

    Artes, Paul H; Iwase, Aiko; Ohno, Yuko; Kitazawa, Yoshiaki; Chauhan, Balwantray C

    2002-08-01

    To investigate the distributions of threshold estimates with the Swedish Interactive Threshold Algorithms (SITA) Standard, SITA Fast, and the Full Threshold algorithm (Humphrey Field Analyzer; Zeiss-Humphrey Instruments, Dublin, CA) and to compare the pointwise test-retest variability of these strategies. One eye of 49 patients (mean age, 61.6 years; range, 22-81) with glaucoma (Mean Deviation mean, -7.13 dB; range, +1.8 to -23.9 dB) was examined four times with each of the three strategies. The mean and median SITA Standard and SITA Fast threshold estimates were compared with a "best available" estimate of sensitivity (mean results of three Full Threshold tests). Pointwise 90% retest limits (5th and 95th percentiles of retest thresholds) were derived to assess the reproducibility of individual threshold estimates. The differences between the threshold estimates of the SITA and Full Threshold strategies were largest (approximately 3 dB) for midrange sensitivities (approximately 15 dB). The threshold distributions of SITA were considerably different from those of the Full Threshold strategy. The differences remained of similar magnitude when the analysis was repeated on a subset of 20 locations that are examined early during the course of a Full Threshold examination. With sensitivities above 25 dB, both SITA strategies exhibited lower test-retest variability than the Full Threshold strategy. Below 25 dB, the retest intervals of SITA Standard were slightly smaller than those of the Full Threshold strategy, whereas those of SITA Fast were larger. SITA Standard may be superior to the Full Threshold strategy for monitoring patients with visual field loss. The greater test-retest variability of SITA Fast in areas of low sensitivity is likely to offset the benefit of even shorter test durations with this strategy. The sensitivity differences between the SITA and Full Threshold strategies may relate to factors other than reduced fatigue. They are, however, small in comparison to the test-retest variability.
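
    A small sketch of how pointwise retest limits of the kind reported above can be tabulated, assuming paired baseline/retest threshold arrays are available; the synthetic data and the 10-observation cutoff per bin are illustrative choices, not those of the study.

      import numpy as np

      def retest_limits(baseline, retest, lo=5, hi=95):
          """5th/95th percentile retest limits per baseline threshold (dB)."""
          limits = {}
          for value in np.unique(baseline):
              at_value = retest[baseline == value]
              if at_value.size >= 10:           # require enough retests per bin
                  limits[int(value)] = (np.percentile(at_value, lo),
                                        np.percentile(at_value, hi))
          return limits

      # synthetic example in which retest noise grows at lower sensitivities
      rng = np.random.default_rng(1)
      base = rng.integers(0, 36, size=20000)
      ret = np.clip(base + rng.normal(0, 1 + (35 - base) / 8), 0, 40)
      for db, (p5, p95) in sorted(retest_limits(base, ret).items())[:5]:
          print(f"baseline {db:2d} dB -> 90% retest interval [{p5:.1f}, {p95:.1f}]")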

  12. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty

    PubMed Central

    Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.

    2017-01-01

    Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient-specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and then apply the framework to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892
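
    The combination of a Gaussian-type misfit, prior information, and simplex search can be illustrated with a deliberately tiny stand-in for an LPN model; the two-parameter model, the targets, and the uncertainties below are invented for illustration and bear no relation to the Norwood physiology models of the paper.

      import numpy as np
      from scipy.optimize import minimize

      def lpn_outputs(theta):
          """Toy two-parameter 'circuit': maps parameters to two observables."""
          r, c = theta
          return np.array([80.0 / r, r * c])

      targets = np.array([44.0, 6.5])       # hypothetical clinical measurements
      sigma = np.array([2.0, 0.5])          # assumed measurement uncertainties
      prior_mean = np.array([2.0, 3.0])     # assumed population prior
      prior_sd = np.array([1.0, 1.0])

      def neg_log_posterior(theta):
          misfit = (lpn_outputs(theta) - targets) / sigma
          prior = (theta - prior_mean) / prior_sd
          return 0.5 * (misfit @ misfit + prior @ prior)

      # maximum a posteriori point estimate via Nelder-Mead simplex search
      result = minimize(neg_log_posterior, x0=prior_mean, method="Nelder-Mead")
      print("MAP estimate:", result.x)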

  13. High-order polygonal discontinuous Petrov-Galerkin (PolyDPG) methods using ultraweak formulations

    NASA Astrophysics Data System (ADS)

    Vaziri Astaneh, Ali; Fuentes, Federico; Mora, Jaime; Demkowicz, Leszek

    2018-04-01

    This work represents the first endeavor in using ultraweak formulations to implement high-order polygonal finite element methods via the discontinuous Petrov-Galerkin (DPG) methodology. Ultraweak variational formulations are nonstandard in that all the weight of the derivatives lies in the test space, while most of the trial space can be chosen as copies of $L^2$-discretizations that have no need to be continuous across adjacent elements. Additionally, the test spaces are broken along the mesh interfaces. This allows one to construct conforming polygonal finite element methods, termed here as PolyDPG methods, by defining most spaces by restriction of a bounding triangle or box to the polygonal element. The only variables that require nontrivial compatibility across elements are the so-called interface or skeleton variables, which can be defined directly on the element boundaries. Unlike other high-order polygonal methods, PolyDPG methods do not require ad hoc stabilization terms thanks to the crafted stability of the DPG methodology. A proof of convergence of the form $h^p$ is provided and corroborated through several illustrative numerical examples. These include polygonal meshes with $n$-sided convex elements and with highly distorted concave elements, as well as the modeling of discontinuous material properties along an arbitrary interface that cuts a uniform grid. Since PolyDPG methods have a natural a posteriori error estimator, a polygonal adaptive strategy is developed and compared to standard adaptivity schemes based on constrained hanging nodes. This work is also accompanied by an open-source $\texttt{PolyDPG}$ software supporting polygonal and conventional elements.
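
    A generic example of how element-wise a posteriori error indicators drive an adaptive strategy is greedy bulk (Dörfler) marking; the sketch below is only that generic scheme with random placeholder indicator values, not the PolyDPG implementation.

      import numpy as np

      def dorfler_mark(eta, theta=0.5):
          """Mark the smallest set of elements whose squared indicators
          account for a fraction theta of the total squared error."""
          order = np.argsort(eta)[::-1]               # largest indicators first
          cumulative = np.cumsum(eta[order] ** 2)
          n_marked = np.searchsorted(cumulative, theta * cumulative[-1]) + 1
          return order[:n_marked]

      eta = np.random.default_rng(2).random(20)       # hypothetical indicators
      print("elements marked for refinement:", dorfler_mark(eta))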

  14. An improved multilevel Monte Carlo method for estimating probability distribution functions in stochastic oil reservoir simulations

    DOE PAGES

    Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...

    2016-12-30

    In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a very large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
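
    The variance issue described above comes from the discontinuous indicator; the sketch below contrasts a plain Monte Carlo CDF estimate with one that uses a generic sigmoid smoothing, on hypothetical lognormal samples. The a posteriori calibration of the smoothing function proposed in the paper is not reproduced here.

      import numpy as np

      def cdf_indicator(samples, x):
          """Plain MC estimate: mean of the discontinuous indicator 1{Q <= x}."""
          return np.mean(samples <= x)

      def cdf_smoothed(samples, x, delta=0.1):
          """Same estimate with a smooth sigmoid; delta trades bias for variance."""
          return np.mean(1.0 / (1.0 + np.exp(-(x - samples) / delta)))

      rng = np.random.default_rng(3)
      q = rng.lognormal(mean=0.0, sigma=0.5, size=10000)   # hypothetical QoI samples
      print("indicator estimate:", cdf_indicator(q, 1.0))
      print("smoothed estimate: ", cdf_smoothed(q, 1.0))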

  15. Robust Statistical Fusion of Image Labels

    PubMed Central

    Landman, Bennett A.; Asman, Andrew J.; Scoggins, Andrew G.; Bogovic, John A.; Xing, Fangxu; Prince, Jerry L.

    2011-01-01

    Image labeling and parcellation (i.e. assigning structure to a collection of voxels) are critical tasks for the assessment of volumetric and morphometric features in medical imaging data. The process of image labeling is inherently error-prone as images are corrupted by noise and artifacts. Even expert interpretations are subject to the subjectivity and precision of the individual raters. Hence, all labels must be considered imperfect with some degree of inherent variability. One may seek multiple independent assessments to both reduce this variability and quantify the degree of uncertainty. Existing techniques have exploited maximum a posteriori statistics to combine data from multiple raters and simultaneously estimate rater reliabilities. Although quite successful, wide-scale application has been hampered by unstable estimation with practical datasets, for example, with label sets with small or thin objects to be labeled or with partial or limited datasets. In addition, these approaches have required each rater to generate a complete dataset, which is often impossible given both human foibles and the typical turnover rate of raters in a research or clinical environment. Herein, we propose a robust approach to improve estimation performance with small anatomical structures, allow for missing data, account for repeated label sets, and utilize training/catch trial data. With this approach, numerous raters can label small, overlapping portions of a large dataset, and rater heterogeneity can be robustly controlled while simultaneously estimating a single, reliable label set and characterizing uncertainty. The proposed approach enables many individuals to collaborate in the construction of large datasets for labeling tasks (e.g., human parallel processing) and reduces the otherwise detrimental impact of rater unavailability. PMID:22010145

  16. Controlling the Rate of GWAS False Discoveries

    PubMed Central

    Brzyski, Damian; Peterson, Christine B.; Sobczyk, Piotr; Candès, Emmanuel J.; Bogdan, Malgorzata; Sabatti, Chiara

    2017-01-01

    With the rise of both the number and the complexity of traits of interest, control of the false discovery rate (FDR) in genetic association studies has become an increasingly appealing and accepted target for multiple comparison adjustment. While a number of robust FDR-controlling strategies exist, the nature of this error rate is intimately tied to the precise way in which discoveries are counted, and the performance of FDR-controlling procedures is satisfactory only if there is a one-to-one correspondence between what scientists describe as unique discoveries and the number of rejected hypotheses. The presence of linkage disequilibrium between markers in genome-wide association studies (GWAS) often leads researchers to consider the signal associated with multiple neighboring SNPs as indicating the existence of a single genomic locus with possible influence on the phenotype. This a posteriori aggregation of rejected hypotheses results in inflation of the relevant FDR. We propose a novel approach to FDR control that is based on prescreening to identify the level of resolution of distinct hypotheses. We show how FDR-controlling strategies can be adapted to account for this initial selection both with theoretical results and simulations that mimic the dependence structure to be expected in GWAS. We demonstrate that our approach is versatile and useful when the data are analyzed using both tests based on single markers and multiple regression. We provide an R package that allows practitioners to apply our procedure on standard GWAS format data, and illustrate its performance on lipid traits in the Northern Finland Birth Cohort 1966 study. PMID:27784720
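
    To make the counting issue concrete, the sketch below applies an ordinary Benjamini-Hochberg step-up at the locus level after collapsing hypothetical SNP p-values into loci (here crudely, by a within-locus Bonferroni minimum); this is only an illustration of resolution-aware counting, not the prescreening procedure proposed in the paper.

      import numpy as np

      def benjamini_hochberg(pvals, q=0.05):
          """Standard BH step-up; returns indices of rejected hypotheses."""
          p = np.asarray(pvals)
          order = np.argsort(p)
          m = len(p)
          thresh = q * np.arange(1, m + 1) / m
          below = np.nonzero(p[order] <= thresh)[0]
          return order[: below[-1] + 1] if below.size else np.array([], dtype=int)

      # hypothetical per-SNP p-values grouped into LD-defined loci
      loci = {"locus1": [1e-8, 3e-7, 0.02], "locus2": [0.4, 0.6], "locus3": [1e-4]}
      locus_p = {k: min(1.0, min(v) * len(v)) for k, v in loci.items()}
      names = list(locus_p)
      rejected = benjamini_hochberg(list(locus_p.values()), q=0.05)
      print("loci declared significant:", [names[i] for i in rejected])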

  17. Efficiency assessment of using satellite data for crop area estimation in Ukraine

    NASA Astrophysics Data System (ADS)

    Gallego, Francisco Javier; Kussul, Nataliia; Skakun, Sergii; Kravchenko, Oleksii; Shelestov, Andrii; Kussul, Olga

    2014-06-01

    The knowledge of the crop area is a key element for the estimation of the total crop production of a country and, therefore, the management of agricultural commodities markets. Satellite data and derived products can be effectively used for stratification purposes and a-posteriori correction of area estimates from ground observations. This paper presents the main results and conclusions of the study conducted in 2010 to explore the feasibility and efficiency of crop area estimation in Ukraine assisted by optical satellite remote sensing images. The study was carried out on three oblasts in Ukraine with a total area of 78,500 km2. The efficiency of using images acquired by several satellite sensors (MODIS, Landsat-5/TM, AWiFS, LISS-III, and RapidEye) combined with a field survey on a stratified sample of square segments for crop area estimation in Ukraine is assessed. The main criteria used for efficiency analysis are as follows: (i) relative efficiency, which shows by how many times the error of area estimates can be reduced with satellite images, and (ii) cost-efficiency, which shows by how many times the costs of ground surveys for crop area estimation can be reduced with satellite images. These criteria are applied to each satellite image type separately, i.e., no integration of images acquired by different sensors is made, to select the optimal dataset. The study found that only MODIS and Landsat-5/TM reached cost-efficiency thresholds while AWiFS, LISS-III, and RapidEye images, due to their high price, were not cost-efficient for crop area estimation in Ukraine at oblast level.
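
    The notion of relative efficiency above can be illustrated with a textbook regression estimator, in which satellite-classified crop shares serve as an auxiliary variable for a ground-surveyed sample of segments; the data, the region-wide satellite mean, and the 1/(1-R^2) efficiency approximation are illustrative assumptions, not figures from the study.

      import numpy as np

      def regression_estimator(ground_y, satellite_x, x_mean_region):
          """Regression estimator of the mean crop share and its approximate
          efficiency gain over the direct (ground-only) estimator."""
          b = np.cov(ground_y, satellite_x)[0, 1] / np.var(satellite_x, ddof=1)
          y_reg = ground_y.mean() + b * (x_mean_region - satellite_x.mean())
          r2 = np.corrcoef(ground_y, satellite_x)[0, 1] ** 2
          return y_reg, 1.0 / (1.0 - r2)        # estimate, relative efficiency

      rng = np.random.default_rng(8)
      x = rng.uniform(0, 1, 300)                    # satellite share per segment
      y = 0.8 * x + 0.1 + rng.normal(0, 0.05, 300)  # ground-observed share
      est, eff = regression_estimator(y, x, x_mean_region=0.52)
      print(f"regression estimate {est:.3f}, relative efficiency ~{eff:.1f}x")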

  18. Retrieval of stratospheric ozone and nitrogen dioxide profiles from Odin Optical Spectrograph and Infrared Imager System (OSIRIS) limb-scattered sunlight measurements

    NASA Astrophysics Data System (ADS)

    Haley, Craig Stuart

    2009-12-01

    Key to understanding and predicting the effects of global environmental problems such as ozone depletion and global warming is a detailed understanding of the atmospheric processes, both dynamical and chemical. Essential to this understanding are accurate global data sets of atmospheric constituents with adequate temporal and spatial (vertical and horizontal) resolutions. For this purpose the Canadian satellite instrument OSIRIS (Optical Spectrograph and Infrared Imager System) was launched on the Odin satellite in 2001. OSIRIS is primarily designed to measure minor stratospheric constituents, including ozone (O3) and nitrogen dioxide (NO2), employing the novel limb-scattered sunlight technique, which can provide both good vertical resolution and near global coverage. This dissertation presents a method to retrieve stratospheric O3 and NO2 from the OSIRIS limb-scatter observations. The retrieval method incorporates an a posteriori optimal estimator combined with an intermediate spectral analysis, specifically differential optical absorption spectroscopy (DOAS). A detailed description of the retrieval method is presented along with the results of a thorough error analysis and a geophysical validation exercise. It is shown that OSIRIS limb-scatter observations successfully produce accurate stratospheric O3 and NO2 number density profiles throughout the stratosphere, clearly demonstrating the strength of the limb-scatter technique. The OSIRIS observations provide an extremely useful data set that is of particular importance for studies of the chemistry of the middle atmosphere. The long OSIRIS record of stratospheric ozone and nitrogen dioxide may also prove useful for investigating variability and trends.

  19. Controlling the Rate of GWAS False Discoveries.

    PubMed

    Brzyski, Damian; Peterson, Christine B; Sobczyk, Piotr; Candès, Emmanuel J; Bogdan, Malgorzata; Sabatti, Chiara

    2017-01-01

    With the rise of both the number and the complexity of traits of interest, control of the false discovery rate (FDR) in genetic association studies has become an increasingly appealing and accepted target for multiple comparison adjustment. While a number of robust FDR-controlling strategies exist, the nature of this error rate is intimately tied to the precise way in which discoveries are counted, and the performance of FDR-controlling procedures is satisfactory only if there is a one-to-one correspondence between what scientists describe as unique discoveries and the number of rejected hypotheses. The presence of linkage disequilibrium between markers in genome-wide association studies (GWAS) often leads researchers to consider the signal associated with multiple neighboring SNPs as indicating the existence of a single genomic locus with possible influence on the phenotype. This a posteriori aggregation of rejected hypotheses results in inflation of the relevant FDR. We propose a novel approach to FDR control that is based on prescreening to identify the level of resolution of distinct hypotheses. We show how FDR-controlling strategies can be adapted to account for this initial selection both with theoretical results and simulations that mimic the dependence structure to be expected in GWAS. We demonstrate that our approach is versatile and useful when the data are analyzed using both tests based on single markers and multiple regression. We provide an R package that allows practitioners to apply our procedure on standard GWAS format data, and illustrate its performance on lipid traits in the Northern Finland Birth Cohort 1966 study. Copyright © 2017 by the Genetics Society of America.

  20. Calibration of a subcutaneous amperometric glucose sensor implanted for 7 days in diabetic patients. Part 2. Superiority of the one-point calibration method.

    PubMed

    Choleau, C; Klein, J C; Reach, G; Aussedat, B; Demaria-Pesce, V; Wilson, G S; Gifford, R; Ward, W K

    2002-08-01

    Calibration, i.e. the transformation in real time of the signal I(t) generated by the glucose sensor at time t into an estimation of glucose concentration G(t), represents a key issue for the development of a continuous glucose monitoring system. To compare two calibration procedures. In the one-point calibration, which assumes that I(o) is negligible, S is simply determined as the ratio I/G, and G(t) = I(t)/S. The two-point calibration consists in the determination of a sensor sensitivity S and of a background current I(o) by plotting two values of the sensor signal versus the concomitant blood glucose concentrations. The subsequent estimation of G(t) is given by G(t) = (I(t)-I(o))/S. A glucose sensor was implanted in the abdominal subcutaneous tissue of nine type 1 diabetic patients during 3 (n = 2) and 7 days (n = 7). The one-point calibration was performed a posteriori either once per day before breakfast, or twice per day before breakfast and dinner, or three times per day before each meal. The two-point calibration was performed each morning during breakfast. The percentages of points present in zones A and B of the Clarke Error Grid were significantly higher when the system was calibrated using the one-point calibration. Use of two one-point calibrations per day before meals was virtually as accurate as three one-point calibrations. This study demonstrates the feasibility of a simple method for calibrating a continuous glucose monitoring system.
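
    The two procedures compared above reduce to very simple formulas; the sketch below implements both under the stated assumptions (one-point: I(o) assumed negligible; two-point: solve for S and I(o) from two signal/glucose pairs), with made-up sensor currents and glucose values.

      def one_point_calibration(i_cal, g_cal):
          """One-point: assume background current I(o) ~ 0, so S = I/G
          and G(t) = I(t)/S."""
          s = i_cal / g_cal
          return lambda i_t: i_t / s

      def two_point_calibration(i1, g1, i2, g2):
          """Two-point: solve for sensitivity S and background current I(o),
          then G(t) = (I(t) - I(o))/S."""
          s = (i2 - i1) / (g2 - g1)
          i0 = i1 - s * g1
          return lambda i_t: (i_t - i0) / s

      # hypothetical sensor currents (nA) against reference glucose (mmol/L)
      g_one = one_point_calibration(i_cal=25.0, g_cal=5.0)
      g_two = two_point_calibration(i1=25.0, g1=5.0, i2=47.0, g2=9.5)
      print(g_one(35.0), g_two(35.0))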

  1. Dictionary Learning Algorithms for Sparse Representation

    PubMed Central

    Kreutz-Delgado, Kenneth; Murray, Joseph F.; Rao, Bhaskar D.; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an over-complete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error). PMID:12590811
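
    The alternation between a sparse-coding step and a dictionary update can be sketched with simple stand-ins: iterative soft thresholding in place of FOCUSS and a MOD-style least-squares dictionary update; the patch data, dictionary size, and penalty are arbitrary, and this is not the algorithm of the paper.

      import numpy as np

      def ista_codes(Y, D, lam=0.1, n_iter=50):
          """Sparse codes by iterative soft thresholding (ISTA)."""
          L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of D^T D
          X = np.zeros((D.shape[1], Y.shape[1]))
          for _ in range(n_iter):
              Z = X - D.T @ (D @ X - Y) / L
              X = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)
          return X

      def mod_update(Y, X):
          """MOD-style dictionary update: least squares, columns renormalized."""
          D = Y @ np.linalg.pinv(X)
          return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

      rng = np.random.default_rng(4)
      Y = rng.normal(size=(16, 200))               # hypothetical signal patches
      D = rng.normal(size=(16, 32))                # overcomplete initial dictionary
      D /= np.linalg.norm(D, axis=0, keepdims=True)
      for _ in range(10):                          # alternate coding and update
          X = ista_codes(Y, D)
          D = mod_update(Y, X)
      print("relative residual:", np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))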

  2. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835

  3. Using the in-line component for fixed-wing EM 1D inversion

    NASA Astrophysics Data System (ADS)

    Smiarowski, Adam

    2015-09-01

    Numerous authors have discussed the utility of multicomponent measurements. Generally speaking, for a vertically oriented dipole source, the measured vertical component couples to horizontal planar bodies while the horizontal in-line component couples best to vertical planar targets. For layered-earth cases, helicopter EM systems have little or no in-line component response and as a result much of the in-line signal is due to receiver coil rotation and appears as noise. In contrast to this, the in-line component of a fixed-wing airborne electromagnetic (AEM) system with large transmitter-receiver offset can be substantial, exceeding the vertical component in conductive areas. This paper compares the in-line and vertical response of a fixed-wing airborne electromagnetic (AEM) system using a half-space model and calculates sensitivity functions. The a posteriori inversion model parameter uncertainty matrix is calculated for a bathymetry model (conductive layer over more resistive half-space) for two inversion cases: use of the vertical component alone is compared to joint inversion of the vertical and in-line components. The joint inversion is able to better resolve model parameters. An example is then provided using field data from a bathymetry survey to compare the joint inversion to the vertical-component-only inversion. For each inversion set, the difference between the inverted water depth and ship-measured bathymetry is calculated. The result is in general agreement with that expected from the a posteriori inversion model parameter uncertainty calculation.
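
    The comparison of the two inversion cases rests on the linearized a posteriori covariance of the model parameters; a minimal sketch with invented sensitivities of the vertical and in-line responses to three bathymetry-model parameters is given below (the numbers are placeholders, not values for any real system).

      import numpy as np

      def posterior_covariance(J, data_var, prior_var):
          """Linearized a posteriori covariance (J^T Cd^-1 J + Cm^-1)^-1."""
          Cd_inv = np.diag(1.0 / data_var)
          Cm_inv = np.diag(1.0 / prior_var)
          return np.linalg.inv(J.T @ Cd_inv @ J + Cm_inv)

      # hypothetical sensitivities of vertical (z) and in-line (x) data to
      # [water depth, water conductivity, basement resistivity]
      J_z = np.array([[0.8, 0.3, 0.05], [0.6, 0.2, 0.10]])
      J_x = np.array([[0.5, 0.6, 0.02], [0.4, 0.5, 0.08]])
      noise = np.array([0.05, 0.05])
      prior = np.array([10.0, 10.0, 10.0])

      C_z = posterior_covariance(J_z, noise, prior)
      C_joint = posterior_covariance(np.vstack([J_z, J_x]),
                                     np.concatenate([noise, noise]), prior)
      print("z-only std devs:", np.sqrt(np.diag(C_z)).round(3))
      print("joint  std devs:", np.sqrt(np.diag(C_joint)).round(3))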

  4. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality.

  5. High-Speed Linear Raman Spectroscopy for Instability Analysis of a Bluff Body Flame

    NASA Technical Reports Server (NTRS)

    Kojima, Jun; Fischer, David

    2013-01-01

    We report a high-speed laser diagnostics technique based on point-wise linear Raman spectroscopy for measuring the frequency content of a CH4-air premixed flame stabilized behind a circular bluff body. The technique, which primarily employs a Nd:YLF pulsed laser and a fast image-intensified CCD camera, successfully measures the time evolution of scalar parameters (N2, O2, CH4, and H2O) in the vortex-induced flame instability at a data rate of 1 kHz. Oscillation of the V-shaped flame front is quantified through frequency analysis of the combustion species data and their correlations. This technique promises to be a useful diagnostics tool for combustion instability studies.

  6. Discretized energy minimization in a wave guide with point sources

    NASA Technical Reports Server (NTRS)

    Propst, G.

    1994-01-01

    An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.

  7. A Priori Bound on the Velocity in Axially Symmetric Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Lei, Zhen; Navas, Esteban A.; Zhang, Qi S.

    2016-01-01

    Let v be the velocity of Leray-Hopf solutions to the axially symmetric three-dimensional Navier-Stokes equations. Under suitable conditions for initial values, we prove the following a priori bound: $|v(x, t)| \leq C\,|\ln r|^{1/2}/r^{2}$ for $0 < r \leq 1/2$, where r is the distance from x to the z axis, and C is a constant depending only on the initial value. This provides a pointwise upper bound (worst case scenario) for possible singularities, while the recent papers (Chiun-Chuan et al., Commun PDE 34(1-3):203-232, 2009; Koch et al., Acta Math 203(1):83-105, 2009) gave a lower bound. The gap is polynomial order 1 modulo a half log term.

  8. Preliminary Computational Study for Future Tests in the NASA Ames 9 foot x 7 foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Pearl, Jason M.; Carter, Melissa B.; Elmiligui, Alaa A.; Winski, Courtney S.; Nayani, Sudheer N.

    2016-01-01

    The NASA Advanced Air Vehicles Program, Commercial Supersonics Technology Project seeks to advance tools and techniques to make over-land supersonic flight feasible. In this study, preliminary computational results are presented for future tests in the NASA Ames 9 foot x 7 foot supersonic wind tunnel to be conducted in early 2016. Shock-plume interactions and their effect on pressure signature are examined for six model geometries. Near-field pressure signatures are assessed using the CFD code USM3D to model the proposed test geometries in free-air. Additionally, results obtained using the commercial grid generation software Pointwise® are compared to results using VGRID, the NASA Langley Research Center in-house mesh generation program.

  9. A projection method for coupling two-phase VOF and fluid structure interaction simulations

    NASA Astrophysics Data System (ADS)

    Cerroni, Daniele; Da Vià, Roberto; Manservisi, Sandro

    2018-02-01

    The study of Multiphase Fluid Structure Interaction (MFSI) is becoming of great interest in many engineering applications. In this work we propose a new algorithm for coupling a FSI problem to a multiphase interface advection problem. An unstructured computational grid and a Cartesian mesh are used for the FSI and the VOF problem, respectively. The coupling between these two different grids is obtained by interpolating the velocity field into the Cartesian grid through a projection operator that can take into account the natural movement of the FSI domain. The piecewise color function is interpolated back on the unstructured grid with a Galerkin interpolation to obtain a point-wise function which allows the direct computation of the surface tension forces.

  10. Quantitative Thermochemical Measurements in High-Pressure Gaseous Combustion

    NASA Technical Reports Server (NTRS)

    Kojima, Jun J.; Fischer, David G.

    2012-01-01

    We present our strategic experiment and thermochemical analyses on combustion flow using subframe burst gating (SBG) Raman spectroscopy. This unconventional laser diagnostic technique has a promising ability to enhance the accuracy of quantitative scalar measurements in a point-wise single-shot fashion. In the presentation, we briefly describe an experimental methodology that generates a transferable calibration standard for the routine implementation of the diagnostics in hydrocarbon flames. The diagnostic technology was applied to simultaneous measurements of temperature and chemical species in a swirl-stabilized turbulent flame with gaseous methane fuel at elevated pressure (17 atm). Statistical analyses of the space-/time-resolved thermochemical data provide insights into the nature of the mixing process and its impact on the subsequent combustion process in the model combustor.

  11. Approximation of discrete-time LQG compensators for distributed systems with boundary input and unbounded measurement

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1987-01-01

    The approximation of optimal discrete-time linear quadratic Gaussian (LQG) compensators for distributed parameter control systems with boundary input and unbounded measurement is considered. The approach applies to a wide range of problems that can be formulated in a state space on which both the discrete-time input and output operators are continuous. Approximating compensators are obtained via application of the LQG theory and associated approximation results for infinite dimensional discrete-time control systems with bounded input and output. Numerical results for spline and modal based approximation schemes used to compute optimal compensators for a one dimensional heat equation with either Neumann or Dirichlet boundary control and pointwise measurement of temperature are presented and discussed.

  12. Palatini wormholes and energy conditions from the prism of general relativity.

    PubMed

    Bejarano, Cecilia; Lobo, Francisco S N; Olmo, Gonzalo J; Rubiera-Garcia, Diego

    2017-01-01

    Wormholes are hypothetical shortcuts in spacetime that in general relativity unavoidably violate all of the pointwise energy conditions. In this paper, we consider several wormhole spacetimes that, as opposed to the standard designer procedure frequently employed in the literature, arise directly from gravitational actions including additional terms resulting from contractions of the Ricci tensor with the metric, and which are formulated assuming independence between metric and connection (Palatini approach). We reinterpret such wormhole solutions under the prism of General Relativity and study the matter sources that thread them. We discuss the size of violation of the energy conditions in different cases and how this is related to the same spacetimes when viewed from the modified gravity side.

  13. An Alternate Set of Basis Functions for the Electromagnetic Solution of Arbitrarily-Shaped, Three-Dimensional, Closed, Conducting Bodies Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.

    2008-01-01

    In this work, we present an alternate set of basis functions, each defined over a pair of planar triangular patches, for the method of moments solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped, closed, conducting surfaces. The present basis functions are point-wise orthogonal to the pulse basis functions previously defined. The prime motivation to develop the present set of basis functions is to utilize them for the electromagnetic solution of dielectric bodies using a surface integral equation formulation which involves both electric and magnetic currents. However, in the present work, only the conducting body solution is presented and compared with other data.

  14. A Methodology to Separate and Analyze a Seismic Wide Angle Profile

    NASA Astrophysics Data System (ADS)

    Weinzierl, Wolfgang; Kopp, Heidrun

    2010-05-01

    General solutions of inverse problems can often be obtained through the introduction of probability distributions to sample the model space. We present a simple approach to defining an a priori space in a tomographic study and retrieve the velocity-depth posterior distribution by a Monte Carlo method. Utilizing a fitting routine designed for very low statistics to set up and analyze the obtained tomography results, it is possible to statistically separate the velocity-depth model space derived from the inversion of seismic refraction data. An example of a profile acquired in the Lesser Antilles subduction zone reveals the effectiveness of this approach. The resolution analysis of the structural heterogeneity includes a divergence analysis which proves to be capable of dissecting long wide-angle profiles for deep crust and upper mantle studies. The complete information of any parameterised physical system is contained in the a posteriori distribution. Methods for analyzing and displaying key properties of the a posteriori distributions of highly nonlinear inverse problems are therefore essential in the scope of any interpretation. From this study we infer several conclusions concerning the interpretation of the tomographic approach. By calculating global as well as singular misfits of velocities, we are able to map different geological units along a profile. Comparing velocity distributions with the result of a tomographic inversion along the profile, we can mimic the subsurface structures in their extent and composition. The possibility of gaining a priori information for seismic refraction analysis by a simple solution to an inverse problem and subsequent resolution of structural heterogeneities through a divergence analysis is a new and simple way of defining a priori space and estimating the a posteriori mean and covariance in singular and general form. The major advantage of a Monte Carlo-based approach in our case study is the obtained knowledge of velocity-depth distributions. Certainly the decision of where to extract velocity information on the profile for setting up a Monte Carlo ensemble limits the a priori space. However, the general conclusion of analyzing the velocity field according to distinct reference distributions gives us the possibility to define the covariance according to any geological unit if we have a priori information on the velocity-depth distributions. Using the wide angle data recorded across the Lesser Antilles arc, we are able to resolve a shallow feature like the backstop by a robust and simple divergence analysis. We demonstrate the effectiveness of the new methodology to extract some key features and properties from the inversion results by including information concerning the confidence level of results.

  15. Estimation for the Linear Model With Uncertain Covariance Matrices

    NASA Astrophysics Data System (ADS)

    Zachariah, Dave; Shariati, Nafiseh; Bengtsson, Mats; Jansson, Magnus; Chatterjee, Saikat

    2014-03-01

    We derive a maximum a posteriori estimator for the linear observation model, where the signal and noise covariance matrices are both uncertain. The uncertainties are treated probabilistically by modeling the covariance matrices with prior inverse-Wishart distributions. The nonconvex problem of jointly estimating the signal of interest and the covariance matrices is tackled by a computationally efficient fixed-point iteration as well as an approximate variational Bayes solution. The statistical performance of estimators is compared numerically to state-of-the-art estimators from the literature and shown to perform favorably.
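
    One way to picture the alternation is a fixed-point loop that switches between a Bayesian linear estimate of the signal given the current covariances and inverse-Wishart-regularized covariance updates; the sketch below illustrates only that generic idea, with made-up dimensions, priors, and data, and is not the estimator derived in the paper.

      import numpy as np

      rng = np.random.default_rng(5)
      d = 3
      H = np.eye(d)                          # hypothetical observation matrix
      y = rng.normal(size=d)                 # single observation vector

      Psi_p, nu_p = np.eye(d), d + 4         # assumed prior scale/dof for P
      Psi_r, nu_r = 0.1 * np.eye(d), d + 4   # assumed prior scale/dof for R

      P = Psi_p / (nu_p + d + 1)             # start from the prior modes
      R = Psi_r / (nu_r + d + 1)
      for _ in range(50):
          # linear MMSE-style estimate of the signal given current P and R
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
          x = K @ y
          # blend one outer product with the inverse-Wishart prior scale
          P = (Psi_p + np.outer(x, x)) / (nu_p + d + 2)
          r = y - H @ x
          R = (Psi_r + np.outer(r, r)) / (nu_r + d + 2)
      print("signal estimate:", x.round(3))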

  16. Towards cheaper control centers

    NASA Technical Reports Server (NTRS)

    Baize, Lionel

    1994-01-01

    Today, any approach to the design of new space systems must take into consideration an important constraint, namely costs. This approach is our guideline for new missions and also applies to the ground segment, and particularly to the control center. CNES has carried out a study on a recent control center for application satellites in order to take advantage of the experience gained. This analysis, the purpose of which is to determine, a posteriori, the costs of architecture needs and choices, takes hardware and software costs into account and makes a number of recommendations.

  17. Deformable Image Registration for Cone-Beam CT Guided Transoral Robotic Base of Tongue Surgery

    PubMed Central

    Reaungamornrat, S.; Liu, W. P.; Wang, A. S.; Otake, Y.; Nithiananthan, S.; Uneri, A.; Schafer, S.; Tryggestad, E.; Richmon, J.; Sorger, J. M.; Siewerdsen, J. H.; Taylor, R. H.

    2013-01-01

    Transoral robotic surgery (TORS) offers a minimally invasive approach to resection of base of tongue tumors. However, precise localization of the surgical target and adjacent critical structures can be challenged by the highly deformed intraoperative setup. We propose a deformable registration method using intraoperative cone-beam CT (CBCT) to accurately align preoperative CT or MR images with the intraoperative scene. The registration method combines a Gaussian mixture (GM) model followed by a variation of the Demons algorithm. First, following segmentation of the volume of interest (i.e., volume of the tongue extending to the hyoid), a GM model is applied to surface point clouds for rigid initialization (GM rigid) followed by nonrigid deformation (GM nonrigid). Second, the registration is refined using the Demons algorithm applied to distance map transforms of the (GM-registered) preoperative image and intraoperative CBCT. Performance was evaluated in repeat cadaver studies (25 image pairs) in terms of target registration error (TRE), entropy correlation coefficient (ECC), and normalized pointwise mutual information (NPMI). Retraction of the tongue in the TORS operative setup induced gross deformation >30 mm. The mean TRE following the GM rigid, GM nonrigid, and Demons steps was 4.6, 2.1, and 1.7 mm, respectively. The respective ECC was 0.57, 0.70, and 0.73 and NPMI was 0.46, 0.57, and 0.60. Registration accuracy was best across the superior aspect of the tongue and in proximity to the hyoid (by virtue of GM registration of surface points on these structures). The Demons step refined registration primarily in deeper portions of the tongue further from the surface and hyoid bone. Since the method does not use image intensities directly, it is suitable to multi-modality registration of preoperative CT or MR with intraoperative CBCT. Extending the 3D image registration to the fusion of image and planning data in stereo-endoscopic video is anticipated to support safer, high-precision base of tongue robotic surgery. PMID:23807549
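
    Of the similarity measures reported above, normalized pointwise mutual information is easy to state; the sketch below uses one common histogram-based definition (a probability-weighted mean of NPMI over joint intensity bins), which may differ in detail from the metric used in the study, applied to synthetic roughly aligned images.

      import numpy as np

      def npmi_score(img_a, img_b, bins=32):
          """Probability-weighted mean NPMI of the joint intensity histogram."""
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          p_ab = joint / joint.sum()
          p_a = p_ab.sum(axis=1, keepdims=True)
          p_b = p_ab.sum(axis=0, keepdims=True)
          mask = p_ab > 0
          pmi = np.log(p_ab[mask] / (p_a @ p_b)[mask])
          npmi = pmi / (-np.log(p_ab[mask]))        # each term lies in [-1, 1]
          return np.sum(p_ab[mask] * npmi)

      rng = np.random.default_rng(6)
      fixed = rng.normal(size=(64, 64))
      moving = fixed + 0.3 * rng.normal(size=(64, 64))   # roughly aligned image
      print("NPMI:", npmi_score(fixed, moving).round(3))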

  18. Simultaneous Measurements of Temperature and Major Species Concentration in a Hydrocarbon-Fueled Dual Mode Scramjet Using WIDECARS

    NASA Astrophysics Data System (ADS)

    Gallo, Emanuela Carolina Angela

    Width-increased dual-pump enhanced coherent anti-Stokes Raman spectroscopy (WIDECARS) measurements were conducted in a McKenna air-ethylene premixed burner, over a nominal equivalence ratio range of 0.55 to 2.50, to provide quantitative measurements of six major combustion species (C2H4, N2, O2, H2, CO, CO2) concentration and temperature simultaneously. The purpose of this test was to investigate the uncertainties in the experimental and spectral modeling methods in preparation for a subsequent scramjet C2H4/air combustion test at the University of Virginia-Aerospace Research Laboratory. A broadband Pyrromethene (PM) PM597 and PM650 dye laser mixture and optical cavity were studied and optimized to excite the Raman shift of all the target species. Two hundred single-shot recorded spectra were processed, theoretically fitted and then compared to computational models, to verify where chemical equilibrium or adiabatic condition occurred, providing experimental flame location and formation, species concentrations, temperature, and heat losses inputs to computational kinetic models. The Stark effect, temperature, and concentration errors are discussed. Subsequently, WIDECARS measurements of a premixed air-ethylene flame were successfully acquired in a direct-connect small-scale dual-mode scramjet combustor at the University of Virginia Supersonic Combustion Facility (UVaSCF). A nominal Mach 5 flight condition was simulated (stagnation pressure p0 = 300 kPa, temperature T0 = 1200 K, equivalence ratio range ER = 0.3-0.4). The purpose of this test was to provide quantitative measurements of the six major combustion species concentrations and temperature. Point-wise measurements were taken by mapping four two-dimensional orthogonal planes (before, within, and two planes after the cavity flame holder) with respect to the combustor freestream direction. Two hundred single-shot recorded spectra were processed and theoretically fitted. Mean flow and standard deviation are provided for each investigated case. Within the flame limits tested, WIDECARS data were analyzed and compared with CFD simulations and OH-PLIF measurements.

  19. Deformable image registration for cone-beam CT guided transoral robotic base-of-tongue surgery

    NASA Astrophysics Data System (ADS)

    Reaungamornrat, S.; Liu, W. P.; Wang, A. S.; Otake, Y.; Nithiananthan, S.; Uneri, A.; Schafer, S.; Tryggestad, E.; Richmon, J.; Sorger, J. M.; Siewerdsen, J. H.; Taylor, R. H.

    2013-07-01

    Transoral robotic surgery (TORS) offers a minimally invasive approach to resection of base-of-tongue tumors. However, precise localization of the surgical target and adjacent critical structures can be challenged by the highly deformed intraoperative setup. We propose a deformable registration method using intraoperative cone-beam computed tomography (CBCT) to accurately align preoperative CT or MR images with the intraoperative scene. The registration method combines a Gaussian mixture (GM) model followed by a variation of the Demons algorithm. First, following segmentation of the volume of interest (i.e. volume of the tongue extending to the hyoid), a GM model is applied to surface point clouds for rigid initialization (GM rigid) followed by nonrigid deformation (GM nonrigid). Second, the registration is refined using the Demons algorithm applied to distance map transforms of the (GM-registered) preoperative image and intraoperative CBCT. Performance was evaluated in repeat cadaver studies (25 image pairs) in terms of target registration error (TRE), entropy correlation coefficient (ECC) and normalized pointwise mutual information (NPMI). Retraction of the tongue in the TORS operative setup induced gross deformation >30 mm. The mean TRE following the GM rigid, GM nonrigid and Demons steps was 4.6, 2.1 and 1.7 mm, respectively. The respective ECC was 0.57, 0.70 and 0.73, and NPMI was 0.46, 0.57 and 0.60. Registration accuracy was best across the superior aspect of the tongue and in proximity to the hyoid (by virtue of GM registration of surface points on these structures). The Demons step refined registration primarily in deeper portions of the tongue further from the surface and hyoid bone. Since the method does not use image intensities directly, it is suitable to multi-modality registration of preoperative CT or MR with intraoperative CBCT. Extending the 3D image registration to the fusion of image and planning data in stereo-endoscopic video is anticipated to support safer, high-precision base-of-tongue robotic surgery.

  20. Modelling of turbulent lifted jet flames using flamelets: a priori assessment and a posteriori validation

    NASA Astrophysics Data System (ADS)

    Ruan, Shaohong; Swaminathan, Nedunchezhian; Darbyshire, Oliver

    2014-03-01

    This study focuses on the modelling of turbulent lifted jet flames using flamelets and a presumed Probability Density Function (PDF) approach with interest in both flame lift-off height and flame brush structure. First, flamelet models used to capture contributions from premixed and non-premixed modes of the partially premixed combustion in the lifted jet flame are assessed using Direct Numerical Simulation (DNS) data for a turbulent lifted hydrogen jet flame. The joint PDFs of mixture fraction Z and progress variable c, including their statistical correlation, are obtained using a copula method, which is also validated using the DNS data. The statistically independent PDFs are found to be generally inadequate to represent the joint PDFs from the DNS data. The effects of Z-c correlation and the contribution from the non-premixed combustion mode on the flame lift-off height are studied systematically by including one effect at a time in the simulations used for a posteriori validation. A simple model including the effects of chemical kinetics and scalar dissipation rate is suggested and used for non-premixed combustion contributions. The results clearly show that both Z-c correlation and non-premixed combustion effects are required in the premixed flamelets approach to get good agreement with the measured flame lift-off heights as a function of jet velocity. The flame brush structure reported in earlier experimental studies is also captured reasonably well for various axial positions. It seems that flame stabilisation is influenced by both premixed and non-premixed combustion modes, and by their mutual influence.
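
    The copula construction referred to above can be sketched for a presumed joint PDF of mixture fraction Z and progress variable c: couple two presumed marginals through a Gaussian copula with a correlation parameter. The beta marginals, the correlation value, and the Gaussian-copula choice below are illustrative assumptions and need not match the paper.

      import numpy as np
      from scipy.stats import beta, norm

      def copula_joint_pdf(z, c, rho, marg_z, marg_c):
          """Joint PDF of (Z, c): Gaussian copula density times the marginals."""
          u = np.clip(marg_z.cdf(z), 1e-12, 1 - 1e-12)
          v = np.clip(marg_c.cdf(c), 1e-12, 1 - 1e-12)
          a, b = norm.ppf(u), norm.ppf(v)
          cop = (np.exp(-(rho**2 * (a**2 + b**2) - 2 * rho * a * b)
                        / (2 * (1 - rho**2)))
                 / np.sqrt(1 - rho**2))
          return cop * marg_z.pdf(z) * marg_c.pdf(c)

      # presumed marginals: beta PDFs for mixture fraction and progress variable
      marg_z, marg_c = beta(2.0, 5.0), beta(0.8, 0.8)
      print(copula_joint_pdf(0.3, 0.6, rho=-0.4, marg_z=marg_z, marg_c=marg_c))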

  1. A Priori and a Posteriori Dietary Patterns during Pregnancy and Gestational Weight Gain: The Generation R Study

    PubMed Central

    Tielemans, Myrte J.; Erler, Nicole S.; Leermakers, Elisabeth T. M.; van den Broek, Marion; Jaddoe, Vincent W. V.; Steegers, Eric A. P.; Kiefte-de Jong, Jessica C.; Franco, Oscar H.

    2015-01-01

    Abnormal gestational weight gain (GWG) is associated with adverse pregnancy outcomes. We examined whether dietary patterns are associated with GWG. Participants included 3374 pregnant women from a population-based cohort in the Netherlands. Dietary intake during pregnancy was assessed with food-frequency questionnaires. Three a posteriori-derived dietary patterns were identified using principal component analysis: a “Vegetable, oil and fish”, a “Nuts, high-fiber cereals and soy”, and a “Margarine, sugar and snacks” pattern. The a priori-defined dietary pattern was based on national dietary recommendations. Weight was repeatedly measured around 13, 20 and 30 weeks of pregnancy; pre-pregnancy and maximum weight were self-reported. Normal weight women with high adherence to the “Vegetable, oil and fish” pattern had higher early-pregnancy GWG than those with low adherence (43 g/week (95% CI 16; 69) for highest vs. lowest quartile (Q)). Adherence to the “Margarine, sugar and snacks” pattern was associated with a higher prevalence of excessive GWG (OR 1.45 (95% CI 1.06; 1.99) Q4 vs. Q1). Normal weight women with higher scores on the “Nuts, high-fiber cereals and soy” pattern had more moderate GWG than women with lower scores (−0.01 (95% CI −0.02; −0.00) per SD). The a priori-defined pattern was not associated with GWG. To conclude, specific dietary patterns may play a role in early pregnancy but are not consistently associated with GWG. PMID:26569303

  2. Dietary Patterns, Cognitive Decline, and Dementia: A Systematic Review

    PubMed Central

    van de Rest, Ondine; Berendsen, Agnes AM; Haveman-Nies, Annemien; de Groot, Lisette CPGM

    2015-01-01

    Nutrition is an important modifiable risk factor that plays a role in the strategy to prevent or delay the onset of dementia. Research on nutritional effects has until now mainly focused on the role of individual nutrients and bioactive components. However, the evidence for combined effects, such as multinutrient approaches, or a healthy dietary pattern, such as the Mediterranean diet, is growing. These approaches incorporate the complexity of the diet and possible interaction and synergy between nutrients. Over the past few years, dietary patterns have increasingly been investigated to better understand the link between diet, cognitive decline, and dementia. In this systematic review we provide an overview of the literature on human studies up to May 2014 that examined the role of dietary patterns (derived both a priori as well as a posteriori) in relation to cognitive decline or dementia. The results suggest that better adherence to a Mediterranean diet is associated with less cognitive decline, dementia, or Alzheimer disease, as shown by 4 of 6 cross-sectional studies, 6 of 12 longitudinal studies, 1 trial, and 3 meta-analyses. Other healthy dietary patterns, derived both a priori (e.g., Healthy Diet Indicator, Healthy Eating Index, and Program National Nutrition Santé guideline score) and a posteriori (e.g., factor analysis, cluster analysis, and reduced rank regression), were shown to be associated with reduced cognitive decline and/or a reduced risk of dementia as shown by all 6 cross-sectional studies and 6 of 8 longitudinal studies. More conclusive evidence is needed to reach more targeted and detailed guidelines to prevent or postpone cognitive decline. PMID:25770254

  3. Gaps in content-based image retrieval

    NASA Astrophysics Data System (ADS)

    Deserno, Thomas M.; Antani, Sameer; Long, Rodney

    2007-03-01

    Content-based image retrieval (CBIR) is a promising technology to enrich the core functionality of picture archiving and communication systems (PACS). CBIR has a potentially strong impact in diagnostics, research, and education. Research successes that are increasingly reported in the scientific literature, however, have not made significant inroads as medical CBIR applications incorporated into routine clinical medicine or medical research. The cause is often attributed without sufficient analytical reasoning to the inability of these applications in overcoming the "semantic gap". The semantic gap divides the high-level scene analysis of humans from the low-level pixel analysis of computers. In this paper, we suggest a more systematic and comprehensive view on the concept of gaps in medical CBIR research. In particular, we define a total of 13 gaps that address the image content and features, as well as the system performance and usability. In addition to these gaps, we identify 6 system characteristics that impact CBIR applicability and performance. The framework we have created can be used a posteriori to compare medical CBIR systems and approaches for specific biomedical image domains and goals and a priori during the design phase of a medical CBIR application. To illustrate the a posteriori use of our conceptual system, we apply it, initially, to the classification of three medical CBIR implementations: the content-based PACS approach (cbPACS), the medical GNU image finding tool (medGIFT), and the image retrieval in medical applications (IRMA) project. We show that systematic analysis of gaps provides detailed insight in system comparison and helps to direct future research.

  4. Incorporating priors on expert performance parameters for segmentation validation and label fusion: a maximum a posteriori STAPLE

    PubMed Central

    Commowick, Olivier; Warfield, Simon K

    2010-01-01

    In order to evaluate the quality of segmentations of an image and assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time-consuming and burdensome task. Further, as the time required and burden of manual delineation increase, the accuracy of the delineation is decreased. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures, or the segmentation of all structures by all experts may simply not be achieved. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading expert performance parameters. We present a new algorithm that allows fusion with partial delineation and which can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, where the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE. PMID:20879379
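
    The role of the beta priors can be seen in a stripped-down binary version of this kind of algorithm, where they simply add pseudocounts to the EM updates of each rater's sensitivity and specificity; the sketch below handles only complete binary label sets with a scalar label prior, unlike the full method, and all data are synthetic.

      import numpy as np

      def map_staple_binary(D, alpha=5.0, beta_=5.0, n_iter=30):
          """Binary MAP-STAPLE-style EM with Beta(alpha, beta_) priors on each
          rater's sensitivity p and specificity q. D is (voxels, raters) in {0,1}."""
          n_vox, n_rat = D.shape
          p = np.full(n_rat, 0.9)               # initial sensitivities
          q = np.full(n_rat, 0.9)               # initial specificities
          prior = D.mean()                      # scalar prior on the true label
          for _ in range(n_iter):
              # E-step: posterior probability that the true label is 1
              like1 = prior * np.prod(np.where(D == 1, p, 1 - p), axis=1)
              like0 = (1 - prior) * np.prod(np.where(D == 0, q, 1 - q), axis=1)
              W = like1 / (like1 + like0)
              # M-step: beta-prior pseudocounts stabilize the estimates
              p = (W @ D + alpha - 1) / (W.sum() + alpha + beta_ - 2)
              q = ((1 - W) @ (1 - D) + alpha - 1) / ((1 - W).sum() + alpha + beta_ - 2)
          return W, p, q

      rng = np.random.default_rng(7)
      truth = (rng.random(2000) < 0.2).astype(int)          # synthetic truth
      raters = np.stack([np.where(rng.random(2000) < 0.9, truth, 1 - truth)
                         for _ in range(4)], axis=1)        # noisy rater labels
      W, p, q = map_staple_binary(raters)
      print("estimated sensitivities:", p.round(3))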

  5. Nutrition and healthy ageing: the key ingredients.

    PubMed

    Kiefte-de Jong, Jessica C; Mathers, John C; Franco, Oscar H

    2014-05-01

    Healthy longevity is a tangible possibility for many individuals and populations, with nutritional and other lifestyle factors playing a key role in modulating the likelihood of healthy ageing. Nevertheless, studies of the effects of nutrients or single foods on ageing often show inconsistent results and ignore the overall framework of dietary habits. Therefore, the use of dietary patterns (e.g. a Mediterranean dietary pattern) and of specific dietary recommendations (e.g. dietary approaches to stop hypertension, the Polymeal and the American Healthy Eating Index) is becoming more widespread in promoting lifelong health. A posteriori defined dietary patterns are frequently described in relation to age-related diseases, but their generalisability is often a challenge since they are developed specifically for the population under study. Conversely, dietary guidelines are often developed based on the prevention of disease or nutrient deficiency, and less attention is paid to how well these guidelines promote health outcomes. In the present paper, we provide an overview of the state of the art of dietary patterns and dietary recommendations in relation to life expectancy and the risk of age-related disorders (with emphasis on cardiometabolic diseases and cognitive outcomes). According to both a posteriori and a priori dietary patterns, some key 'ingredients' can be identified that are consistently associated with longevity and better cardiometabolic and cognitive health. These include high intakes of fruit, vegetables, fish, (whole) grains, legumes/pulses and potatoes, whereas dietary patterns rich in red meat and sugar-rich foods have been associated with an increased risk of mortality and cardiometabolic outcomes.

  6. Incorporating priors on expert performance parameters for segmentation validation and label fusion: a maximum a posteriori STAPLE.

    PubMed

    Commowick, Olivier; Warfield, Simon K

    2010-01-01

    In order to evaluate the quality of segmentations of an image and to assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time-consuming and burdensome task. Further, as the time required for and burden of manual delineation increase, the accuracy of the delineation decreases. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures, or the segmentation of all structures by all experts may simply not be achievable. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading expert performance parameters. We present a new algorithm that allows fusion with partial delineations and that can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, in which the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE.

  7. The Impact of Prior Biosphere Models in the Inversion of Global Terrestrial CO2 Fluxes by Assimilating OCO-2 Retrievals

    NASA Technical Reports Server (NTRS)

    Philip, Sajeev; Johnson, Matthew S.

    2018-01-01

    Atmospheric mixing ratios of carbon dioxide (CO2) are largely controlled by anthropogenic emissions and biospheric fluxes. The processes controlling terrestrial biosphere-atmosphere carbon exchange are currently not fully understood, and terrestrial biospheric models consequently differ significantly in their quantification of biospheric CO2 fluxes. Atmospheric transport models that assimilate measured (in situ or space-borne) CO2 concentrations to estimate "top-down" fluxes generally use these biospheric CO2 fluxes as a priori information. Most flux inversions yield substantially different spatio-temporal a posteriori estimates of regional and global biospheric CO2 fluxes. The Orbiting Carbon Observatory 2 (OCO-2) satellite mission, dedicated to accurately measuring column CO2 (XCO2), allows for an improved understanding of global biospheric CO2 fluxes. OCO-2 provides much-needed CO2 observations in data-limited regions, facilitating better global and regional estimates of "top-down" CO2 fluxes through inversion model simulations. The specific objectives of our research are to: 1) conduct GEOS-Chem 4D-Var assimilation of OCO-2 observations, using several state-of-the-science biospheric CO2 flux models as a priori information, to better constrain terrestrial CO2 fluxes, and 2) quantify the impact of different biospheric model prior fluxes on OCO-2-assimilated a posteriori CO2 flux estimates. Here we present our assessment of the importance of these a priori fluxes by conducting Observing System Simulation Experiments (OSSEs) using simulated OCO-2 observations with known "true" fluxes.
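    For orientation, a 4D-Var flux inversion of this kind minimizes a cost function of the generic form below; the symbols are the standard textbook quantities (state vector of surface fluxes or flux scaling factors, prior and observation error covariances), not the specific configuration of this study.

    ```latex
    J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
    \;+\; \tfrac{1}{2}\sum_{i}\bigl(H_i(\mathbf{x})-\mathbf{y}_i\bigr)^{\mathsf T}\mathbf{R}_i^{-1}\bigl(H_i(\mathbf{x})-\mathbf{y}_i\bigr)
    ```

    Here x_b is the a priori flux field supplied by a terrestrial biospheric model, y_i are the OCO-2 XCO2 retrievals in assimilation window i, H_i is the transport-plus-retrieval observation operator, and B and R_i are the prior and observation error covariances. Different choices of x_b (and of B) move the minimizer, which is why the sensitivity of the a posteriori fluxes to the prior biosphere model is worth quantifying.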

  8. New Basis Functions for the Electromagnetic Solution of Arbitrarily-shaped, Three Dimensional Conducting Bodies Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.

    2007-01-01

    In this work, we present a new set of basis functions, defined over a pair of planar triangular patches, for the solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped surfaces using the method of moments solution procedure. The basis functions are constant over the function subdomain and resemble pulse functions for one and two dimensional problems. Further, another set of basis functions, point-wise orthogonal to the first set, is also defined over the same function space. The primary objective of developing these basis functions is to utilize them for the electromagnetic solution involving conducting, dielectric, and composite bodies. However, in the present work, only the conducting body solution is presented and compared with other data.

  9. New Basis Functions for the Electromagnetic Solution of Arbitrarily-shaped, Three Dimensional Conducting Bodies using Method of Moments

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.

    2008-01-01

    In this work, we present a new set of basis functions, defined over a pair of planar triangular patches, for the solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped surfaces using the method of moments solution procedure. The basis functions are constant over the function subdomain and resemble pulse functions for one and two dimensional problems. Further, another set of basis functions, point-wise orthogonal to the first set, is also defined over the same function space. The primary objective of developing these basis functions is to utilize them for the electromagnetic solution involving conducting, dielectric, and composite bodies. However, in the present work, only the conducting body solution is presented and compared with other data.
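    As a hedged sketch of the method-of-moments machinery behind both abstracts (the generic Galerkin form, not the specific operator or testing scheme of these papers), the surface current is expanded in the patch-pair basis and the integral equation becomes a dense linear system:

    ```latex
    \mathbf{J}(\mathbf{r}) \;\approx\; \sum_{n=1}^{N} I_n\,\mathbf{f}_n(\mathbf{r}),
    \qquad
    Z_{mn} \;=\; \bigl\langle \mathbf{f}_m,\;\mathcal{L}\,\mathbf{f}_n \bigr\rangle,
    \quad
    V_m \;=\; \bigl\langle \mathbf{f}_m,\;\mathbf{E}^{\mathrm{inc}} \bigr\rangle,
    \quad
    \mathbf{Z}\,\mathbf{I} \;=\; \mathbf{V},
    ```

    where each f_n is constant over its pair of planar triangular patches and zero elsewhere, and L denotes the integro-differential operator of the governing integral equation; solving Z I = V yields the expansion coefficients I_n.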

  10. A criterion for delimiting active periods within turbulent flows

    NASA Astrophysics Data System (ADS)

    Keylock, C. J.

    2008-06-01

    Effectively delimiting the extent of the major motions in atmospheric, tidal and fluvial turbulent flows is an important task for studies of mixing and particle transport. The most common method for this (quadrant analysis) is closely linked to the turbulent stresses but subdivides active periods in the flow into separate events. A method based on the pointwise Hölder characteristics of the velocity data is introduced in this paper and applied to extract the whole duration of the active periods, within which turbulence intensities and stresses are high for some of the time. The cross-correlation structure of the Hölder series permits a simple threshold to be adopted. The technique is tested on data from a turbulent wake in a wind tunnel and on flow over a forest canopy.
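    The quantity underlying the criterion is the pointwise Hölder exponent of the velocity series. As a rough, hedged illustration only (a simple oscillation-based estimator, not the estimator or thresholding procedure used in the study), one can regress the logarithm of the local oscillation against the logarithm of the window half-width:

    ```python
    import numpy as np

    def pointwise_holder(u, scales=(2, 4, 8, 16, 32)):
        """Crude oscillation-based estimate of the pointwise Holder exponent
        at each sample of a 1-D velocity record u (illustrative only)."""
        u = np.asarray(u, dtype=float)
        n = len(u)
        log_s = np.log(scales)
        alpha = np.full(n, np.nan)
        for i in range(max(scales), n - max(scales)):
            osc = [u[i - s:i + s + 1].max() - u[i - s:i + s + 1].min() for s in scales]
            osc = np.maximum(osc, 1e-12)          # guard against log(0) on flat segments
            slope, _ = np.polyfit(log_s, np.log(osc), 1)
            alpha[i] = slope                       # slope ~ local Holder exponent
        return alpha

    # Lower exponents flag locally rougher, more "active" periods of the flow.
    series = np.cumsum(np.random.randn(4096))      # surrogate velocity record
    alpha = pointwise_holder(series)
    ```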

  11. Utilization of curve offsets in additive manufacturing

    NASA Astrophysics Data System (ADS)

    Haseltalab, Vahid; Yaman, Ulas; Dolen, Melik

    2018-05-01

    Curve offsets are utilized in different fields of engineering and science. Additive manufacturing (AM), which has lately become an explicit requirement in the manufacturing industry, utilizes curve offsets widely. One use of offsetting is scaling, which is required when the part shrinks after fabrication or when the surface quality of the resulting part is unacceptable, so that some post-processing is indispensable. The major application of curve offsets in additive manufacturing processes, however, is generating head trajectories. In a point-wise AM process, a correct tool-path in each layer can substantially reduce costs and increase the surface quality of the fabricated parts. In this study, different curve offset generation algorithms are analyzed through test cases to show their capabilities and disadvantages, and improvements on their drawbacks are suggested.
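    As a minimal, hedged sketch of what contour offsetting for head-trajectory generation involves (a naive vertex-miter scheme, not one of the algorithms compared in the paper, and without the self-intersection handling a production slicer needs):

    ```python
    import numpy as np

    def offset_closed_polyline(points, d):
        """Offset a closed, counter-clockwise 2-D polyline outward by distance d.
        Each vertex is moved along the miter direction of its two edge normals.
        Degenerate (near-reflex) corners and self-intersections of the offset
        curve are not handled here."""
        pts = np.asarray(points, dtype=float)
        n = len(pts)
        out = np.empty_like(pts)
        for i in range(n):
            p_prev, p, p_next = pts[i - 1], pts[i], pts[(i + 1) % n]
            e1, e2 = p - p_prev, p_next - p
            n1 = np.array([e1[1], -e1[0]]) / np.linalg.norm(e1)   # outward edge normals (CCW contour)
            n2 = np.array([e2[1], -e2[0]]) / np.linalg.norm(e2)
            out[i] = p + d * (n1 + n2) / (1.0 + np.dot(n1, n2))   # exact miter offset of the corner
        return out

    square = [(0, 0), (10, 0), (10, 10), (0, 10)]   # counter-clockwise square
    print(offset_closed_polyline(square, 1.0))      # grows the contour by 1 unit; d < 0 shrinks it
    ```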

  12. New experimental results in atlas-based brain morphometry

    NASA Astrophysics Data System (ADS)

    Gee, James C.; Fabella, Brian A.; Fernandes, Siddharth E.; Turetsky, Bruce I.; Gur, Ruben C.; Gur, Raquel E.

    1999-05-01

    In a previous meeting, we described a computational approach to MRI morphometry, in which a spatial warp mapping a reference or atlas image into anatomic alignment with the subject is first inferred. Shape differences with respect to the atlas are then studied by calculating the pointwise Jacobian determinant for the warp, which provides a measure of the change in differential volume about a point in the reference as it transforms to its corresponding position in the subject. In this paper, the method is used to analyze sex differences in the shape and size of the corpus callosum in an ongoing study of a large population of normal controls. The preliminary results of the current analysis support findings in the literature that have observed the splenium to be larger in females than in males.
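    To make the measured quantity concrete, a hedged minimal sketch of the pointwise Jacobian determinant of a warp sampled on a regular grid follows (central finite differences; this is the generic construction, not the authors' implementation):

    ```python
    import numpy as np

    def jacobian_determinant_2d(warp):
        """Pointwise Jacobian determinant of a 2-D warp.
        warp has shape (2, H, W): warp[k] stores the k-th coordinate of the
        mapped position at every pixel.  Boundary handling is naive."""
        dy0, dx0 = np.gradient(warp[0])   # d(phi_0)/d(row), d(phi_0)/d(col)
        dy1, dx1 = np.gradient(warp[1])   # d(phi_1)/d(row), d(phi_1)/d(col)
        return dy0 * dx1 - dx0 * dy1      # determinant of the 2x2 Jacobian at every pixel

    # Values above 1 mark local expansion of the atlas toward the subject,
    # values below 1 mark local contraction.
    ```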

  13. Potential estimates for the p-Laplace system with data in divergence form

    NASA Astrophysics Data System (ADS)

    Cianchi, A.; Schwarzacher, S.

    2018-07-01

    A pointwise bound for local weak solutions to the p-Laplace system is established in terms of data on the right-hand side in divergence form. The relevant bound involves a Havin-Maz'ya-Wolff potential of the datum, and is a counterpart for data in divergence form of a classical result of [25], recently extended to systems in [28]. A local bound for oscillations is also provided. These results allow for a unified approach to regularity estimates for broad classes of norms, including Banach function norms (e.g. Lebesgue, Lorentz and Orlicz norms), and norms depending on the oscillation of functions (e.g. Hölder, BMO and, more generally, Campanato type norms). In particular, new regularity properties are exhibited, and well-known results are easily recovered.
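    For orientation, one common form of the Havin-Maz'ya-Wolff potential of a nonnegative measure mu is recalled below; the paper's pointwise bound involves an analogous potential built from the divergence-form datum rather than from a measure.

    ```latex
    \mathbf{W}^{\mu}_{1,p}(x,R) \;=\; \int_{0}^{R}
    \left(\frac{\mu\bigl(B(x,t)\bigr)}{t^{\,n-p}}\right)^{\!\frac{1}{p-1}}\,\frac{dt}{t},
    \qquad 1 < p < n .
    ```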

  14. Magnetic field generation by pointwise zero-helicity three-dimensional steady flow of an incompressible electrically conducting fluid

    NASA Astrophysics Data System (ADS)

    Rasskazov, Andrey; Chertovskih, Roman; Zheligovsky, Vladislav

    2018-04-01

    We introduce six families of three-dimensional space-periodic steady solenoidal flows whose kinetic helicity density is zero at every point. Four families are defined analytically. Flows in four families have a zero helicity spectrum. Sample flows from five families are used to demonstrate numerically that neither zero kinetic helicity density nor a zero helicity spectrum prohibits generation of large-scale magnetic field by the two most prominent dynamo mechanisms: the magnetic α-effect and negative eddy diffusivity. Our computations also attest that such flows often generate small-scale fields for sufficiently small magnetic molecular diffusivity. These findings indicate that kinetic helicity and the helicity spectrum are not the quantities controlling the dynamo properties of a flow, regardless of whether scale separation is present or not.
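    For reference, the standard definitions behind the terms used above are (the flows in question have h identically zero at every point, and some families additionally have H(k) = 0 for every wavenumber k):

    ```latex
    h(\mathbf{x}) \;=\; \mathbf{u}(\mathbf{x})\cdot\bigl(\nabla\times\mathbf{u}(\mathbf{x})\bigr),
    \qquad
    \mathcal{H} \;=\; \int_{V} h \,\mathrm{d}^{3}x \;=\; \int_{0}^{\infty} H(k)\,\mathrm{d}k ,
    ```

    where H(k) is the helicity spectrum obtained by decomposing the velocity-vorticity correlation over wavenumber shells.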

  15. The match/mismatch of visuo-spatial cues between acquisition and retrieval contexts influences the expression of response vs. place memory in rats.

    PubMed

    Cassel, Raphaelle; Kelche, Christian; Lecourtier, Lucas; Cassel, Jean-Christophe

    2012-05-01

    Animals can perform goal-directed tasks by using response cues or place cues. The underlying memory systems are occasionally presented as competing. Using the double-H maze test (Pol-Bodetto et al.), we trained rats for response learning and, 24 h later, tested their memory in a 60-s probe trial using a new start place. A modest shift of the start place (translation: 60 cm to the left) provided a high misleading potential, whereas a marked shift (180° rotation; shift to the opposite side) provided a low misleading potential. We analyzed each rat's first arm choice (to assess response vs. place memory retrieval) and its subsequent search for the former platform location (to assess persistence in place memory or the shift from response to place memory). After the translation, response memory-based behavior was found in more than 90% of the rats (24/26). After the rotation, place memory-based behavior was observed in 50% of the rats, the others showing response memory or failing. Rats starting with response cues were nevertheless able to shift subsequently to place cues. A posteriori behavioral analyses showed more and longer stops in rats starting their probe trial on the basis of place (vs. response) cues. These observations qualify the idea of competing memory systems for responses and places and are compatible with cooperation between the two systems according to principles of match/mismatch computation (at the start of a probe trial) and error-driven adjustment (during the ongoing probe trial). Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Combining dynamic and ECG-gated ⁸²Rb-PET for practical implementation in the clinic.

    PubMed

    Sayre, George A; Bacharach, Stephen L; Dae, Michael W; Seo, Youngho

    2012-01-01

    For many cardiac clinics, list-mode PET is impractical. Therefore, separate dynamic and ECG-gated acquisitions are needed to detect harmful stenoses, indicate affected coronary arteries, and estimate stenosis severity. However, physicians usually order gated studies only, because of dose, time, and cost limitations. These gated studies are limited to detection. In an effort to remove these limitations, we developed a novel curve-fitting algorithm [incomplete data (ICD)] to accurately calculate coronary flow reserve (CFR) from a combined dynamic-ECG protocol of a length equal to a typical gated scan. We selected several retrospective dynamic studies to simulate shortened dynamic acquisitions of the combined protocol and compared (a) the accuracy of ICD and a nominal method in extrapolating the complete functional form of arterial input functions (AIFs); and (b) the accuracy of ICD and ICD-AP (ICD with a posteriori knowledge of complete-data AIFs) in predicting CFRs. According to the Akaike information criterion, AIFs predicted by ICD were more accurate than those predicted by the nominal method in 11 out of 12 studies. CFRs predicted by ICD and ICD-AP were similar to complete-data predictions (P_ICD = 0.94 and P_ICD-AP = 0.91) and had similar average errors (e_ICD = 2.82% and e_ICD-AP = 2.79%). According to a nuclear cardiologist and an expert analyst of PET data, both ICD and ICD-AP predicted CFR values with sufficient accuracy for the clinic. Therefore, by using our method, physicians in cardiac clinics would have access to the necessary amount of information to differentiate between single-vessel and triple-vessel disease for treatment decision making.

  17. THE DETECTION OF A SN IIn IN OPTICAL FOLLOW-UP OBSERVATIONS OF ICECUBE NEUTRINO EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aartsen, M. G.; Abraham, K.; Ackermann, M.

    2015-09-20

    The IceCube neutrino observatory pursues a follow-up program selecting interesting neutrino events in real time and issuing alerts for electromagnetic follow-up observations. In 2012 March, the most significant neutrino alert during the first three years of operation was issued by IceCube. In the follow-up observations performed by the Palomar Transient Factory (PTF), a Type IIn supernova (SN IIn), PTF12csy, was found 0.2° away from the neutrino alert direction, with an error radius of 0.54°. It has a redshift of z = 0.0684, corresponding to a luminosity distance of about 300 Mpc, and the Pan-STARRS1 survey shows that its explosion time was at least 158 days (in the host galaxy rest frame) before the neutrino alert, so that a causal connection is unlikely. The a posteriori significance of the chance detection of both the neutrinos and the SN at any epoch is 2.2σ within IceCube's 2011/12 data acquisition season. Also, a complementary neutrino analysis reveals no long-term signal over the course of one year. Therefore, we consider the SN detection coincidental and the neutrinos uncorrelated to the SN. However, the SN is unusual and interesting by itself: it is luminous and energetic, bearing strong resemblance to the SN IIn 2010jl, and shows signs of interaction of the SN ejecta with a dense circumstellar medium. High-energy neutrino emission is expected in models of diffusive shock acceleration, but at a low, non-detectable level for this specific SN. In this paper, we describe the SN PTF12csy and present both the neutrino and electromagnetic data, as well as their analysis.

  18. Comparison Between IASI/Metop-A and OMI/Aura Ozone Column Amounts with EUBREWNET Ground-Based Measurements

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, Ernesto

    2016-07-01

    This work addresses the comparison of IASI (Infrared Atmospheric Sounding Interferometer) on board Metop-A and OMI (Ozone Monitoring Instrument) on board Aura with several ground-based Brewer spectrophotometers belonging to the European Brewer Network (EUBREWNET) for the period September 2010 to December 2015. The focus of this study is to examine how well the satellite retrieval products capture the total ozone column amounts (TOC) at different latitudes and to evaluate the different levels of Brewer spectrophotometer data. In this comparison, Level 1, 1.5 and 2 Brewer data will be used to evaluate the satellite data, where: 1) Level 1 Brewer data are the TOC calculated with the standard Brewer algorithm from the direct sun measurements; 2) Level 1.5 Brewer data are Level 1.0 observations filtered and corrected for instrumental issues; and 3) Level 2.0 Brewer data are Level 1.5 observations validated with a posteriori calibration. The IASI retrievals examined are operational IASI Level 2 products, version 5 from September 2010 to October 2014 and version 6 from October 2014 to December 2015, from the EUMETSAT Data Centre, while the OMI retrievals are OMI-DOAS TOC products extracted from the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). The differences and their implications for the retrieved products will be discussed and, in order to evaluate the quality and sensitivity of each product, special attention will be paid to analyzing the instrumental errors from these different measurement techniques. Furthermore, the parameters that could affect the comparison of the different datasets, such as the different viewing geometries, the vertical sensitivity of the satellite data, cloudiness conditions, the spectral region used for the retrievals, and so on, will be analyzed in detail.

  19. Parameter Estimation and Model Selection in Computational Biology

    PubMed Central

    Lillacci, Gabriele; Khammash, Mustafa

    2010-01-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time-course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model predictions. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and are taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection for biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternative models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
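    The core idea of joint state and parameter estimation with an extended Kalman filter is easy to illustrate. The sketch below is a hedged toy example (a scalar decay model with assumed noise levels, not the biological models or the filter variation used in the paper): the unknown parameter is appended to the state vector and updated by the same predict/update cycle.

    ```python
    import numpy as np

    def ekf_parameter_estimation(y, dt=0.1, q=1e-4, r=0.05 ** 2):
        """Estimate the state x and decay rate k of dx/dt = -k*x from noisy
        samples y, using an EKF on the augmented state [x, k] with k modeled
        as a slowly drifting constant (random walk)."""
        z = np.array([y[0], 1.0])             # augmented state; crude initial guess k = 1
        P = np.diag([1.0, 1.0])               # initial covariance
        H = np.array([[1.0, 0.0]])            # only x is observed
        history = []
        for yk in y[1:]:
            # predict: one Euler step of the augmented dynamics
            x, k = z
            z_pred = np.array([x - dt * k * x, k])
            F = np.array([[1.0 - dt * k, -dt * x],    # Jacobian of the discrete map
                          [0.0, 1.0]])
            P = F @ P @ F.T + q * np.eye(2)
            # update with the new measurement
            S = H @ P @ H.T + r
            K = P @ H.T / S
            z = z_pred + (K * (yk - z_pred[0])).ravel()
            P = (np.eye(2) - K @ H) @ P
            history.append(z.copy())
        return np.array(history)

    # Synthetic data: true k = 0.8, x(0) = 2, measurement noise sigma = 0.05
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 10.0, 0.1)
    y = 2.0 * np.exp(-0.8 * t) + 0.05 * rng.standard_normal(t.size)
    print("final estimate of k:", ekf_parameter_estimation(y)[-1, 1])
    ```

    The a posteriori identifiability test and the refinement step described in the abstract would sit on top of such a filter; they are not reproduced here.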

  20. New Opportunities for Remote Sensing Ionospheric Irregularities by Fitting Scintillation Spectra

    NASA Astrophysics Data System (ADS)

    Carrano, C. S.; Rino, C. L.; Groves, K. M.

    2017-12-01

    In a recent paper, we presented a phase screen theory for the spectrum of intensity scintillations when the refractive index irregularities follow a two-component power law [Carrano and Rino, DOI: 10.1002/2015RS005903]. More recently, we have investigated the inverse problem, whereby phase screen parameters are inferred from scintillation time series. This is accomplished by fitting the spectrum of intensity fluctuations with a parametrized theoretical model using Maximum Likelihood (ML) methods. The Markov chain Monte Carlo technique provides a posteriori errors and confidence intervals. The Akaike Information Criterion (AIC) provides justification for the use of one- or two-component irregularity models. We refer to this fitting as Irregularity Parameter Estimation (IPE), since it provides a statistical description of the irregularities from the scintillations they produce. In this talk, we explore some new opportunities for remote sensing of ionospheric irregularities afforded by IPE. Statistical characterization of the irregularities and the plasma bubbles in which they are embedded provides insight into the development of the underlying instability. In a companion paper by Rino et al., IPE is used to interpret scintillation due to simulated EPB structure. IPE can be used to reconcile multi-frequency scintillation observations and to construct high-fidelity scintillation simulation tools. In space-to-ground propagation scenarios, for which an estimate of the distance to the scattering region is available a priori, IPE enables retrieval of the zonal irregularity drift. In radio occultation scenarios, the distance to the irregularities is generally unknown, but IPE enables retrieval of the Fresnel frequency. A geometric model for the effective scan velocity maps the Fresnel frequency to the Fresnel scale, yielding the distance to the irregularities. We demonstrate this approach by geolocating irregularities observed by the CORISS instrument onboard the C/NOFS satellite.
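    A hedged sketch of the general fitting idea follows (a Whittle-likelihood fit of one- and two-component power laws to the intensity periodogram, compared with AIC; the model forms and parameter names are illustrative, not the exact phase-screen spectrum of the cited paper):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def whittle_nll(params, freqs, periodogram, two_component):
        """Negative Whittle log-likelihood: periodogram ordinates are roughly
        exponential with mean S(f), so nll = sum(log S + I/S)."""
        if two_component:
            logC, p1, p2, logfb = params
            fb = np.exp(logfb)
            S = np.exp(logC) * np.where(freqs < fb,
                                        freqs ** (-p1),
                                        fb ** (p2 - p1) * freqs ** (-p2))
        else:
            logC, p1 = params
            S = np.exp(logC) * freqs ** (-p1)
        return np.sum(np.log(S) + periodogram / S)

    def fit_intensity_spectrum(intensity, fs):
        """Fit both spectral models to a detrended intensity record sampled at
        fs Hz and report the AIC of each (lower is better)."""
        x = intensity - np.mean(intensity)
        I = np.abs(np.fft.rfft(x)) ** 2 / len(x)
        f = np.fft.rfftfreq(len(x), d=1.0 / fs)
        f, I = f[1:], I[1:]                   # drop the zero-frequency bin
        fits = {}
        for name, k, x0 in [("one-component", 2, [0.0, 2.5]),
                            ("two-component", 4, [0.0, 2.0, 3.5, 0.0])]:
            res = minimize(whittle_nll, x0, args=(f, I, k == 4), method="Nelder-Mead")
            fits[name] = {"params": res.x, "AIC": 2 * k + 2 * res.fun}
        return fits
    ```

    Posterior uncertainties on the fitted parameters could then be explored by sampling the same likelihood with an MCMC package, which is the role the Markov chain Monte Carlo step plays in IPE.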
